Ensure continuous delivery by deploying industry solutions to a cloud platform

Continuous delivery has become a common pattern for solutions delivered on a cloud platform. To make it possible, the build, deployment, test, and release stages must all be automated so that developers and operations professionals can continuously push code changes and defect fixes through the pipeline in frequent iterations. Increased automation of processes leads to increased efficiency in cloud delivery. Using a real customer scenario, this article explores the challenges of continuous deployment and demonstrates installation and deployment techniques that make a solution installer better suited for continuous deployment and delivery.


Zhao Zhuo (zhuozhao@cn.ibm.com ), Staff Software Engineer, IBM

Zhuo Zhao is a staff software engineer on the Industry Solution development team at the IBM China development lab. She has many years of experience in industry solution development and currently works on automated deployment for continuous delivery.



13 May 2014


Introduction

As cloud computing technology evolves, the cloud environment is regarded as the most promising way to deliver industry solutions to clients. To ensure continuous delivery of software, the development, test, and operations teams must collaborate and work effectively together, and the cloud environment suits this type of interaction. However, because the deployment phase involves a complex, distributed topology, it is error-prone and usually requires manual troubleshooting. In many cases, the deployment design supports a single deployment rather than continuous deployment. For a product that applies the principles of continuous delivery, the deployment stage often becomes a bottleneck that reduces the efficiency of the DevOps process.

Using a real customer scenario, explore the challenges and how they are resolved to ensure continuous deployment.

Software engineers who work on deployment automation for industry solutions will find this article useful for implementing continuous delivery on the cloud. These instructions assume that you have skills in deploying industry solutions and developing scripts.


The continuous delivery process

The goal of continuous delivery is to ensure that software can be developed, tested, deployed, and delivered to production using the most efficient and safe methods. Changes to any part of the software system, from the infrastructure level, application level, or customization data level, are continuously applied to the production environment through a specific delivery pipeline. This approach builds confidence among users that the production environment has access to the latest release-ready code.

Common model and framework

The generic model shown in Figure 1 is widely applied in the delivery of most IBM industry solutions.

Figure 1. Common continuous delivery model
Architecture of generic model for continuous delivery

Figure 1 shows an end-to-end automation channel that connects solution development and the production environment. It uses Jenkins as an automation engine that can perform the following steps (a minimal sketch of this flow follows the list):

  • Detect code changes and trigger the continuous build.
  • Install the build to the Blue zone. The Blue zone is considered the staging environment where the DevOps team does build verification locally.
  • Start the automation test on the Blue zone and verify the test results.
  • Switch the staging environment to the live production environment (the Red zone) after the Blue zone is ready and verified.
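
The following minimal Python sketch illustrates how these four stages gate each other. It is an illustration only, not the article's Jenkins configuration; the stage scripts (build.sh, install.sh, run_tests.sh, switch_zones.sh) are hypothetical placeholders for the real build, deployment, test, and routing tools.

# pipeline.py - an illustrative sketch of the automation flow in Figure 1 (not the
# article's Jenkins job definitions). The stage scripts are hypothetical placeholders.
import subprocess

def run(cmd):
    """Run a stage command and stop the pipeline on the first failure."""
    print("Running: %s" % " ".join(cmd))
    subprocess.check_call(cmd)

def pipeline(build_id):
    run(["./build.sh", build_id])                      # 1. continuous build triggered by a code change
    run(["./install.sh", "--zone", "blue", build_id])  # 2. install the build to the Blue (staging) zone
    run(["./run_tests.sh", "--zone", "blue"])          # 3. automation tests and verification on Blue
    run(["./switch_zones.sh", "--promote", "blue"])    # 4. promote Blue to live (Red) after verification

if __name__ == "__main__":
    pipeline("build-20140501")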

Complicating factors in continuous delivery

Step 2 is the major focus of this article. In the continuous delivery process, the most challenging aspect is how to implement continuous deployment efficiently. The deployment phase must accommodate a complex, distributed topology, a changing infrastructure, and configuration changes. These complicating factors make it easy to lose customer data. Traditional solution deployment design supports a single deployment, rather than continuous deployment.

To reduce errors, make the process more efficient, and save DevOps time and effort, maximize the automation of solution deployment and troubleshooting. The first step is to understand the challenges encountered in real solution deployment scenarios.


Sample industry solution deployment case

Consider the scenario in which an industry solution is delivered on the cloud platform every month. Because the DevOps team must deploy each monthly build on the production environment, the installer needs to be able to run on the production environment at the outset of the deployment process.

Common problems in the continuous delivery process

The following examples illustrate common problems from an installation perspective:

  • File replacement
  • Resource updates
  • Database configuration

File replacement

Often configuration files need to be updated or replaced. In Listing 1, the installer replaces the placeholder in Line 3 at installation time. The URL in Line 4 is set according to user input at runtime.

In a later iteration, a new requirement calls for the installer to add a new URL, someURL, at installation time. A traditional solution is to append <someURL>@url@</someURL> after Line 4 and do a complete replacement of the placeholders, but that approach causes errors and loses the user input for the customizationURL.

Listing 1. Script with sample placeholders for host and URL
1 <servers>
2   <server id="appServer">
3     <host>@host@</host>
4     <customizationURL>@cusUrl@</customizationURL>
5   </server>
6 </servers>

Resource updates

Assume that you have an application server scheduler created with the Java Naming and Directory Interface (JNDI) name test/schedulerA, and that its related tables are created with the prefix testA_. After a time, you need to rename the table prefix of this scheduler to testB_. Typically, you might invoke the WebSphere Application Server AdminControl API to update the scheduler to reference the new tables. However, for continuous delivery, the old tables already exist and, even if they are useless, they are not dropped. When another scheduler uses the testA_ prefix in the future, errors result.

Database configuration

Database configuration changes create the most challenging type of problem. As the solution is updated, the database goes through changes to table structure, user privileges, stored data, and other changes. One option to accommodate the changes is to create a SAMPLE.TEST_TAB table and add a new COL_C column to it. However, in the next iteration, the table structure changes again; for example, the COL_B column needs to be removed. A traditional deployment adjusts to the change by adding Line 6 to the database script shown in Listing 2.

Listing 2. Script with new line to drop a column in the database table
1 CREATE TABLE SAMPLE.TEST_TAB (
2   "COL_A" INTEGER NOT NULL,
3   "COL_B" VARCHAR(100) )
4   IN "USERSPACE1" ;
5 ALTER TABLE SAMPLE.TEST_TAB
6   DROP COLUMN COL_B
7   ADD COLUMN COL_C INTEGER;

However, in continuous delivery, adding a line results in an error, because:

  • SAMPLE.TEST_TAB already exists when the script runs a second time.
  • Column COL_B is already dropped. Therefore, the SQL processing stops without adding COL_C.

Principles to apply to address continuous delivery problems

To overcome the problems that result from file changes, resource updates, and configuration changes, apply the following principles.

Minimize the development effort required of the solution installer. Rather than developing different installers for each delivery, make sure that any changes in the deployment phase can be rapidly and safely delivered to production. As you develop the new deployment, inherit as much as possible from past iterations. For example, assume you have deployed Resource A in a previous iteration. In the current iteration, there are no changes to Resource A but you need to add a new resource, Resource B. Add scripts to create Resource B, but also keep the previous script that created Resource A even though there are no new requirements for Resource A.

Make use of automated troubleshooting for the installation phase. The efficiency of continuous delivery heavily depends on how much of the process is automated. Because of differences in environments, mistakes introduced in manual operations, and other similar factors, you must provide mechanisms to handle troubleshooting and installation failures automatically.

Support the ability to repeatedly deploy the solution. This principle is the most critical one to apply and the most difficult to implement. To inherit as much as possible from past iterations, you must determine how to:

  • Manage repeated deployments.
  • Use the existing infrastructure and resources to deploy new code.
  • Update the previous configuration or remove obsolete deployments.

Elements of a solution installer

Improve solution deployment by making changes to the following aspects of a typical solution installer: file artifacts, enterprise applications, enterprise application resources, and the database.

File artifacts

For continuous delivery, the installation process must be able to:

  • Copy and replace artifacts
  • Integrate artifacts
  • Merge artifacts

To prevent the loss of customized data described in the File replacement problem, merge the customization into the files of the new version rather than simply copying and replacing files. Listing 3 shows the files from the past and current iterations, old.xml and new.xml.

Listing 3. Comparison between old.xml and new.xml
[root@server diff]# cat old.xml
<servers>
  <server id="appServer">
    <host>appserver.test.com</host>
    <customizationURL>http://abc.com</customizationURL>
  </server>
</servers>
[root@server diff]# cat new.xml
<servers>
  <server id="appServer">
    <host>appserver.test.com</host>
    <customizationURL>@cusUrl@</customizationURL>
    <someURL>http://someOtherUrl</someURL>
  </server>
</servers>

To programmatically merge files, add scripts (shell scripts on Linux) to the installer that find the differences between the versions. Record the differences in a file that is handled as a patch. Then fetch the customization information from the old version of the file by searching the patch, and merge each change into the current file, as shown in Listing 4.

Listing 4. Program to merge files
[root@server diff]# diff old.xml new.xml > patch.txt
[root@server diff]# cat patch.txt
4c4,5
<     <customizationURL>http://abc.com</customizationURL>
---
>     <customizationURL>@cusUrl@</customizationURL>
>     <someURL>http://someOtherUrl</someURL>
[root@server diff]#
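
Listing 4 records the differences; the merge step itself can be scripted in several ways. The following minimal Python sketch (an illustration, not the solution installer's shell code) parses old.xml and new.xml from Listing 3, restores any value that the user customized in the previous iteration, and leaves placeholders that are new in this version for the installer to fill in.

# merge_config.py - a minimal sketch (not the solution installer's shell code) that merges
# user customizations from the previous iteration's file into the new version of the file.
import re
import xml.etree.ElementTree as ET

PLACEHOLDER = re.compile(r"^@\w+@$")   # for example @host@ or @cusUrl@

def merge(old_path, new_path, out_path):
    old_root = ET.parse(old_path).getroot()
    new_root = ET.parse(new_path).getroot()

    # Index the old file's element text by tag so customized values can be looked up.
    old_values = {el.tag: el.text for el in old_root.iter() if el.text and el.text.strip()}

    for el in new_root.iter():
        text = (el.text or "").strip()
        # Restore values the user already customized; placeholders that are new in this
        # version (no old value to restore) stay in place for the installer to fill in.
        if PLACEHOLDER.match(text) and el.tag in old_values \
                and not PLACEHOLDER.match(old_values[el.tag].strip()):
            el.text = old_values[el.tag]

    ET.ElementTree(new_root).write(out_path)

if __name__ == "__main__":
    merge("old.xml", "new.xml", "merged.xml")   # file names follow Listing 3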

Enterprise applications

Throughout the continuous delivery process, enterprise applications change. The changes occur in code, in resources, and in the EAR structure. Because the changes affect installation scripts, you must be aware of all changes to models and changes to the mapping relationship between EAR modules and target deployment servers. To manage the changes, use an automatic program to generate EAR deployment code.

First, during the build phase, extract the model information that is defined in the EAR from its application.xml file. Listing 5 shows an example of a target application.xml file.

Listing 5. Sample application descriptor
<?xml version="1.0" encoding="UTF-8"?>
<application xmlns="http://java.sun.com/xml/ns/javaee"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://java.sun.com/xml/ns/javaee
        http://java.sun.com/xml/ns/javaee/application_5.xsd"
    version="5">
  <display-name>sample_rest_ear</display-name>
  <module id="Module_1361859780281">
    <web>
      <web-uri>sample_test_rest.war</web-uri>
      <context-root>/ibm/test/api/services</context-root>
    </web>
  </module>
  <module id="Module_1364467704309">
    <ejb>sample_test_ejb.jar</ejb>
  </module>
</application>

Next, generate the deployment descriptor automatically, based on the fetched model information. Listing 6 shows a sample deployment descriptor properties file, which describes the display name, the EAR file location, and the modules that map to the application to be deployed. Using a properties file makes the application deployment parameters more flexible and easier to automate.

Listing 6. Sample properties file describes application deployment
application.0.name=sample_rest_ear
application.0.earfile=content/config/apps/sample_rest_ear.ear
application.0.module.0.name=sample_test_rest
application.0.module.0.moduleFile=sample_test_rest.war
application.0.module.0.deployment=WEB-INF/web.xml
application.0.module.0.cluster.0.name=~{PORTAL_CLUSTER_NAME}
application.0.module.1.name=sample_test_ejb
application.0.module.1.moduleFile=sample_test_ejb.jar
application.0.module.1.deployment=META-INF/ejb-jar.xml
application.0.module.1.cluster.0.name=~{PORTAL_CLUSTER_NAME}
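
Neither the extraction step nor the generator is shown in the article. The following Python sketch is one assumed build-time implementation: it reads an application.xml like Listing 5 and prints properties in the shape of Listing 6. The jee namespace prefix, the cluster placeholder default, and the EAR path argument are illustrative assumptions.

# gen_deploy_props.py - an assumed build-time sketch, not the article's code: it extracts
# module information from an application.xml like Listing 5 and prints deployment
# properties in the shape of Listing 6.
import xml.etree.ElementTree as ET

NS = {"jee": "http://java.sun.com/xml/ns/javaee"}

def generate(app_xml, ear_path, cluster_var="~{PORTAL_CLUSTER_NAME}"):
    root = ET.parse(app_xml).getroot()
    lines = ["application.0.name=%s" % root.findtext("jee:display-name", namespaces=NS),
             "application.0.earfile=%s" % ear_path]

    for i, module in enumerate(root.findall("jee:module", NS)):
        web = module.find("jee:web", NS)
        if web is not None:
            module_file = web.findtext("jee:web-uri", namespaces=NS)
            deployment = "WEB-INF/web.xml"
        else:
            module_file = module.findtext("jee:ejb", namespaces=NS)
            deployment = "META-INF/ejb-jar.xml"
        name = module_file.rsplit(".", 1)[0]
        lines.append("application.0.module.%d.name=%s" % (i, name))
        lines.append("application.0.module.%d.moduleFile=%s" % (i, module_file))
        lines.append("application.0.module.%d.deployment=%s" % (i, deployment))
        lines.append("application.0.module.%d.cluster.0.name=%s" % (i, cluster_var))
    return "\n".join(lines)

if __name__ == "__main__":
    print(generate("application.xml", "content/config/apps/sample_rest_ear.ear"))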

The back-end code parses the properties and invokes an application server API to deploy the application. The deploymentInstruction_attrs parameter, shown in Listing 7, is generated from the properties in Listing 6.

Listing 7. Sample value of deploymentInstruction_attrs
[ -operation update -contents 
sample_rest_ear.ear 
-installed.ear.destination 
$(APP_INSTALL_ROOT)/cell1 -distributeApp  
-MapModulesToServers [[sample_test_rest sample_test_rest.war,WEB-INF/web.xml 
WebSphere:cell=cell1,cluster=ClusterA+WebSphere:cell=cell1, node=ihsnode1,server=ihsserver1 ] 
[sample_test_ejb sample_test_ejb.jar META-INF/ejb-jar.xml 
WebSphere:cell=cell1,cluster=ClusterA]]]
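
The parsing code is not shown in the article. A simplified Python sketch of how the back end might turn the Listing 6 properties into the -MapModulesToServers portion of Listing 7 follows; it omits options such as -installed.ear.destination and the web server target, and the cell and cluster names are placeholders.

# build_attrs.py - a simplified, assumed sketch of turning the Listing 6 properties into
# the -MapModulesToServers instruction shown in Listing 7. Cell and cluster names are
# placeholders; real code would also resolve the web server target and other options.
def load_props(path):
    props = {}
    for line in open(path):
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            key, value = line.split("=", 1)
            props[key] = value
    return props

def build_attrs(props, cell="cell1", cluster="ClusterA"):
    target = "WebSphere:cell=%s,cluster=%s" % (cell, cluster)
    entries, i = [], 0
    while ("application.0.module.%d.name" % i) in props:
        entries.append("[%s %s,%s %s ]" % (
            props["application.0.module.%d.name" % i],
            props["application.0.module.%d.moduleFile" % i],
            props["application.0.module.%d.deployment" % i],
            target))
        i += 1
    return "[ -operation update -contents %s -distributeApp -MapModulesToServers [%s]]" % (
        props["application.0.earfile"], " ".join(entries))

if __name__ == "__main__":
    print(build_attrs(load_props("deploy.properties")))   # properties file name is an assumption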

Listing 8 shows how to invoke the application server API to perform a repeatable installation.

Listing 8. Sample script to invoke an application server API to perform a repeatable installation
if (exist == "true"):
    # Existing application found. Update it in place.
    AdminApp.update(appNametrim, 'app', deploymentInstruction_attrs)
else:
    # Existing application not found. Create it if requested.
    AdminApp.install(earFile, deploymentInstruction_attrs)
#endIf

AdminApp provides the functions update(), install(), and uninstall() to configure the EAR deployment. Use the update() function rather than uninstalling and reinstalling so that the application can be deployed repeatedly.

Enterprise application resources

The configuration to deploy application resources is similar to the configuration for deploying applications. Use update operations instead of delete and re-create operations to avoid losing any custom settings in the configuration. Because the deployment of application resources includes relationships between resources, use a method similar to Java garbage collection to maintain those relationships.
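
The article does not show this bookkeeping. One possible approach, sketched below in Python, is a mark-and-sweep pass: every resource that the current deployment descriptor still declares is updated (or created), and resources that are no longer referenced by any descriptor entry are removed. The function and parameter names are illustrative, not part of the installer.

# resource_gc.py - an illustrative mark-and-sweep sketch (not the article's installer code)
# for keeping deployed resources in step with the current deployment descriptor.

def reconcile(declared, existing, create, update, remove):
    """declared/existing map resource names to their settings;
    create/update/remove are callables supplied by the installer."""
    # Mark: everything still named in the current descriptor is kept.
    for name, settings in declared.items():
        if name in existing:
            update(name, settings)      # update in place so custom settings survive
        else:
            create(name, settings)
    # Sweep: resources no longer referenced by any descriptor entry are removed.
    for name in existing:
        if name not in declared:
            remove(name)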

Listing 9 shows the sample deployment descriptor properties for the application server scheduler schedulerA, mentioned in the Resource updates problem. You can choose whether to create the related database tables. If you choose to create them, the application server generates the tables with the schema defined by the data source (in this case, jdbc/ds) and the prefix (in this case, testA_).

Listing 9. Sample properties file describes scheduler deployment
scheduler.0.name=schedulerA
scheduler.0.jndiname=test/schedulerA
scheduler.0.description=Scheduler for Task A
scheduler.0.datasourceJNDIName=jdbc/ds
scheduler.0.datasourceAlias=dbuser
scheduler.0.workManagerInfoJNDIName=wm/default
scheduler.0.tablePrefix=testA_
scheduler.0.createTables=true
scheduler.0.target.cluster=~{PORTAL_CLUSTER_NAME}

The related tables, created under DBUSR with the TESTA_ prefix, are shown in Figure 2.

Figure 2. List of tables created
Tables created with TESTA prefix

Any time the scheduler's table prefix or data source reference changes, you need to drop the tables that the scheduler references before you create new ones. To do this, use a deployment script similar to the one in Listing 10.

Listing 10. Sample script to invoke application server API to perform a repeatable scheduler installation
if (existScheduler):
    if (createTables):
        # Drop the existing old tables if new tables are required
        cellNameStr = 'cell=' + getCellName()
        nodeNameStr = 'node=' + getNodeName()
        Scheduler_Config_Helper_str = AdminControl.completeObjectName(
            'WebSphere:name=Scheduler_Config_Helper,process=dmgr,platform=dynamicproxy,' + nodeNameStr +
            ',type=WASSchedulerCfgHelper,mbeanIdentifier=Scheduler_Config_Helper,' + cellNameStr + ',*')
        AdminControl.invoke(Scheduler_Config_Helper_str, 'dropTables', scheduler, '[java.lang.String]')
    #endIf
    AdminConfig.modify(scheduler, deploymentInstruction_attrs)
else:
    attrs.append(["name", name])
    scheduler = AdminConfig.create("SchedulerConfiguration", schedulerProvider, deploymentInstruction_attrs)
#endIf
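
Listing 10 assumes that the existScheduler, createTables, and scheduler variables are already set. One possible way to detect an existing scheduler by its JNDI name in wsadmin Jython is sketched below; the lookup approach is an assumption rather than the article's code, and the JNDI name comes from Listing 9.

# Assumed wsadmin (Jython) sketch: find an existing scheduler configuration by JNDI name
# so that the script in Listing 10 can decide between modify and create.
existScheduler = None
for configId in AdminConfig.list('SchedulerConfiguration').splitlines():
    if AdminConfig.showAttribute(configId, 'jndiName') == 'test/schedulerA':
        existScheduler = configId
        break
scheduler = existScheduler
# createTables mirrors the scheduler.0.createTables property in Listing 9
createTables = 1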

Database

Database configuration is the most complicated aspect of continuous delivery to handle. Frequent configuration changes to the database and frequent code updates make the installation process error-prone. Consider the following examples of errors that occur when database changes are propagated to the test or production environment.

Error 1: Create table error

In traditional installers, each SQL instruction is run only once. However in continuous delivery situations, a SQL instruction might be run repeatedly to accommodate a monthly, weekly, or daily delivery. In Listing 11, the table creation fails because the script doesn't accommodate repeated runs. You can ignore this type of error.

Listing 11. Sample table creation error
CREATE TABLE SAMPLE.TEST_TAB(
  COLA INTEGER NOT NULL,
  COLB VARCHAR(200) NOT NULL UNIQUE ) ;
SQL failed with: The name of the object to be created is identical to the existing name 
"SAMPLE.TEST_TAB" of type "TABLE".. SQLCODE=-601, SQLSTATE=42710, DRIVER=3.64.106

Error 2: Alter table error

When you alter tables, you encounter a situation similar to Error 1. If the drop code is run more than once, it fails, as shown in Listing 12. Unlike Error 1, however, you cannot ignore this error, because the remaining code that adds COL_C cannot run.

Listing 12. Sample alter table error
ALTER TABLE SAMPLE.TEST_TAB
  DROP COLUMN COL_B
  ADD COLUMN COL_C INTEGER;
-- The second run fails on the DROP clause because COL_B no longer exists
-- (SQLSTATE=42703), so the ADD COLUMN COL_C clause is never processed.

Error 3: Data error

Assume that COL_A has been set to the primary key in TEST_TAB. The previous SQL inserted data into the database using the SQL instruction shown in Listing 13.

Listing 13. SQL executed in previous delivery
INSERT INTO SAMPLE.TEST_TAB (COL_A, COL_B) VALUES ('max_number_to_display', '20');

When the business logic changes, the requirements change and the SQL instructions are updated, as shown in Listing 14.

Listing 14. Updated SQL in current delivery
INSERT INTO SAMPLE.TEST_TAB (COL_A, COL_B) VALUES ('max_number_to_display', '40');
DB2 SQL error: SQLCODE: -803, SQLSTATE: 23505, SQLERRMC: 1;SAMPLE.TEST_TAB, DRIVER=4.12.55

This change works well for a traditional installer, but in continuous delivery it does not take effect: because the row was already inserted by a previous delivery, the new INSERT fails with the duplicate-key error shown in Listing 14 and the value remains unchanged.

These sample errors show why installers for continuous delivery must make the database configuration process repeatable. Apply the following patterns to implement installation repeatability for database configuration.

Pattern A: Make SQL scripts safe to run more than once

To prevent errors similar to Error 2 and to ensure that SQL scripts remain valid when they are run more than once, do not allow existing SQL statements to be directly modified and checked in.

Avoid directly updating SQL scripts during code development and code review. Instead, deliver changes to an existing configuration (whether the change is to an existing table structure or to existing data) by first splitting the changes into the smallest units and then appending those pieces to the end of the existing scripts. To avoid errors similar to Error 2, use the SQL script in Listing 15. This script ensures that COL_C is added successfully even when the script is run more than once.

Listing 15. Optimized alter table script
ALTER TABLE SAMPLE.TEST_TAB
  DROP COLUMN COL_B;
ALTER TABLE SAMPLE.TEST_TAB
  ADD COLUMN COL_C INTEGER;
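
Splitting changes into their smallest units pays off only if the installer also runs each statement independently, so that a failing DROP does not prevent the ADD that follows it from running. The following Python sketch illustrates the idea; it assumes a DB-API style connection (for example, one obtained through the ibm_db_dbi wrapper), and its statement splitting is deliberately simplistic.

# run_sql.py - a sketch (an assumption, not the article's installer) that executes each
# statement of a SQL script independently so that one failure cannot block the rest.
def run_script(conn, path, error_log):
    cursor = conn.cursor()
    # Naive split on ';' is enough for simple DDL; real scripts may need a proper parser.
    statements = [s.strip() for s in open(path).read().split(";") if s.strip()]
    for stmt in statements:
        try:
            cursor.execute(stmt)
            conn.commit()
        except Exception as err:
            # Record the error and keep going; Pattern B decides which errors matter.
            error_log.append((stmt, str(err)))
    cursor.close()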

Pattern B: Apply selective installation validation

When you validate a traditional installation, the installation validator checks the database installation log and records all of the SQL errors that it detects. However, in continuous delivery, SQL scripts can be run many times, and it is nearly impossible to avoid every SQL error. Filter out the unimportant errors and the ones that can be ignored, such as errors that:

  • Create duplicate tables, columns, schemas, keys, and so on
  • Drop non-existing objects
  • Insert duplicate data

Selective validation ensures that important errors receive the most attention and can be diagnosed and fixed.
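
The following Python sketch illustrates such a filter. It scans the recorded SQL errors and reports only those whose SQLSTATE is not in a known ignorable class; the SQLSTATE values and the log file name are assumptions that a real installer would tune to its own database and log format.

# validate_db_log.py - an illustrative sketch of selective installation validation.
import re

IGNORABLE_SQLSTATES = {
    "42710",  # object already exists (duplicate table, schema, key, ...)
    "42704",  # object does not exist (drop of a non-existing object)
    "42703",  # column does not exist (drop of an already dropped column)
    "23505",  # duplicate key (data already inserted)
}

SQLSTATE_PATTERN = re.compile(r"SQLSTATE[=:]\s*(\w{5})")

def important_errors(log_lines):
    """Return only the SQL errors that are not in an ignorable class."""
    errors = []
    for line in log_lines:
        match = SQLSTATE_PATTERN.search(line)
        if match and match.group(1) not in IGNORABLE_SQLSTATES:
            errors.append(line.rstrip())
    return errors

if __name__ == "__main__":
    with open("db_install.log") as log:     # log file name is an assumption
        for error in important_errors(log):
            print(error)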

Pattern C: Simplify and accelerate the running of SQL scripts

As database-related code is continuously added, the installer becomes larger and larger. To improve deployment success and performance, apply the following process.

  1. Archive scripts at regular intervals. For example, assume that code is delivered to production every month. The scripts are archived monthly, as shown in Figure 3.
Figure 3. File structure of archived scripts
File structure of archived scripts

In addition to the folders, a content-spec_<time stamp> file identifies the commands that invoke the scripts. As shown in Listing 16, the content-spec_201311.xml file invokes the scripts in the November 2013 archive folder.

Listing 16. Content of content-spec_201311.xml
<SYS command="db2 connect to DB" />
<SQL file="content/config/script/201311/delta_update.ddl" />
<SQL file="content/config/script/201311/delta_sample_data_update.ddl" />
<SYS command="db2 commit work;db2 CONNECT RESET;db2 TERMINATE;" />
  2. Map the time stamp to the archived folder. After you archive the scripts, set up an automatic method to match the time stamp with the archived artifacts, such as the mapping file shown in Listing 17. In the listing, the timestamp parameter indicates the specific date when the scripts were archived. The code changes made up to that date are included in the archived folder.
Listing 17. Sample mapping file
<buildHistory projectName="Sample_Project">
  <build timestamp="20131130" folder="201311" />
  <build timestamp="20131230" folder="201312" />
  <build timestamp="20140130" folder="201401" />
  <build timestamp="20140228" folder="201402" />
</buildHistory>
  3. Generate the script queue automatically. After mapping time stamps to archived folders, the installer consolidates the target scripts and generates the script execution queue automatically. First, it detects the current build level and compares it with the timestamp parameters defined in Listing 17. For each entry that has a time stamp later than the current build level, the scripts in the mapped folder are run, as shown in Listing 18.
Listing 18. Consolidate script to manage archived dbscripts

scriptLocation=$(cd -P -- "$(dirname -- "$0")" && pwd -P)
# Build level taken from the user's current environment
startVersion=$1
if [[ $startVersion == "" ]]; then
  startVersion="00000000"
fi
# Mapping file that contains the relationship between time stamps and archived folders
fname="$scriptLocation/buildHistory.xml"
index=0
exec <$fname
while read line
do
  if grep -q timestamp <<<$line; then
    timestamp=`echo $line | awk '{print $2}' | sed 's/timestamp=//' | sed 's/.\(.*\)/\1/' | sed 's/\(.*\)./\1/'`
    if (( "$timestamp" > "$startVersion" )); then
      folder=`echo $line | awk '{print $3}' | sed 's/folder=//' | sed 's/.\(.*\)/\1/' | sed 's/\(.*\)./\1/'`
      FOLDERARRAY[$index]="$folder"
      index=$(($index+1))
    fi
  fi
done

For example, if the build level in the current user's environment is 20140205, only the scripts archived in folder 201402 are invoked, because that folder maps to time stamp 20140228. The installer assumes that the scripts in the other folders were already run when earlier builds were deployed. The consolidated content-spec file looks similar to Listing 19, which collects the delta scripts from the archived folders.

Listing 19. Delta scripts generated by consolidate script
<SYS command="db2 connect to DB" />
<SQL file="content/config/script/201311/delta_update.ddl" />
<SQL file="content/config/script/201311/delta_sample_data_update.ddl" />
<SQL file="content/config/script/201312/delta_update.ddl" />
<SQL file="content/config/script/201401/delta_update.ddl" />
<SQL file="content/config/script/201401/delta_sample_data_update.ddl" />
<SQL file="content/config/script/201402/delta_update.ddl" />
<SYS command="db2 commit work;db2 CONNECT RESET;db2 TERMINATE;" />

Tips to improve continuous delivery deployments

Apply the following tips when you deploy in continuous delivery environments:

  • Treat customer data with appropriate care.
  • Do not damage existing data.
  • Always use update commands instead of drop and recreate to avoid loss of customer data and settings.
  • Use a backup and restore mechanism in cases of complex troubleshooting.

Conclusion

This article describes how to improve the solution installer for continuous delivery environments to reduce deployment errors and make the deployment and integration tasks run more smoothly.

