Migration of WebSphere Deployment from IBM WebSphere Application Server Network Deployment V6 to V7 on AIX 6.1

This article provides insight into a migration exercise in a clustered production environment, covering the issues encountered and their resolutions on the way to successfully migrating a complex network deployment configuration from IBM WebSphere® Application Server Network Deployment (ND) V6 to V7 on the AIX® 6.1 platform. WebSphere and AIX system administrators will benefit most from this article, as these migration issues may occur in one form or another.


Vivek Arora (viarora@au1.ibm.com), Senior WebSphere Accelerated Value Leader, IBM Software Group Services

Vivek Arora is a Senior WebSphere Accelerated Value Leader with IBM Software Group Services in Canberra, Australia. In his current role he provides strategic and value-adding services to customer on architecture, integration and migration, problem management and technical support focused on Infrastructure, Message Oriented Connectivity and Business Process Management and Service Oriented Architecture (SOA).

01 April 2012


Overview of WebSphere Network Deployment migration

WebSphere migration can be performed with the migration wizard or with migration commands. Although the migration wizard provides a standard way of migrating profiles to a default location, the migration commands are useful for migrating profiles to a location outside of the installation tree.

Figure 1. Migration wizard

In clustered production environments, a system administrator would use the migration commands, WASPreUpgrade and WASPostUpgrade, as the preferred way of migrating applications and configurations in automated scripts. These tools copy existing configurations, including older defaults and settings such as ports and JVM parameters, from WebSphere Application Server Network Deployment V6 and merge them into the new WebSphere Application Server Network Deployment V7 configuration.

The WASPreUpgrade command creates a backup of the WebSphere Application Server V6 configuration information; the WASPostUpgrade command then takes the backup created by WASPreUpgrade and moves the previous configuration up to WebSphere Application Server V7. The WASPostUpgrade tool also creates a backup of the WebSphere Application Server V7 environment before making any changes, so that it can attempt to roll back those changes if errors occur.

Figure 2. Migration process with WASPreUpgrade and WASPostUpgrade commands

The migration process synchronizes data in the migrated node with the deployment manager during which the contents of the new profile's configuration are uploaded to the deployment manager, one file at a time.

In migration, by default the same port values are mapped over from the V6.0 deployment manager to the V7.0 deployment manager, including the SOAP connector. Any port conflicts that occur during the process can be rectified by running WASPostUpgrade with the -replacePorts parameter.
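A hedged sketch of such an invocation is shown below. The paths and profile names are illustrative placeholders rather than values from this exercise, and the script only assembles and prints the command so the syntax can be inspected without a WebSphere installation.

```shell
#!/bin/sh
# Sketch: assemble a WASPostUpgrade invocation that addresses port conflicts.
# Paths and profile names are illustrative placeholders, not values from this article.
WAS7_BIN=/usr/WebSphere/AppServer/v7.0/bin
BACKUP_DIR=/waslogs/was6_to_was7/migration

# The -replacePorts parameter controls whether old port values are replaced
# to avoid conflicts (see the WASPostUpgrade documentation for exact semantics).
CMD="$WAS7_BIN/WASPostUpgrade.sh $BACKUP_DIR -oldProfile oldProfileName -profileName newProfileName -replacePorts true"

# Printed rather than executed, since this is only a syntax sketch
echo "$CMD"
```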

Large WebSphere network deployment topologies

A WebSphere network deployment topology consists of a deployment manager with one or more nodes on which the application server(s) reside to run the applications. Large deployment topologies usually contain many hundreds of application servers across many nodes. An example of a large network deployment topology with a balanced mix of nodes and servers could be 30 nodes with 20 servers per node, or, scaling out further, 60 nodes with 20 servers per node, managed by one deployment manager. Another example of a fairly large deployment topology could be one in which a deployment manager manages 25 nodes, each with at least one application server. However, for the purpose of this exercise, a relatively small but complex topology in a clustered environment is covered in the following section.

Complex topology for this migration exercise

A complex topology was set up for this migration exercise in production, as follows:

  • One deployment manager in the clustered environment manages 8 AIX server nodes, with each node managing 2 WebSphere Application Server instances. In other words, the cluster comprises 1 deployment manager, 8 node agents, and 16 WebSphere Application Server instances. The deployment manager server and application servers are virtual servers.
  • There is also a managed WebSphere Application Server residing on the same AIX server as the deployment manager. The J2EE applications with Message Driven Beans (MDB) are deployed on this independent application server, referred to as WAS(MDB).
Figure 3. Complex WebSphere Deployment Topology for Migration Exercise

Operational management of a complex topology

A management request from the deployment manager (e.g. DMGR1) to a single application server flows through the deployment manager to the node agent on the same node where the server resides, and finally to the application server itself. The deployment manager communicates with a node agent only, and each node agent communicates with its respective application servers, such as WAS1 or WAS(MDB).


The software environment and security settings for this exercise:

  • WebSphere Application Server Network Deployment V6.0.2 (existing).
  • WebSphere Application Server Network Deployment V7.0.0.13 (new).
  • AIX 6.1
  • IBM CICS® 3.2
  • LDAP
  • IBM DB2®
  • WebSphere MQ V6
  • Global security is turned off in WebSphere Application Server V6.
  • Security on the WebSphere Application Server V7.0 deployment manager is disabled for the migration.

Migration plan

  • Create the profiles on WebSphere Application Server Network Deployment V7:
    • Deployment Manager profile – a deployment manager which administers the application servers federated in its cell.
    • Application Server profiles:
      • Application Servers in the clustered environment federated into the cell.
      • Application Server, independent of the clustered application servers and managed by the deployment manager.
  • Migrate the deployment manager from WebSphere Application Server Network Deployment V6 to V7.
  • Migrate the node(s) for the clustered application servers from WebSphere V6 to the new V7 and federate them with the new WebSphere Application Server Network Deployment V7 deployment manager.
  • Migrate the independent application server along with its MDB J2EE application(s).

Premigration considerations for network deployment

  • Enough file system space needs to be allocated to accommodate the WebSphere Application Server V7 binaries and profiles in addition to the WebSphere Application Server V6 binaries and profiles.
  • Permissions need to be set to allow read, write, and create on the WebSphere and Update Installer binaries file systems.
  • The WebSphere Application Server Network Deployment V7 product needs to be installed, and the profiles (deployment manager and application servers) created as per the topology.
  • The WebSphere V7 deployment manager needs to be up and running, as the federated node migration requires an active connection to the deployment manager.
  • Any aborted migration attempts on the WebSphere V7 application server profile need to be cleaned up, either by restoring the profile backup or by recreating the profile, prior to the next migration attempt. Refer to the section How to clean up a failed migration.
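Parts of this checklist can be scripted. The sketch below is a minimal example under the assumption that WAS7_ROOT points at the V7 installation file system (defaulted to /tmp here so the logic can be exercised anywhere) and that the deployment manager runs as a process matching 'dmgr'.

```shell
#!/bin/sh
# Sketch of pre-migration sanity checks; WAS7_ROOT and the 'dmgr' process name
# pattern are assumptions, defaulted so the script runs outside a real topology.
WAS7_ROOT=${WAS7_ROOT:-/tmp}
MIN_FREE_KB=1048576   # require roughly 1 GB free for profiles and the backup dir

free_kb=$(df -kP "$WAS7_ROOT" | awk 'NR==2 {print $4}')
if [ "$free_kb" -ge "$MIN_FREE_KB" ]; then
  echo "disk space OK: $free_kb KB free under $WAS7_ROOT"
else
  echo "WARNING: only $free_kb KB free under $WAS7_ROOT"
fi

# Federated node migration needs an active V7 deployment manager
if ps -ef | grep '[d]mgr' >/dev/null 2>&1; then
  echo "deployment manager process found"
else
  echo "WARNING: start the V7 deployment manager before migrating federated nodes"
fi
```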

Migration steps

The utilities and commands required for the migration are found in the WebSphere V7 bin directory, such as '/usr/Websphere/AppServer/v7.0_app1/bin'. When using the WASPreUpgrade and WASPostUpgrade commands, the -traceString parameter can be specified on the command line to enable tracing. The migration commands executed with parameters for this exercise were:

Listing 1. WASPreUpgrade command
/usr/WebSphere/AppServer/v7.0_MNQ/bin >>  ./WASPreUpgrade.sh
-oldProfile ABCD_MNQ_app01
-traceString "*=all=enabled"   
-traceFile /waslogs/was6_to_was7/trace/WASPreUpgrade_trace.log

The pre-upgrade step either completes successfully or fails with issues, as covered separately in the Issues and their resolutions section below.

Listing 2. manageprofiles command
/usr/WebSphere/AppServer/v7.0_MNQ/bin >> ./manageprofiles.sh 
-create -profileName ABCD_MNQ_temp 
-profilePath /var/opt/websphere/profiles/ABCD_MNQ_temp 
-cellName ABCD_MNQ_cell01 
-portsFile /tmp/ports_file.txt -hostName 026 -nodeName ABCD_MNQ_app01_026

The manageprofiles command creates the target profile into which WASPostUpgrade then merges the saved configuration.
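The file passed via -portsFile is a simple properties file mapping endpoint names to port numbers. The sketch below writes an illustrative file; the endpoint names follow the usual WebSphere portdef.props convention, but the port values are arbitrary examples, not the ones used in this exercise.

```shell
#!/bin/sh
# Sketch: generate an illustrative ports file for manageprofiles -portsFile.
# Endpoint names follow the portdef.props convention; port values are examples.
cat > /tmp/ports_file.txt <<'EOF'
WC_defaulthost=9080
WC_adminhost=9060
WC_defaulthost_secure=9443
WC_adminhost_secure=9043
BOOTSTRAP_ADDRESS=2809
SOAP_CONNECTOR_ADDRESS=8880
EOF
echo "wrote $(wc -l < /tmp/ports_file.txt) port assignments"
```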

Listing 3. WASPostUpgrade command
/usr/WebSphere/AppServer/v7.0_MNQ/bin >> ./WASPostUpgrade.sh 
/waslogs/was6_to_was7/migration/migration_ABCD_MNQ_app01_026 
-oldProfile ABCD_MNQ_app01 
-profileName ABCD_MNQ_temp 
-traceString "*=all=enabled" 
-traceFile /waslogs/was6_to_was7/trace/WASPostUpgrade_trace.log

The post-upgrade step either completes successfully or fails with issues, as covered separately in the Issues and their resolutions section below.

Issues and their resolutions

Issue: The WASPreUpgrade migration command failed when migrating an application server with MIGR0484E/MIGR0272E.

Listing 4. Migration failed with MIGR0484E/MIGR0272E
IBM WebSphere Application Server, Release 7.0
Product Upgrade PreUpgrade tool, Version 1.0
Copyright IBM Corp., 1997-2008
MIGR0300I: The migration function is starting to save the existing 
Application Server environment.
MIGR0302I: The existing files are being saved.
MIGR0484E: No profiles or instances found with name ABCD_MNQ_app01.
MIGR0001I: The class name of the WASPreUpgrade command is WASPreUpgrade
MIGR0272E: The migration function cannot complete the command.

The steps performed prior to the migration failure are:

  • The WebSphere Application Server V7 Deployment Manager was up and running, and WebSphere Application Server V6 node agent(s) and application servers were stopped.
  • The Deployment Manager had been successfully migrated from WebSphere Application Server V6 to V7.

Resolution: To troubleshoot this failure, it was first ensured that no wasprofile command for WebSphere Application Server V6 was running, with the help of the ps -ef | grep java command. It was also necessary to ensure that the profile ABCD_MNQ_app01 was referenced in the profile registry, which was verified in the profileRegistry.xml file located in '/usr/WebSphere/AppServer/v6.0_MNQ/properties'.

Listing 5. ProfileRegistry.xml file
<profile isDefault="true" name="ABCD_MNQ_app01" ...

It was also checked that no profileRegistry.xml_LOCK file was present.

Having ensured all of the above, it was noticed that the ABCD_MNQ_app01 profile was not referenced in the fsdb directory, which is why the migration had failed. The required script had to be copied into the WAS_HOME/properties/fsdb directory.

Listing 6. Script copied to fsdb directory

After performing this step, the migration command ran successfully.
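The checks from this resolution can be summarized in a small script. The fsdb layout used below (one <profileName>.sh entry per registered profile) and all paths are assumptions for illustration; stand-in files are created under /tmp so the logic can be exercised outside a real installation.

```shell
#!/bin/sh
# Sketch: check that a profile is visible in profileRegistry.xml and in fsdb.
# Paths and the fsdb entry layout are assumptions; stand-ins are created in /tmp.
PROFILE=ABCD_MNQ_app01
PROPS=${PROPS:-/tmp/was6_properties}   # stand-in for .../v6.0_MNQ/properties

mkdir -p "$PROPS/fsdb"
echo "<profile isDefault=\"true\" name=\"$PROFILE\"/>" > "$PROPS/profileRegistry.xml"
: > "$PROPS/fsdb/$PROFILE.sh"          # simulate a registered profile entry

grep -q "name=\"$PROFILE\"" "$PROPS/profileRegistry.xml" \
  && echo "registry entry present" || echo "MISSING registry entry"
[ -e "$PROPS/profileRegistry.xml_LOCK" ] \
  && echo "stale profileRegistry.xml_LOCK present: remove it" || echo "no lock file"
[ -f "$PROPS/fsdb/$PROFILE.sh" ] \
  && echo "fsdb entry present" \
  || echo "MISSING fsdb entry: WASPreUpgrade fails with MIGR0484E"
```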

Issue: The WASPostUpgrade migration command failed with MIGR0286E due to Illegal State Exception.

Listing 7. Migration failed with MIGR0286E due to java.lang.IllegalStateException
DSRA7602I: Attempting to delete newly created Derby database 

java.lang.IllegalStateException: java.lang.IllegalStateException:
Depth value 3 must be set
at com.ibm.ws.runtime.component.VariableMapImpl.reload(VariableMapImpl.java:238)
at com.ibm.ws.runtime.component.VariableMapImpl.refresh(VariableMapImpl.java:152)...
at com.ibm.ws.migration.postupgrade.WASPostUpgrade.restore(WASPostUpgrade.java:246)
at com.ibm.ws.migration.postupgrade.WASPostUpgrade.main(WASPostUpgrade.java:539)
Caused by: com.ibm.websphere.management.exception.RepositoryException: 
Error occurred during upload to: upload/cells/ABCD_MNQ_cell01/nodegroups/
Exception: java.io.IOException: Read error

Caused by: java.io.IOException: Read error
at java.io.FileInputStream.read(FileInputStream.java:191)
at com.ibm.ws.management.repository.TempFileInputStream.read
at com.ibm.websphere.management.repository.RepositoryInputStream.read
at com.ibm.ws.management.filetransfer.client.FileTransferClientImpl.uploadFile
.. 30 more

MIGR0286E: The migration failed to complete.

Resolution: The synchronization process in the migration had failed because the system user had run out of file handles in the AIX environment. The AIX nofiles limit (that is, ulimit -n) was increased from the default of 2000 to 10000 to resolve this issue.
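A quick way to check the limit before migrating is sketched below. The chuser example, shown only as a comment, assumes a WebSphere service user named wasadmin, which is an illustrative name.

```shell
#!/bin/sh
# Sketch: inspect the open-file descriptor limit that caused the read errors.
soft=$(ulimit -n)
echo "current nofiles soft limit: $soft"

# On AIX the limit can be raised persistently for the WebSphere user (as root):
#   chuser nofiles=10000 wasadmin     # 'wasadmin' is an assumed user name
# or raised for the current shell session only:
#   ulimit -n 10000
```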

Issue: When migrating the clustered application servers from WebSphere Application Server ND V6 to V7, the deployment manager appeared to look for additional application server configurations that it could not find, and the migration failure occurred.

Resolution: In the complex topology peculiar to this exercise, the migration of the independent application server located on the same AIX server as the deployment manager (DMGR1) needed to be undertaken after the migration of the clustered application servers. Performing the migration in this particular order resolved the issue.

Issue: Migration failed due to an apparent 'network connection reset' or 'network read error'. During the upload of one of the files in the migration process, the network connection was reset, which caused the file upload to fail. The synchronization process reported the failure to the migration tool, and the migration tool aborted the operation. At other times the synchronization process reported a network read error, which also caused the migration to abort, while processing a different file than on the first attempt. It seemed that the network connection between the node and the deployment manager was being disrupted as the migration took place.

Resolution: At first glance this appeared to be a network issue, as the deployment manager was cutting off the connection and the migrating node could only perceive that its connection had been aborted. A genuine network fault was discounted because the deployment manager server and the failing application server were both virtual servers in the hypervisor. In fact, the migration tool saturated the deployment manager's connections with incoming data, and the deployment manager reached the cap placed on the number of allowed open connections on that channel. The deployment manager's SystemOut log showed that TCP channel 'TCP_1' had exceeded the maximum number of open connections, which was configured as 100.

The following figures illustrate this setting:

Figure 4. Maximum open connections for TCP channel on WC_adminhost port
Figure 5. Max open connections for TCP channel on WC_adminhost_secure port

This issue was resolved by increasing the maximum number of open connections for the TCP channel from 100 to 20000 on the WC_adminhost and WC_adminhost_secure ports for the duration of the migration exercise.

Issue: During migration verification, the application server showed problems at startup: the 'ibmasyncrsp' system application failed to start.

Listing 8. The system application 'ibmasyncrsp' failed to start
00000021 ApplicationMg A   WSVR0200I: Starting application: ibmasyncrsp 
00000021 ApplicationMg A   WSVR0203I: 
Application: ibmasyncrsp  Application build level: 1 [2] 
00000020 ApplicationMg A   WSVR0200I: Starting application: MNQ_v3.30_HF11 
ApplicationMg A   WSVR0204I: Application: MNQ_v3.30_HF11  
Application build level: Unknown 
00000021 FfdcProvider  W com.ibm.ws.ffdc.impl.FfdcProvider logIncident FFDC1003I: 
FFDC Incident emitted 
00000021 DeployedAppli W   WSVR0206E: Module, ibmasyncrsp.war, of application, 
ibmasyncrsp.ear/deployments/ibmasyncrsp, failed to start 
00000021 ApplicationMg W   WSVR0101W: An error occurred starting, ibmasyncrsp 
00000021 ApplicationMg A   WSVR0217I: Stopping application: ibmasyncrsp 
00000021 FfdcProvider  W  com.ibm.ws.ffdc.impl.FfdcProvider logIncident FFDC1003I: 
FFDC Incident   emitted

Resolution: Before commencing the migration, the default host definition had been changed via the admin console from 'default_host' to 'MNQ_default_host' in this exercise, and the virtual host mapping had been updated for the application. After migration, however, the system application still referenced 'default_host' instead of 'MNQ_default_host', and the application server startup trace showed "open for e-business, problems occurred during startup".

It was determined that the system-installed application 'ibmasyncrsp.ear', which is used to receive asynchronous messages in the SIBus and web services runtime, had been mapped to the 'default_host' virtual host instead of the desired 'MNQ_default_host'.

Listing 9. Mapping of the system-installed application ibmasyncrsp.ear

 xmi:version="2.0" xmlns:xmi="http://www.omg.org/XMI" 
 xmi:id="WebAppBinding_1155152745161" virtualHostName="default_host"

As the WebSphere application ('ibmasyncrsp.ear') was not accessible from the admin console, the ibm-web-bnd.xml file was updated directly to point to the selected virtual host 'MNQ_default_host'.

With this change, the issue was resolved and the application server started without any problem.
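The edit described above can be sketched as below. A stand-in copy of ibm-web-bnd.xml is created under /tmp so the substitution can be tried safely; the real file lives under the deployed application's configuration directory, and the server should be restarted after the change.

```shell
#!/bin/sh
# Sketch: repoint a web module binding from default_host to MNQ_default_host.
# A stand-in file is used here; edit the real ibm-web-bnd.xml only after a backup.
BND=/tmp/ibm-web-bnd.xml
echo '<webappbnd xmi:id="WebAppBinding_1155152745161" virtualHostName="default_host"/>' > "$BND"

cp "$BND" "$BND.bak"                                   # always keep a backup
sed 's/virtualHostName="default_host"/virtualHostName="MNQ_default_host"/' "$BND.bak" > "$BND"
grep -o 'virtualHostName="[^"]*"' "$BND"
```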

Issue: Upon migration, the TLS (Transport Layer Security) cipher being used under WebSphere Application Server V6 could not be used under WebSphere Application Server V7 without enabling FIPS (the United States Federal Information Processing Standard algorithms), so the MDB application server's connection to WebSphere MQ V6 could not be re-established.

Resolution: It was necessary to identify a cipher that could be used by both WebSphere Application Server V7 and WebSphere MQ V6 without requiring FIPS to be enabled. The SSL (Secure Sockets Layer) CipherSpecs available in WebSphere MQ V6 use different names from the ciphers in WebSphere Application Server V7, so a comparison of compatible ciphers was performed. On the WebSphere Application Server V7 side, the change was made under "Quality of protection (QoP) settings" in the "SSL configurations". The WebSphere MQ V6 SSL configuration on the channels also needed to be modified to use the new cipher.
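As an illustration only, the MQSC change on the WebSphere MQ side could look like the sketch below. The channel name and the chosen CipherSpec (TLS_RSA_WITH_AES_128_CBC_SHA, whose JSSE counterpart on the WebSphere Application Server side is SSL_RSA_WITH_AES_128_CBC_SHA) are assumptions for this example; the script prints the MQSC command rather than piping it into runmqsc.

```shell
#!/bin/sh
# Sketch: MQSC to switch a server-connection channel to a mutually supported cipher.
# Channel and CipherSpec names are illustrative assumptions.
CHANNEL=ABCD.SVRCONN
CIPHERSPEC=TLS_RSA_WITH_AES_128_CBC_SHA
MQSC="ALTER CHANNEL('$CHANNEL') CHLTYPE(SVRCONN) SSLCIPH($CIPHERSPEC)"

# Normally this would be piped into: runmqsc <queue manager name>
echo "$MQSC"
```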

How to clean up a failed migration

Any aborted migration leaves the WebSphere Application Server profile in a partially migrated state, which is useful for data collection and troubleshooting. The following actions were taken to clean up the failed migration:

WebSphere Application Server V7

We had to delete the new profiles (for example, the deployment manager and application server profiles) that were created in the aborted migration, with the help of the 'manageprofiles' command.

Listing 10. command 'manageprofiles' to delete a profile
./manageprofiles.sh -delete -profileName NewProfileNameForVersion7

After the profile was deleted, any remaining directories for that profile needed to be removed manually.

Listing 11. Check profile sub directories to be deleted
/install_root/profiles/NewProfileNameForVersion7/>> ls -la

The existing profile directory also needs to be removed; only the logs subdirectory should be left.

Listing 12. Delete profile directories
/install_root /profiles/NewProfileNameForVersion7>>cd ..
/install_root /profiles/>> rm -r NewProfileNameForVersion7

WebSphere Application Server V6

We had to delete the backup directory (such as /waslogs/was6_to_was7/migration) that had been created by the WASPreUpgrade command. Alternatively, we could specify another backup directory (such as /waslogs/was6_to_was7/migration1) when running the migration command again.

Post-migration step and verification

  • Stop and start the WebSphere V7 deployment manager. Launch the admin console and click System administration > Nodes to verify that the new node has been federated into the WebSphere V7 cell.
  • Remove the old JVM settings from WebSphere Application Server V7 profiles. The JVM arguments carried across from the 1.4.2 JVM are not required by the 1.6 JVM and should be removed. For example:
    • -Xloratio0.2
    • -Xp32K,4K
    • -Xminf0.25
    • -Xpartialcompactgc
    • -Xk64000
  • Revert the TCP channel's maximum open connections on WC_adminhost and WC_adminhost_secure ports to 100.
  • Monitor any performance impact from the compressed references feature introduced in the 1.6 JVM; if necessary, the feature can be disabled by removing the "-Xcompressedrefs" JVM command line parameter.
  • Verify that security for the WebSphere Application Server V7 deployment manager is turned on and that service integration bus security is disabled for this topology.
  • Verify the SSL configuration on WebSphere Application Server V7.
  • Verify that the application servers start and the deployed applications function properly.
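A simple startup verification can be scripted by scanning each server's SystemOut.log for the messages mentioned earlier. A stand-in log is created below so the sketch runs anywhere; LOG should point at the real profile log path when the check is used in practice.

```shell
#!/bin/sh
# Sketch: confirm a server's SystemOut.log shows a clean start after migration.
# A stand-in log is written first; point LOG at the real profile logs in practice.
LOG=${LOG:-/tmp/SystemOut.log}
echo "WSVR0001I: Server server1 open for e-business" > "$LOG"

if grep -q "open for e-business" "$LOG" \
   && ! grep -q "problems occurred during startup" "$LOG"; then
  echo "server started cleanly"
else
  echo "check $LOG for startup problems"
fi
```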


Conclusion

This article covered some of the WebSphere migration issues noticed in a clustered environment and the steps taken to overcome them on the AIX platform. It may be of assistance to anyone on a similar migration path.


