Returning colonies to the production environment

After upgrading the colonies in the upgrade environment, return the colonies to the production environment.

About this task

To return the upgraded colonies to the production environment, follow these steps:

Procedure

  1. In sharded mode, install a new run time in the version to which you are upgrading. In this procedure, this new run time is referred to as Production_V2.

    For example, if you are running a sharded deployment on Release 9.5 (Production_V1) and upgrading some colonies to Release 10.0, perform a complete installation of Release 10.0 in sharded mode (Production_V2). When you perform a complete installation in sharded mode, a DEFAULT colony is created with new METADATA, SYSTEM CONFIGURATION, STATISTICS, and TRANSACTION/MASTER shards.

    Note: The sharded installation process requires that you provide database information for the METADATA, STATISTICS, SYSTEM CONFIGURATION, and TRANSACTION/MASTER shards. Ensure that you specify database parameters that correspond to the METADATA shard from Production_V1. However, specify new shards for SYSTEM CONFIGURATION, STATISTICS, and TRANSACTION/MASTER.
    Note: As part of creating Production_V2, perform the following tasks:
    • Copy all extensions from Upgrade_V2 to Production_V2.
    • If you installed any PCAs on Upgrade_V2, install the same PCAs on Production_V2.
    • Ensure that the database tables in Production_V2 are identical to the database tables in Upgrade_V2 by rebuilding the resources.jar and entities.jar files on Production_V2 (a sketch of these tasks follows these notes).
    Note: You can use the Production_V2 run time each time you upgrade the colonies in your sharded environment. For example, if colonies 1 and 2 were upgraded six months ago, and you are now upgrading colonies 3 and 4, you can use the Production_V2 run time from six months ago. If any fix packs were added to Upgrade_V2 during those six months, you must add the same fix packs to Production_V2 before Production_V2 can be used again.
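
    The following sketch illustrates the tasks in the notes above. The extensions directory location, the example paths, and the deployer.sh targets are assumptions about a typical installation rather than values taken from this procedure; verify them against the build documentation for your release.

      # Sketch only: all paths and deployer targets below are assumptions; adjust for your installation.
      UPGRADE_V2=/opt/upgrade_v2          # INSTALL_DIR of Upgrade_V2 (example path)
      PRODUCTION_V2=/opt/production_v2    # INSTALL_DIR of Production_V2 (example path)

      # Copy all extensions from Upgrade_V2 to Production_V2.
      cp -R "${UPGRADE_V2}/extensions/." "${PRODUCTION_V2}/extensions/"

      # Rebuild resources.jar and entities.jar on Production_V2 so that its database
      # tables match Upgrade_V2 (assumed deployer targets; confirm for your release).
      cd "${PRODUCTION_V2}/bin"
      ./deployer.sh -t resourcejar
      ./deployer.sh -t entitydeployer
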
  2. Move the upgraded colonies from Upgrade_V2 to Production_V2 by performing the following tasks:
    1. Copy the multischema.xml file from Upgrade_V2's Migration/9.5/database/scripts directory to Production_V2's Migration/9.5/database/scripts directory.
    2. In the Migration/9.5/database/scripts/multischema.xml file on Production_V2, make the following edits:
      • Set each colony's status to "".
      • Set each colony's new version to the upgrade version.
      • Change references for the METADATA shard to point to the Production_V1 shard.
      • Change references for all other colony-specific shards, such as SYSTEM CONFIGURATION and STATISTICS, to point to the Production_V2 shards.
    3. In the INSTALL_DIR/Migration/9.5 directory for Production_V2, run the following command, where INSTALL_DIR_OLD corresponds to Upgrade_V2:
      
      ${ANT_HOME}/bin/ant -Druntime=INSTALL_DIR -Druntime.old=INSTALL_DIR_OLD
       -f buildmigration.xml -logfile logfile -Dtarget=updatecolonypool migrate
      

      This command updates the colony parameters in Production_V2 to refer to the TRANSACTION and MASTER shards for each colony you are moving.

      For the updatecolonypool task, the ant_updatecolonypool.xml.done file is created in the 9.5 status folder of Production_V2's Migration/9.5 directory.

      Note: If the status folder already contains the ant_updatecolonypool.xml.done file, you must delete the file before running the updatecolonypool target (the sketch that follows this step includes this cleanup).
    4. In the INSTALL_DIR/Migration/9.5 directory for Production_V2, run the following command, where INSTALL_DIR_OLD corresponds to Upgrade_V2:
      
      ${ANT_HOME}/bin/ant -Druntime=INSTALL_DIR -Druntime.old=INSTALL_DIR_OLD
      -f buildmigration.xml -logfile logfile 
      -Dtarget=delete-stale-colony-pool migrate
      

      This command deletes stale records from the PLT_DB_COLONY_POOL table and the PLT_DB_POOL table.

      For the delete-stale-colony-pool task, the ant_deletecolonypools.done file is created in the 9.5 status folder of Production_V2's Migration/9.5 directory.
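
    The sketch below consolidates sub-steps 3 and 4: it clears any stale *.done files and then runs both migrate targets from Production_V2's Migration/9.5 directory. The example paths, log file names, and the exact location of the status folder are assumptions; confirm them against your installation before running the commands.

      # Sketch only: INSTALL_DIR, INSTALL_DIR_OLD, and the status folder location are assumptions.
      INSTALL_DIR=/opt/production_v2      # Production_V2 run time (example path)
      INSTALL_DIR_OLD=/opt/upgrade_v2     # Upgrade_V2 run time (example path)

      cd "${INSTALL_DIR}/Migration/9.5"

      # Remove stale *.done files from a previous run so the targets are not skipped.
      rm -f status/ant_updatecolonypool.xml.done status/ant_deletecolonypools.done

      # Sub-step 3: update colony parameters to refer to each colony's TRANSACTION and MASTER shards.
      ${ANT_HOME}/bin/ant -Druntime="${INSTALL_DIR}" -Druntime.old="${INSTALL_DIR_OLD}" \
        -f buildmigration.xml -logfile updatecolonypool.log \
        -Dtarget=updatecolonypool migrate

      # Sub-step 4: delete stale records from the PLT_DB_COLONY_POOL and PLT_DB_POOL tables.
      ${ANT_HOME}/bin/ant -Druntime="${INSTALL_DIR}" -Druntime.old="${INSTALL_DIR_OLD}" \
        -f buildmigration.xml -logfile deletestalecolonypool.log \
        -Dtarget=delete-stale-colony-pool migrate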

  3. Use the CDT to move the configuration data for the colonies you are upgrading and the DEFAULT colony from Upgrade_V2 to Production_V2.
    Note:

    You can use the Production_V2 run time each time you upgrade colonies in your sharded environment. If this is not your first upgrade on Production_V2 and the CDT results in conflicts, you must resolve all the conflicts manually.

    If you have already upgraded one or more colonies, ensure that you pass the -AppendOnly Y flag when running the CDT to move data from Upgrade_V2 to Production_V2 (see the sketch after this note). If you do not pass this flag, all existing configuration data is deleted.
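
    A minimal sketch of a CDT invocation is shown below. Only the -AppendOnly Y flag comes from this procedure; the script name, its location, and the remaining arguments are placeholders, so take the exact syntax from the CDT documentation for your release.

      # Sketch only: everything except the -AppendOnly Y flag is a placeholder.
      cd ${INSTALL_DIR}/bin        # bin directory of the Production_V2 run time (assumed location)
      ./cdtshell.sh <source-and-target-arguments-from-the-CDT-documentation> -AppendOnly Y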

  4. Production_V2 is your production environment at the new version. Perform all customizations and postmigration activities on this new environment. When performing postmigration activities, Production_V2 is INSTALL_DIR and Production_V1 is INSTALL_DIR_OLD.
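
    For example, a postmigration target on this environment would be run with the same Ant pattern as the commands in step 2, with Production_V2 as the run time and Production_V1 as the old run time. The invocation below is a sketch under that assumption; the paths and the target name are placeholders, so use the targets and syntax listed in the postmigration documentation.

      # Sketch only: paths and the target name are placeholders.
      INSTALL_DIR=/opt/production_v2        # Production_V2 (example path)
      INSTALL_DIR_OLD=/opt/production_v1    # Production_V1 (example path)

      cd "${INSTALL_DIR}/Migration/9.5"
      ${ANT_HOME}/bin/ant -Druntime="${INSTALL_DIR}" -Druntime.old="${INSTALL_DIR_OLD}" \
        -f buildmigration.xml -logfile postmigration.log \
        -Dtarget=<postmigration-target> migrate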