Migrating history data before transaction data for all colonies

You can use the yfs.api.history.disable property to migrate history data for all colonies in a sharded deployment before migrating the transaction data.

About this task

This property lets you migrate the history data without completely shutting down the production environment. The history data for the colonies is migrated while the application continues to run against the transaction data.

If you are using the yfs.api.history.disable property to migrate history data for all colonies before migrating the transaction data, follow these steps:

Procedure

  1. Create an upgrade environment, Upgrade_V1.
  2. In sharded mode, install a new runtime of the version to which you are upgrading. For the purposes of this procedure, this new runtime is referred to as Production_V2.
    Note: The sharded installation process requires that you provide database information for the METADATA, STATISTICS, SYSTEM CONFIGURATION, and TRANSACTION/MASTER shards. Ensure that you specify database parameters that correspond to the METADATA shard from Upgrade_V1.
  3. Configure Production_V2's database parameters in the sandbox.cfg file to refer to Production_V1's METADATA shard by performing the following tasks:
    1. Create a backup of the sandbox.cfg file that is located in Production_V2's <INSTALL_DIR>/properties directory.
    2. In the <INSTALL_DIR>/properties/sandbox.cfg file for Production_V2, configure the following properties to match the corresponding properties in the <INSTALL_DIR>/properties/sandbox.cfg file for Production_V1:
      • DB_PASS
      • DB_USER
      • DB_SCHEMA_OWNER
      • DB_DATA
      • DB_PORT
      • YANTRA_DB_PASS
      • YANTRA_DB_USER

      On Oracle:

      • ORA_PASS
      • ORA_HOST
      • ORA_USER

      On Db2:

      • DB2_PASS
      • DB2_HOST
      • DB2_USER
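
      For illustration only, an Oracle-based sandbox.cfg excerpt after this change might look like the following example. Every value shown here is a placeholder; replace each one with the actual value copied from Production_V1's sandbox.cfg:

      DB_USER=szprod
      DB_PASS=encrypted_password
      DB_SCHEMA_OWNER=SZPROD
      DB_DATA=PRODDB
      DB_PORT=1521
      YANTRA_DB_USER=szprod
      YANTRA_DB_PASS=encrypted_password
      ORA_USER=szprod
      ORA_PASS=encrypted_password
      ORA_HOST=prod_v1_db_host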
    3. From Production_V2, run the <INSTALL_DIR>/bin/setupfiles.sh script (Linux® or UNIX) or the <INSTALL_DIR>\bin\setupfiles.cmd script (Windows).
  4. In the <INSTALL_DIR>/Migration/9.5 directory for Production_V2, run the following command, where <INSTALL_DIR_OLD> corresponds to Production_V1:
    
    ${ANT_HOME}/bin/ant -Druntime=<INSTALL_DIR> -Druntime.old=<INSTALL_DIR_OLD>
     -f buildmigration.xml -logfile <logfile> -Dtarget=initcolonypool migrate
    

    This command creates the <INSTALL_DIR>/Migration/9.5/database/scripts/multischema.xml file in Production_V2. The XML file contains a list of colonies on Production_V1.

    The *.done file that is created in the 9.5 status folder for the initcolonypool task is ant_initcolonypool.xml.done.
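    For example, on Linux or UNIX, with hypothetical installation paths /opt/sterling/Production_V2 and /opt/sterling/Production_V1 and a log file name chosen for illustration, the command might look like this:

    ${ANT_HOME}/bin/ant -Druntime=/opt/sterling/Production_V2
     -Druntime.old=/opt/sterling/Production_V1 -f buildmigration.xml
     -logfile initcolonypool.log -Dtarget=initcolonypool migrate
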

  5. Update the database parameters in Production_V2 to point to Upgrade_V1's metadata by performing the following tasks:
    1. In the Migration/9.5/database/scripts/multischema.xml file on Production_V2, perform the following edits:
      • Change the references for colony-specific shards, such as METADATA, CONFIGURATION, and STATISTICS, to point to the Upgrade_V1 shards.
      • Change the references for the DEFAULT colony to point to the Upgrade_V1 shards.
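
      The exact markup in multischema.xml varies by release, so the following excerpt is a hypothetical sketch only; the element and attribute names are illustrative, not the product's actual schema. The edit consists of replacing each Production_V1 connection value with its Upgrade_V1 counterpart:

      <!-- Hypothetical excerpt: repoint the METADATA shard to Upgrade_V1 -->
      <shard type="METADATA"
             DatabaseUrl="jdbc:oracle:thin:@upgrade_v1_db_host:1521:UPGDB"
             DatabaseSchema="UPG_SCHEMA" />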
    2. In the <INSTALL_DIR>/Migration/9.5 directory for Production_V2, run the following command, where <INSTALL_DIR_OLD> corresponds to Upgrade_V1:
      
      ${ANT_HOME}/bin/ant -Druntime=<INSTALL_DIR> -Druntime.old=<INSTALL_DIR_OLD>
       -f buildmigration.xml -logfile <logfile> -Dtarget=updatecolonypool migrate
      

      This command updates the colony parameters in Production_V2 to refer to the TRANSACTION and MASTER shards for the colonies you are upgrading.

      The *.done file created in the 9.5 status folder of Production_V2's Migration/9.5 directory for the updatecolonypool task is ant_updatecolonypool.xml.done.

  6. In Production_V1, use the customer_overrides.properties file to set the yfs.api.history.disable property to True.
    Note: When you set the yfs.api.history.disable property to True, the application stops writing data to the history tables across all colonies in the production environment.
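    For example, you can add an entry such as the following to the <INSTALL_DIR>/properties/customer_overrides.properties file in Production_V1. The exact key form depends on your override conventions; if your installation requires a property-file prefix, the key may need to be written as yfs.yfs.api.history.disable:

    # Stop all writes to the history tables across colonies
    yfs.api.history.disable=True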
  7. Run a complete upgrade in sharded mode for the history data, where <INSTALL_DIR> corresponds to Production_V2 and <INSTALL_DIR_OLD> corresponds to Upgrade_V1.
  8. Bring down Production_V1.
  9. Configure Production_V2's database parameters in the sandbox.cfg file to refer to Production_V1's METADATA shard by performing the following tasks:
    1. In the <INSTALL_DIR>/properties/sandbox.cfg file for Production_V2, configure the following properties to match the corresponding properties in the <INSTALL_DIR>/properties/sandbox.cfg file for Production_V1:
      • DB_PASS
      • DB_USER
      • DB_SCHEMA_OWNER
      • DB_DATA
      • DB_PORT
      • YANTRA_DB_PASS
      • YANTRA_DB_USER

      On Oracle:

      • ORA_PASS
      • ORA_HOST
      • ORA_USER

      On Db2:

      • DB2_PASS
      • DB2_HOST
      • DB2_USER
    2. From Production_V2, run the <INSTALL_DIR>/bin/setupfiles.sh script (Linux or UNIX) or the <INSTALL_DIR>\bin\setupfiles.cmd script (Windows).
  10. In Production_V2, rename the *.done file in the 9.5 status folder for the update-metadata-tables task from transaction_ant_colonyversionmigrator.xml.done to transaction_ant_colonyversionmigrator.xml.done.bak, and then run the following command:

    ${ANT_HOME}/bin/ant -Druntime=<INSTALL_DIR> -Druntime.old=<INSTALL_DIR_OLD>
     -f buildmigration.xml -logfile <logfile> -Dtarget=update-metadata-tables migrate

    The *.done file created in the 9.5 status folder for the update-metadata-tables task is transaction_ant_colonyversionmigrator.xml.done.
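
    For example, on Linux or UNIX, assuming that the 9.5 status folder is located at <INSTALL_DIR>/Migration/9.5/status (path assumed for illustration), the rename might look like this:

    cd <INSTALL_DIR>/Migration/9.5/status
    mv transaction_ant_colonyversionmigrator.xml.done \
       transaction_ant_colonyversionmigrator.xml.done.bak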

  11. In Production_V2, run a complete sharded upgrade on the transaction data. Production_V2 is now your production environment for the new version. Perform all post-migration activities on Production_V2.