DB2 10.5 for Linux, UNIX, and Windows

Performing rolling updates in an automated high availability disaster recovery (HADR) environment

Use this procedure to perform a rolling update in an automated HADR environment. Because the environment is automated, additional steps are required when you update the DB2® database software, upgrade operating system software, upgrade hardware, or change database configuration parameters.

Before you begin

You must have the following prerequisites in place before you perform the steps in the Procedure section:
  • Two DB2 instances (in this example, named stevera on each node).
  • Two DB2 servers (grom04 and grom03). The grom04 computer is initially hosting the HADR Primary database.
  • The instances are initially running Version 10.5 GA code.
  • The instances are configured with IBM® Tivoli® System Automation for Multiplatforms (SA MP) controlling HADR failover. The cluster domain is named test.
Note: All DB2 fix pack updates, hardware upgrades, and software upgrades should be implemented in a test environment before being applied to your production system.

The HADR pair should be in peer state before starting the rolling update.
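For example, assuming a hypothetical database named HADRDB (substitute your own database name), you can verify that the cluster domain is online and that the pair is in peer state:
  lsrpdomain                  (as root; the domain test should show an OpState of Online)
  db2pd -db HADRDB -hadr      (on either node; look for HADR_STATE = PEER and HADR_CONNECT_STATUS = CONNECTED)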

Restrictions

Use this procedure to perform a rolling update on your DB2 database system and update the DB2 database product software from one modification level to another in an automated HADR environment. For example, you can use it to apply a fix pack to your DB2 database product software.

A rolling update cannot be used to upgrade a DB2 database system from an earlier version to a later version. For example, you cannot use this procedure to upgrade from Version 9.7 to Version 10.1.

You cannot use this procedure to update the DB2 HADR configuration parameters. Updates to HADR configuration parameters should be made separately. Because HADR requires the parameters on the primary and standby to be the same, this might require both the primary and standby databases to be deactivated and updated at the same time.
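As a minimal sketch of such a separate change, assuming a hypothetical database named HADRDB and the HADR_TIMEOUT parameter (in an automated environment you might also need to disable automation first, for example with db2haicu -disable):
  db2 deactivate db HADRDB                                (on both the primary and the standby)
  db2 update db cfg for HADRDB using HADR_TIMEOUT 120     (on both nodes, with the same value)
  db2 activate db HADRDB                                  (standby first, then primary; both databases retain their roles)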

Procedure

  1. On the standby node, stop all DB2 processes:
    • db2 deactivate db <database-name> (this stops HADR on the standby but retains its role)
    • db2stop force
  2. As root, take the standby node offline in the cluster domain: stoprpnode <standby node>
  3. Apply the fix pack on the standby node.
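    For example, on a Linux system you might run the fix pack installer as root. This is a sketch only; the fix pack location and installation path shown are hypothetical, so substitute your own paths and consult the fix pack readme:
      cd /path/to/fixpack/universal
      ./installFixPack -b /opt/ibm/db2/V10.5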
  4. As root, bring the standby node back online in the cluster domain: startrpnode <standby node>
  5. On the standby node, start all DB2 processes:
    • db2start
    • db2 activate db <database-name> (this resumes HADR and retains the standby role)
    • Verify that the HADR pair has established PEER state: db2pd -db <database-name> -hadr
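    The db2pd output should look similar to the following trimmed excerpt (values vary by environment):
      HADR_ROLE = STANDBY
      HADR_STATE = PEER
      HADR_CONNECT_STATUS = CONNECTED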
  6. Perform a role switch:
    • On the standby node, issue: db2 takeover hadr on db <database-name>
    • Verify that the old standby is now the primary and that the HADR pair is still in PEER state: db2pd -db <database-name> -hadr (the output should resemble the excerpt in step 5, but with HADR_ROLE = PRIMARY on the new primary)
  7. On the old primary node, repeat steps 1-5 to apply the fix pack and bring the node back online.
  8. Perform a failback to return the HADR roles to the nodes that held them before the fix pack installation.
    • On the standby (old primary) node, issue: db2 takeover hadr on db <database-name>
    • Verify that the original primary node is the PRIMARY again and that the HADR pair is still in PEER state: db2pd -db <database-name> -hadr
  9. Migrate the TSA domain (if required)
    • A TSA domain migration is required only if the new DB2 fix pack includes a new TSA version, which is not always the case.
    • To determine whether a TSA domain migration is required, check whether the active version number (AVN) matches the installed version number (IVN) for the HA manager: lssrc -ls IBM.RecoveryRM | grep VN
    • To migrate the TSA domain (as root):
      export CT_MANAGEMENT_SCOPE=2                            # operate at RSCT peer domain scope
      runact -c IBM.PeerDomain CompleteMigration Options=0    # complete the peer domain migration
      samctrl -m                                              # migrate SA MP; type 'Y' to confirm migration
    • Verify that the AVN and IVN values for the HA manager now match: lssrc -ls IBM.RecoveryRM | grep VN
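      The grep output should show matching values, similar to the following example (the version numbers shown are illustrative only):
        Our IVN : 3.2.2.5
        Our AVN : 3.2.2.5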
  10. As root, ensure that MixedVersions is no longer set to Yes for the cluster domain: lsrpdomain
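    The lsrpdomain output should look similar to the following (the RSCT version and port numbers vary by environment):
      Name  OpState  RSCTActiveVersion  MixedVersions  TSPort  GSPort
      test  Online   3.2.2.5            No             12347   12348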