Replacing an existing Tivoli SA MP-managed Db2 instance with a Pacemaker-managed HADR Db2 instance

If you are running Db2 on a Linux cluster that is managed by IBM Tivoli System Automation for Multiplatforms (SA MP), you can use the db2cm utility to replace cluster management with Pacemaker, and enable the HADR high availability option.

About this task

Important: In Db2® 11.5.8 and later, Mutual Failover high availability is supported when using Pacemaker as the integrated cluster manager. In Db2 11.5.6 and later, the Pacemaker cluster manager for automated failover to HADR standby databases is packaged and installed with Db2. In Db2 11.5.5, Pacemaker is included and available for production environments. In Db2 11.5.4, Pacemaker is included as a technology preview only, for development, test, and proof-of-concept environments.
Take note of the following information before you start your conversion:
  • Db2 clients can be disconnected during this operation if the original cluster is configured to use a virtual IP address (VIP).
  • The Db2 HADR hosts remain online throughout the operation, but automated failover to the HADR standby is disabled during the conversion. No takeover is required.
  • Ensure that you run all db2cm commands as the root user.
The following placeholders are used in the command statements throughout this procedure:
  • <exportedFile> is the name of the file to which the instance owner backs up their existing SA MP cluster configuration.
  • <hostname1> and <hostname2> are the host names for the primary and standby network interfaces in the cluster.
  • <network_interface_name> is the name of the public network interface device on each host in the cluster.
  • <IPv4_address> is the virtual IP address that clients use to connect to the primary HADR database.
  • <database_name> is the name of the Db2 database.
  • <instance_name> is the name of the Db2 instance on the cluster.
  • <domain_name> is the domain name of the cluster.

Procedure

  1. As the instance owner, back up the existing Tivoli SA MP cluster configuration by exporting it to an XML file:
    db2haicu -o <exportedFile>.xml
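    For example, if the instance owner's home directory is /home/db2inst1 (an illustrative path and file name):
    db2haicu -o /home/db2inst1/tsamp_config.xml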
  2. As the instance owner, delete the Tivoli SA MP resource model on both the primary and standby hosts:
    db2haicu -delete
  3. As the instance owner, validate that each HADR database pair is in PEER state:
    db2pd -db <database_name> -hadr | grep HADR_STATE
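    If the pair is in PEER state, the command returns a line similar to the following (the database name SAMPLE and the exact spacing of the db2pd output are illustrative):
    db2pd -db SAMPLE -hadr | grep HADR_STATE
                               HADR_STATE = PEER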
  4. Install Pacemaker and all its dependent software packages as documented. For more information, see Installing the Pacemaker cluster software stack and Configuring high availability with the Db2 cluster manager utility (db2cm).
  5. Ensure that the db2hadr, db2inst, and db2ethmon agent scripts have been copied to /usr/lib/ocf/resource.d/heartbeat on all hosts in the cluster.
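    To confirm that the scripts are in place, you can list them on each host (a simple check using the path from this step):
    ls -l /usr/lib/ocf/resource.d/heartbeat/{db2hadr,db2inst,db2ethmon}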
  6. As the root user, run the db2cm command to create the cluster. For example:
    INSTANCE_HOME/sqllib/bin/db2cm -create -cluster -domain <domain_name>    
        -host <hostname1> -publicEthernet <network_interface_name> 
        -host <hostname2> -publicEthernet <network_interface_name>
  7. As the root user, run the db2cm command to create the instance resources for the two HADR hosts. For example:
    INSTANCE_HOME/sqllib/bin/db2cm -create -instance <instance_name> -host <hostname1>
    INSTANCE_HOME/sqllib/bin/db2cm -create -instance <instance_name> -host <hostname2>
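    For instance, using the instance and host names that appear in the sample output in the Examples section (instance gerry on hosts talkers1 and draping1):
    INSTANCE_HOME/sqllib/bin/db2cm -create -instance gerry -host talkers1
    INSTANCE_HOME/sqllib/bin/db2cm -create -instance gerry -host draping1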
  8. As the root user, run the db2cm command for each HADR database pair to create the HADR database resources. For example:
    INSTANCE_HOME/sqllib/bin/db2cm -create -db <database_name> -instance <instance_name>
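    For instance, using the database and instance names from the sample output in the Examples section:
    INSTANCE_HOME/sqllib/bin/db2cm -create -db SAMPLE -instance gerry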
  9. As the root user, run the db2cm command to create the primary VIP resource for the specified database. For example:
    INSTANCE_HOME/sqllib/bin/db2cm -create -primaryVIP <IPv4_address> -db <database_name> -instance <instance_name>
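    For instance, using the names from the sample output in the Examples section and an illustrative IPv4 address (substitute an address that is valid on your public network):
    INSTANCE_HOME/sqllib/bin/db2cm -create -primaryVIP 172.31.1.100 -db SAMPLE -instance gerry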
  10. Validate that the resource model contains the instance resources, the public network monitoring resources, one resource for each HADR database, and the primary VIP resource. For sample output, see the second example in the Examples section.

Examples

The following example shows the db2cm command syntax for creating an HADR cluster (see step 6).
INSTANCE_HOME/sqllib/bin/db2cm -create -cluster -domain hadom 
                         -host ip-172-31-15-79 -publicEthernet eth1
                         -host ip-172-31-10-145 -publicEthernet eth1
The following example shows the output of the crm status command, used to validate that the resource model contains the public network monitoring resources, the instance resources, the HADR database clone set, and the primary VIP resource (see step 10):
crm status
Cluster Summary:
  * Stack: corosync
  * Current DC: draping1 (version 2.0.4-2.db2pcmk.el8-2deceaa3ae) - partition with quorum
  * Last updated: Fri Apr  9 08:06:39 2021
  * Last change:  Fri Apr  9 08:06:24 2021 by root via cibadmin on talkers1
  * 2 nodes configured
  * 7 resource instances configured

Node List:
  * Online: [ draping1 talkers1 ]

Full List of Resources:
  * db2_talkers1_eth1   (ocf::heartbeat:db2ethmon):      Started talkers1
  * db2_draping1_eth1   (ocf::heartbeat:db2ethmon):      Started draping1
  * db2_talkers1_gerry_0        (ocf::heartbeat:db2inst):        Started talkers1
  * db2_draping1_gerry_0        (ocf::heartbeat:db2inst):        Started draping1
  * Clone Set: db2_gerry_gerry_SAMPLE-clone [db2_gerry_gerry_SAMPLE] (promotable):
    * Masters: [ talkers1 ]
    * Slaves: [ draping1 ]
  * db2_gerry_gerry_SAMPLE-primary-VIP  (ocf::heartbeat:IPaddr2):        Started talkers1