Restore from a saved Pacemaker cluster configuration

In situations where the cluster needs to be recreated, a saved Pacemaker configuration that is based on the current hardware can be restored.

Before you begin

Important: In Db2® 11.5.8 and later, Mutual Failover high availability is supported when using Pacemaker as the integrated cluster manager. In Db2 11.5.6 and later, the Pacemaker cluster manager for automated failover to HADR standby databases is packaged and installed with Db2. In Db2 11.5.5, Pacemaker is included and available for production environments. In Db2 11.5.4, Pacemaker is included as a technology preview only, for development, test, and proof-of-concept environments.

About this task

You can redeploy a Pacemaker cluster configuration from a saved configuration by using the -import option of the db2cm utility.

Note: A backup configuration cannot be imported onto a new set of hosts where any of the following details are different from the original cluster:
  • Host names
  • Domain name
  • Interface names
  • Instance names
  • Database names
  • Primary/Standby virtual IP addresses
  • Qdevice host
To import a configuration on a new set of hosts, follow the example in this technote.

Procedure

  1. As the root user, prepare the current cluster for the restoration:
    Warning: Any virtual IPs (VIPs) that were configured by the db2cm utility are removed when running the db2cm -delete -cluster command. This can result in database clients becoming disconnected temporarily.
    ./sqllib/bin/db2cm -delete -cluster
  2. As the root user, ensure that the cluster's resources and domain have been removed successfully:
    ./sqllib/bin/db2cm -list
  3. As the root user, on the host where the shared mount is mounted, import the saved cluster configuration:
    ./sqllib/bin/db2cm -import <path to backup file>
  4. As the root user, verify that both nodes and all resources are online:
    ./sqllib/bin/db2cm -list
    Both nodes and all resources should show as Online under Node information and Resource Information, respectively. If a Qdevice was configured when the configuration backup was taken, it is listed under Quorum Information.
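The four steps above can be sketched as a single script. This is a minimal sketch, not part of the db2cm utility: the `restore_pacemaker_cluster` function name and its two arguments are assumptions, and the guard between steps 1 and 3 relies on the "There is no cluster on this host." message shown in the Example section.

```shell
#!/bin/sh
# Hypothetical wrapper around steps 1-4 of the procedure.
# Arguments (assumed, for illustration):
#   $1 - path to db2cm, e.g. /home/<instance>/sqllib/bin/db2cm
#   $2 - path to the saved cluster configuration file
restore_pacemaker_cluster() {
  DB2CM="$1"
  BACKUP="$2"

  [ -f "$BACKUP" ] || { echo "backup file not found: $BACKUP" >&2; return 1; }

  # Step 1: remove the current cluster.
  # Note: this drops any VIPs configured by db2cm, so clients
  # may be temporarily disconnected.
  "$DB2CM" -delete -cluster || return 1

  # Step 2: confirm the resources and domain are gone before importing.
  "$DB2CM" -list | grep -q "There is no cluster on this host." || {
    echo "cluster still present; aborting import" >&2
    return 1
  }

  # Step 3: import the saved configuration.
  "$DB2CM" -import "$BACKUP" || return 1

  # Step 4: show the restored cluster for manual verification.
  "$DB2CM" -list
}
```

Run it as root on the host where the shared mount is mounted, then inspect the final `db2cm -list` output by hand as described in step 4.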

Example

The following example shows the command syntax and output from verifying that a cluster's resources and domain are removed:
[root@host1] # db2cm -list
      Cluster Status
There is no cluster on this host.
The following example shows the command syntax and output from importing a saved cluster configuration:
[root@host1] # db2cm -import /tmp/backup.conf
Importing cluster configuration from /tmp/backup.conf...
Import completed successfully.
The following example shows the command syntax and output from verifying that both nodes are running:
[root@host1] # db2cm -list
      Cluster Status
 
Domain information:
Domain name             = testdomain
Pacemaker version       = 2.1.2-4.db2pcmk.el8
Corosync version        = 3.1.6-2.db2pcmk.el8
Current domain leader   = testdomain-srv-2
Number of nodes         = 2  
Number of resources     = 5
 
Node information:
Node name           State
----------------    --------
testdomain-srv-1     Online
testdomain-srv-2     Online

Resource Information:
 
Resource Name             = db2_testdomain-srv-1_eth0
  State                   = Online
  Managed                 = true
  Resource Type           = Network Interface
    Node                  = testdomain-srv-1
    Interface Name        = eth0
 
Resource Name             = db2_testdomain-srv-2_eth0
  State                   = Online
  Managed                 = true
  Resource Type           = Network Interface
    Node                  = testdomain-srv-2
    Interface Name        = eth0
 
Resource Name             = db2_db2inst_0
  State                   = Online
  Managed                 = true
  Resource Type           = Partition
  Instance                = db2inst
  Partition               = 0
  Current Host            = testdomain-srv-1
 
Resource Name             = db2_db2inst_0-instmnt_testmnt
  State                   = Online
  Managed                 = true
  Resource Type           = File System
  Device                  = "/dev/sdb"
  Mount Point             = "/testmnt"
  File System Type        = ext3
  Mount Options           = "rw,relatime"
  Current Host            = testdomain-srv-1
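As a quick sanity check on output like the example above, a small helper can scan captured `db2cm -list` text and fail if anything is not Online. The `verify_all_online` function is an assumption for illustration, not part of db2cm; it simply pattern-matches the State values shown in the example.

```shell
#!/bin/sh
# Hypothetical check against the `db2cm -list` output format shown above.
# $1: the captured output of `db2cm -list`
# Returns 0 only if at least one entry is Online and none is Offline.
verify_all_online() {
  out="$1"
  # Fail fast if any node or resource reports Offline.
  printf '%s\n' "$out" | grep -qw 'Offline' && return 1
  # Require at least one Online entry so empty output does not pass.
  printf '%s\n' "$out" | grep -qw 'Online'
}
```

A typical use would be `verify_all_online "$(db2cm -list)" || echo "cluster not fully online"` after step 4 of the procedure.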