Restore from a saved Pacemaker cluster configuration

If the cluster needs to be recreated, you can restore a saved Pacemaker configuration that is based on the current hardware.

About this task

You can redeploy a Pacemaker cluster configuration from a saved configuration file by using the db2cm -import option.

Note: The saved configuration cannot be deployed on a new set of hosts if any of the following details differ from the original cluster:
  • Hostnames
  • Domain name
  • Interface names
  • Instance names
  • Database names
  • Primary/Standby virtual IP addresses
  • Qdevice host
To import a configuration on a new set of hosts, follow the example in this technote.
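
This task assumes that a configuration file was saved from the original cluster while it was still healthy, for example with the db2cm -export option (if it is available in your Db2 release), which writes the current Pacemaker configuration to a file. The file path shown is illustrative:

# Assumes a Db2 release whose db2cm utility supports the -export option
[root@host1]# db2cm -export /tmp/backup.conf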

Procedure

  1. As the root user, prepare the current cluster for the restoration by removing the existing cluster resources and domain.
    Warning: Any virtual IPs (VIPs) that were configured by the db2cm utility are removed when you run the db2cm -delete -cluster command. This can temporarily disconnect database clients.
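    For example, assuming db2cm is on the root user's PATH as in the other examples in this task (the prompt is illustrative):
    [root@host1]# db2cm -delete -cluster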
  2. As the root user, ensure that the cluster's resources and domain have been removed successfully:
    [root@host1]# db2cm -list
    HA Model: DPF Balanced
    
     DBT8242E  Failed to retrieve cluster information on host. Reason code: 1.
  3. Import the previous cluster configuration from any host:
    ./sqllib/bin/db2cm -import <path to backup file>
  4. As the root user, verify that all nodes and all resources are online:
    [root@host1]# db2cm -list

    All hosts should show as ONLINE under Host Information, and all resources should show as Online under Resource Information. If a Qdevice was configured at the time the configuration backup was taken, it is listed under Quorum Information.

Example

The following example shows the command syntax and output from verifying that the cluster's resources and domain have been removed:
[root@host1]# db2cm -list
HA Model: DPF Balanced

 DBT8242E  Failed to retrieve cluster information on host. Reason code: 1.

The following example shows the command syntax and output from importing a saved cluster configuration:

[root@host1]# db2cm -import /tmp/backup.conf
Domain created successfully.
Cluster configuration has been imported successfully.

The following example shows the command syntax and output from verifying that all nodes and resources are online:

[root@host1]# db2cm -list

HA Model: DPF Roving Standby

Domain Information:
Domain name                     = hadomain
Cluster Manager                 = Corosync
  Cluster Manager Version       = 3.1.7
Resource Manager                = Pacemaker
  Resource Manager Version      = 2.1.6-4.db2pcmk.el9
Current domain leader           = dpf-srv-2
Number of nodes                 = 4
Number of resources             = 7

Host Information:
HOSTNAME                        STATE           MAXIMUM PARTITIONS              NUMBER OF PARTITIONS
------------------------        --------        ------------------------        --------------------------
dpf-srv-1                       ONLINE          1                               1
dpf-srv-2                       ONLINE          1                               1
dpf-srv-3                       ONLINE          1                               1
dpf-srv-4                       ONLINE          1                               0

Fencing Information:
Fencing Configured: Configured
Fencing Devices:
----------------
watchdog

Quorum Information:
Quorum Type: Majority
Total Votes: 4
Quorum Votes: 3
Quorum Nodes:
----------------
dpf-srv-1
dpf-srv-2
dpf-srv-3
dpf-srv-4

Resource Information:
Resource Name             = db2_ethmonitor_dpf-srv-1_eth0
  State                         = Online
  Managed                       = True
  Resource Type                 = Network Interface
    Node                        = dpf-srv-1
    Interface Name              = eth0

Resource Name             = db2_ethmonitor_dpf-srv-2_eth0
  State                         = Online
  Managed                       = True
  Resource Type                 = Network Interface
    Node                        = dpf-srv-2
    Interface Name              = eth0

Resource Name             = db2_ethmonitor_dpf-srv-3_eth0
  State                         = Online
  Managed                       = True
  Resource Type                 = Network Interface
    Node                        = dpf-srv-3
    Interface Name              = eth0

Resource Name             = db2_ethmonitor_dpf-srv-4_eth0
  State                         = Online
  Managed                       = True
  Resource Type                 = Network Interface
    Node                        = dpf-srv-4
    Interface Name              = eth0

Resource Name             = db2_partition_db2inst1_0
  State                         = Online
  Managed                       = True
  Resource Type                 = Partition
    Instance                    = db2inst1
    Partition Number            = 0
  Current Host                  = dpf-srv-1

Resource Name             = db2_partition_db2inst1_1
  State                         = Online
  Managed                       = True
  Resource Type                 = Partition
    Instance                    = db2inst1
    Partition Number            = 1
  Current Host                  = dpf-srv-2

Resource Name             = db2_partition_db2inst1_2
  State                         = Online
  Managed                       = True
  Resource Type                 = Partition
    Instance                    = db2inst1
    Partition Number            = 2
  Current Host                  = dpf-srv-3