Creating a DPF Db2 Roving Standby HA instance on a Pacemaker-managed Linux cluster
Db2 12.1 provides a high availability option for DPF deployments on a Pacemaker-managed Linux cluster. You create the cluster by using the Db2 cluster manager (db2cm) utility.
About this task
Important: Starting from Db2® 12.1, DPF high availability is supported when
using Pacemaker as the integrated cluster manager. The Pacemaker cluster manager is packaged and
installed with Db2.
Db2 DPF is a highly available deployment in which multiple cluster nodes
have identical Db2 instance installations and a shared file system as the instance home directory.
The database partitions can be activated on any cluster node within the Pacemaker cluster. With this
solution, the setup, configuration, and management of the deployment are handled with
the db2cm utility, and the automation of the solution is monitored and controlled by the Pacemaker
cluster manager.
The following placeholders are used in the command statements throughout this procedure. They
represent values that you can change to suit your organization:
<host#>
is the host name of a DPF cluster node in the Db2 Linux cluster.
<qdevice_host>
is the host name of the quorum device, which is used to make cluster management decisions when Pacemaker cannot.
<partition_number>
is a unique number that identifies the database partition server in the Db2 Pacemaker DPF cluster.
<database_name>
is the name of the Db2 database.
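As a sketch of how the placeholders are substituted, the following assigns hypothetical values and prints the resulting command lines. The host and quorum device names are invented for illustration; db2cm -list appears in the Example section, and the -create -qdevice option is assumed here to follow the db2cm syntax documented for earlier Pacemaker deployments.

```shell
# Hypothetical values standing in for the placeholders above (assumptions):
HOST1="dpf-srv-1"
QDEVICE_HOST="qdev-srv-1"

# Print the commands that would be run with the placeholders substituted.
echo "db2cm -list"
echo "db2cm -create -qdevice ${QDEVICE_HOST}"
```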
Procedure
Example
The following example shows the output from running the db2cm -list command to verify the creation of a Db2 Pacemaker DPF Roving Standby HA cluster (see Step 9).
$ db2cm -list
HA Model: DPF Roving Standby
Domain Information:
Domain name = hadomain
Cluster Manager = Corosync
Cluster Manager Version = 3.1.7
Resource Manager = Pacemaker
Resource Manager Version = 2.1.6-4.db2pcmk.el9
Current domain leader = dpf-srv-1
Number of nodes = 4
Number of resources = 4
Host Information:
HOSTNAME                 STATE    MAXIMUM PARTITIONS       NUMBER OF PARTITIONS
------------------------ -------- ------------------------ --------------------------
dpf-srv-1                ONLINE   2                        2
dpf-srv-2                ONLINE   2                        2
dpf-srv-3                ONLINE   2                        2
dpf-srv-4                ONLINE   2                        0
Fencing Information:
Fencing Configured: Not configured
Quorum Information:
Quorum Type: Majority
Total Votes: 4
Quorum Votes: 3
Quorum Nodes:
----------------
dpf-srv-1
dpf-srv-2
dpf-srv-3
dpf-srv-4
Resource Information:
Resource Name = db2_ethmonitor_dpf-srv-1_eth0
State = Online
Managed = True
Resource Type = Network Interface
Node = dpf-srv-1
Interface Name = eth0
Resource Name = db2_ethmonitor_dpf-srv-2_eth0
State = Online
Managed = True
Resource Type = Network Interface
Node = dpf-srv-2
Interface Name = eth0
Resource Name = db2_ethmonitor_dpf-srv-3_eth0
State = Online
Managed = True
Resource Type = Network Interface
Node = dpf-srv-3
Interface Name = eth0
Resource Name = db2_ethmonitor_dpf-srv-4_eth0
State = Online
Managed = True
Resource Type = Network Interface
Node = dpf-srv-4
Interface Name = eth0
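The Host Information block can be checked programmatically, for example to confirm that every node reports ONLINE before you proceed. The following sketch saves the Host Information lines from the listing above to a file and scans the STATE column with awk; the file path is arbitrary.

```shell
# Save the Host Information block from the `db2cm -list` output above.
cat > /tmp/db2cm_list.txt <<'EOF'
HOSTNAME                 STATE    MAXIMUM PARTITIONS       NUMBER OF PARTITIONS
------------------------ -------- ------------------------ --------------------------
dpf-srv-1                ONLINE   2                        2
dpf-srv-2                ONLINE   2                        2
dpf-srv-3                ONLINE   2                        2
dpf-srv-4                ONLINE   2                        0
EOF

# Print any host whose STATE column is not ONLINE; succeed only if all are healthy.
awk '/^dpf-/ && $2 != "ONLINE" { print $1; bad = 1 } END { exit bad }' /tmp/db2cm_list.txt \
  && echo "all hosts ONLINE"
```

In a live cluster you would pipe `db2cm -list` directly into the awk filter instead of reading a saved file.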
The following example shows the output from running the db2cm -list command to verify the Pacemaker DPF Roving Standby HA cluster with three active nodes and one standby node (see Step 9.i).
$ db2cm -list
HA Model: DPF Roving Standby
Domain Information:
Domain name = hadomain
Cluster Manager = Corosync
Cluster Manager Version = 3.1.7
Resource Manager = Pacemaker
Resource Manager Version = 2.1.6-4.db2pcmk.el9
Current domain leader = dpf-srv-1
Number of nodes = 4
Number of resources = 17
Host Information:
HOSTNAME                 STATE    MAXIMUM PARTITIONS       NUMBER OF PARTITIONS
------------------------ -------- ------------------------ --------------------------
dpf-srv-1                ONLINE   2                        2
dpf-srv-2                ONLINE   2                        2
dpf-srv-3                ONLINE   2                        2
dpf-srv-4                ONLINE   2                        0
Fencing Information:
Fencing Configured: Configured
Fencing Devices:
----------------
watchdog
Quorum Information:
Quorum Type: Qdevice with FFSplit Algorithm
Total Votes: 5
Quorum Votes: 3
Quorum Nodes:
----------------
dpf-srv-1
dpf-srv-2
dpf-srv-3
dpf-srv-4
Resource Information:
Resource Name = db2_ethmonitor_dpf-srv-1_eth0
State = Online
Managed = True
Resource Type = Network Interface
Node = dpf-srv-1
Interface Name = eth0
Resource Name = db2_ethmonitor_dpf-srv-2_eth0
State = Online
Managed = True
Resource Type = Network Interface
Node = dpf-srv-2
Interface Name = eth0
Resource Name = db2_ethmonitor_dpf-srv-3_eth0
State = Online
Managed = True
Resource Type = Network Interface
Node = dpf-srv-3
Interface Name = eth0
Resource Name = db2_ethmonitor_dpf-srv-4_eth0
State = Online
Managed = True
Resource Type = Network Interface
Node = dpf-srv-4
Interface Name = eth0
Resource Name = db2_partition_db2inst1_0
State = Online
Managed = True
Resource Type = Partition
Instance = db2inst1
Partition Number = 0
Current Host = dpf-srv-1
Resource Name = db2_VIP_db2_partition_db2inst1_0_10.11.221.222
State = Online
Managed = True
Resource Type = IP
Instance = db2inst1
Partition Number = 0
Ip Address = 10.11.221.222
Current Host = dpf-srv-1
Resource Name = db2_partition_db2inst1_1
State = Online
Managed = True
Resource Type = Partition
Instance = db2inst1
Partition Number = 1
Current Host = dpf-srv-1
Resource Name = db2_partition_db2inst1_2
State = Online
Managed = True
Resource Type = Partition
Instance = db2inst1
Partition Number = 2
Current Host = dpf-srv-2
Resource Name = db2_partition_db2inst1_3
State = Online
Managed = True
Resource Type = Partition
Instance = db2inst1
Partition Number = 3
Current Host = dpf-srv-2
Resource Name = db2_partition_db2inst1_4
State = Online
Managed = True
Resource Type = Partition
Instance = db2inst1
Partition Number = 4
Current Host = dpf-srv-3
Resource Name = db2_partition_db2inst1_5
State = Online
Managed = True
Resource Type = Partition
Instance = db2inst1
Partition Number = 5
Current Host = dpf-srv-3
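The Resource Information block also shows the roving-standby placement: partitions 0 through 5 are spread across dpf-srv-1, dpf-srv-2, and dpf-srv-3, while dpf-srv-4 hosts none. The following sketch tallies partitions per host from the Current Host lines of the partition resources above; the file path is arbitrary.

```shell
# Save the partition placement lines from the resource listing above.
cat > /tmp/db2cm_resources.txt <<'EOF'
Partition Number = 0
Current Host = dpf-srv-1
Partition Number = 1
Current Host = dpf-srv-1
Partition Number = 2
Current Host = dpf-srv-2
Partition Number = 3
Current Host = dpf-srv-2
Partition Number = 4
Current Host = dpf-srv-3
Partition Number = 5
Current Host = dpf-srv-3
EOF

# Count partitions per current host; the standby node does not appear.
awk -F' = ' '/^Current Host/ { n[$2]++ } END { for (h in n) print h, n[h] }' \
  /tmp/db2cm_resources.txt | sort
```

The tally makes it easy to spot when a failover has moved partitions onto the standby node, since dpf-srv-4 would then appear in the output.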