Pacemaker base component

In the integrated high availability (HA) solution with Pacemaker, the cluster software stack is composed of several components, all of which are required to run Pacemaker effectively.

Important: In Db2® 11.5.8 and later, Mutual Failover high availability is supported when using Pacemaker as the integrated cluster manager. In Db2 11.5.6 and later, the Pacemaker cluster manager for automated failover to HADR standby databases is packaged and installed with Db2. In Db2 11.5.5, Pacemaker is included and available for production environments. In Db2 11.5.4, Pacemaker is included as a technology preview only, for development, test, and proof-of-concept environments.

Resources

A set of Db2-defined entities that the cluster monitors, starts, and stops. These include the Db2 member process, HADR-capable databases (for HADR), mount points (for Mutual Failover), Ethernet network adapters, and virtual IP addresses.
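
Once the cluster is created, the state of these resources can be inspected with standard Pacemaker tooling. A minimal example, assuming the crm shell (crmsh) packaged with the Db2 Pacemaker stack is available:

    # Show all resources and their current state
    crm status
    # Show the full cluster configuration, including the Db2-created resources and constraints
    crm configure show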

Constraints

These are rules set up during cluster creation to control how and where resources run:

  • Location constraint - specifies where resources can run.
  • Ordering constraint - specifies the order in which certain resource actions must occur.
  • Co-location constraint - specifies that one resource's location depends on the location of another resource.
The following are examples of how these constraints function:
  • The following location constraint specifies that the instance resource db2_draping1_gerry_0 prefers to run on the draping1 host.
    location prefer-db2_draping1_gerry_0 db2_draping1_gerry_0 100: draping1
  • Location constraints can also be conditional. The following location constraint specifies that the database resource runs only if the Ethernet network adapter eth1 is healthy.
    location loc-rule-db2_gerry_gerry_SAMPLE-eth1-talkers1 db2_gerry_gerry_SAMPLE-clone \
            rule -inf: db2ethmon-eth1 eq 0
  • Ordering constraints ensure that the resources start in the correct order. The following ordering constraint ensures that the database resource starts before the primary VIP resource does.
    order order-rule-db2_gerry_gerry_SAMPLE-then-primary-VIP Mandatory: db2_gerry_gerry_SAMPLE-clone:start db2_gerry_gerry_SAMPLE-primary-VIP:start
  • Co-location constraints ensure that resources that must run together are active on the same host. The following co-location constraint ensures that the primary VIP is running on the same host as the primary HADR database.
    colocation db2_gerry_gerry_SAMPLE-primary-VIP-colocation inf: db2_gerry_gerry_SAMPLE-primary-VIP:Started db2_gerry_gerry_SAMPLE-clone:Master

Resource set

A group of resources under the effect of a specific constraint.
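
For illustration, the crm shell can group several resources into a set within a single constraint by enclosing them in parentheses. The resource names below are hypothetical; this is a sketch of the syntax, not part of the Db2-generated model:

    # rscA and rscB form a set: both must start before rscC,
    # but they are not ordered relative to each other
    order order-set-example Mandatory: ( rscA rscB ) rscC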

Resource model

The Pacemaker resource model for Db2 refers to the predefined relationships and constraints of all resources. The resource model is created as part of the cluster setup using the db2cm utility with the -create option. Any deviation from or alteration of the model without approval from Db2 renders the model unsupported.
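
As a sketch, creating the model for an HADR pair typically involves a sequence of db2cm -create calls similar to the following. The host names, instance name, database name, and VIP address are placeholders, and the exact options can vary by Db2 release; consult the db2cm usage information for your release:

    # Create the Pacemaker/Corosync cluster domain across the two hosts
    db2cm -create -cluster -domain HadrDom -host host1 -publicEthernet eth0 -host host2 -publicEthernet eth0
    # Create the instance resources on each host
    db2cm -create -instance db2inst1 -host host1
    db2cm -create -instance db2inst1 -host host2
    # Create the HADR database resource and, optionally, a primary VIP
    db2cm -create -db SAMPLE -instance db2inst1
    db2cm -create -primaryVIP 198.51.100.10 -db SAMPLE -instance db2inst1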

Resource agents

Resource agents in Pacemaker are the Db2 user exits: a set of shell scripts developed and supported by Db2 that perform actions on the resources defined in the resource model.

A total of five resource agents are provided (a way to check which agents are installed on a host is sketched after this list):
  • db2ethmon
    • The resource agent to monitor the defined Ethernet network adapter. This is at the host level.
  • db2inst (HADR only)
    • The resource agent to monitor, start, and stop the Db2 member process. This is at the Db2 instance level.
  • db2hadr (HADR only)
    • The resource agent to monitor, start, and stop individual HADR-enabled databases. This is at the Db2 database level.
  • db2partition (Mutual Failover)
    • The resource agent to monitor, start, and stop the Db2 partition process. This is at the Db2 instance level.
  • db2fs (Mutual Failover)
    • The resource agent to monitor, start, and stop a file system.
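
To confirm which OCF resource agents Pacemaker can see on a host, crm_resource can list them. The heartbeat provider name below is an assumption about where the Db2-supplied agents are installed and may differ on your system:

    # List the OCF resource agents known to Pacemaker; the Db2-supplied agents
    # (db2ethmon, db2inst, db2hadr, and so on) appear here once Db2 installs them
    crm_resource --list-agents ocf:heartbeat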

Cluster topology and communication layer

All HA cluster manager software must be able to ensure that each node has the same view of the cluster topology (or membership). Pacemaker uses the Corosync Cluster Engine, an open source group communication system, to provide a consistent view of the cluster topology, to provide reliable messaging so that events are executed in the same order on each node, and to apply quorum constraints.
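
For reference, Corosync's own utilities (standard Corosync tools, not Db2-specific commands) can be used to confirm that each node shares the same view of membership and quorum:

    # Show quorum status and the current membership
    corosync-quorumtool -s
    # Show the state of the Corosync communication links
    corosync-cfgtool -s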

Cluster domain leader

One of the nodes in the cluster is elected as the domain leader, known as the Designated Controller (DC) in Pacemaker terms. The Pacemaker controller daemon on the DC makes all cluster decisions. A new domain leader is elected if the current domain leader's host fails.
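
The current DC can be identified with standard Pacemaker tooling; for example, the crm status output includes a "Current DC:" line, and crmadmin can query it directly:

    # Ask the cluster which node is currently the Designated Controller
    crmadmin -D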

For more information on Pacemaker internal components and their interactions, refer to the Pacemaker architecture.