Install and configure a QDevice quorum

You can install and configure a QDevice quorum to provide assistance with cluster management for a Db2® instance that is running on a Pacemaker-managed Linux cluster.

Before you begin

Important: Starting from Db2® 12.1, DPF high availability is supported when using Pacemaker as the integrated cluster manager. The Pacemaker cluster manager is packaged and installed with Db2.

The QDevice quorum requires an extra host that the other hosts in the cluster can connect to over a TCP/IP network. However, the QDevice host does not need to be configured as part of the cluster, and you do not need to install the Db2 software on it. The only requirement for the QDevice quorum to function is that one or more corosync-qnetd RPMs are installed on that host.

Note: The corosync-qnetd RPMs that are provided also contain debug versions. When you install the RPMs, you can choose to install the debug version; however, it is not required. The corosync-qnetd RPM must be the version that is validated by Db2 and downloaded from the IBM site.
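Before you configure the QDevice, you can confirm that each cluster host can reach the QDevice host over TCP/IP. The following check is a minimal sketch; the host name dpf-qdevice1 and port 5403 (the default corosync-qnetd port, which also appears in the example output later in this topic) are illustrative values that you replace for your environment:

  nc -zv dpf-qdevice1 5403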

About this task

Having a reliable quorum mechanism is essential to a highly available cluster. A QDevice quorum is the recommended quorum mechanism for your Db2 instance.

The following placeholders are used in the command statements throughout this procedure. They represent values that you replace to suit your environment:

  • <platform> is the platform directory in the Db2 installation image. Use linuxamd64, linuxppc64le, or linux390x64.
  • <OS_distribution> is your Linux operating system (OS). Use either rhel or sles.
  • <architecture> is the chipset that you are using. Use x86_64, ppc64le, or s390x.
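For example, on an x86_64 RHEL system, the package path used in the install commands that follow resolves as shown below; the <Db2_image> location is whatever directory your Db2 installation image is extracted to:

  <Db2_image>/db2/linuxamd64/pcmk/Linux/rhel/x86_64/corosync-qdevice*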

Procedure

  1. On the DPF active and standby hosts, ensure that the corosync-qdevice package is installed by running the following command (a sketch for checking all hosts in one pass follows this procedure):
    rpm -qa | grep corosync-qdevice
  2. If the corosync-qdevice package is not installed, install it:
    1. On Red Hat Enterprise Linux (RHEL) systems:
      dnf install <Db2_image>/db2/<platform>/pcmk/Linux/rhel/<architecture>/corosync-qdevice*
    2. On SUSE Linux Enterprise Server (SLES) systems:
      zypper install --allow-unsigned-rpm <Db2_image>/db2/<platform>/pcmk/Linux/sles/<architecture>/corosync-qdevice*
  3. On the QDevice host, install the Corosync QNet software:
    1. On RHEL systems:
      dnf install <Db2_image>/db2/<platform>/pcmk/Linux/rhel/<architecture>/corosync-qnetd*
    2. On SLES systems:
      zypper install --allow-unsigned-rpm <Db2_image>/db2/<platform>/pcmk/Linux/sles/<architecture>/corosync-qnetd*
  4. As the root user, run the following db2cm command from one of the cluster nodes to set up the QDevice (a worked example with the path resolved follows this procedure):
    <Db2_install_path>/bin/db2cm -create -qdevice
    1. Run the following corosync command on the primary and standby hosts to verify that the quorum was set up correctly:

      corosync-qdevice-tool -s
  5. Run the following corosync command on the QDevice host to verify that the quorum device is running correctly:
    corosync-qnetd-tool -l
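As referenced in step 1, the following sketch checks for the corosync-qdevice package on every DPF host in one pass. The host names are placeholders for your own cluster hosts, and passwordless SSH access as root is assumed:

  for host in dpfhost1 dpfhost2 dpfhost3 dpfhost4; do
      echo "== ${host} =="
      ssh root@${host} 'rpm -qa | grep corosync-qdevice || echo "corosync-qdevice is not installed"'
  done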
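The following worked example for step 4 assumes that Db2 is installed in the default path /opt/ibm/db2/V12.1 and that the QDevice host is dpf-qdevice1, the host name used in the example output below. In other Db2 Pacemaker configurations, db2cm -create -qdevice takes the QDevice host name as its final argument; if your command syntax differs, follow the usage reported by db2cm -help:

  /opt/ibm/db2/V12.1/bin/db2cm -create -qdevice dpf-qdevice1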

Examples

The following example shows the command syntax and output from running the corosync-qdevice-tool command on the cluster nodes to verify that the quorum device is set up correctly (see step 4):
[root@dpfhost1 ~]# corosync-qdevice-tool -s
Qdevice information
-------------------
Model: Net
Node ID: 1
Configured node list:
    0 Node ID = 1
    1 Node ID = 2
    2 Node ID = 3
    3 Node ID = 4
Membership node list: 1, 2, 3, 4

Qdevice-net information
----------------------
Cluster name: hadom
QNetd host: dpf-qdevice1:5403
Algorithm: Fifty-Fifty split
Tie-breaker: Node with lowest node ID
State: Connected
The following example shows the command syntax and output from running the corosync-qnetd-tool command on the QDevice host to verify that the quorum device is running correctly (see step 5):
[root@dpf-qdevice1 ~]# corosync-qnetd-tool -l
Cluster "db2_pcmk_v121domain":
    Algorithm: Fifty-Fifty split (KAP Tie-breaker)
    Tie-breaker: Node with lowest node ID
    Node ID 1:
        Client address: ::ffff:9.30.4.167:58956
        Configured node list: 1, 2, 3, 4
        Membership node list: 1, 2, 3, 4
        Vote: ACK (ACK)
    Node ID 2:
        Client address: ::ffff:9.30.4.168:58362
        Configured node list: 1, 2, 3, 4
        Membership node list: 1, 2, 3, 4
        Vote: ACK (ACK)
    Node ID 3:
        Client address: ::ffff:9.30.4.169:37284
        Configured node list: 1, 2, 3, 4
        Membership node list: 1, 2, 3, 4
        Vote: ACK (ACK)
    Node ID 4:
        Client address: ::ffff:9.30.4.104:59222
        Configured node list: 1, 2, 3, 4
        Membership node list: 1, 2, 3, 4
        Vote: No change (ACK)