Installing and configuring a quorum device
You can install and configure a quorum device for a Db2® instance that is running on a Pacemaker-managed Linux cluster.
Before you begin
The quorum device requires an extra host that the other hosts in the cluster can reach over a TCP/IP network. However, the quorum device host does not need to be configured as part of the cluster. You do not need to install the Db2 software on this host; the only requirement for the quorum device to function is to install one or more corosync-qnetd RPMs on it.
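For example, before you continue you can confirm from each cluster node that the quorum device host is reachable over TCP/IP. The host name below is a placeholder, and any equivalent network check can be used instead:
ping -c 3 <quorum_device_hostname>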
The corosync-qnetd RPMs that are provided also include debug versions. When you install the corosync-qnetd RPMs, you can choose to install the debug version, but it is not required. The corosync-qnetd RPM must be the version that is validated by Db2 and downloaded from the IBM site.
About this task
Having a reliable quorum mechanism is essential to a highly available cluster. A quorum device is required for Mutual Failover (MF) and High Availability Disaster Recovery (HADR) instances. For pureScale and Database Partitioning Feature (DPF) configurations, a quorum device is required if the cluster contains an even number of nodes. Alternatively, a 2-node pureScale cluster configuration can run without a quorum device, because the tie-breaker mechanism is provided by the cluster file system.
In the following commands:
<platform> is the platform of your Db2 installation image. Use linuxamd64, linuxppc64le, or linux390x64.
<OS_distribution> is your Linux operating system (OS). Use either RHEL or SLES.
<architecture> is the chipset you are using. Use x86_64, ppc64le, or s390x.
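As an illustration, on a RHEL x86_64 system the placeholder portions of the package path used in the following steps resolve to:
<Db2_image>/db2/linuxamd64/pcmk/Linux/rhel/x86_64/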
Procedure
- On each node in the cluster, run the following command to check whether the corosync-qdevice package is installed:
rpm -qa | grep corosync-qdevice
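If the package is installed, rpm prints one line for each matching package; no output means that the package is not installed. The version shown below is illustrative only:
corosync-qdevice-3.0.2-3.el8.x86_64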
- If the corosync-qdevice package is not installed, install it:
- On Red Hat Enterprise Linux (RHEL)
systems:
dnf install <Db2_image>/db2/<platform>/pcmk/Linux/rhel/<architecture>/corosync-qdevice*
- On SUSE Linux Enterprise Server (SLES)
systems:
zypper install --allow-unsigned-rpm <Db2_image>/db2/<platform>/pcmk/Linux/sles/<architecture>/corosync-qdevice*
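For example, on a RHEL x86_64 node where the Db2 installation image was extracted to /tmp/db2image (an illustrative location), the command would be:
dnf install /tmp/db2image/db2/linuxamd64/pcmk/Linux/rhel/x86_64/corosync-qdevice*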
- On the quorum device host, install the Corosync QNet software:
- On RHEL
systems:
dnf install <Db2_image>/db2/<platform>/pcmk/Linux/rhel/<architecture>/corosync-qnetd*
- On SLES
systems:
zypper install --allow-unsigned-rpm <Db2_image>/db2/<platform>/pcmk/Linux/sles/<architecture>/corosync-qnetd*
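For example, on a RHEL x86_64 quorum device host, assuming the corosync-qnetd RPMs from the Db2 image were copied to /tmp/db2image (an illustrative location), the command would be:
dnf install /tmp/db2image/db2/linuxamd64/pcmk/Linux/rhel/x86_64/corosync-qnetd*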
- As the root user, set up the QDevice from one of the cluster
nodes.
<Db2_install_path>/bin/db2cm -create -qdevice <hostname>
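For example, with Db2 installed under /opt/ibm/db2/V12.1 (an illustrative installation path) and frizzly1 as the quorum device host, the command would be:
/opt/ibm/db2/V12.1/bin/db2cm -create -qdevice frizzly1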
Important: To run the db2cm utility as the root user, ensure that the DB2INSTANCE environment variable is set to the instance owner.
Note: Due to corosync requirements, you must configure passwordless root secure shell (SSH) access and enable secure copy protocol (SCP) for the cluster nodes and the qdevice host. Ensure that you can use passwordless root SSH between all hosts, as well as locally on the current host. Passwordless root SSH is needed only for the initial configuration and for later configuration changes; it is not needed for ongoing QDevice operation. A minimal key-setup sketch is shown after this procedure.
- Run the following corosync command on the cluster nodes to verify that the quorum device was set up correctly. Alternatively, as the root user, run db2cm -status on the cluster nodes to verify that the quorum device is configured correctly.
corosync-qdevice-tool -s
- Run the following corosync command on the quorum device host to verify that the quorum device is
running correctly:
corosync-qnetd-tool -l
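The following is a minimal sketch of one way to configure the passwordless root SSH access that is described in the Note above. The host names are placeholders, and your environment or security policies might require a different approach:
ssh-keygen -t rsa -b 4096
ssh-copy-id root@<cluster_node>
ssh-copy-id root@<quorum_device_hostname>
ssh-copy-id root@localhost
Run ssh-keygen once as root on each cluster node if no key pair exists yet, then repeat ssh-copy-id so that every cluster node can reach every other cluster node, the quorum device host, and itself without a password.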
Examples
The following examples show the command syntax and output from running the corosync-qdevice-tool command, the db2cm -status command, and the corosync-qnetd-tool command to verify that a quorum device has been correctly configured. Alternatively, as the root user, run db2cm -status on the cluster nodes to verify that the quorum device is configured correctly. The corosync-qnetd-tool command is run on the quorum device host to verify that the quorum device is running correctly (see step 6). In the examples, frizzly1 is the name of the quorum device host.
All Configurations
QDevice Information:
* Connected: [ frizzly1:5403 ]
HADR and MF
On all cluster nodes:
[root@cuisses1 ~]# corosync-qdevice-tool -s
Qdevice information
-------------------
Model: Net
Node ID: 1
Configured node list:
0 Node ID = 1
1 Node ID = 2
Membership node list: 1, 2
Qdevice-net information
----------------------
Cluster name: hadom
QNetd host: frizzly1:5403
Algorithm: LMS
Tie-breaker: Node with lowest node ID
State: Connected
On the quorum device host:
[root@frizzly1 ~]# corosync-qnetd-tool -l
Cluster "hadom":
Algorithm: LMS
Tie-breaker: Node with lowest node ID
Node ID 2:
Client address: ::ffff:9.21.110.42:55568
Configured node list: 1, 2
Membership node list: 1, 2
Vote: ACK (ACK)
Node ID 1:
Client address: ::ffff:9.21.110.22:51400
Configured node list: 1, 2
Membership node list: 1, 2
pureScale and DPF
On all cluster nodes:
[root@cuisses1 ~]# corosync-qdevice-tool -s
Qdevice information
-------------------
Model: Net
Node ID: 1
Configured node list:
0 Node ID = 3
1 Node ID = 4
2 Node ID = 2
3 Node ID = 1
Membership node list: 1, 2, 3, 4
Qdevice-net information
----------------------
Cluster name: db2domain
QNetd host: frizzly1:5403
Algorithm: Fifty-Fifty split
Tie-breaker: Node with lowest node ID
State: Connected
On the quorum device host:
[root@frizzly1 ~]# corosync-qnetd-tool -l
Cluster "db2domain":
Algorithm: Fifty-Fifty split (KAP Tie-breaker)
Tie-breaker: Node with lowest node ID
Node ID 1:
Client address: ::ffff:9.30.119.73:36200
Configured node list: 3, 4, 2, 1
Membership node list: 1, 2, 3, 4
Vote: ACK (ACK)
Node ID 2:
Client address: ::ffff:9.30.182.240:37932
Configured node list: 3, 4, 2, 1
Membership node list: 1, 2, 3, 4
Vote: ACK (ACK)
Node ID 3:
Client address: ::ffff:9.30.183.140:51636
Configured node list: 3, 4, 2, 1
Membership node list: 1, 2, 3, 4
Vote: ACK (ACK)
Node ID 4:
Client address: ::ffff:9.30.160.40:52388
Configured node list: 3, 4, 2, 1
Membership node list: 1, 2, 3, 4
Vote: No change (ACK)