Prerequisites for an integrated solution using Pacemaker

The prerequisite software and environments needed before users can integrate Pacemaker into their systems.

Important: In Db2® 11.5.8 and later, Mutual Failover high availability is supported when using Pacemaker as the integrated cluster manager. In Db2 11.5.6 and later, the Pacemaker cluster manager for automated failover to HADR standby databases is packaged and installed with Db2. In Db2 11.5.5, Pacemaker is included and available for production environments. In Db2 11.5.4, Pacemaker is included as a technology preview only, for development, test, and proof-of-concept environments.

Hardware support and Linux distribution

The integrated Pacemaker high availability (HA) solution is available on the following Linux® distributions:
Intel Linux and Linux on IBM Z®
  • For Db2 11.5.8 and future fix packs in the same release:
    • Red Hat® Enterprise Linux (RHEL) 8.4 and up
    • SuSE Linux Enterprise Server (SLES) 15 SP3 and up
  • For Db2 11.5.7 and future fix packs in the same release:
    • Red Hat Enterprise Linux (RHEL) 8.1 and up
    • SuSE Linux Enterprise Server (SLES) 15 SP1 and up
  • For Db2 11.5.6, the level must be one of the following:
    • Red Hat Enterprise Linux (RHEL) 8.1 and 8.2
    • SuSE Linux Enterprise Server (SLES) 15 SP1 and SP2
  • For Db2 11.5.4 and 11.5.5, the level must be one of the following:
    • Red Hat Enterprise Linux (RHEL) 8.1
    • SuSE Linux Enterprise Server (SLES) 15 SP1
POWER® Linux
  • For Db2 11.5.8 and future fix packs in the same release:
    • Red Hat Enterprise Linux (RHEL) 8.4 and up
    • SuSE Linux Enterprise Server (SLES) 15 SP3 and up
  • For Db2 11.5.7 and future fix packs in the same release:
    • Red Hat Enterprise Linux (RHEL) 8.2 and up
    • SuSE Linux Enterprise Server (SLES) 15 SP3 and up
  • Prior to version 11.5.7:
    • Not supported

Host file setup

The hosts file is a Linux system file located in the /etc directory of each host. You need to enter the following information in the order displayed:
IP_Address  fully_qualified_domain_name  alias
The IP subnet of the IP address associated with the hostname in each of the HADR hosts must be unique. This IP address is typically used for Db2 log shipping between the two hosts, as well as for communication between the two cluster hosts and the third host that acts as the quorum arbitrator for quorum voting.
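For example, the /etc/hosts entries for two HADR hosts and a quorum arbitrator host might look like the following (the addresses and names shown are hypothetical placeholders; substitute your own):

```
192.0.2.10     db2host1.example.com   db2host1
198.51.100.10  db2host2.example.com   db2host2
203.0.113.10   qdevice.example.com    qdevice
```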

User and group ID

If the Db2 setup wizard is not used, users must ensure that the instance owner, fenced user, and other users, along with their associated groups, are created according to the information in Creating group and user IDs for a database installation (Linux and UNIX).

Passwordless secure shell (SSH) for root and instance user IDs

Passwordless SSH must be configured for both the root user and the instance user between the HADR nodes. Both the instance user and the root user must be able to use SSH between the two hosts by using both the fully qualified domain name and the hostname alias.
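The checks above can be sketched as follows. This snippet only prints the verification commands for review; db2inst1, db2host2, and example.com are hypothetical names to replace with your own. The -o BatchMode=yes option makes ssh fail rather than prompt, so a missing passwordless setup surfaces as an error when the printed commands are run.

```shell
# Build the list of SSH checks: each user, via both alias and FQDN.
checks=$(for user in root db2inst1; do
  for host in db2host2 db2host2.example.com; do
    printf 'ssh -o BatchMode=yes -o ConnectTimeout=5 %s@%s hostname\n' "$user" "$host"
  done
done)
# Review the commands, then run each one from the peer host (and repeat
# in the opposite direction).
echo "$checks"
```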

Local storage

Ensure that the following local storage (for example, in /tmp) is available on each node for all cluster-related software, excluding the space required for the Db2 server, databases, log files, and so on:
  • 50 MB for cluster software RPMs and extracted files
  • 200 MB for the full installation of cluster-related software
  • At least 1 GB in /var to store cluster software log files
  • At least 150 MB in /usr for RHEL
  • At least 300 MB in /usr for SUSE
The Pacemaker and Corosync installation adds new files in the following locations in the /usr file system:
  • /usr/share/pacemaker
  • /usr/share/doc/packages
  • /usr/share/licenses
  • /usr/share/man/man7
  • /usr/share/man/man8
  • /usr/lib/pacemaker
  • /usr/lib/ocf/resource.d/pacemaker
  • /usr/lib/systemd/system
  • /usr/lib/debug/dwz
  • /usr/lib64
  • /usr/lib64/pkgconfig
  • /usr/sbin

Pacemaker and Corosync port usage information

Table 1. If a firewall is set up on each host or in the network, the following ports should be opened:

Service name     Port number   Protocol
---------------  ------------  --------
crmd             3121          TCP
corosync-qnetd   5403          TCP
corosync         5404 - 5405   UDP
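On systems that use firewalld (the default on RHEL), the openings in Table 1 could be applied with firewall-cmd. This sketch only prints the commands so they can be reviewed first; SLES installations may use a different firewall front end.

```shell
# Generate one firewall-cmd rule per entry in Table 1.
fw_rules=$(for rule in 3121/tcp 5403/tcp 5404-5405/udp; do
  echo "firewall-cmd --permanent --add-port=$rule"
done)
echo "$fw_rules"
# After applying permanent rules, reload to make them active.
echo "firewall-cmd --reload"
```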

Packages

The KornShell (ksh) and python3-dnf-plugin-versionlock packages are required for Pacemaker. The Db2 installer uses the latter to lock all Pacemaker and Corosync RPMs to their installed versions.

In addition, several other checks are performed on your system to meet the installation requirements of Db2. Run the db2prereqcheck command to determine whether your system satisfies these prerequisites before you begin the installation process. For more information on the db2prereqcheck command, refer to db2prereqcheck - Check installation prerequisites.

Cluster software

Db2 supports Pacemaker as its integrated cluster manager solution only when the Pacemaker software stack in use is supplied directly by Db2, corresponds to a specific Db2 release, and is configured entirely by using the db2cm utility or as instructed by Db2 Support. To remain supported, both the provided configuration and the Pacemaker software stack must remain unchanged.

For version 11.5.5 and version 11.5.4, the Pacemaker version supported by Db2 must be downloaded from this public IBM® website: Db2 Automated HADR with Pacemaker. There are specific compressed tar files available for each Linux distribution and architecture.

For version 11.5.6 and later releases, the Pacemaker software is included in the Db2 install image. On-premises deployments do not require any additional downloads, and additional packages that use Pacemaker, such as the Booth Cluster Ticket Manager, are not supported. For cloud-based deployments, the alternate configurations referenced in Public cloud vendors supported with Db2 Pacemaker may require specific packages to be downloaded from the aforementioned public IBM website.

QDevice quorum mechanism

QDevice is the recommended quorum mechanism for a production system. It requires a third host with the corosync-qnetd software installed to act as the arbitrator. This host is not required to be part of the cluster and does not require the Db2 server to be installed.

Disk space required on HADR nodes: 10 MB (in addition to Corosync)

Qnetd server host minimum requirements:
  • 2 vCPU
  • 8 GB memory
  • 10 MB of free disk space + 2 MB per additional cluster configured to use this host as a QDevice.
Other requirements:
  • The host used must be accessible via TCP/IP to the other two hosts in the cluster.
  • The cluster hosts must be able to communicate with the QDevice host by using the IP address that is specified in their /etc/hosts file.
  • All clusters using the QNetd server must have unique cluster names.

Virtual IP address (VIP)

A virtual IP address is often set up per HADR-enabled database in Db2 HADR to enable automatic client reroute when a failover occurs. For information on the prerequisites for setting up a VIP, refer to Networks in a Pacemaker cluster.

Db2 high availability disaster recovery (HADR)

If you are using the HADR functionality, complete the following tasks:
  • Ensure that both HADR databases exist on different systems.
  • Ensure that all HADR databases are started in their respective primary and standby database roles, and that all HADR primary-standby database pairs are in peer state.
  • Ensure that you are using either the SYNC or NEARSYNC HADR synchronization mode.
  • Set the hadr_peer_window configuration parameter to a value of at least 120 seconds for all HADR databases.
  • Disable the Db2 fault monitor.
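The configuration steps in the checklist above map to Db2 commands such as the following. This is a sketch for review, not a definitive procedure: HADRDB and db2inst1 are hypothetical database and instance names, and the db2fm -f off invocation (to disable the fault monitor) should be checked against the db2fm command reference for your release.

```shell
# Print the candidate commands; run them on the appropriate host after review.
hadr_setup=$(cat <<'EOF'
db2 update db cfg for HADRDB using HADR_SYNCMODE NEARSYNC
db2 update db cfg for HADRDB using HADR_PEER_WINDOW 120
db2pd -db HADRDB -hadr
db2fm -i db2inst1 -f off
EOF
)
echo "$hadr_setup"
```

The db2pd -hadr output can be used to confirm that the primary-standby pair is in peer state before configuring the cluster.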

Db2 Mutual Failover

If you are using the Mutual Failover functionality, complete the following tasks:
  • Ensure that both hosts are running identical Db2 versions, with identical installation paths.
  • Ensure that both hosts have identical Db2 groups and users created.
  • Ensure that both hosts have access to the shared mount, with only one host active at any given time. Only Db2 can manage the shared mount; no other automation, including systemd, may manage it.
  • Ensure that the file system you are using is in the list of supported file systems for Mutual Failover.
  • Disable the Db2 fault monitor.

Partitioned database environment

Note: High availability for multiple database partitions will be supported in a future release.