IBM Support

Setting up a two-node Db2 Pacemaker cluster with fencing on AWS with Db2 V11.5.8.0 and later releases

How To


This document provides an alternate configuration for Pacemaker that does not use a third lightweight host as a quorum device (QDevice) arbitrator. It discusses the pros and cons of the fencing-only setup, helping you choose between the two approaches based on the cost versus recovery-time tradeoff.


The objective of this document is to detail an alternative to the two-node + quorum device best practice Pacemaker solution detailed in the IBM Documentation here:
The procedure outlined in this document can be used for both HADR and Mutual Failover cluster configurations on AWS with Db2 V11.5.8.0 and later releases. For prior Db2 versions, refer to the original document.
On AWS, you do not necessarily need to configure a quorum device on a third host. Instead, you can configure fencing as described in this document.
The advantage of configuring a two-node Pacemaker cluster with fencing is that it removes the requirement of a third host for the quorum device, thus reducing ongoing cost.
The disadvantage is a longer recovery time from primary host failure, due to the added time it takes to successfully fence the failed host from the cluster. Based on our internal test results in a controlled environment, recovery from a primary host failure can take up to six times longer with fencing than with a quorum device host. To compensate for this on HADR clusters, the HADR_PEER_WINDOW value of all databases must be set to at least 300 seconds. No additional configuration changes are required to compensate for long fencing times on Mutual Failover clusters; however, failover automation will not occur until fencing is completed.
The choice of configuration must be based on your specific business requirements by taking recovery time and cost of implementation into account.
Fencing on AWS is done by using the fence_aws agent. The fence_aws agent is an open source I/O fencing agent for AWS, which uses the boto3 library to connect to AWS.
In Db2 V11.5.5.0, the fencing agent was included as part of the Pacemaker cluster software package on the following IBM website. Starting with Db2 V11.5.6.0, the Pacemaker software stack became part of the standard Db2 installation image, and only the fencing agent remains on the following IBM website. Irrespective of which Db2 version is used, the fencing agent for AWS must be the one provided through this site, not a version available elsewhere.
Download and use the pacemaker software package and the AWS fencing agent from this website:
To install the fencing agent, perform the following steps:
1. Download the latest version of the AWS fencing agent from the previously mentioned website.
Db2 v11.5.9.0
For RHEL 9, download: Db2_RHEL9_AWS_fence_agents_4.12.1.tar.gz
For RHEL 8, download: Db2_RHEL8_AWS_fence_agents_4.12.1.tar.gz
For SLES 15, download: Db2_SLES15_AWS_fence_agents_4.12.1.tar.gz
Db2 v11.5.8.0
For RHEL 8, download: Db2_RHEL_AWS_fence_agents_4.11.0-4.tar.gz
For SLES 15, download: Db2_SLES_AWS_fence_agents_4.7.1-3.tar.gz
2. Unpack the archive by using tar. For example:
tar -zxf Db2_RHEL8_AWS_fence_agents_4.12.1.tar.gz
The command creates the directory Db2_RHEL8_AWS_fence_agents_4.12.1 in the current working directory.
3. Install the rpm packages
Switch to the directory, and issue the following command:
For SLES: zypper install --allow-unsigned-rpm *.noarch.rpm
For RHEL: dnf install *.noarch.rpm
Note: The fencing agents must be installed on both nodes in the cluster.
Once installed, the following procedure can be performed to configure a two-host HADR Pacemaker cluster with fencing on AWS.


Refer to the following IBM Documentation page for a list of platforms supported by Pacemaker; the same restrictions apply here:
Ensure you have configured your environment including the AWS Identity and Access Management (IAM) as described here:


1. Refer to the “Configuring a clustered environment using the Db2 cluster manager (db2cm) utility” page of the IBM Documentation to deploy the automated HADR solution as usual.
2. Ensure the latest AWS CLI utility has been installed as described in the AWS documentation.
Once installed, the ‘aws’ command must be accessible from /usr/bin/aws; this might require creating a symbolic link:
ln -s <aws cli location> /usr/bin/aws
For example:
ln -s /usr/local/aws-cli/v2/current/bin/aws /usr/bin/aws
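A small guard around the symlink step avoids clobbering an existing /usr/bin/aws. The CLI path below is an example; replace it with the location reported by `which aws` on your hosts:

```shell
#!/bin/sh
# Example install location; substitute the real path on your host.
AWS_CLI=/usr/local/aws-cli/v2/current/bin/aws

# The fence_aws agent invokes /usr/bin/aws, so link it there if absent.
[ -e /usr/bin/aws ] || ln -s "$AWS_CLI" /usr/bin/aws

# Confirm the link resolves to a working binary.
/usr/bin/aws --version
```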
3. Create the following policy and attach it to the instances with an IAM Role by using the JSON example. See the AWS documentation ‘Creating IAM policies’ for more detail.
Note: Replace the region, account-id, and instance-id values with the appropriate values from your AWS account. The actions listed are the permissions the fence_aws agent needs to query instance state and to power instances off and on.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Stmt000",
            "Effect": "Allow",
            "Action": [
                "ec2:DescribeInstances"
            ],
            "Resource": "*"
        },
        {
            "Sid": "Stmt001",
            "Effect": "Allow",
            "Action": [
                "ec2:StartInstances",
                "ec2:StopInstances"
            ],
            "Resource": [
                "arn:aws:ec2:<region>:<account-id>:instance/<instance-id-1>",
                "arn:aws:ec2:<region>:<account-id>:instance/<instance-id-2>"
            ]
        }
    ]
}
4. Ensure the HADR_PEER_WINDOW database configuration parameter is set to at least 300 seconds. Run the following command against the primary database:
db2 update db cfg for <database name> using HADR_PEER_WINDOW 300
Restart HADR or reactivate the database for the change to take effect.

5. Create the fence agent resource using the db2cm command.
db2cm -create -aws -fence
Remove fencing resources from the cluster
1. Use the db2cm utility to remove the fencing agent resource.
db2cm -delete -aws -fence
2. Verify that the fence agent has been removed by running the db2cm -list command.
3. If the fence agent is being removed permanently, then delete the IAM policy. Refer to the AWS documentation on deleting IAM policies.




Document Information

Modified date:
16 November 2023