Creating services to expose the HADR endpoints in Db2

To specify connection information for the remote source or remote target database in Db2 High Availability Disaster Recovery (HADR), you must create and specify service endpoints to expose the Db2 HADR ports.

Unlike traditional (on-premises) Db2 HADR, you cannot use a pod IP address to define the HADR remote server, because in Red Hat® OpenShift® the pod or container IP address changes whenever a pod is rescheduled. Instead, you must expose the HADR connection information as an OpenShift service that can then be referenced by the remote HADR copy to set the database configuration parameters hadr_remote_host and hadr_remote_svc. The OpenShift cluster DNS then resolves the service to the correct active pod, regardless of which worker node it is scheduled on.
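For illustration only, a ClusterIP service of this kind might look like the following sketch. The service name, namespace, labels, and port are assumptions, not values produced by the script:

```yaml
# Hypothetical ClusterIP service exposing the HADR port of the primary database.
# The name, namespace, selector labels, and port number are illustrative assumptions.
apiVersion: v1
kind: Service
metadata:
  name: db2oltp-primary-hadr-svc
  namespace: db2-primary-project
spec:
  type: ClusterIP
  selector:
    app: db2oltp-primary        # must match the labels on the active Db2 pod
  ports:
    - name: hadr
      port: 60006               # example HADR port (hadr_local_svc)
      targetPort: 60006
```

On the remote copy, hadr_remote_host would then be set to the service DNS name (in this sketch, db2oltp-primary-hadr-svc.db2-primary-project.svc) and hadr_remote_svc to the service port.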

As of IBM Software Hub 4.5, network policies were introduced to control traffic flow to and from the Db2 deployments. Extra network policies are required to allow the databases to communicate with each other for HADR.
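As a sketch of what such a policy looks like, the following hypothetical NetworkPolicy permits inbound traffic on an HADR port; the names, labels, and port are assumptions, and an actual deployment should use the definitions generated by the script:

```yaml
# Hypothetical NetworkPolicy allowing ingress to the Db2 pods on the HADR port.
# Labels and port number are illustrative assumptions.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-hadr-ingress
  namespace: db2-primary-project
spec:
  podSelector:
    matchLabels:
      app: db2oltp-primary      # illustrative label for the Db2 pods
  policyTypes:
    - Ingress
  ingress:
    - ports:
        - protocol: TCP
          port: 60006           # example HADR port
```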

You can use the provided script to generate the required service and network policy definitions for HADR.

Depending on the HADR topology, the create_hadr_services script generates the appropriate service definitions:

Single cluster (single or multiple projects)
  • A ClusterIP type service for each database in its own Red Hat OpenShift project
Different clusters
  • A NodePort type service for each database in its own Red Hat OpenShift project
  • ExternalName services that correspond to every database in each Red Hat OpenShift project and cluster
Important: This script needs to be run on every database in the HADR configuration with the appropriate parameters corresponding to your HADR topology.
Note: This script generates only the YAML definitions. You can use the oc apply -f command to create the services and network policies directly from the output, or if the output is redirected to a file, from that file.
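In the multi-cluster case, each generated ExternalName service maps a stable in-cluster DNS name onto the external host of a remote cluster. A hypothetical sketch follows; the names and host value are assumptions, not script output:

```yaml
# Hypothetical ExternalName service pointing at the standby cluster's external host.
# The external host corresponds to the value passed via --standby-ext-host.
apiVersion: v1
kind: Service
metadata:
  name: db2oltp-standby-hadr-svc
  namespace: db2-primary-project
spec:
  type: ExternalName
  externalName: standby.example.com   # illustrative external IP address or hostname
```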

Syntax

create_hadr_services --db-role {primary | standby | aux1 | aux2}
    --primary-name name --standby-name name
    [--aux1-name name] [--aux2-name name]
    [--primary-ext-host ipaddress/hostname] [--standby-ext-host ipaddress/hostname]
    [--aux1-ext-host ipaddress/hostname] [--aux2-ext-host ipaddress/hostname]
    [--cpd]

Parameters

--db-role
The HADR role for the current database, one of primary, standby, aux1, or aux2.
--primary-name
The name of the primary Db2 cluster.
--standby-name
The name of the standby Db2 cluster.
--aux1-name
The name of the auxiliary1 Db2 cluster.
--aux2-name
The name of the auxiliary2 Db2 cluster.
--primary-ext-host
The external IP address or hostname of the primary database cluster. Required if the topology includes multiple clusters. This can be, for example, a virtual IP address, an external load balancer, or an infrastructure node with an external-facing Ingress Controller.
--standby-ext-host
The external IP address or hostname of the standby database cluster. Required if the topology includes multiple clusters.
--aux1-ext-host
The external IP address or hostname of the auxiliary1 database cluster. Required if the topology includes multiple clusters.
--aux2-ext-host
The external IP address or hostname of the auxiliary2 database cluster. Required if the topology includes multiple clusters.
--cpd
Required if the deployment is on IBM Software Hub (Cloud Pak for Data). Omit this parameter for standalone deployments.
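Putting the parameters together, an invocation for a two-cluster topology might look like the following sketch. The cluster names and external hosts are assumptions; the command is built into a variable and echoed here so it can be inspected before being run against a cluster:

```shell
# Illustrative invocation run on the primary database (names and hosts are
# assumptions). Echoed rather than executed so it can be reviewed first.
cmd="create_hadr_services --db-role primary \
  --primary-name db2oltp-primary --standby-name db2oltp-standby \
  --primary-ext-host primary.example.com --standby-ext-host standby.example.com \
  --cpd"
echo "$cmd"
```

Remember that the equivalent command, with --db-role and the names adjusted, must also be run on every other database in the HADR configuration.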

Before you start

Note the names of the Db2uCluster or Db2uInstance custom resources that correspond to the primary and standby databases. Designate an HADR role for each database:

  • primary => primary
  • principal standby => standby
  • auxiliary 1 => aux1
  • auxiliary 2 => aux2

As noted previously, automated failover is supported only between the primary and principal standby databases. These roles are therefore fixed in the HADR configuration and cannot be switched without reconfiguring HADR.

oc get db2ucluster,db2uinstance
NAME              STATE   AGE
db2oltp-primary   Ready   6h26m
db2oltp-standby   Ready   6h26m
db2oltp-aux       Ready   6h26m