Creating Db2 Warehouse HADR services for a multiple cluster topology

You can create HADR services and network policies for a multiple cluster topology.

About this task

The examples in this procedure are for an HADR setup with a primary and two standby databases. The primary and principal standby are in cluster1, and the auxiliary standby is in cluster2.

Procedure

  1. Generate the HADR service and network policy definitions using the create_hadr_services script on the primary database pod.
    Ports are always set up for the primary and three standby databases, regardless of your configuration.
    oc exec -it c-db2wh-primary-db2u-0 -- create_hadr_services --db-role primary --primary-name db2wh-primary --standby-name db2wh-standby --aux1-name db2wh-aux --primary-ext-host api.cluster1.ibm.com --standby-ext-host api.cluster1.ibm.com --aux1-ext-host api.cluster2.ibm.com
    
    apiVersion: v1
    kind: Service
    metadata:
      name: c-db2wh-primary-hadr-svc
    spec:
      selector:
        app: db2wh-primary
        type: engine
      ports:
        - name: db2u-hadrp
          port: 60006
          targetPort: 60006
        - name: db2u-hadrs
          port: 60007
          targetPort: 60007
        - name: db2u-hadra1
          port: 60008
          targetPort: 60008
        - name: db2u-hadra2
          port: 60009
          targetPort: 60009
      type: NodePort
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: c-db2wh-primary-hadr-svc-ext
    spec:
      type: ExternalName
      externalName: api.cluster1.ibm.com
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: c-db2wh-standby-hadr-svc-ext
    spec:
      type: ExternalName
      externalName: api.cluster1.ibm.com
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: c-db2wh-aux-hadr-svc-ext
    spec:
      type: ExternalName
      externalName: api.cluster2.ibm.com
    ---
    
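    The four ports in the generated NodePort service map one-to-one to HADR roles through the port names (db2u-hadrp, db2u-hadrs, db2u-hadra1, db2u-hadra2). As a quick reference, this hypothetical helper (not part of the create_hadr_services script) captures that fixed mapping:

    ```shell
    #!/bin/sh
    # Illustrative helper only: fixed HADR port per role, matching the
    # generated service above (db2u-hadrp=60006 ... db2u-hadra2=60009).
    hadr_port() {
      case "$1" in
        primary) echo 60006 ;;
        standby) echo 60007 ;;
        aux1)    echo 60008 ;;
        aux2)    echo 60009 ;;
        *)       echo "unknown role: $1" >&2; return 1 ;;
      esac
    }

    hadr_port primary   # prints 60006
    ```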
  2. Switch to the project that the primary database is in, and run the oc apply -f command directly on the output to create the Kubernetes services and network policies:
    oc project ${NAMESPACE_PRIMARY}
    oc exec -it c-db2wh-primary-db2u-0 -- create_hadr_services --db-role primary --primary-name db2wh-primary --standby-name db2wh-standby --aux1-name db2wh-aux --primary-ext-host api.cluster1.ibm.com --standby-ext-host api.cluster1.ibm.com --aux1-ext-host api.cluster2.ibm.com | oc apply -f -
  3. Verify that the services and network policies were created:
    • One NodePort service that matches the Db2 Warehouse deployment
    • Multiple ExternalName services – one for each database in the HADR configuration
    oc get svc | grep hadr-svc
    The following output is returned:
    c-db2wh-aux-hadr-svc-ext       ExternalName   <none>         api.cluster2.ibm.com   <none>                                                            9s
    c-db2wh-primary-hadr-svc       NodePort       172.30.77.20   <none>                 60006:32457/TCP,60007:31243/TCP,60008:30374/TCP,60009:30977/TCP   2m15s
    c-db2wh-primary-hadr-svc-ext   ExternalName   <none>         api.cluster1.ibm.com   <none>                                                            9s
    c-db2wh-standby-hadr-svc-ext   ExternalName   <none>         api.cluster1.ibm.com   <none>
  4. Run the following command to verify that the network policy was created in the same OpenShift® Container Platform project as the Db2 Warehouse deployment:
    oc get networkpolicy | grep hadr-ext
    The following output is returned:
    c-db2wh-primary-hadr-ext     formation_id=db2wh-primary,type=engine         25s
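    The create_hadr_services script generates the network policy manifest, which is not shown in the listing above. As an illustrative sketch only (the labels come from the listing, and the ports from the generated NodePort service; the actual generated manifest may differ), its shape might resemble:

    ```yaml
    # Illustrative sketch -- not the exact manifest generated by
    # create_hadr_services. Allows ingress to the engine pods on the
    # four fixed HADR ports.
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: c-db2wh-primary-hadr-ext
    spec:
      podSelector:
        matchLabels:
          formation_id: db2wh-primary
          type: engine
      ingress:
        - ports:
            - port: 60006
            - port: 60007
            - port: 60008
            - port: 60009
    ```

    You can inspect the real generated manifest with oc get networkpolicy c-db2wh-primary-hadr-ext -o yaml.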
  5. Repeat steps 1 to 4 for each standby database, ensuring that you use the appropriate value for --db-role and that you are in the project that corresponds to the database:
    # Create services for principal standby database in cluster1
    oc project ${NAMESPACE_STANDBY}
    oc exec -it c-db2wh-standby-db2u-0 -- create_hadr_services --db-role standby --primary-name db2wh-primary --standby-name db2wh-standby --aux1-name db2wh-aux --primary-ext-host api.cluster1.ibm.com --standby-ext-host api.cluster1.ibm.com --aux1-ext-host api.cluster2.ibm.com | oc apply -f -
    
    # Check services on cluster1
    oc get svc | grep hadr-svc
    # Output:
    c-db2wh-aux-hadr-svc-ext       ExternalName   <none>           api.cluster2.ibm.com   <none>                                                                            9s
    c-db2wh-primary-hadr-svc        NodePort       172.30.77.20     <none>                            60006:32457/TCP,60007:31243/TCP,60008:30374/TCP,60009:30977/TCP                   2m15s
    c-db2wh-primary-hadr-svc-ext    ExternalName   <none>           api.cluster1.ibm.com   <none>                                                                            9s
    c-db2wh-standby-hadr-svc        NodePort       172.30.247.77     <none>                            60006:32649/TCP,60007:31384/TCP,60008:30473/TCP,60009:30652/TCP                   1m10s
    c-db2wh-standby-hadr-svc-ext    ExternalName   <none>           api.cluster1.ibm.com   <none>     
    
    # Create services for auxiliary standby database in cluster2
    oc project ${NAMESPACE_AUX}
    oc exec -it c-db2wh-aux-db2u-0 -- create_hadr_services --db-role aux1 --primary-name db2wh-primary --standby-name db2wh-standby --aux1-name db2wh-aux --primary-ext-host api.cluster1.ibm.com --standby-ext-host api.cluster1.ibm.com --aux1-ext-host api.cluster2.ibm.com | oc apply -f -
    
    # Check services on cluster2
    oc get svc | grep hadr-svc
    # Output:
    c-db2wh-aux-hadr-svc        NodePort       172.30.241.25     <none>                            60006:34578/TCP,60007:31546/TCP,60008:30698/TCP,60009:30448/TCP                   10s
    c-db2wh-aux-hadr-svc-ext       ExternalName   <none>           api.cluster2.ibm.com   <none>                                                                            9s
    c-db2wh-primary-hadr-svc-ext    ExternalName   <none>           api.cluster1.ibm.com   <none>                                                                            9s
    c-db2wh-standby-hadr-svc-ext    ExternalName   <none>           api.cluster1.ibm.com   <none>     
    

    The example shows one NodePort service for each database deployed in the local OpenShift cluster (primary and standby in cluster1; aux in cluster2), but one ExternalName service for each database in the topology (primary, standby, and aux on both clusters).
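    This naming convention can be sketched as a small helper. The function below is hypothetical (it assumes service names follow the c-<name>-hadr-svc and c-<name>-hadr-svc-ext pattern shown in the output above) and simply lists the HADR services you should expect on a given cluster:

    ```shell
    #!/bin/sh
    # Sketch: list the HADR services expected on one cluster, assuming
    # the c-<name>-hadr-svc[-ext] naming seen in the output above.
    expected_services() {
      local_dbs=$1   # space-separated databases deployed on this cluster
      all_dbs=$2     # every database in the HADR topology
      for db in $local_dbs; do
        echo "c-${db}-hadr-svc"       # one NodePort per local database
      done
      for db in $all_dbs; do
        echo "c-${db}-hadr-svc-ext"   # one ExternalName per database in topology
      done
    }

    # cluster1 hosts the primary and the principal standby:
    expected_services "db2wh-primary db2wh-standby" "db2wh-primary db2wh-standby db2wh-aux"
    ```

    Comparing this list against oc get svc | grep hadr-svc on each cluster is a quick way to spot a missing service.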