Enabling replication for the Db2 service

You can enable the replication feature that is built into the Db2 service on Cloud Pak for Data by editing the custom resource for the service and setting configuration parameters.

For more information, see Installing the Db2 service.

About this task

You must enable replication on both the source and target databases, whether they are in the same cluster or in different clusters.

Enabling replication and activating replication are two important, but different steps:

  1. When you enable replication, the Db2U operator deploys replication as an add-on component with its own pod, container, and services. Some Db2 registry variables and configuration parameters that support replication are also set; these are listed later in this procedure.
  2. Activating replication is done through the Data Management Console (DMC) service on Cloud Pak for Data. To use the Db2® Data Management Console to activate replication, you must enable replication before you deploy the DMC. When you activate replication in the DMC web interface, Db2 starts logging SQL changes in an expanded format for replication, message queues and other IBM® MQ objects are enabled for data transport, metadata tables are created, and the queue manager, capture process, and replication REST server are started.
Prerequisite: Security-Enhanced Linux (SELinux) must be enabled on the cluster before you enable replication.
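
If you are not sure whether SELinux is enabled, one way to check (a sketch; the node name is a placeholder for one of your cluster nodes) is to run the getenforce command on each node through a debug pod:

  oc debug node/<node-name> -- chroot /host getenforce

The command returns Enforcing, Permissive, or Disabled; a result of Disabled means that SELinux is not enabled on that node.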

Procedure

  1. Edit the Db2 custom resource (CR) to enable replication and accept the 90-day trial license.
    1. Put the custom resource into edit mode:
      oc edit db2ucluster deployment-ID
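      If you are not sure of the value to use for deployment-ID, you can list the Db2uCluster custom resources in your project to find it (a sketch; add the -n <project-name> option if you are not already working in the project that contains the Db2 deployment):
      oc get db2ucluster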
    2. Insert the following properties in the addOns.qrep section of the CR:
      Warning: Indentation within a YAML file is significant and must be preserved.
      
      addOns:
          qrep:
            enabled: true
            infraHost: db2-cluster-hostname
            infraIP: db2-cluster-external-ip
            license:
              accept: true
      

      where db2-cluster-hostname is the host name of the management node of the OpenShift® cluster to which replication requests can be sent, and db2-cluster-external-ip is the external IP address of the management node.
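
      One way to look up candidate values is to list the cluster nodes with their addresses (a sketch; which address columns are populated depends on how your cluster is configured):

      oc get nodes -o wide

      The NAME column shows the node host names, and the INTERNAL-IP and EXTERNAL-IP columns show the node IP addresses.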

    3. In the storage section of the CR, insert a storage entry named qrepdata with the storage class name of your choice. This example uses an NFS storage class named managed-nfs-storage. The default storage request is 100Gi; you can change this value to meet your replication storage requirements.
      storage:
      - name: qrepdata
        spec:
          accessModes:
          - ReadWriteMany
          resources:
            requests:
              storage: 100Gi
          storageClassName: managed-nfs-storage
        type: create
      Note: In the current release, the Db2 deployment on which you are enabling replication must use one of the following supported storage classes:
      • Network File System (NFS)
      • IBM Storage Scale
      • OpenShift Data Foundation (ODF)

      When you save and close the CR, the operator deploys the replication component. The following Db2 database configuration parameters are set automatically to enable replication:

      DFT_SCHEMAS_DCC=YES
      LOG_DDL_STMTS=YES
      LOG_APPL_INFO=YES
      EXTBL_LOCATION=/mnt/blumeta0/home;/mnt/bludata0/scratch;/mnt/external;/mnt/qrepdata/applyetfiles/repl

      The following registry variables are set automatically:

      DB2_DCC_BINARY_FILE=true
      DB2_DCC_FILE_DEL_THRES=1
      DB2_DCC_FILE_INS_THRES=10
      DB2_DCC_FILE_CHUNKSZ=100000000
      DB2_DCC_FILE_PATH=/mnt/qrepdata/db2supplog/db2
      DB2_CDE_DCC=true
      DB2_FMP_RUN_AS_CONNECTED_USER=NO
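
      Optionally, you can confirm these settings from inside the Db2 engine pod. The following commands are a sketch that assumes the example pod name that is used later in this procedure, the default instance user db2inst1, and the default database name BLUDB; substitute the values for your own deployment:

      # List the replication-related (DCC) registry variables
      oc exec -it c-db2oltp-1636513131239517-db2u-0 -- su - db2inst1 -c "db2set -all | grep -i dcc"
      # Display the database configuration parameters that are set for replication
      oc exec -it c-db2oltp-1636513131239517-db2u-0 -- su - db2inst1 -c "db2 get db cfg for BLUDB | grep -iE 'LOG_DDL_STMTS|LOG_APPL_INFO|DFT_SCHEMAS_DCC|EXTBL_LOCATION'"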
  2. Run the following commands to check the status of the replication deployment:
    oc get deployment | grep qrep
    oc get pod | grep qrep

    Use the Db2 instance ID to identify the matching replication deployment and pod names. Verify that the Db2 cluster is in the Ready state and that the replication add-on pod is in the Running state.
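
    For example, the following commands show the overall state of the Db2uCluster resource and confirm that the qrepdata storage was provisioned (a sketch; substitute your own deployment ID, and note that the column output can vary by release):

    oc get db2ucluster deployment-ID
    oc get pvc | grep qrepdata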

  3. Expose the replication services by using the qrep-expose-nodeports.sh sample script, which exposes them as NodePort services. Complete the following steps:
    Note: Replication services can also be exposed outside the cluster by using Routes, ClusterIP, LoadBalancer, or ExternalName services.
    1. Find the Db2 pod prefix where the replication pod is deployed:
      oc get po | grep db2
      In the following output, the replication pod is c-db2oltp-1636513131239517-qrep-7c7847968c-7pjs2 and the prefix is c-db2oltp-1636513131239517.
      c-db2oltp-1636513131239517-db2u-0                  1/1     Running           0          11m
      c-db2oltp-1636513131239517-etcd-0                  1/1     Running           0          11m
      c-db2oltp-1636513131239517-qrep-7c7847968c-7pjs2   1/1     Running           0          11m
      
    2. Copy the script from the replication container to the HAProxy server of the OpenShift cluster:
      oc cp repl_container_pod_name:opt/ibm/bludr/scripts/bin/qrep-expose-nodeports.sh qrep-expose-nodeports.sh

      where repl_container_pod_name is the name of the replication pod that you identified in step 3.a.
    3. Change the permissions of the copied file so that you can run the script:
      chmod +x qrep-expose-nodeports.sh
    4. Run the script for each replication container:
      ./qrep-expose-nodeports.sh db2u_cluster_instance_prefix

      From the example in Step 3.a, db2u_cluster_instance_prefix is c-db2oltp-1636513131239517.
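
      After the script completes, you can optionally confirm that the replication services are exposed as NodePort services (a sketch; the exact service names depend on your deployment):

      oc get svc | grep qrep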

What to do next

Activate replication.