Configuring shared log archiving for Db2 HADR

You can configure shared log archiving for deployments with multiple standby HADR databases in the same cluster and namespace. Shared log archiving removes the need to manually copy archived log files from an old primary database to the new primary database when they are requested by auxiliary standbys in a takeover scenario.

About this task

You can only configure shared log archiving for Db2uCluster custom resources that are in the same Kubernetes namespace. To learn more, see Log archiving configuration for Db2 high availability disaster recovery (HADR).

In a containerized environment, shared archive log storage is achieved by specifying a separate log archive storage area on the designated primary Db2 custom resource (CR) and referencing the existing persistent volume claim (PVC) in the standby CRs.

Procedure

To configure shared log archiving for Db2 HADR, complete one of the following procedures, depending on whether you are using a new or an existing deployment:

For new deployments:
  1. Add an archivelogs storage field to the CR.

    It is critical that the following are included in the storage specification:

    1. For storage type, use create.
    2. For access mode, use ReadWriteMany to ensure that the PVC can be shared between multiple pods.
    3. For storageClassName, use the name of the storage class that you are using for data storage.

    See Creating archive log storage for a new deployment of Db2 on Red Hat® OpenShift® for more information, but note the specific requirements above.

    The following example shows the syntax of the archive logs storage area section of a Db2 CR definition using ocs-storagecluster-cephfs for the Red Hat® OpenShift® Container Platform.

    spec:
      storage:
        - name: archivelogs
          type: "create"
          spec:
            storageClassName: "ocs-storagecluster-cephfs"
            accessModes:
              - ReadWriteMany
            resources:
              requests:
                storage: 30Gi
  2. Create the principal and auxiliary standby Db2uClusters, referencing the archive log PVC of the existing primary.

    The following example shows the syntax of the archive logs storage area section of a Db2 CR definition that uses an existing claim. Replace DB2_PRIMARY_NAME with the name of the designated primary Db2uCluster that you created in step 1.

    spec:
      storage:
        - claimName: c-<DB2_PRIMARY_NAME>-archivelogs
          name: archivelogs
          spec:
            resources: {}
          type: existing
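The claimName that the standby CRs reference follows the operator's PVC naming convention, c-<cluster-name>-archivelogs, as seen in the examples above. The following sketch derives that value from the primary's CR name; db2oltp-primary is a hypothetical example name, not a value from your cluster.

```shell
# Sketch (hypothetical example name): derive the archive log PVC name
# that the standby CRs must reference from the designated primary's
# Db2uCluster name. The operator prefixes PVC names with "c-".
DB2_PRIMARY_NAME=db2oltp-primary   # substitute your designated primary's CR name

ARCHIVE_PVC="c-${DB2_PRIMARY_NAME}-archivelogs"
echo "${ARCHIVE_PVC}"
```

Use the resulting value as the claimName in each standby CR's storage section.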

For existing deployments with HADR configured and running:

If HADR is already configured and running, the following steps detail how to create a separate archive log PVC for the designated primary Db2uCluster CR and how to reconfigure the archive log location for each database. This procedure requires scheduled downtime.

  1. Set environment variables with the appropriate values for your environment:
    DB2_CR_PRIMARY=<Name of primary Db2uCluster CR>
    DB2_CR_STANDBY=<Name of principal standby Db2uCluster CR>
    DB2_CR_AUX=<Name of auxiliary standby Db2uCluster CR>
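The later steps interpolate these variables into pod names, so set them in the same shell session that runs the oc commands. A minimal sketch, using hypothetical CR names:

```shell
# Hypothetical example values; substitute the Db2uCluster CR names
# from your own namespace.
DB2_CR_PRIMARY=db2oltp-primary
DB2_CR_STANDBY=db2oltp-standby
DB2_CR_AUX=db2oltp-aux

# The Db2 engine pod for a Db2uCluster named <CR> is c-<CR>-db2u-0,
# which is the pod that the manage_hadr commands below target.
PRIMARY_POD="c-${DB2_CR_PRIMARY}-db2u-0"
echo "${PRIMARY_POD}"
```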
  2. Stop HADR on all standby (principal and auxiliary) Db2 pods with the following commands:
    oc exec -it c-${DB2_CR_STANDBY}-db2u-0 -- manage_hadr -stop
    oc exec -it c-${DB2_CR_AUX}-db2u-0 -- manage_hadr -stop
  3. Stop HADR on the primary Db2 pod:
    oc exec -it c-${DB2_CR_PRIMARY}-db2u-0 -- manage_hadr -stop
  4. Edit the primary Db2uCluster CR and add a separate archivelogs storage area.

    It is critical that the following are included in the storage specification:

    1. For storage type, use create.
    2. For access mode, use ReadWriteMany to ensure that the PVC can be shared between multiple pods.
    3. For storageClassName, use the name of the storage class that you are using for data storage.
    The following example shows the syntax of the archive logs storage area section of a Db2 CR definition using ocs-storagecluster-cephfs for the Red Hat OpenShift Container Platform.
    spec:
      storage:
        - name: archivelogs
          type: "create"
          spec:
            storageClassName: "ocs-storagecluster-cephfs"
            accessModes:
              - ReadWriteMany
            resources:
              requests:
                storage: 30Gi
    Use the following command to edit the primary Db2uCluster CR:
    oc edit db2ucluster ${DB2_CR_PRIMARY}
  5. Wait for the primary Db2uCluster to reach the Ready state, and confirm that the archive logs PVC was created:
    oc get pvc -l formation_id=${DB2_CR_PRIMARY}
    The following is an example of the PVCs associated with the primary Db2uCluster. c-db2oltp-primary-archivelogs is the newly created PVC:
    # oc get pvc -l formation_id=db2oltp-primary
    NAME                            STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
    c-db2oltp-primary-archivelogs   Bound    pvc-21184a4a-e5d5-4170-a15b-6cb352311cd6   30Gi       RWX            managed-nfs-storage   5m
    c-db2oltp-primary-backup        Bound    pvc-2e62ca48-c871-46f7-bafb-8e64b894bb57   50Gi       RWX            managed-nfs-storage   172m
    c-db2oltp-primary-meta          Bound    pvc-ece8e06a-3843-4abe-a892-c11f6a62d158   100Gi      RWX            managed-nfs-storage   172m
    data-c-db2oltp-primary-db2u-0   Bound    pvc-419154fb-8b1c-4b98-a328-1355f69340d5   100Gi      RWO            managed-nfs-storage   170m
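The ACCESS MODES column for the new archivelogs PVC must read RWX, or the standby pods will not be able to mount it. The following sketch shows the check; the sample row from the listing above stands in for live output, and on a cluster you would feed the output of oc get pvc -l formation_id=${DB2_CR_PRIMARY} through the same test.

```shell
# Sketch: confirm the archive log PVC row reports the RWX
# (ReadWriteMany) access mode. The sample row below stands in for
# live `oc get pvc` output.
row='c-db2oltp-primary-archivelogs   Bound   pvc-21184a4a   30Gi   RWX   managed-nfs-storage   5m'

case "${row}" in
  *RWX*) result="shared access mode confirmed" ;;
  *)     result="PVC is not RWX; standby pods cannot share it" ;;
esac
echo "${result}"
```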
    
  6. Confirm that the archive log path has been updated in the ConfigMap containing Db2 configurations. This ConfigMap is named c-<DB2_CR>-db2dbconfig, and the LOGARCHMETH1 DISK value should be set to /mnt/logs/archive. If it is not correctly set, manually update the value.
    oc get cm c-${DB2_CR_PRIMARY}-db2dbconfig -ojsonpath='{.data.dbConfig}' | grep -i LOGARCHMETH1
    The following output is an example of the expected value:
    # oc get cm c-db2oltp-primary-db2dbconfig -ojsonpath='{.data.dbConfig}' | grep -i LOGARCHMETH1
    LOGARCHMETH1 DISK:/mnt/logs/archive
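The path embedded in the LOGARCHMETH1 value is the same mount that the chown step below operates on. A minimal sketch of extracting it, using the sample value above in place of live ConfigMap output:

```shell
# Sketch: extract the archive path from a LOGARCHMETH1 value. The
# sample value matches the expected ConfigMap output above.
logarchmeth1='LOGARCHMETH1 DISK:/mnt/logs/archive'

# Strip everything up to and including "DISK:".
archive_path="${logarchmeth1#*DISK:}"
echo "${archive_path}"
```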
  7. Exec into the primary Db2 pod by running the following command:
    oc exec -it c-${DB2_CR_PRIMARY}-db2u-0 -- bash
  8. Change the ownership of the new archive log storage mount to the Db2 instance owner and update the Db2 configuration settings.
    chown -R db2inst1:db2iadm1 /mnt/logs/archive/
    
    su - db2inst1 -c "/db2u/scripts/apply-db2cfg-settings.sh"
    Note: Ignore errors about Wolverine, because that high availability mechanism is disabled when HADR is enabled.
  9. Add a separate archivelogs storage area to the principal standby Db2uCluster, referencing the existing PVC from the primary. Use the following command to edit the Db2uCluster CR:
    oc edit db2ucluster ${DB2_CR_STANDBY}
    The following example shows the syntax of the archive logs storage area section of a Db2 CR definition using an existing claim.
    spec:
      storage:
        - claimName: c-<DB2_CR_PRIMARY>-archivelogs
          name: archivelogs
          spec:
            resources: {}
          type: existing
  10. Repeat steps 6-8 on the principal standby to update its archive log storage path.
  11. Repeat steps 9-10 for the auxiliary standby Db2uCluster, referenced by ${DB2_CR_AUX}.
  12. Restart HADR as standby on the principal and auxiliary standby Db2 engine pods:
    oc exec -it c-${DB2_CR_AUX}-db2u-0 -- manage_hadr -start_as standby
    oc exec -it c-${DB2_CR_STANDBY}-db2u-0 -- manage_hadr -start_as standby
  13. Restart HADR on the primary Db2 engine pod:
    oc exec -it c-${DB2_CR_PRIMARY}-db2u-0 -- manage_hadr -start_as primary
  14. Confirm that logs are synced between the primary and the standbys by running the manage_hadr -status command and observing the *_LOG_FILE,PAGE,POS values. These values are expected to match once the standbys have caught up, as in the following example:
    PRIMARY_LOG_FILE,PAGE,POS = S0000016.LOG, 0, 411676046
    STANDBY_LOG_FILE,PAGE,POS = S0000016.LOG, 0, 411676046
    STANDBY_REPLAY_LOG_FILE,PAGE,POS = S0000016.LOG, 0, 411676046
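The comparison above can be scripted. A minimal sketch, using the sample status lines in place of live output; on a cluster, capture the corresponding lines from oc exec -it c-${DB2_CR_PRIMARY}-db2u-0 -- manage_hadr -status.

```shell
# Sketch: compare primary and standby log positions from the
# `manage_hadr -status` output. Sample lines from the example above
# stand in for live output.
primary='PRIMARY_LOG_FILE,PAGE,POS = S0000016.LOG, 0, 411676046'
standby='STANDBY_LOG_FILE,PAGE,POS = S0000016.LOG, 0, 411676046'

# Strip everything up to "= " so only the LOG/PAGE/POS triplet remains.
if [ "${primary#*= }" = "${standby#*= }" ]; then
  status="in sync"
else
  status="standby lagging"
fi
echo "${status}"
```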

    If the standbys are not in sync, check the Db2 diagnostic logs (db2diag.log) for errors about missing archive logs. Copy the missing archive logs from the old storage path /mnt/bludata0/db2/archive_log to the new storage path /mnt/logs/archive/. These logs might exist only on the current primary, but if takeovers occurred previously, they might also have been archived separately on a standby.

    The following is an example of a command that copies the archive logs from the old path to the new path from within the Db2 engine pod:

    cp -r /mnt/bludata0/db2/archive_log/db2inst1/BLUDB/NODE0000/LOGSTREAM0000/C0000001 /mnt/logs/archive/db2inst1/BLUDB/NODE0000/LOGSTREAM0000