Using the built-in Db2 etcd store (non-production, single OpenShift cluster environments only)

Db2 on Cloud Pak for Data includes a built-in etcd store. In development-only environments where all databases run on the same OpenShift cluster, you can use the built-in etcd store from one of the deployments as the etcd endpoint for governor and HADR.

About this task

Important: Do not use the built-in etcd store in production environments.

Using the built-in etcd store with Db2 on Cloud Pak for Data can affect how Db2 is deployed. If the deployment is on a dedicated node, the etcd pod must be detached and moved to a different node, and that node must not be the one where the primary or standby deployment is running. This restriction ensures that etcd remains available if either of those nodes is shut down.

Procedure

To move the etcd pod, complete the following steps:

  1. Label and taint the node that you want etcd to run on.
    This node must be different from the nodes where the Db2 primary and standby deployments are running. See Setting up dedicated nodes for your Db2 deployment for the required steps, but use a different label.
  2. Scale down the etcd StatefulSet to 0:
    oc scale sts c-db2oltp-1573141715-etcd --replicas=0
  3. Set the LABEL_KEY and LABEL_VALUE environment variables to the label key and value that you used to label and taint the node in step 1.
    For example:
    LABEL_KEY="icp4data"
    LABEL_VALUE="db2-etcd"
  4. Create a file to patch the etcd StatefulSet with the new label and taint:
    cat <<EOF > patch_db2_etcd_sts.yaml
    spec:
      template:
        spec:
          affinity:
            nodeAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
                nodeSelectorTerms:
                - matchExpressions:
                  - key: ${LABEL_KEY}
                    operator: In
                    values:
                    - ${LABEL_VALUE}
            podAntiAffinity: {}
          tolerations:
          - effect: NoSchedule
            key: ${LABEL_KEY}
            operator: Equal
            value: ${LABEL_VALUE}
    EOF
  5. Apply the patch to the etcd StatefulSet:
    oc patch sts c-db2oltp-1573141715-etcd --patch-file patch_db2_etcd_sts.yaml
  6. Scale up the etcd StatefulSet to 1:
    oc scale sts c-db2oltp-1573141715-etcd --replicas=1
  7. Determine the etcd endpoint:
    If the databases are all in the same OpenShift project:
    From the infrastructure or master node, discover the etcd service endpoint of the primary deployment. For example:
    oc get svc | grep etcd
    db2oltp-primary-etcd ClusterIP None <none> 2380/TCP,2379/TCP 5h
    db2oltp-standby-etcd ClusterIP None <none> 2380/TCP,2379/TCP 4h

    Assuming that your primary deployment name is db2oltp-primary, the etcd endpoint to use is db2oltp-primary-etcd:2379 (port 2379 is the etcd client port).

    If the databases are in different OpenShift projects:
    From the infrastructure or master node, discover the etcd service endpoint of the primary deployment in its OpenShift project. For example:
    oc get svc | grep etcd
    db2oltp-primary-etcd ClusterIP None <none> 2380/TCP,2379/TCP 5h

    Assuming that your primary deployment name is db2oltp-primary and the OpenShift project is zen, the etcd endpoint to use is db2oltp-primary-etcd.zen:2379 (port 2379 is the etcd client port).
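The endpoint naming rules in step 7 amount to plain string construction. The following sketch assembles both forms; the deployment name db2oltp-primary and project zen are the example values from that step, so substitute your own:

```shell
# Sketch: assemble the etcd endpoint to pass to governor and HADR.
# PRIMARY_DEPLOYMENT and PROJECT are the example values from step 7.
PRIMARY_DEPLOYMENT="db2oltp-primary"
PROJECT="zen"
ETCD_CLIENT_PORT=2379   # etcd client port (2380 is the peer port)

# All databases in the same OpenShift project: the service name resolves directly.
SAME_PROJECT_ENDPOINT="${PRIMARY_DEPLOYMENT}-etcd:${ETCD_CLIENT_PORT}"

# Databases in different projects: qualify the service with its project name.
CROSS_PROJECT_ENDPOINT="${PRIMARY_DEPLOYMENT}-etcd.${PROJECT}:${ETCD_CLIENT_PORT}"

echo "${SAME_PROJECT_ENDPOINT}"    # db2oltp-primary-etcd:2379
echo "${CROSS_PROJECT_ENDPOINT}"   # db2oltp-primary-etcd.zen:2379
```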
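One detail about the patch file in step 4: the heredoc delimiter EOF is unquoted, so the shell substitutes ${LABEL_KEY} and ${LABEL_VALUE} when the file is written. A minimal sketch of this expansion behavior, using the example values from step 3 and a hypothetical scratch file:

```shell
# Minimal sketch: an unquoted heredoc delimiter expands shell variables,
# so the generated file contains literal values, not ${...} placeholders.
# /tmp/patch_fragment.yaml is an illustrative scratch path.
LABEL_KEY="icp4data"
LABEL_VALUE="db2-etcd"

cat <<EOF > /tmp/patch_fragment.yaml
key: ${LABEL_KEY}
value: ${LABEL_VALUE}
EOF

cat /tmp/patch_fragment.yaml
# key: icp4data
# value: db2-etcd
```

If the delimiter were quoted ('EOF'), the variables would be written literally and the patched StatefulSet would not match the labeled node.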