Deploy etcd packaged with the Db2 installation

Important: IBM Cloud Pak for Data Version 4.7 will reach end of support (EOS) on 31 July 2025. For more information, see the Discontinuance of service announcement for IBM Cloud Pak for Data Version 4.X.

Upgrade to IBM Software Hub Version 5.1 before IBM Cloud Pak for Data Version 4.7 reaches end of support. For more information, see Upgrading IBM Software Hub in the IBM Software Hub Version 5.1 documentation.

You can use a YAML file to configure the etcd store that comes packaged with Db2 for your HADR configuration on a Red Hat OpenShift cluster.

About this task

The etcd store enables automated failover.

Procedure

  1. Set the following environment variables:
    1. Set the RWO_STORAGE_CLASS environment variable to the RWO storage class for the volume that etcd uses:
      export RWO_STORAGE_CLASS=<storage_volume>
    2. Set the DB2UCLUSTER_PRIMARY environment variable to the name of the cluster with your Db2 installation:
      export DB2UCLUSTER_PRIMARY=<Db2uCluster_name>
    3. Set the ETCD_ID environment variable to the name to use for the etcd service and StatefulSet:
      export ETCD_ID=<ETCD_service_and_StatefulSet_name>
    4. Set other environment variables to generate the required etcd YAML file.
      Set DB2U_SA to the service account that is used for the primary Db2U deployment:
      export DB2U_SA=account-${PROJECT_CPD_INST_OPERANDS}-${DB2UCLUSTER_PRIMARY}
      Set DB2U_VERSION to the Db2 version that the primary Db2uCluster uses:
      export DB2U_VERSION=$(oc get db2ucluster ${DB2UCLUSTER_PRIMARY} -n ${PROJECT_CPD_INST_OPERANDS} -ojsonpath="{.status.version}")
      Set ETCD_IMAGE to the etcd image for your release:
      export ETCD_IMAGE=$(oc get configmap db2u-release -n ${PROJECT_CPD_INST_OPERATORS} -ojsonpath="{.data.json}" | jq -r ".databases.db2u.\"${DB2U_VERSION}\".images.etcd")
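    Before you generate the YAML file, you can confirm that the variables resolved. The following sketch uses purely hypothetical placeholder values; your storage class, project, and cluster names will differ, and DB2U_VERSION and ETCD_IMAGE additionally require access to the cluster:

    ```shell
    # Hypothetical example values -- replace with the details of your own environment.
    export PROJECT_CPD_INST_OPERANDS=cpd-instance
    export RWO_STORAGE_CLASS=ocs-storagecluster-ceph-rbd
    export DB2UCLUSTER_PRIMARY=db2oltp-primary
    export ETCD_ID=db2u-etcd
    export DB2U_SA=account-${PROJECT_CPD_INST_OPERANDS}-${DB2UCLUSTER_PRIMARY}

    # Verify that every variable is non-empty before generating the YAML file.
    for var in RWO_STORAGE_CLASS DB2UCLUSTER_PRIMARY ETCD_ID DB2U_SA; do
      eval "val=\${$var}"
      [ -n "${val}" ] || { echo "ERROR: $var is not set" >&2; exit 1; }
      echo "$var=${val}"
    done
    ```

    Because the heredoc in the next step expands these variables when the file is written, an unset variable would silently produce an invalid YAML file, so this check is worth running first.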
  2. Run the following command to create the YAML file that defines the etcd resources:
    The YAML file does the following:
    • Deploys the etcd store as a StatefulSet with three replicas.
    • Defines a persistent volume for each replica, by using volumeClaimTemplates, to store the etcd data.
    • Creates a headless service endpoint for the etcd pods.
    
    cat << EOF > ext_etcd.yaml
    ---
    apiVersion: v1
    kind: Service
    metadata:
      labels:
        app: ${ETCD_ID}
      name: ${ETCD_ID}
      namespace: ${PROJECT_CPD_INST_OPERANDS}
    spec:
      clusterIP: None
      clusterIPs:
      - None
      ipFamilies:
      - IPv4
      ipFamilyPolicy: SingleStack
      ports:
      - name: etcd
        port: 2379
        protocol: TCP
        targetPort: 2379
      - name: peer
        port: 2380
        protocol: TCP
        targetPort: 2380
      publishNotReadyAddresses: true
      selector:
        app: ${ETCD_ID}
      sessionAffinity: None
      type: ClusterIP
    status:
      loadBalancer: {}
    ---
    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      labels:
        app: ${ETCD_ID}
      name: ${ETCD_ID}
      namespace: ${PROJECT_CPD_INST_OPERANDS}
    spec:
      podManagementPolicy: Parallel
      replicas: 3
      revisionHistoryLimit: 10
      selector:
        matchLabels:
          app: ${ETCD_ID}
      serviceName: ${ETCD_ID}
      template:
        metadata:
          labels:
            app: ${ETCD_ID}
        spec:
         # affinity:                                              # nodeSelector can be used to deploy etcd on a set of nodes
         #   nodeAffinity:                                        # Label the nodes in the nodegroup for etcd and use the labels 
         #     requiredDuringSchedulingIgnoredDuringExecution:    # in this yaml
         #       nodeSelectorTerms:
         #       - matchExpressions:
         #         - key: db2u
         #           operator: In
         #           values:
         #           - etcd
          containers:
          - command:
            - /scripts/start.sh
            env:
            - name: SERVICENAME
              value: ${ETCD_ID}
            - name: role
              value: etcd
            - name: INITIAL_CLUSTER_SIZE
              value: "3"
            - name: SET_NAME
              value: ${ETCD_ID}
            - name: RUNTIME_ENV
              value: LOCAL
            - name: ETCDCTL_API
              value: "3"
            image: ${ETCD_IMAGE}
            imagePullPolicy: IfNotPresent
            lifecycle:
              preStop:
                exec:
                  command:
                  - /scripts/stop.sh
            livenessProbe:
              failureThreshold: 5
              initialDelaySeconds: 30
              periodSeconds: 30
              successThreshold: 1
              tcpSocket:
                port: 2379
              timeoutSeconds: 1
            name: etcd
            readinessProbe:
              failureThreshold: 5
              initialDelaySeconds: 15
              periodSeconds: 15
              successThreshold: 1
              tcpSocket:
                port: 2379
              timeoutSeconds: 3
            resources:
              limits:
                cpu: 500m
                ephemeral-storage: 10Mi
                memory: 512Mi
              requests:
                cpu: 100m
                ephemeral-storage: 5Mi
                memory: 256Mi
            securityContext:
              allowPrivilegeEscalation: false
              capabilities:
                drop:
                - ALL
              privileged: false
              readOnlyRootFilesystem: false
            terminationMessagePath: /dev/termination-log
            terminationMessagePolicy: File
            volumeMounts:
            - mountPath: /persistence
              name: etcd
          dnsPolicy: ClusterFirst
          restartPolicy: Always
          schedulerName: default-scheduler
          securityContext:
            runAsNonRoot: true
            runAsUser: 1001
            runAsGroup: 2001
            fsGroup: 2001
          serviceAccount: ${DB2U_SA}            # !!!!! replace the serviceAccount and serviceAccountName
          serviceAccountName: ${DB2U_SA}        # !!!!!  with the one created with your db2oltp deployment
          terminationGracePeriodSeconds: 120
         # tolerations:                                      # Use tolerations if you're planning to taint the nodes created
         # - effect: NoSchedule                              # in the etcd nodegroup
         #   key: db2u
         #   operator: Equal
         #   value: etcd
      volumeClaimTemplates:
      - apiVersion: v1
        kind: PersistentVolumeClaim
        metadata:
          annotations:
            app: ${ETCD_ID}
          name: etcd
        spec:
          accessModes:
          - ReadWriteOnce
          resources:
            requests:
              storage: 5Gi
          storageClassName: ${RWO_STORAGE_CLASS}              # Storage class for the volume used for etcd. Replace with any RWO storage class
          volumeMode: Filesystem
      updateStrategy:
        rollingUpdate:
          partition: 0
        type: RollingUpdate
    EOF
    
  3. Optional: To deploy etcd on dedicated nodes, label the nodes, then uncomment and update the affinity and tolerations sections of the YAML file to match your node labels and taints.
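    For example, if you label the etcd nodes with db2u=etcd and taint them with the same key, as the comments in the generated YAML file suggest, the uncommented sections would read as follows (a sketch; the db2u=etcd key and value are illustrative):

    ```yaml
    # Uncommented affinity and tolerations for the StatefulSet pod template,
    # assuming the dedicated nodes are labeled (and tainted) with db2u=etcd.
    affinity:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: db2u
              operator: In
              values:
              - etcd
    tolerations:
    - effect: NoSchedule
      key: db2u
      operator: Equal
      value: etcd
    ```

    You can apply the matching label and taint with oc label node <node_name> db2u=etcd and oc adm taint node <node_name> db2u=etcd:NoSchedule.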
  4. Create the resources by running the following command:
    oc apply -f ext_etcd.yaml
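    After the resources are created, you can watch the rollout and check that all three replicas are healthy. The etcdctl invocation is a sketch; it assumes that the etcd container image ships the etcdctl client (the ETCDCTL_API=3 variable is already set in the pod spec):

    ```shell
    # Wait for the three etcd replicas to become ready.
    oc rollout status statefulset/${ETCD_ID} -n ${PROJECT_CPD_INST_OPERANDS}

    # List the etcd pods that the StatefulSet created.
    oc get pods -l app=${ETCD_ID} -n ${PROJECT_CPD_INST_OPERANDS}

    # Sketch: check endpoint health from inside the first replica
    # (assumes etcdctl is available in the etcd container).
    oc exec -n ${PROJECT_CPD_INST_OPERANDS} ${ETCD_ID}-0 -c etcd -- etcdctl endpoint health
    ```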