Deploy etcd packaged with the Db2 Warehouse installation

You can use a YAML file to configure the etcd store that comes packaged with Db2 Warehouse for your HADR configuration on a Red Hat OpenShift cluster.

About this task

The etcd store enables automated failover for your HADR configuration.

Procedure


  1. Set the following environment variables to generate the YAML file with the appropriate values:
      RWO_STORAGE_CLASS="<RWO storage class for the volume used for etcd>"
      DB2UCLUSTER_PRIMARY="<Primary Db2uCluster CR name>"
      ETCD_ID="<New etcd service and StatefulSet name>"
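      For example (these values are placeholders; substitute the storage class and names from your own environment):
      RWO_STORAGE_CLASS="ocs-storagecluster-ceph-rbd"
      DB2UCLUSTER_PRIMARY="db2wh-primary"
      ETCD_ID="db2wh-etcd"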
  2. Set the following additional environment variables, which are derived from your existing Db2 Warehouse deployment. This step assumes that the PROJECT_CPD_INSTANCE and PROJECT_CPFS_OPS environment variables are already set:
      DB2U_SA=account-${PROJECT_CPD_INSTANCE}-${DB2UCLUSTER_PRIMARY}
      DB2U_VERSION=$(oc get db2ucluster ${DB2UCLUSTER_PRIMARY} -n ${PROJECT_CPD_INSTANCE} -ojsonpath="{.status.version}")
      ETCD_IMAGE=$(oc get configmap db2u-release -n ${PROJECT_CPFS_OPS} -ojsonpath="{.data.json}" | jq -r ".databases.db2u.\"${DB2U_VERSION}\".images.etcd")
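      Before you generate the YAML file, you can confirm that the derived values resolved; empty output means a lookup failed:
      echo "DB2U_SA=${DB2U_SA}"
      echo "DB2U_VERSION=${DB2U_VERSION}"
      echo "ETCD_IMAGE=${ETCD_IMAGE}"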
  3. The following YAML file creates a StatefulSet with three replicas for deploying the etcd store, a persistent volume claim for each replica (through volumeClaimTemplates) to store the etcd data, and a headless service endpoint:
    
    cat << EOF > ext_etcd.yaml
    ---
    apiVersion: v1
    kind: Service
    metadata:
      labels:
        app: ${ETCD_ID}
      name: ${ETCD_ID}
      namespace: ${PROJECT_CPD_INSTANCE}
    spec:
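      # clusterIP: None makes this a headless service, so each etcd replica
      # gets a stable per-pod DNS name for peer discovery.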
      clusterIP: None
      clusterIPs:
      - None
      ipFamilies:
      - IPv4
      ipFamilyPolicy: SingleStack
      ports:
      - name: etcd
        port: 2379
        protocol: TCP
        targetPort: 2379
      - name: peer
        port: 2380
        protocol: TCP
        targetPort: 2380
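      # Publish pod DNS records before the pods pass readiness checks, so the
      # etcd members can find each other while the cluster is bootstrapping.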
      publishNotReadyAddresses: true
      selector:
        app: ${ETCD_ID}
      sessionAffinity: None
      type: ClusterIP
    ---
    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      labels:
        app: ${ETCD_ID}
      name: ${ETCD_ID}
      namespace: ${PROJECT_CPD_INSTANCE}
    spec:
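      # Parallel pod management starts all three replicas at once, which etcd
      # needs in order to reach quorum during the initial bootstrap.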
      podManagementPolicy: Parallel
      replicas: 3
      revisionHistoryLimit: 10
      selector:
        matchLabels:
          app: ${ETCD_ID}
      serviceName: ${ETCD_ID}
      template:
        metadata:
          labels:
            app: ${ETCD_ID}
        spec:
          # affinity:                                            # Node affinity can be used to deploy etcd
          #   nodeAffinity:                                      # on a dedicated set of nodes. Label the nodes
          #     requiredDuringSchedulingIgnoredDuringExecution:  # in the etcd nodegroup and use those labels
          #       nodeSelectorTerms:                             # in this YAML.
          #       - matchExpressions:
          #         - key: db2u
          #           operator: In
          #           values:
          #           - etcd
          containers:
          - command:
            - /scripts/start.sh
            env:
            - name: SERVICENAME
              value: ${ETCD_ID}
            - name: role
              value: etcd
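            # Keep INITIAL_CLUSTER_SIZE in sync with .spec.replicas above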
            - name: INITIAL_CLUSTER_SIZE
              value: "3"
            - name: SET_NAME
              value: ${ETCD_ID}
            - name: RUNTIME_ENV
              value: LOCAL
            - name: ETCDCTL_API
              value: "3"
            image: ${ETCD_IMAGE}
            imagePullPolicy: IfNotPresent
            lifecycle:
              preStop:
                exec:
                  command:
                  - /scripts/stop.sh
            livenessProbe:
              failureThreshold: 5
              initialDelaySeconds: 30
              periodSeconds: 30
              successThreshold: 1
              tcpSocket:
                port: 2379
              timeoutSeconds: 1
            name: etcd
            readinessProbe:
              failureThreshold: 5
              initialDelaySeconds: 15
              periodSeconds: 15
              successThreshold: 1
              tcpSocket:
                port: 2379
              timeoutSeconds: 3
            resources:
              limits:
                cpu: 500m
                ephemeral-storage: 10Mi
                memory: 512Mi
              requests:
                cpu: 100m
                ephemeral-storage: 5Mi
                memory: 256Mi
            securityContext:
              allowPrivilegeEscalation: false
              capabilities:
                drop:
                - ALL
              privileged: false
              readOnlyRootFilesystem: false
            terminationMessagePath: /dev/termination-log
            terminationMessagePolicy: File
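            # Persist etcd data on the per-replica PVC defined in volumeClaimTemplates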
            volumeMounts:
            - mountPath: /persistence
              name: etcd
          dnsPolicy: ClusterFirst
          restartPolicy: Always
          schedulerName: default-scheduler
          securityContext:
            runAsNonRoot: true
            runAsUser: 1001
            runAsGroup: 2001
            fsGroup: 2001
          serviceAccount: ${DB2U_SA}              # serviceAccount and serviceAccountName must match the
          serviceAccountName: ${DB2U_SA}          # service account created with your Db2 Warehouse deployment
          terminationGracePeriodSeconds: 120
          # tolerations:                                    # Use tolerations if you plan to taint the nodes
          # - effect: NoSchedule                            # in the etcd nodegroup.
          #   key: db2u
          #   operator: Equal
          #   value: etcd
      volumeClaimTemplates:
      - apiVersion: v1
        kind: PersistentVolumeClaim
        metadata:
          annotations:
            app: ${ETCD_ID}
          name: etcd
        spec:
          accessModes:
          - ReadWriteOnce
          resources:
            requests:
              storage: 5Gi
          storageClassName: ${RWO_STORAGE_CLASS}            # Any RWO storage class; set through RWO_STORAGE_CLASS
          volumeMode: Filesystem
      updateStrategy:
        rollingUpdate:
          partition: 0
        type: RollingUpdate
    EOF
    
  4. Review the YAML file. To deploy etcd on dedicated nodes, uncomment the affinity and tolerations sections and add the required node labels, as in the example below.
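    For example, to dedicate nodes to etcd, label them to match the commented affinity block and, if you plan to use the commented tolerations, taint them as well (the node names are placeholders):

    oc label node worker-etcd-0 worker-etcd-1 worker-etcd-2 db2u=etcd
    oc adm taint nodes worker-etcd-0 worker-etcd-1 worker-etcd-2 db2u=etcd:NoSchedule

    You can also validate the generated manifest without creating any resources:

    oc apply -f ext_etcd.yaml --dry-run=client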
  5. Create the resources by running the following command:
    oc apply -f ext_etcd.yaml
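
    After the pods start, confirm that all three replicas are running and ready:

    oc get pods -l app=${ETCD_ID} -n ${PROJECT_CPD_INSTANCE}

    As a further sketch of a health check (this assumes etcdctl is on the PATH in the etcd image and that client traffic does not use TLS), you can query one member directly:

    oc exec -n ${PROJECT_CPD_INSTANCE} ${ETCD_ID}-0 -- etcdctl --endpoints=http://localhost:2379 endpoint health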