Deploy etcd packaged with the Db2 installation
You can use a YAML file to configure the etcd store that comes packaged with Db2 for your HADR configuration on a Red Hat OpenShift cluster.
About this task
The etcd store enables automated failover.
Procedure
- Set the following environment variables:
  - Set the RWO_STORAGE_CLASS environment variable to the storage class of the volume that is used for etcd:

    ```
    export RWO_STORAGE_CLASS=<storage_volume>
    ```
  - Set the DB2UCLUSTER_PRIMARY environment variable to the name of the Db2uCluster for your Db2 installation:

    ```
    export DB2UCLUSTER_PRIMARY=<Db2uCluster_name>
    ```
  - Set the ETCD_ID environment variable to the name of the etcd service and StatefulSet:

    ```
    export ETCD_ID=<ETCD_service_and_StatefulSet_name>
    ```
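Because the later steps substitute these variables into a generated YAML file, an empty variable produces a broken manifest. The following optional sanity check is a sketch (`require_set` is a hypothetical helper, not part of the procedure), with example values that you would replace with the values for your cluster:

```shell
# Hypothetical helper: confirm that each named variable is set and non-empty
# before the YAML file is generated.
require_set() {
  for _name in "$@"; do
    if [ -z "$(eval "printf '%s' \"\${${_name}}\"")" ]; then
      echo "ERROR: ${_name} is not set" >&2
      return 1
    fi
  done
  echo "all variables set"
}

# Example values for illustration only; use the values for your cluster.
RWO_STORAGE_CLASS=ocs-storagecluster-ceph-rbd
DB2UCLUSTER_PRIMARY=db2oltp-1234567890
ETCD_ID=db2u-etcd

require_set RWO_STORAGE_CLASS DB2UCLUSTER_PRIMARY ETCD_ID
```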
- Set other environment variables that are needed to generate the required etcd YAML file:
  - Set DB2U_SA to the service account that is used for the primary Db2U deployment.
  - Set DB2U_VERSION to the Db2 version that is used for the primary Db2uCluster.
  - Set ETCD_IMAGE to the etcd image for your release.

  ```
  DB2U_SA=account-${PROJECT_CPD_INST_OPERANDS}-${DB2UCLUSTER_PRIMARY}
  DB2U_VERSION=$(oc get db2ucluster ${DB2UCLUSTER_PRIMARY} -n ${PROJECT_CPD_INST_OPERANDS} -ojsonpath="{.status.version}")
  ETCD_IMAGE=$(oc get configmap db2u-release -n ${PROJECT_CPD_INST_OPERATORS} -ojsonpath="{.data.json}" | jq -r ".databases.db2u.\"${DB2U_VERSION}\".images.etcd")
  ```
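The ETCD_IMAGE lookup reads the release metadata JSON from the db2u-release ConfigMap and selects the etcd image for the detected Db2 version. The following sketch applies the same jq path expression to a mock payload; the JSON and the version and image strings below are invented for illustration, while on a live cluster the payload comes from the `oc get configmap` command shown above:

```shell
# Mock excerpt of the db2u-release JSON payload (illustrative values only).
mock_json='{"databases":{"db2u":{"s11.5.8.0":{"images":{"etcd":"icr.io/db2u/etcd:3.5"}}}}}'
DB2U_VERSION="s11.5.8.0"

# Same jq path expression as in the procedure, applied to the mock payload:
# the version string is a key in the JSON, so it is quoted inside the path.
ETCD_IMAGE=$(printf '%s' "$mock_json" | jq -r ".databases.db2u.\"${DB2U_VERSION}\".images.etcd")
echo "$ETCD_IMAGE"
```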
- Run the following command to generate the ext_etcd.yaml file. The YAML file defines the following resources:
  - A headless service endpoint for the etcd cluster.
  - A StatefulSet with three replicas that deploys the etcd store.
  - A persistent volume for the etcd data, requested through volumeClaimTemplates.
  ```
  cat << EOF > ext_etcd.yaml
  ---
  apiVersion: v1
  kind: Service
  metadata:
    labels:
      app: ${ETCD_ID}
    name: ${ETCD_ID}
    namespace: ${PROJECT_CPD_INST_OPERANDS}
  spec:
    clusterIP: None
    clusterIPs:
    - None
    ipFamilies:
    - IPv4
    ipFamilyPolicy: SingleStack
    ports:
    - name: etcd
      port: 2379
      protocol: TCP
      targetPort: 2379
    - name: peer
      port: 2380
      protocol: TCP
      targetPort: 2380
    publishNotReadyAddresses: true
    selector:
      app: ${ETCD_ID}
    sessionAffinity: None
    type: ClusterIP
  status:
    loadBalancer: {}
  ---
  apiVersion: apps/v1
  kind: StatefulSet
  metadata:
    labels:
      app: ${ETCD_ID}
    name: ${ETCD_ID}
    namespace: ${PROJECT_CPD_INST_OPERANDS}
  spec:
    podManagementPolicy: Parallel
    replicas: 3
    revisionHistoryLimit: 10
    selector:
      matchLabels:
        app: ${ETCD_ID}
    serviceName: ${ETCD_ID}
    template:
      metadata:
        labels:
          app: ${ETCD_ID}
      spec:
        # affinity:                                            # nodeSelector can be used to deploy etcd on a set of nodes
        #   nodeAffinity:                                      # Label the nodes in the nodegroup for etcd and use the labels
        #     requiredDuringSchedulingIgnoredDuringExecution:  # in this yaml
        #       nodeSelectorTerms:
        #       - matchExpressions:
        #         - key: db2u
        #           operator: In
        #           values:
        #           - etcd
        containers:
        - command:
          - /scripts/start.sh
          env:
          - name: SERVICENAME
            value: ${ETCD_ID}
          - name: role
            value: etcd
          - name: INITIAL_CLUSTER_SIZE
            value: "3"
          - name: SET_NAME
            value: ${ETCD_ID}
          - name: RUNTIME_ENV
            value: LOCAL
          - name: ETCDCTL_API
            value: "3"
          image: ${ETCD_IMAGE}
          imagePullPolicy: IfNotPresent
          lifecycle:
            preStop:
              exec:
                command:
                - /scripts/stop.sh
          livenessProbe:
            failureThreshold: 5
            initialDelaySeconds: 30
            periodSeconds: 30
            successThreshold: 1
            tcpSocket:
              port: 2379
            timeoutSeconds: 1
          name: etcd
          readinessProbe:
            failureThreshold: 5
            initialDelaySeconds: 15
            periodSeconds: 15
            successThreshold: 1
            tcpSocket:
              port: 2379
            timeoutSeconds: 3
          resources:
            limits:
              cpu: 500m
              ephemeral-storage: 10Mi
              memory: 512Mi
            requests:
              cpu: 100m
              ephemeral-storage: 5Mi
              memory: 256Mi
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop:
              - ALL
            privileged: false
            readOnlyRootFilesystem: false
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          volumeMounts:
          - mountPath: /persistence
            name: etcd
        dnsPolicy: ClusterFirst
        restartPolicy: Always
        schedulerName: default-scheduler
        securityContext:
          runAsNonRoot: true
          runAsUser: 1001
          runAsGroup: 2001
          fsGroup: 2001
        serviceAccount: ${DB2U_SA}      # !!!!! replace the serviceAccount and serviceAccountName
        serviceAccountName: ${DB2U_SA}  # !!!!! with the one created with your db2oltp deployment
        terminationGracePeriodSeconds: 120
        # tolerations:                  # Use tolerations if you're planning to taint the nodes created
        # - effect: NoSchedule          # in the etcd nodegroup
        #   key: db2u
        #   operator: Equal
        #   value: etcd
    volumeClaimTemplates:
    - apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        annotations:
          app: ${ETCD_ID}
        name: etcd
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 5Gi
        storageClassName: ${RWO_STORAGE_CLASS}  # Storage class for the volume used for etcd. Replace with any RWO storage class
        volumeMode: Filesystem
    updateStrategy:
      rollingUpdate:
        partition: 0
      type: RollingUpdate
  EOF
  ```
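The command writes the manifest with an unquoted heredoc delimiter (`EOF` rather than `'EOF'`), so the shell substitutes the `${...}` variables while it writes ext_etcd.yaml. A minimal sketch of that behavior, using a throwaway file and an example value:

```shell
ETCD_ID=db2u-etcd   # example value for illustration only

# Unquoted EOF: ${ETCD_ID} is expanded as the file is written, so the
# generated file contains the concrete value, not the variable reference.
cat << EOF > /tmp/heredoc-demo.yaml
metadata:
  name: ${ETCD_ID}
EOF

grep 'name: db2u-etcd' /tmp/heredoc-demo.yaml
```

If the delimiter were quoted (`<< 'EOF'`), the file would contain the literal text `${ETCD_ID}` and the manifest would be invalid.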
- Optional: To deploy etcd on dedicated nodes, add the required labels to those nodes, then uncomment and update the nodeSelector (affinity) and tolerations fields in the YAML file.
- Create the resources by running the following command:

  ```
  oc apply -f ext_etcd.yaml
  ```
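After the apply, you can confirm that all three etcd replicas reach the Ready state. The sketch below uses a hypothetical helper (`count_ready`) that counts pods whose READY column reports all containers ready; on a live cluster you would pipe the output of `oc get pods -l app=${ETCD_ID} -n ${PROJECT_CPD_INST_OPERANDS} --no-headers` into it. The sample lines are illustrative only:

```shell
# Hypothetical helper: count pods whose READY column (e.g. 1/1) shows all
# containers ready. Field 2 of `oc get pods` output is "<ready>/<total>".
count_ready() {
  awk '{ split($2, a, "/"); if (a[2] != "" && a[1] == a[2]) n++ } END { print n+0 }'
}

# Illustrative sample of `oc get pods --no-headers` output:
sample='db2u-etcd-0   1/1   Running   0   5m
db2u-etcd-1   1/1   Running   0   5m
db2u-etcd-2   0/1   Pending   0   5m'

printf '%s\n' "$sample" | count_ready   # prints 2
```

For the StatefulSet above, the deployment is complete when the count reaches 3 (the configured replica count).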