You can provision Db2® Big SQL on dedicated Red Hat® OpenShift® worker nodes.
About this task
Do this advanced configuration only when it is necessary. Constraining Db2 Big SQL to a specific set of nodes can complicate activities, including upgrades and maintenance at the OpenShift level. In general, it is recommended that you allow Db2 Big SQL to be scheduled across all available OpenShift worker nodes.
This task assumes that all Db2 Big SQL instances share the same set of dedicated nodes; that is, the nodes are labeled icp4data=database-bigsql.
Procedure
- Log in to your OpenShift cluster as a project administrator:
oc login <OpenShift_URL>:<port>
- Retrieve the name of the worker node that you want to dedicate to Db2 Big SQL:
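For example, you can list the worker nodes and their labels with the following command. The node-role.kubernetes.io/worker label is the standard OpenShift worker role label; verify that it applies to your cluster:
oc get nodes -l node-role.kubernetes.io/worker --show-labels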
- Label and taint the nodes:
oc adm taint node <node_name> icp4data=database-bigsql:PreferNoSchedule
oc label node <node_name> icp4data=database-bigsql
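Optionally, confirm that the taint and label were applied to the node (an extra check, not part of the original procedure):
oc describe node <node_name> | grep -E 'Taints|icp4data'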
- Change to the project where the Cloud Pak for Data control plane is installed:
oc project ${PROJECT_CPD_INSTANCE}
- Identify the Cloud Pak for Data pod that holds the Db2 Big SQL Custom Resource (CR) template:
pod=$(oc get pod -o custom-columns=NAME:.metadata.name --no-headers | grep ibm-nginx-[0-9a-h]*-[0-9a-z]* | head -n 1)
- On the local host, back up the Db2 Big SQL CR template from the pod:
oc cp $pod:/user-home/_zen-addons/bigsql/7.4.4/yamls/bigsql-cr.yaml /tmp/bigsql-cr.yaml
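If you later need to revert your changes, you can copy the backup from the local host back into the pod (the reverse of the command above):
oc cp /tmp/bigsql-cr.yaml $pod:/user-home/_zen-addons/bigsql/7.4.4/yamls/bigsql-cr.yaml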
- On the pod, update the Db2 Big SQL CR template file:
oc exec -it $pod -- sed -i -e 's/^spec:/spec:\n  affinity:\n    nodeAffinity:\n      requiredDuringSchedulingIgnoredDuringExecution:\n        nodeSelectorTerms:\n        - matchExpressions:\n          - key: icp4data\n            operator: In\n            values:\n            - database-bigsql\n  tolerations:\n  - key: "icp4data"\n    operator: "Equal"\n    value: "database-bigsql"\n    effect: "PreferNoSchedule"/' /user-home/_zen-addons/bigsql/7.4.4/yamls/bigsql-cr.yaml
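Optionally, verify that the affinity and tolerations sections were inserted into the template (an extra check; the file path and the $pod variable come from the earlier steps):
oc exec $pod -- grep -A 16 'affinity:' /user-home/_zen-addons/bigsql/7.4.4/yamls/bigsql-cr.yaml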