Deploying IBM Db2 Warehouse SMP using Kubernetes
Use the Helm Chart application package manager and Kubernetes to deploy IBM® Db2® Warehouse to a single-node symmetric
multiprocessing (SMP) system.
Before you begin
If you previously installed Db2 Warehouse on your current hardware, do not follow this procedure. Instead, redeploy Db2 Warehouse by following the procedure described in Redeploying IBM Db2 Warehouse using Kubernetes.
Ensure that your Linux® system meets the prerequisites described in IBM Db2 Warehouse prerequisites for Linux and x86 hardware. Additionally, it must have the following software installed:
- Docker 17.06.02 or later, with the storage driver set to overlay2
- Kubernetes, with access to the kubectl and helm commands
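Before continuing, you can optionally confirm these prerequisites from a shell. This is a quick sanity check using standard Docker, kubectl, and helm commands; note that the --short flag has been removed from newer kubectl releases, so adjust as needed for your version:
docker info --format '{{.Driver}}'            # should print overlay2
docker version --format '{{.Server.Version}}' # should print 17.06.02 or later
kubectl version --short
helm version --short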
Ensure that you meet the prerequisites described in Getting container images.
Procedure
- Ensure that you have root authority on the host operating system.
- Refer to Configuration options. If the default value of any of the following options needs to be overridden in your Db2 Warehouse environment, contact your IBM Support representative:
  - DB_CODESET
  - DB_COLLATION_SEQUENCE
  - DB_PAGE_SIZE
  - DB_TERRITORY
  - ENABLE_ORACLE_COMPATIBILITY
  - TABLE_ORG
- From the master node of the Kubernetes cluster, log in to Docker by using your API key:
echo <apikey> | docker login -u iamapikey --password-stdin icr.io
where <apikey> is the API key that you created as a prerequisite in Getting container images.
- Pull the current container image:
docker pull icr.io/obs/hdm/db2wh_ee:v11.5.7.0-cn5-db2wh-linux
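Optionally, confirm that the image is now available locally:
docker images icr.io/obs/hdm/db2wh_ee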
- Issue the following docker run command to start a shell container that contains the YAML files that are needed to deploy the Helm chart. For a Db2 Warehouse Enterprise Edition container:
docker run -dit --name=test --entrypoint=/bin/bash icr.io/obs/hdm/db2wh_ee:v11.5.7.0-cn5-db2wh-linux
- Issue the following commands to extract the Helm chart YAML files from the container to the master node host, and then stop and remove the container:
docker cp test:/opt/ibm/scripts/kubernetes/smp/db2warehouse-smp-helm .
docker stop test; docker rm test
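At this point, the db2warehouse-smp-helm directory should exist on the master node. Optionally, sanity-check the extracted chart with standard Helm tooling:
ls db2warehouse-smp-helm
helm lint db2warehouse-smp-helm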
- Issue the following command to generate a base64-encoded version of your Docker login credentials:
cat ~/.docker/config.json | base64 -w0
- Copy the generated encoded Docker login credentials to the clipboard.
- Open the secret.yaml file:
vi db2warehouse-smp-helm/templates/secret.yaml
- Paste the generated encoded Docker login credentials over the .dockerconfigjson string, and then close the secret.yaml file. A sketch of the general shape of such a secret follows the next step.
- Log in to the Kubernetes master node as an administrator so that you can issue the kubectl and helm commands.
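As referenced above, an image-pull secret of this kind generally has the following shape. This is a sketch of a standard kubernetes.io/dockerconfigjson Secret, not the chart's exact template; the metadata name shown is illustrative, so keep whatever name the chart already defines:
apiVersion: v1
kind: Secret
metadata:
  name: db2wh-registry-secret   # illustrative name only
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: <base64-encoded-docker-config>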
- Set up a storage persistent volume (PV):
- On an NFS server host, issue the following command to install the NFS utility:
yum install -y nfs-utils
- On an NFS server host, issue the following command to make the mount point directory for the NFS server:
mkdir -p /mnt/clusterfs
- On an NFS server host, edit the /etc/exports file to add the following mount share point and options:
/mnt/clusterfs <IP_address>(rw,sync,no_root_squash,no_all_squash)
where <IP_address> represents the IP address of the host to which IBM Db2 Warehouse is to be deployed.
- On an NFS server host, issue the following command to make the file systems available to remote users:
exportfs -a
- On an NFS server host, issue the following command to restart the NFS service. This applies the changes to the /etc/exports file:
systemctl restart nfs
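Optionally, verify from the host to which Db2 Warehouse is to be deployed that the share is now visible. The showmount command is part of nfs-utils:
showmount -e <NFS-server-host-IP-address>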
- On the Kubernetes master node, create a file with the name db2w-nfs-pv.yaml for the storage PV. Replace <NFS-server-host-IP-address> with the IP address of the NFS server host.
vi db2w-nfs-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  labels:
    pv-name: db2w-nfs-pv
  name: db2w-nfs-pv
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 50Gi
  persistentVolumeReclaimPolicy: Retain
  nfs:
    path: /mnt/clusterfs
    server: <NFS-server-host-IP-address>
- On the Kubernetes master node, create a file with the name db2w-nfs-pvc.yaml for the PVC.
vi db2w-nfs-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db2w-nfs-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi
  storageClassName: ""
  selector:
    matchLabels:
      pv-name: db2w-nfs-pv
- Issue the following commands to create, and bind together over the NFS share, a PV with the name db2w-nfs-pv and a PVC with the name db2w-nfs-pvc:
kubectl create -f db2w-nfs-pv.yaml
kubectl create -f db2w-nfs-pvc.yaml
- Issue the following command to verify that db2w-nfs-pv and db2w-nfs-pvc are properly bound:
kubectl describe pvc db2w-nfs-pvc
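A quicker check is to list both objects and confirm that the STATUS column reads Bound for each:
kubectl get pv db2w-nfs-pv
kubectl get pvc db2w-nfs-pvc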
- Modify the db2warehouse-smp-helm/values.yaml file:
- Set the existingClaimName to the PVC name (db2w-nfs-pvc).
- Replace the password specified by BLUADMIN.PASSWORD with a new password for the bluadmin user.
- If necessary, adjust the repo and tag name fields to correspond to the name of the image that you are deploying. For example, for an image with the name icr.io/obs/hdm/db2wh_ce:v3.0.1-db2wh_devc-linux, specify the following fields:
repo: "icr.io/obs/hdm/db2wh_ce"
tag name: "v3.0.1-db2wh_devc-linux"
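If you are unsure where these fields appear in the file, you can list the candidate lines before editing; a quick check using grep, with the field names taken from the steps above:
grep -nE 'repo|tag|existingClaimName|PASSWORD' db2warehouse-smp-helm/values.yaml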
- Issue the following command to install the Helm chart:
helm install --name db2wh-smp-deploy db2warehouse-smp-helm
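After the install command returns, you can optionally confirm that the release was created. This uses the release name from the previous command:
helm status db2wh-smp-deploy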
- Issue the following commands to check whether the deployment is progressing successfully:
- To retrieve the full pod name:
kubectl get pod | grep db2warehouse-smp
- To check the pod status and confirm that it is creating the container:
kubectl describe pod full-pod-name
- After the container is created, issue the following command to monitor its log until the log indicates that the deployment has concluded successfully:
kubectl logs -f full-pod-name
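The pod name lookup and log monitoring can also be combined. A minimal sketch, assuming exactly one pod matches the grep pattern:
POD=$(kubectl get pod | grep db2warehouse-smp | awk '{print $1}')
kubectl logs -f $POD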
- Log in to the web console:
- Open the /etc/hosts file to determine the IP address of the proxy node.
- Issue the following command to retrieve the port number:
kubectl get service | grep db2warehouse
The output contains a phrase of the form 8443:<port>, where <port> represents the port number.
- In a browser, enter the URL of the web console. The URL has the form https://IP_address:port_number.
- Log in with the user ID bluadmin and the password that you set in the db2warehouse-smp-helm/values.yaml file.
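If the browser cannot reach the console, you can first test connectivity from the command line. The -k option skips certificate verification, which is necessary if the console presents a self-signed certificate:
curl -k https://IP_address:port_number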