Deploying IBM Db2 Warehouse SMP using Kubernetes

Use the Helm package manager and Kubernetes to deploy IBM® Db2® Warehouse to a single-node symmetric multiprocessing (SMP) system.

Before you begin

If you previously installed Db2 Warehouse on your current hardware, do not follow this procedure. Instead, redeploy Db2 Warehouse by following the procedure described in Redeploying IBM Db2 Warehouse using Kubernetes.

Ensure that your Linux® system meets the prerequisites described in IBM Db2 Warehouse prerequisites for Linux and x86 hardware. In addition, the system must include the following software:
  • Docker 17.06.02 or later, with the storage driver set to overlay2
  • Kubernetes, with access to the kubectl and helm commands
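
To confirm these prerequisites from a shell, you can run checks like the following minimal sketch; the comments note the expected results:
  docker info --format '{{.Driver}}'    # the storage driver; expected output: overlay2
  kubectl version --short               # verifies access to the Kubernetes API
  helm version                          # verifies that the helm client is available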

Ensure that you meet the prerequisites described in Getting container images.

Procedure

  1. Ensure that you have root authority on the host operating system.
  2. Refer to Configuration options.
    If the default value of any of the following options needs to be overridden in your Db2 Warehouse environment, contact your IBM Support representative:
    DB_CODESET
    DB_COLLATION_SEQUENCE
    DB_PAGE_SIZE
    DB_TERRITORY
    ENABLE_ORACLE_COMPATIBILITY
    TABLE_ORG
  3. From the master node of the Kubernetes cluster, log in to Docker by using your API key:
    echo <apikey> | docker login -u iamapikey --password-stdin icr.io
    where <apikey> is the API key that you created as a prerequisite in Getting container images.
  4. Pull the current container image:
    docker pull icr.io/obs/hdm/db2wh_ee:v11.5.6.0-db2wh-linux
  5. Issue the following docker run command to create a shell container from the image that you pulled:
    • For a Db2 Warehouse Enterprise Edition container:
      docker run -dit --name=test --entrypoint=/bin/bash icr.io/obs/hdm/db2wh_ee:v11.5.6.0-db2wh-linux
  6. Issue the following commands to extract the Helm chart YAML files from the container to the master node host, and then stop and remove the container:
    docker cp test:/opt/ibm/scripts/kubernetes/smp/db2warehouse-smp-helm .
    docker stop test; docker rm test
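    After the copy completes, the db2warehouse-smp-helm directory on the master node contains the chart files, including the templates directory and the values.yaml file that later steps modify. A quick check:
      ls db2warehouse-smp-helm
      ls db2warehouse-smp-helm/templates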
  7. Issue the following command to generate a base64-encoded version of your Docker login credentials:
    cat ~/.docker/config.json | base64 -w0
  8. Copy the generated encoded Docker login credentials to the clipboard.
  9. Open the secret.yaml file:
    vi db2warehouse-smp-helm/templates/secret.yaml
  10. Paste the encoded Docker login credentials over the .dockerconfigjson string, and then save and close the secret.yaml file.
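    As an alternative to steps 7 through 10, you can substitute the encoded credentials in a single command. The following is a sketch only; it assumes that the secret.yaml template keeps the credentials on a line that begins with .dockerconfigjson:
      # Replace the .dockerconfigjson value in place with freshly encoded credentials
      sed -i "s|\.dockerconfigjson:.*|.dockerconfigjson: $(base64 -w0 ~/.docker/config.json)|" db2warehouse-smp-helm/templates/secret.yaml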
  11. Log in to the Kubernetes master node as an administrator so that you can issue the kubectl and helm commands.
  12. Set up a storage persistent volume (PV):
    1. On an NFS server host, issue the following command to install the NFS utility:
      yum install -y nfs-utils
    2. On an NFS server host, issue the following command to make the mount point directory for the NFS server:
      mkdir -p /mnt/clusterfs
    3. On an NFS server host, edit the /etc/exports file to add the following mount share point and options:
      /mnt/clusterfs <IP_address>(rw,sync,no_root_squash,no_all_squash)
      where <IP_address> represents the IP address of the host to which IBM Db2 Warehouse is to be deployed.
      On an NFS server host, issue the following command to export the file systems that are listed in /etc/exports and make them available to remote users:
      exportfs -a
    4. On an NFS server host, issue the following command to restart the NFS service and apply the changes to the /etc/exports file:
      systemctl restart nfs
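      Before you continue, you can confirm from the Db2 Warehouse host that the export is visible. The following check is a sketch that uses the showmount utility from the nfs-utils package, where <NFS-server-host-IP-address> is the IP address of the NFS server host:
        showmount -e <NFS-server-host-IP-address>
        # the output should list /mnt/clusterfs and the permitted client address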
    5. On the Kubernetes master node, create a file with the name db2w-nfs-pv.yaml for the storage PV. Replace <NFS-server-host-IP-address> with the IP address of the NFS server host.
      vi db2w-nfs-pv.yaml
      apiVersion: v1
      kind: PersistentVolume
      metadata:
        labels:
          pv-name: db2w-nfs-pv
        name: db2w-nfs-pv
      spec:
        accessModes:
        - ReadWriteOnce
        capacity:
          storage: 50Gi
        persistentVolumeReclaimPolicy: Retain
        nfs:
          path: /mnt/clusterfs
          server: <NFS-server-host-IP-address>
    6. On the Kubernetes master node, create a file with the name db2w-nfs-pvc.yaml for the PVC.
      vi db2w-nfs-pvc.yaml
      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: db2w-nfs-pvc
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 50Gi
        storageClassName: ""
        selector:
          matchLabels:
            pv-name: db2w-nfs-pv
    7. Issue the following commands to create a PV with the name db2w-nfs-pv and a PVC with the name db2w-nfs-pvc, and bind them to the NFS export:
      kubectl create -f db2w-nfs-pv.yaml
      kubectl create -f db2w-nfs-pvc.yaml
    8. Issue the following command to verify that db2w-nfs-pv and db2w-nfs-pvc are properly bound:
      kubectl describe pvc db2w-nfs-pvc
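      A successful binding is also visible in a short listing; both objects should report a STATUS of Bound:
        kubectl get pv db2w-nfs-pv
        kubectl get pvc db2w-nfs-pvc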
  13. Modify the db2warehouse-smp-helm/values.yaml file:
    1. Set existingClaimName to the PVC name (db2w-nfs-pvc).
    2. Replace the password specified by BLUADMIN.PASSWORD with a new password for the bluadmin user.
    3. If necessary, adjust the repo and tag fields to correspond to the name of the image that you are deploying. For example, for an image with the name icr.io/obs/hdm/db2wh_ce:v3.0.1-db2wh_devc-linux, specify the following fields:
      repo: "icr.io/obs/hdm/db2wh_ce"
      tag: "v3.0.1-db2wh_devc-linux"
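    After you edit the file, you can confirm the three settings at a glance. The following check is a sketch that assumes the field names shown above:
      grep -E 'existingClaimName|repo|tag' db2warehouse-smp-helm/values.yaml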
  14. Issue the following command to install the Helm chart:
    helm install --name db2wh-smp-deploy db2warehouse-smp-helm
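    To confirm that the release was created, query its status by release name:
      helm status db2wh-smp-deploy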
  15. Issue the following commands to check whether the deployment is progressing successfully:
    1. To retrieve the full pod name:
      kubectl get pod | grep db2warehouse-smp
    2. To check the pod status and confirm that the container is being created:
      kubectl describe pod <full-pod-name>
    3. After the container is created, issue the following command to monitor its log until the log indicates that the deployment completed successfully:
      kubectl logs -f <full-pod-name>
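    If you prefer not to copy the pod name by hand, the three checks can be combined. The following is a sketch that captures the first matching pod name in a shell variable:
      # Capture the full pod name, then inspect the pod and follow its log
      POD=$(kubectl get pod --no-headers | awk '/db2warehouse-smp/ {print $1; exit}')
      kubectl describe pod "$POD"
      kubectl logs -f "$POD"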
  16. Log in to the web console:
    1. Open the /etc/hosts file to determine the IP address of the proxy node.
    2. Issue the following command to retrieve the port number:
      kubectl get service | grep db2warehouse
      The output contains a phrase of the form 8443:<port>, where <port> represents the port number.
    3. In a browser, enter the URL of the web console. The URL has the form https://<IP_address>:<port_number>.
    4. Log in with the user ID bluadmin and the password that was set in step 13.
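    If you want to build the web console URL from the command line, you can parse the port number out of the service listing. The following is a sketch that relies on the 8443:<port> phrase described earlier in this step:
      # Extract the external port that is mapped to container port 8443
      PORT=$(kubectl get service | grep db2warehouse | sed -E 's/.*8443:([0-9]+).*/\1/')
      echo "https://<IP_address>:${PORT}"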