April 3, 2020 | By Phil Downey, George Baklarz, and Olaf Depper

This article explains the steps you can take to build an OpenShift environment on a CentOS 7.7 VM image on your desktop using VMware.

You can also apply these steps to Infrastructure-as-a-Service deployments on AWS, IBM Cloud, or Azure.

These instructions will have you deploy Db2 as a service on OpenShift in an environment that is perfect for development and demonstration purposes. The Db2 install is the licensed Community Edition, which is fully featured but limited to 4 cores, more than enough for desktop development.

The service that is being deployed is the basis of IBM Db2 Warehouse on Cloud and Db2 OLTP Services for IBM’s Cloud Pak for Data.

Prerequisites

Your VM should have at least 16 GB of RAM and 5 cores, with 50-60 GB of disk space reserved for the image. A good internet connection of at least 50 Mbps is also required, as certain steps may time out while the install waits to download required code and containers.

Create your image and environment for installing OpenShift and Db2

1. Build a CentOS 7.7 image

You can source a version here.

Note: When CentOS installs, it does not automatically enable network access, so make sure you enable it as part of your install. Also, when selecting software installation options, choose a development desktop install with system tools, Python, and development tools. This ensures the right server components are present to install OpenShift and that you have an environment friendly for desktop development.

See this link for more details on installation.

2. Create a user called “db2shift” using the install UI

During installation, you can define the root password and a second user, “db2shift”, and select it to be a system administrator (that is, a member of the wheel group).

3. Create a user called “db2shift” (alternative command line approach)

If you prefer the command line, follow these steps:

  1. Add a Linux user:
    sudo useradd db2shift
  2. Give the user a password:
    sudo passwd db2shift
  3. Add the user to the wheel and root groups:
    sudo usermod -G wheel,root db2shift
  4. Switch to the db2shift user-id:
    su - db2shift

4. Install a single-node OpenShift cluster

  1. Update your system with the required tools to install OpenShift:
    sudo yum update -y
  2. Load the dependencies for OpenShift:
    sudo yum install -y epel-release && sudo yum install -y python-pip python-devel git && sudo yum group install -y "Development Tools" && sudo reboot
  3. Clone the openshift-ansible GitHub repository (after the machine reboots, log back into the image as db2shift first):
    git clone https://github.com/openshift/openshift-ansible
    cd openshift-ansible
    git checkout release-3.11
  4. Install python dependencies for Ansible Install:
    sudo pip install -r requirements.txt
  5. Install a single-node version of OpenShift 3.11:
    sudo ansible-playbook -i inventory/hosts.localhost playbooks/prerequisites.yml
    sudo ansible-playbook -i inventory/hosts.localhost playbooks/deploy_cluster.yml

Note: If the second playbook fails, you can re-run it from that step. The problem could be that limited network bandwidth caused a timeout while downloading some OpenShift components. If, for some reason, you hit environmental issues while using your environment, you can re-run these steps to restart your image. If you are using VMware, this is a good point to take a backup of your environment. More details on this process can be found at Michael Tipton’s blog.

5. Set up admin user for OpenShift

  1. Install password tools and create the admin user:
    sudo yum -y install httpd-tools
    sudo touch /etc/origin/openshift-passwd
    sudo htpasswd -b /etc/origin/openshift-passwd admin redhat
  2. Add admin to admin group on the cluster:
    oc login localhost:8443 --username=admin --password=redhat --insecure-skip-tls-verify=true
    sudo oc adm policy add-cluster-role-to-user cluster-admin admin

6. Install Helm

  1. Create the Tiller project:
    oc new-project tiller
    export TILLER_NAMESPACE=tiller # add to your .bashrc
    cd /home/db2shift
  2. Install and deploy Tiller:
    sudo curl -s https://storage.googleapis.com/kubernetes-helm/helm-v2.9.0-linux-amd64.tar.gz | tar xz
    export PATH=$PATH:/home/db2shift/linux-amd64
    helm init --client-only
  3. Install Tiller and check the progress:
    oc process -f https://github.com/openshift/origin/raw/master/examples/helm/tiller-template.yaml -p TILLER_NAMESPACE="${TILLER_NAMESPACE}" -p HELM_VERSION=v2.16.1 | oc create -f -
    oc rollout status deployment tiller

7. Set up OpenShift for Db2

  1. Allow containers to manage cgroups:
    sudo setsebool -P container_manage_cgroup true
  2. Set up Hostpath directory:
    mkdir -p /home/db2shift/db2vol1
  3. Give the directory the permissions it requires to be managed as a Hostpath volume:
    chmod 777 -R /home/db2shift/db2vol1
    sudo chgrp root -R /home/db2shift/db2vol1
    sudo semanage fcontext -a -t container_file_t "/home/db2shift/db2vol1(/.*)?"
    sudo restorecon -Rv /home/db2shift/db2vol1
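
A quick sanity check of the directory mode before continuing can save debugging later. The sketch below demonstrates the check on a temporary directory standing in for /home/db2shift/db2vol1, so it can run anywhere:

```shell
# Sketch: verify a volume directory carries mode 777, demonstrated on a
# temporary directory standing in for /home/db2shift/db2vol1.
dir=$(mktemp -d)
chmod 777 "$dir"
mode=$(stat -c '%a' "$dir")
echo "mode=$mode"
rm -rf "$dir"
```

Against the real directory, you would run the same `stat -c '%a'` check on /home/db2shift/db2vol1 and expect 777.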

8. Build the Db2 project

  1. Create and prepare the Db2 project:
    oc login -u admin -n default
    oc new-project db2
    oc policy add-role-to-user admin "system:serviceaccount:${TILLER_NAMESPACE}:tiller"
  2. Clone the Github project:
    cd /home/db2shift
    git clone -b Openshift_1Node https://github.com/IBM/Db2.git

9. Apply Db2 security, permissions, and local storage 

  1. Apply Db2 administrative bindings and SCC:
    oc apply -f /home/db2shift/Db2/deployment/adm
  2. Enable Db2 SCC to allow privileges to work with Hostpath storage and add SCC to Service Account:
    oc adm policy add-scc-to-user db2u-scc system:serviceaccount:db2:db2u
  3. Create Db2 volume and claim:
    • Place the following db2vol.yaml in your home directory:
      {
        "kind" : "PersistentVolume",
        "apiVersion" : "v1",
        "metadata":{
          "name" : "db2-vol",
          "labels":{
            "name" : "db2-vol",
            "type" : "local"
          }
        },
        "spec":{
          "capacity":{
            "storage" : "8Gi"
          },
          "accessModes":[
            "ReadWriteMany", "ReadWriteOnce",  "ReadOnlyMany"
          ],
          "hostPath": {
            "path" : "/home/db2shift/db2vol1"
          }
        }
      }

      ---

      {
        "kind" : "PersistentVolumeClaim",
        "apiVersion": "v1",
        "metadata": {
          "name": "db2pvc"
        },
        "spec": {
          "accessModes": [
            "ReadWriteMany"
          ],
          "resources": {
            "requests": {
              "storage": "3Gi"
            }
          },
          "selector":{
            "matchLabels":{
              "name" : "db2-vol",
              "type" : "local"
              }
            }
          }
        }
    • Apply the volume and PVC definitions:
      oc apply -f db2vol.yaml

10. Get up and running

  1. Run Db2 install:
    cd /home/db2shift/Db2/deployment
    ./db2u-install --db-type db2oltp --namespace db2 --release-name db2u --existing-pvc db2pvc --tiller-namespace tiller --cpu-size 2 --memory-size 3Gi

    Note: If, during this stage, you get back-off messages when running the oc get pods command, do not worry; this is likely because your network connection is slow and the jobs time out waiting for the Db2 containers to download. They may do this until all containers are downloaded.

  2. Monitor the deployment until the db2u-db2u-0 pod is running with a status of 1/1:
    • You can use the following command from your command line to determine the status:
      oc get pods
      NAME                                READY       STATUS  RESTARTS    AGE
      db2u-db2u-0                         1/1         Running     0       10m
      db2u-db2u-ldap-74b767d76-sbct6      1/1         Running     0       10m
      db2u-db2u-nodes-cfg-job-7m7bz       0/1         Completed   0       10m
      db2u-db2u-restore-morph-job-bnxfx   0/1         Completed   0       10m
      db2u-db2u-sqllib-shared-job-bjrwc   0/1         Completed   0       10m
      db2u-db2u-tools-c47c5b565-svwcw     1/1         Running     0       10m
      db2u-etcd-0                         1/1         Running     0       10m
      db2u-etcd-1                         1/1         Running     0       10m
      db2u-etcd-2                         1/1         Running     0       10m
    • Alternatively, you can monitor through the console (log in with user ID admin and password redhat): go to the db2 project and select Applications -> Pods to view the pod status.
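
If you prefer to script the wait rather than polling by hand, a minimal sketch follows. The db2_ready helper is hypothetical (it is not part of the Db2 deployment); in practice you would pipe `oc get pods --no-headers` into it, but here it is demonstrated on a captured line of output:

```shell
# Hypothetical helper: succeeds once the db2u-db2u-0 pod reports READY 1/1
# and STATUS Running. Feed it the output of `oc get pods --no-headers`.
db2_ready() {
  awk '$1 == "db2u-db2u-0" && $2 == "1/1" && $3 == "Running" { found = 1 }
       END { exit !found }'
}

# Demonstrated on a captured line rather than a live cluster:
sample='db2u-db2u-0   1/1   Running   0   10m'
if printf '%s\n' "$sample" | db2_ready; then
  echo "Db2 engine pod is up"
fi
```

On a live cluster, a loop such as `until oc get pods --no-headers | db2_ready; do sleep 10; done` would block until the engine pod is ready.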

Connecting to Db2

Run the following command:

oc get svc
NAME                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                                   AGE
db2u-db2u            ClusterIP   172.30.96.174    <none>        50000/TCP,50001/TCP,25000/TCP,25001/TCP,…  2m
db2u-db2u-engn-svc   NodePort    172.30.169.115   <none>        50000:31020/TCP,50001:31578/TCP            2m
db2u-db2u-internal   ClusterIP   None             <none>        50000/TCP,9443/TCP                         2m
db2u-db2u-ldap       NodePort    172.30.122.233   <none>        389:30154/TCP                              2m
db2u-etcd            ClusterIP   None             <none>        2380/TCP,2379/TCP                          2m

Note: Db2 exposes its service externally via the db2u-db2u-engn-svc service. In this case, 31020 is the non-SSL port Db2 will be listening on, and 31578 is the port that will support SSL connectivity. They are mapped to the db2u-db2u-0 50000/50001 ports that Db2 operates on.
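
The NodePort can also be pulled out of that output with a little text processing. The sketch below works on the captured db2u-db2u-engn-svc line; against a live cluster you would feed it `oc get svc db2u-db2u-engn-svc --no-headers` instead:

```shell
# Sketch: extract the non-SSL NodePort (the host port mapped to 50000)
# from the service line captured above.
svc_line='db2u-db2u-engn-svc   NodePort   172.30.169.115   <none>   50000:31020/TCP,50001:31578/TCP   2m'
port=$(printf '%s\n' "$svc_line" | sed -n 's/.*50000:\([0-9]*\)\/TCP.*/\1/p')
echo "Connect to Db2 on NodePort $port"
```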

To get your IP address for your server for external communication, use the following (if you are running on VMware):

ip addr show | grep 'scope global noprefixroute dynamic'
result: inet 192.168.154.132/24 brd 192.168.154.255 scope global noprefixroute dynamic ens33

Alternatively, you can just type “ip addr show” and look for the second device listed. This gives you the IP address your OpenShift node is listening on, both external and internal to the VM.
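
The address can likewise be extracted programmatically. The sketch below parses the captured `ip addr show` line from above; against a live system you would pipe the real command output through the same awk filter:

```shell
# Sketch: pull the IPv4 address out of the matching `ip addr show` line.
line='inet 192.168.154.132/24 brd 192.168.154.255 scope global noprefixroute dynamic ens33'
addr=$(printf '%s\n' "$line" | awk '/scope global noprefixroute dynamic/ { sub(/\/.*/, "", $2); print $2 }')
echo "VM address: $addr"
```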

In my example, I can now connect to Db2 at 192.168.154.132 on port 31020 from my desktop (make sure any firewalls on your desktop allow your VM to be reachable).
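
For a JDBC client on the desktop, the connection URL can be assembled from that host and port. This is a sketch that assumes the deployment's default database name BLUDB; check your own deployment for the actual database name and credentials:

```shell
# Sketch: build a JDBC URL for the exposed Db2 service. BLUDB is an assumed
# default database name; substitute your own host, port, and database.
HOST=192.168.154.132
PORT=31020
DB=BLUDB
url="jdbc:db2://${HOST}:${PORT}/${DB}"
echo "$url"
```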

You can also connect to the OpenShift Console using your IP address and port 8443. For example, for the system above, the address would be https://192.168.154.132:8443.
