How to deploy IBM Db2 Community Edition through the Operator on a desktop-sized machine using Red Hat CodeReady Containers (CRC).

Data modernization is gaining momentum in the marketplace, driving the movement of bespoke deployments of databases in traditional landscapes to public database services and hybrid cloud platforms. The goal is to drive greater standardization and faster time to delivery through cloud native services, operators and virtualization that enable the automated lifecycle management of database landscapes. These changes will align data landscapes more closely to the agile delivery processes being applied in many companies and bring significant ROI/TCO and time-to-delivery benefits.

For enterprises, mapping their databases to these platforms has been somewhat of a challenge. Typical enterprise databases’ non-functional requirements often do not align with what is available from fully automated public cloud services.

Fortunately, IBM has released the IBM Db2 Operator for OpenShift, which is designed for modernizing your enterprise database landscape in a hybrid cloud platform that spans public, private and in-datacenter clouds. It is the foundation of the IBM Cloud Pak® for Data Db2 Cartridge and is also available from the IBM Operator Catalog for OpenShift as Community Edition Db2 and as the Db2 Cartridge for IBM Cloud Pak for Data on the Red Hat Marketplace.

It provides a fully automated deployment of Db2 for OLTP on the OpenShift platform, with automated updates and high-availability capabilities — all while embracing the microservices architecture, security, elasticity and platform resilience of the OpenShift Kubernetes Platform.

Now, you are probably thinking: do I need to have a large OpenShift cluster?

In this post, I am going to demonstrate how you can deploy IBM Db2 Community Edition through the Operator on a desktop-sized machine using Red Hat CodeReady Containers (CRC).

Getting started

The following are the recommended requirements in order to follow the instructions in this post:

Operating system

  • Windows 10 Professional
  • Linux (CentOS or Red Hat 8 recommended)
  • Mac OS (Catalina or higher)

Hardware

  • 60GB+ of storage (the default 35GB will work, but you may run out of space quickly)
  • 24GB+ of memory
  • 6 available virtual CPUs (preferably 8+)

The first thing you need to do is deploy your OpenShift CodeReady Container (CRC) Environment. Typically, it is best practice to update your CRC environment regularly because there is a new release every month.

You can download CodeReady Containers (CRC) from the Red Hat site and use it free of charge. You will need to sign up for Red Hat Developers to get access to the code.

For the purposes of portability and workload isolation, we deployed using a virtual machine running Linux as the CRC host, but this will also work on Windows and Mac OS. Make sure to follow the guidelines for the use of CRC on your chosen platform.

When you download and install CRC, follow the instructions carefully and place the downloaded pull secret file in your home directory; you will need it when you create a new CRC instance.

Step 1: Installing Red Hat CodeReady Containers

Follow the instructions provided when you download CodeReady Containers (CRC) to ensure it is installed correctly.

For this use case, you will require a minimum of 6 cores available to CRC; I would recommend 8 if you want multiple databases and applications. You’ll also need 24GB of memory (it is possible with 16GB, but we found CRC struggles; 18GB+ is preferable).

  1. Download the correct tar file from Red Hat’s website and extract it: tar xvf crc….tar.xz
  2. Check your PATH variable and choose a location that is on your path (e.g., /usr/local/bin)
  3. sudo mv <your Download location>/crc-xxxx-amd64/crc /usr/local/bin
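On Linux, the three steps above can be sketched as follows. The archive name and version below are hypothetical, and /tmp/crc-demo/bin stands in for /usr/local/bin; the sketch fabricates the archive locally so the commands can be run end to end:

```shell
# Simulated install: the crc archive name/version are hypothetical and the
# archive is fabricated locally so the steps can be demonstrated end to end.
set -e
demo=/tmp/crc-demo
mkdir -p "$demo/crc-linux-2.0.0-amd64"
touch "$demo/crc-linux-2.0.0-amd64/crc"              # stand-in for the real binary
tar -C "$demo" -cJf "$demo/crc-linux-amd64.tar.xz" crc-linux-2.0.0-amd64

# Step 1: extract the downloaded tar file
tar -C "$demo" -xvf "$demo/crc-linux-amd64.tar.xz"

# Steps 2 and 3: move the crc binary to a directory on your PATH
# (using $demo/bin here instead of /usr/local/bin)
mkdir -p "$demo/bin"
mv "$demo/crc-linux-2.0.0-amd64/crc" "$demo/bin/"
ls "$demo/bin"
```

In a real install, replace $demo/bin with /usr/local/bin (or another directory on your PATH) and use the archive you actually downloaded.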

Once CRC is installed, run the following:

crc setup

To start the system, you have two alternatives:

  • Option 1: Define the target environment, overriding the defaults in the start command:
    crc start -c 8 -m 24000 -p <fully qualified pull secret file> -d 60

    Note: On Mac, the -d flag does not apply.

  • Option 2: Predefine the instance settings in the CRC config:
    crc config set cpus <number>
    crc config set memory <number-in-mib>
    crc config set disk-size <number-in-GB>
    crc config set pull-secret-file <qualified pull secret file>
    crc start

When the process is complete, you will be shown the commands for logging on via the oc command line tool. If your OpenShift deployment is consuming a lot of CPU, it is most likely updating itself. Wait until CPU cycles drop below 1 CPU utilisation.

To enable the oc command line environment, type the following:

eval $(crc oc-env)

Once this is completed, type the following to open the OpenShift console:

crc console

To log in, you will likely have to approve access to the site through your browser as it will not recognise the security certificate.

You should use the kubeadmin password as provided when you started the server, or retrieve it with the crc console --credentials command.

Step 2: Installing the Db2 Operators

Once you have this up and running, you must enable the IBM Operator Catalog in your Red Hat OpenShift cluster’s Operator Catalog:

From your OpenShift UI console, roll over the + icon on the tool bar (top right-hand corner of the page) and select Import YAML:

Paste the following YAML content into the space provided:

apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: ibm-operator-catalog
  namespace: openshift-marketplace
spec:
  displayName: "IBM Operator Catalog"
  publisher: IBM
  sourceType: grpc
  image: icr.io/cpopen/ibm-operator-catalog:latest
  updateStrategy:
    registryPoll:
      interval: 45m

Then, click Create.

Depending on your network and system, it may take a few moments for this to appear in the Operator Catalog.

Shortly after, switch to your Operator Hub Window and select the IBM Operator Catalog option on the left-hand side. Then select the Database category for the Hub — you will then see the IBM Db2 Operator.

Step 3: Preparing to install Db2

While you are waiting for the IBM Operator Catalog to load, you can prepare the persistent volumes associated with the CodeReady Containers (CRC) deployment so they are suitable for Db2’s usage. Normally, you would deploy on NFS, OCS or Ceph storage.

You can do the following either from a command line (logging onto the server using the oc command displayed after you start the CRC server) or via the Compute menu, clicking on Node and then Terminal:

If you are at a command line and logged in, type the following to determine your node name:

oc get nodes

Then copy the node name and use it in the following command instead of <mynodename>.

What we are going to do is use the OpenShift “debug” terminal to access the worker node so we can make adjustments to directory permissions:

oc debug node/<mynodename>

Or, for single node clusters (like CRC), the following automates the process and initiates the debug terminal. Be patient, as the debug node starts a separate container each time you run it. Sometimes it starts straight away; sometimes it may take a minute or so:

oc debug node/$(oc get nodes | grep crc | cut -d " " -f 1)
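To see what that command substitution does, here is the same extraction applied to a hypothetical oc get nodes output (the node name below is invented):

```shell
# Hypothetical `oc get nodes` output for a CRC single-node cluster
nodes_output='NAME                 STATUS   ROLES           AGE   VERSION
crc-pbwlw-master-0   Ready    master,worker   30d   v1.21.0'

# Same pipeline as the one-liner: keep the crc row, take the first column
node=$(echo "$nodes_output" | grep crc | cut -d " " -f 1)
echo "$node"
```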

From either your command line or the node terminal window, type the following commands:

chroot /host
sudo setsebool -P container_manage_cgroup true
chmod 777 -R /mnt/pv-data
sudo chgrp root -R /mnt/pv-data
sudo semanage fcontext -a -t container_file_t "/mnt/pv-data(/.*)?"
sudo restorecon -Rv /mnt/pv-data

Note: You may receive a rejection on the setsebool command — this is okay. The above simply makes sure there are no barriers to Db2 using the persistent volume (PV) directories.

These commands take the pre-built CRC Host Path PV directories and apply the permissions required for Db2 to run.

To exit debug mode, type and run the exit command twice.

Next, from the command line and logged into the cluster, we are going to run one further set of commands that look for Available PVs (pre-created by CRC as part of the system setup) and change their reclaim policy to “Retain.”

This is necessary because the default setting tries to clean up a persistent volume when it is released, and this hits an error because Db2 restricts access to the files it creates in the PV.

Also, setting the policy to “Retain” ensures you can go back to a database previously created and restart a service by reusing the PV and access the data again. If you do not want to retain the data after creating a service, you need to go back into the terminal debug mode and delete the contents of each of the persistent volumes.

Now, execute the below commands from your command line, ensuring the oc-env has been enabled and you are logged in.

Make sure you are using a bash compatible shell before running:

mylist=$(oc get pv | grep pv | grep Available | cut -d " " -f1)
for i in $mylist ; do  oc patch pv $i -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}' ; done

The above will change each persistent volume that is available for use by Db2 to have a persistent volume reclaim policy of retain. This is the recommended policy for Db2.
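As an illustration of what the loop selects, here is the same filter applied to a hypothetical oc get pv listing (names and sizes invented); only the Available volume would be patched:

```shell
# Hypothetical `oc get pv` output: one Bound volume, one Available
pv_output='NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS
pv0001   100Gi      RWX            Recycle          Bound
pv0002   100Gi      RWX            Recycle          Available'

# Same selection as above: rows mentioning "pv" that are Available, first column
mylist=$(echo "$pv_output" | grep pv | grep Available | cut -d " " -f1)
echo "$mylist"
```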

Step 4: Getting the Db2 Operator ready

The next thing you need to do is create a project in which you want to deploy Db2.

For this example, I am going to use “db2-dev” as the project for my Db2 deployment. In Kubernetes, this is the equivalent of a namespace.

You can either do this via the console from the Home Menu or from the command line using the following:

oc new-project db2-dev

Before we can deploy Db2, we need to generate our pull secret so the Db2 database service containers can be accessed by the cluster. You will need your IBM ID; if you do not have one, you can apply here. Once you have done that and are logged on:

Scroll down to software and click on the Container Software Library:

Now copy your Entitlement key and save it somewhere safe:

After you have done this, go to your command line and use the entitlement key, your email ID/IBM ID and the name of the namespace (in this example, it is “db2-dev”).

From your command line, run the following:

## Set the variables to the correct values
## Use cp for the value of the docker-username field
ENTITLEDKEY="<Entitlement Key from MyIBM>"
EMAIL="<email ID associated with your IBM ID>"
NAMESPACE="<project or Namespace Name for Db2 project>"
oc create secret docker-registry ibm-registry   \
    --docker-server=cp.icr.io                   \
    --docker-username=cp                        \
    --docker-password=${ENTITLEDKEY}            \
    --docker-email=${EMAIL}                     \
    --namespace=${NAMESPACE}

Once this is done, go back to the Red Hat OpenShift Operator Hub on the left-hand side of your console and go to the IBM Operator Catalog:

Click on IBM Db2 and click Install:

Install into the db2-dev project:

The Operator is now installed, and you can click View Operator to view it:

You can deploy the Operator via the UI and tailor the YAML generated prior to deploying Db2, or you can simply deploy using YAML. As we are going to tailor the deployment to meet our small footprint requirement, we are going to use a custom deployment.

Note that there are three different kinds of deployments you can use. We are going to focus on the Db2uCluster option, which is a simple database you can develop against. However, in Step 5, we are going to use YAML to define our service at a more granular level:

Step 5: Creating the database

Run the following YAML from the console or from a command line to create your database service. Note that we are not deploying the LDAP services because we’re assuming you are running this for development and only require db2inst1 access.

Simply open up the YAML editor using the + button at the top right of the console and paste the below into it:

If you wish for the LDAP services to run, you can simply remove the LDAP section in the YAML script as the operator will deploy LDAP by default:

apiVersion: db2u.databases.ibm.com/v1
kind: Db2uCluster
metadata:
  name: db2ucluster-db2dev1
spec:
  account:
    imagePullSecrets:
      - ibm-registry
    privileged: true
  environment:
    dbType: db2oltp
    database:
      name: bludb
    instance:
      password: db2oltp
    ldap:
      enabled: false
  license:
    accept: true
  version: "11.5.5.cn2"
  podConfig:
    db2u:
      resource:
        db2u:
          requests:
            cpu: '0.5'
            memory: 2Gi
          limits:
            cpu: '0.5'
            memory: 2Gi
  addOns:
    rest:
      enabled: true
    graph:
      enabled: false
  size: 1
  storage:
    - name: meta
      type: "create"
      spec:
        accessModes:
          - ReadWriteMany
        resources:
          requests:
            storage: 10Gi
    - name: data
      type: "create"
      spec:
        accessModes:
          - ReadWriteMany
        resources:
          requests:
            storage: 50Gi

This config has the Db2 server engine using half a vCPU; this is only recommended for small development databases. For production-sized deployments, check the Db2 documentation for further info.

You need to specify the current version in the above YAML. To check the version of your Db2 deployment, run the following and replace 11.5.5.cn2 in the above script:

oc describe deployments db2u-operator-manager | grep version | head -1

If you do not wish to work with the Db2 REST service, simply change the “enabled” setting for the addon in the script to “false.” Once your Db2 is deployed, use the Knowledge Center documentation to work with the service.

If you wish to work with the Db2 Graph addon (a tech preview), simply change its enabled value to “true” in the addons section.

You can monitor the installation process under the Workloads > Pods menu in the console or by executing oc get pods from the command line. Make sure you are in the db2-dev project; you can switch to it with the oc project db2-dev command:

Your initial deployment may take some time to download the required containers from the repository. Otherwise, Db2 deployment takes about four minutes — longer if deploying graph and rest addons:

You know your Db2 is ready when the “morph” pod has a status of “completed.”
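As a sketch of this readiness check, here it is applied to a hypothetical oc get pods excerpt (pod names, states and ages invented):

```shell
# Hypothetical `oc get pods` excerpt once the deployment has finished
pods='c-db2ucluster-db2dev1-db2u-0                1/1   Running     0   5m
c-db2ucluster-db2dev1-restore-morph-x7dhk   0/1   Completed   0   2m'

# Db2 is ready when the morph pod reports Completed
status=$(echo "$pods" | grep morph | awk '{print $3}')
echo "$status"
```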

Your database can now be connected to using the db2inst1 user id and the password we gave it (db2oltp).

To determine the Host and Port to connect to, do the following:

  • To get your host name:
    crc ip
  • To get your host port number (assuming you used the service name as per the example “db2ucluster-db2dev1”):
    oc get svc | grep db2ucluster-db2dev1 | grep 'engn-svc' | cut -d ":" -f 2 | cut -d "/" -f1
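To see how that pipeline extracts the node port, here it is applied to a hypothetical oc get svc line (the service details are invented):

```shell
# Hypothetical `oc get svc` line for the Db2 engine service
svc_line='c-db2ucluster-db2dev1-db2u-engn-svc   NodePort   10.217.4.206   <none>   50000:31122/TCP,50001:30629/TCP   6m'

# Same extraction as above: text between the first ":" and the next "/"
port=$(echo "$svc_line" | grep db2ucluster-db2dev1 | grep 'engn-svc' | cut -d ":" -f 2 | cut -d "/" -f1)
echo "$port"
```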

Step 6: (Optional) Connecting to your Db2 database with VS Code and Python

First, you need to ensure VS Code and Python are installed on your machine.

Depending on whether you are on Mac OS, Windows or Linux, follow these instructions.

Make sure the Python extension is installed in VS Code. You will also need to install python3 and python3-devel. On CentOS Linux, for example, run the following:

dnf install python3 -y
dnf install python3-devel -y

Once installed, run the following to install the Db2 drivers:

pip3 install ibm_db

Support and other information for the drivers can be found here.

Once your environment is ready, you can copy the below code and change the following in the connection string section, then run it in VS Code:

  • Hostname=<as obtained by previous instruction>
  • Port=<as obtained by previous instruction>
  • Pwd=<password as defined in the previous YAML>
  • Database=<database name as defined in the previous YAML>

import ibm_db

# Build the connection string using the values obtained above
conn_str = (
    "Database=<database name as defined in the previous YAML>;"
    "Hostname=<as obtained by previous instruction>;"
    "Port=<as obtained by previous instruction>;"
    "Protocol=TCPIP;"
    "Uid=db2inst1;"
    "Pwd=<password as defined in the previous YAML>;"
)
ibm_db_conn = ibm_db.connect(conn_str, '', '')

# (Re)create the test table
ibm_db.exec_immediate(ibm_db_conn, "drop table if exists mytable")
ibm_db.exec_immediate(ibm_db_conn, "create table mytable(id int, name varchar(50))")

# Insert some example rows using a prepared statement
insert = "insert into mytable values(?,?)"
stmt_insert = ibm_db.prepare(ibm_db_conn, insert)
for row in [(1, 'first'), (2, 'second'), (3, 'third')]:
    ibm_db.execute(stmt_insert, row)

# Fetch the data back using ibm_db
select = "select id, name from mytable"
stmt_select = ibm_db.exec_immediate(ibm_db_conn, select)
cols = ibm_db.fetch_tuple(stmt_select)
while cols:
    print("%s, %s" % (cols[0], cols[1]))
    cols = ibm_db.fetch_tuple(stmt_select)

Get started

Now you have a connection to Db2 on OpenShift, and you can get developing with your Db2 Community Edition database on Red Hat OpenShift. This provides you full-function Db2 on OpenShift (limited to 4 cores), including in-memory columnar database capabilities, RESTful API services, JSON capability, machine learning (ML) and AI functions and the new technical preview of Db2 graph database capabilities.

For production database landscapes, you may want to consider IBM Db2 on Cloud Pak for Data, which adds centralised security management, integration with your enterprise authentication services, a centralised console for managing, monitoring and developing with Db2, and other enhanced capabilities.

Learn more
