MongoDB in Red Hat OpenShift Container Platform
This scenario tests MongoDB Enterprise Edition in Red Hat OpenShift Container Platform (OpenShift).
To run the MongoDB PoC in OpenShift 4.11 and above, a modified Helm chart is used: ibm-mongodb-enterprise-helm.
For OpenShift versions below 4.11 (4.8, 4.9, and 4.10), an operator is available to try out MongoDB Enterprise Edition in OpenShift; see IBMz MongoDB EE Operator. Because this operator was not available for 4.12 at the time of writing, a custom Helm chart is used instead.
OpenShift prerequisites
An NFS server (NFSv4-only) is required to run these steps.
- Provision storage for the MongoDB instance by creating a persistent volume. In this PoC use case, NFS was used to make debugging easier; for production environments, consider OpenShift Data Foundation for reliability and performance. Make sure to adjust the value of the NFS_SERVER_IP variable in the following example:

  ```shell
  NFS_SHARE_CAPACITY="20Gi"
  NFS_SHARE_PATH="/nfs_share"
  NFS_SERVER_IP="?.?.?.?"
  STORAGE_CLASS_NAME="mongo-storage-class"

  # The unquoted EOF delimiter lets the shell substitute the variables above
  cat > nfs-persistent-volume.json <<EOF
  {
    "apiVersion": "v1",
    "kind": "PersistentVolume",
    "metadata": {
      "name": "nfs4mongo-pv-0001",
      "annotations": {
        "pv.kubernetes.io/bound-by-controller": "yes"
      },
      "finalizers": [
        "kubernetes.io/pv-protection"
      ]
    },
    "spec": {
      "capacity": {
        "storage": "${NFS_SHARE_CAPACITY}"
      },
      "accessModes": [
        "ReadWriteMany"
      ],
      "nfs": {
        "path": "${NFS_SHARE_PATH}",
        "server": "${NFS_SERVER_IP}"
      },
      "persistentVolumeReclaimPolicy": "Delete",
      "storageClassName": "${STORAGE_CLASS_NAME}"
    }
  }
  EOF

  oc create -f nfs-persistent-volume.json
  ```
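The manifest is only valid if the shell actually expands the variables into it, which depends on how the here-document (or echo) is quoted. A minimal sketch of the difference between unquoted and quoted delimiters (plain shell, no cluster access needed):

```shell
NFS_SHARE_CAPACITY="20Gi"

# Unquoted delimiter: the shell expands ${NFS_SHARE_CAPACITY} inside the heredoc
cat > /tmp/pv-expanded.json <<EOF
{ "capacity": { "storage": "${NFS_SHARE_CAPACITY}" } }
EOF

# Quoted delimiter: the placeholder is written out literally, which oc would reject
cat > /tmp/pv-literal.json <<'EOF'
{ "capacity": { "storage": "${NFS_SHARE_CAPACITY}" } }
EOF

grep -c '20Gi' /tmp/pv-expanded.json          # → 1
grep -c '20Gi' /tmp/pv-literal.json || true   # → 0
```

A quick `grep '\${' nfs-persistent-volume.json` before the `oc create` catches any placeholders that were left unexpanded.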
- On the NFS server side, the user ID and group ID of the exported folder must match the user ID that the MongoDB container later runs as (check mongo-db-helm-with-replset.yaml and search for runAsUser). Run this command on the NFS server:

  ```shell
  chown 1000740005:1000740005 /mongo_share
  ```
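The ownership requirement can be checked mechanically before deploying. A minimal sketch using GNU `stat` — demonstrated here on a temporary directory; on the NFS server, point `DIR` at the exported path instead:

```shell
# runAsUser from the Helm values used later in this scenario
EXPECTED_UID=1000740005

# Demo directory; on the NFS server, set DIR to the exported path
DIR=$(mktemp -d)

ACTUAL_UID=$(stat -c '%u' "$DIR")
if [ "$ACTUAL_UID" -eq "$EXPECTED_UID" ]; then
    echo "ownership matches runAsUser"
else
    echo "mismatch: owned by uid $ACTUAL_UID, expected $EXPECTED_UID"
fi
```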
- In the Developer view of OpenShift, add a new project. In this example, the project is called mongodb-test-1. Switch to the new project. Note: This project name/namespace is used in later steps. If you use a different project name, remember to change it in the subsequent commands.
- To display the Helm chart in the Developer Catalog, add the Red Hat Helm chart repository by clicking Helm Chart repositories > Create Helm Chart Repository. Set the URL to point to https://redhat-developer.github.io/redhat-helm-charts.
- In the Developer Catalog, click +Add > Helm Charts. Search for IBM Mongodb Enterprise Helm, click it, and then click Install Helm Chart. Before you continue, edit the YAML and replace the entire content with the following:

  ```yaml
  affinity: {}
  autoscaling:
    enabled: false
    maxReplicas: 100
    minReplicas: 1
    targetCPUUtilizationPercentage: 80
  database:
    adminpassword: admin123
    adminuser: adminuser
    name_database: testdb
  fullnameOverride: ''
  global:
    license: true
    persistence:
      claims:
        accessMode: ReadWriteMany
        capacity: 10
        capacityUnit: Gi
        mountPath: /data/db/
        name: mongodb-pvc-0001
        storageClassName: mongo-storage-class
    securityContext:
      fsGroup: 0
      supplementalGroup: 0
  image:
    pullPolicy: Always
    repository: quay.io/ibm/ibmz-mongodb-enterprise-database
    tag: "v4.4-rh7-s390x"
  imagePullSecrets: []
  ingress:
    annotations: {}
    controller: nginx
    host: mongotest.apps.ocp0.sa.boe
    tls: []
  nameOverride: ''
  nodeSelector: {}
  podAnnotations: {}
  podSecurityContext: {}
  replicaCount: 1
  resources: {}
  securityContext:
    runAsNonRoot: true
    # Must be in a certain range:
    runAsUser: 1000740005
  service:
    port: 27017
    type: ClusterIP
  serviceAccount:
    annotations: {}
    create: true
    name: mongod
  tolerations: []
  ```
- Verify your installation:
  - Go to the terminal of the spawned pod, then run this command:

    ```shell
    mongo -u myUserAdmin -p password --authenticationDatabase admin
    ```
Set up MongoDB as part of a replica set
The installation of MongoDB spawns a single mongod instance that is not part of a replica set. The MongoDB connector that is used later in the Kafka setup expects a replica set and does not work without one.
In this section, the public image is slightly modified to make the mongod instance part of a replica set. This replica set has just a single member, which assumes that the other members are currently down and promotes itself to primary.
Related resources:
- "The $changeStream stage is only supported on replica sets" error while using mongodb-source-connect
This procedure edits the init script that sets up MongoDB to change this behaviour. You can also modify /etc/mongod.conf instead.
The following commands require that podman is installed on the bastion node.
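If you prefer the configuration-file route, the command-line flags used below map onto mongod.conf settings. A sketch of the equivalent stanza (option names per the MongoDB 4.4 configuration-file reference):

```yaml
# Equivalent of `mongod --replSet "rs0" --bind_ip_all` in config-file form
replication:
  replSetName: rs0
net:
  bindIpAll: true
```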
- Download and run the MongoDB container image with an interactive shell:

  ```shell
  podman run -it quay.io/ibm/ibmz-mongodb-enterprise-database:v4.4-rh7-s390x bash
  ```
- Copy mongo_init.sh to the local file system:

  ```shell
  podman cp CONTAINER:/var/log/mongodb/mongo_init.sh ./mongo_init.sh
  ```
- Edit the mongo_init.sh script:
  - Change the line that starts the mongod daemon to:

    ```shell
    mongod --replSet "rs0" --bind_ip_all
    ```

    Note: --auth was removed because it also caused issues.
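If the edit needs to be repeatable, for example in a build script, it can also be done with sed instead of an interactive editor. A sketch, demonstrated on a dummy copy of the script — the exact start line in the real mongo_init.sh may differ, so check it first:

```shell
# Dummy stand-in for the real script (the real mongod start line may differ)
cat > /tmp/mongo_init.sh <<'EOF'
#!/bin/bash
mongod --auth --bind_ip_all
EOF

# Rewrite the mongod start line: drop --auth, join replica set "rs0"
sed -i 's|^mongod .*|mongod --replSet "rs0" --bind_ip_all|' /tmp/mongo_init.sh

grep '^mongod' /tmp/mongo_init.sh   # → mongod --replSet "rs0" --bind_ip_all
```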
- Make the mongo_init.sh script executable and readable by anyone:

  ```shell
  chmod 777 mongo_init.sh
  ```
- Create a Dockerfile that uses the original image and replaces the mongo_init.sh script:

  ```dockerfile
  FROM quay.io/ibm/ibmz-mongodb-enterprise-database:v4.4-rh7-s390x
  COPY mongo_init.sh /var/log/mongodb/mongo_init.sh
  ```
- Build and tag the container image:

  ```shell
  # cd into the directory with the Dockerfile
  podman build . -t custom-mongodb-1:latest
  ```
- Push the image to the internal OpenShift image registry. If the oc get route command does not work, see How to expose the registry.

  ```shell
  # Get the route to the internal OpenShift image registry
  HOST=$(oc get route default-route -n openshift-image-registry --template='{{ .spec.host }}')

  # Log in to the internal image registry
  podman login -u admin -p $(oc whoami -t) ${HOST}

  NAMESPACE="mongodb-test-1"
  IMAGE_NAME="custom-mongodb-1:latest"
  LOCAL_IMAGE_NAME="localhost/${IMAGE_NAME}"
  REMOTE_IMAGE_NAME="${IMAGE_NAME}"

  podman push ${LOCAL_IMAGE_NAME} ${HOST}/${NAMESPACE}/${REMOTE_IMAGE_NAME}
  ```
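For orientation, the push target is composed as `<registry-host>/<namespace>/<name>:<tag>`; the integrated registry maps the middle component onto the OpenShift project, which is why the namespace must match. A sketch with a placeholder host (the real one comes from `oc get route`):

```shell
# Placeholder; in a real cluster this value comes from `oc get route`
HOST="default-route-openshift-image-registry.apps.example.com"
NAMESPACE="mongodb-test-1"
IMAGE_NAME="custom-mongodb-1:latest"

echo "${HOST}/${NAMESPACE}/${IMAGE_NAME}"
# → default-route-openshift-image-registry.apps.example.com/mongodb-test-1/custom-mongodb-1:latest
```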
- Give the mongod service account permission to pull the image from the internal image registry in the mongodb-test-1 namespace:

  ```shell
  oc policy add-role-to-user \
    system:image-puller system:serviceaccount:mongodb-test-1:mongod \
    --namespace=mongodb-test-1
  ```
- Deploy the Helm chart that now uses the custom image you pushed to the internal image registry:

  ```yaml
  affinity: {}
  autoscaling:
    enabled: false
    maxReplicas: 100
    minReplicas: 1
    targetCPUUtilizationPercentage: 80
  database:
    adminpassword: admin123
    adminuser: adminuser
    name_database: testdb
  fullnameOverride: ''
  global:
    license: true
    persistence:
      claims:
        accessMode: ReadWriteMany
        capacity: 10
        capacityUnit: Gi
        mountPath: /data/db/
        name: mongodb-pvc-0001
        storageClassName: mongo-storage-class
    securityContext:
      fsGroup: 0
      supplementalGroup: 0
  image:
    pullPolicy: Always
    # CUSTOM IMAGE (WITH REPLICA SET ENABLED):
    repository: default-route-openshift-image-registry.apps.ocp0.sa.boe/mongodb-test-1/custom-mongodb-1
    tag: "latest"
  imagePullSecrets: []
  ingress:
    annotations: {}
    controller: nginx
    host: mongotest.apps.ocp0.sa.boe
    tls: []
  nameOverride: ''
  nodeSelector: {}
  podAnnotations: {}
  podSecurityContext: {}
  replicaCount: 1
  resources: {}
  securityContext:
    runAsNonRoot: true
    # Must be in a certain range:
    runAsUser: 1000740005
  service:
    port: 27017
    type: ClusterIP
  serviceAccount:
    annotations: {}
    create: true
    name: mongod
  tolerations: []
  ```

  Note: If you get an image pull error, delete the pod. OpenShift automatically creates a new pod and retries the pull.
- Open a terminal for the MongoDB pod. Log in and then initiate the replica set:

  ```shell
  mongo -u myUserAdmin -p password --authenticationDatabase admin
  ```

  Then, in the mongo shell:

  ```
  rs.initiate()
  ```

  Note: Make sure you wait until the mongod instance has transitioned from SECONDARY to PRIMARY.
Set up MongoDB on an LPAR
Follow the official documentation to Install MongoDB Enterprise Edition on Red Hat or CentOS. You can try a version higher than 4.4, but because this use case runs version 4.4 on the OpenShift side, version 4.4 is also used on the LPAR side for consistency.
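For orientation, the yum repository definition those instructions have you create looks roughly like this for the 4.4 series — a sketch from memory; copy the exact file contents from the linked MongoDB documentation rather than from here:

```
[mongodb-enterprise-4.4]
name=MongoDB Enterprise Repository
baseurl=https://repo.mongodb.com/yum/redhat/$releasever/mongodb-enterprise/4.4/$basearch/
gpgcheck=1
enabled=1
gpgkey=https://www.mongodb.org/static/pgp/server-4.4.asc
```

With the repository in place, the server is installed with `sudo yum install -y mongodb-enterprise`.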