Upgrading Product Master Kubernetes or OpenShift deployment (Fix Pack 2 to Fix Pack 3)

This topic explains how to migrate a Kubernetes or OpenShift® cluster from the Fix Pack 2 release to the Fix Pack 3 release.

Before you begin

About this task

Starting with the IBM® Product Master 12.0 Fix Pack 3 release, the Product Master container images are operator-based and built on the Red Hat® Enterprise Linux® (RHEL) 8 Universal Base Image (UBI) base Docker image. This ensures that the images are compatible with all environments that support Operator Lifecycle Manager (OLM) and Kubernetes (K8s). Moreover, the images are lightweight because WebSphere® Application Server has been replaced with a lighter-weight application server, WebSphere Liberty.

Support for both the Db2® and Oracle databases is available out of the box without the overhead of creating custom images with database clients.

Following are the major changes in the Fix Pack 3 release.

  1. Images are based on WebSphere Liberty and RHEL 8 UBI base image.
  2. New images for REST API and operator.
  3. Images for third-party software (MongoDB, Elasticsearch, Hazelcast, and IBM MQ).
  4. Images have a default user with UID 5000 and GID 0. The was and svcuser users and the svcgroup group are removed from all the images.
  5. New volumes have been added for elasticsearch-data and restapi.
  6. Existing volumes have been renamed: appsvr to admin, dam to appdata, and newui to personaui.
  7. MongoDB has been upgraded to Version 4.0.22, and support for Version 3.x is deprecated.
  8. Hazelcast has been upgraded to Version 4.1, and support for Version 4.0 is deprecated.
  9. The Admin UI and Persona-based UI applications are now accessed by using the NGINX Ingress URL and port.

Procedure

IBM Product Master services are deployed on the Kubernetes cluster by using OLM. Back up all persistent volumes, the MongoDB database, and the Db2 or Oracle database before you proceed with the migration.
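Steps 1 and 2 below can be sketched with kubectl and mongodump. The pod name ipm-mongodb-xxxxxxxx, the admin credentials, and the /tmp/mongo-backup path are placeholders for illustration; replace them with the values from your environment.

```shell
# Step 1 (sketch): generate a data dump inside the MongoDB pod.
# Pod name, credentials, and paths are placeholders; replace with your values.
kubectl exec ipm-mongodb-xxxxxxxx -- mongodump --username admin \
    --authenticationDatabase admin --password xxxxxxxx --out /tmp/mongo-backup

# Step 2 (sketch): copy the dump from the pod to the backup server.
kubectl cp ipm-mongodb-xxxxxxxx:/tmp/mongo-backup ./mongo-backup
```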

  1. Log in to the MongoDB pod and generate a MongoDB data backup.
  2. Copy the data dump from the MongoDB pod to a backup server.
  3. Create a backup copy of all directories that are being used as persistent volume on all worker nodes.
    Example:
    cp -ar /mnt/ipm12 /mnt/ipm12-backup
  4. If you are using a cloud environment, back up all your data before proceeding.
  5. MongoDB does not support a direct upgrade to Version 4.0.22, so an intermediate upgrade to Version 3.6 is required. Run the following commands to make MongoDB available for an upgrade to Version 3.6 during the Product Master deployment.
    $ kubectl exec ipm-mongodb-xxxxxxxx -- mongo --username admin --authenticationDatabase 
    admin --password xxxxxxxx --eval "db.adminCommand( { getParameter: 1, featureCompatibilityVersion: 1 } )"
    $ kubectl exec ipm-mongodb-xxxxxxxx -- mongo --username admin --authenticationDatabase 
    admin --password xxxxxxx --eval "db.adminCommand( { setFeatureCompatibilityVersion: '3.6' } )"
  6. Uninstall the Product Master application.
  7. Delete the created persistent volumes by using the following command.
    $ kubectl delete -f volumes.yaml
    Note: Do not delete directories that contain application data (/mnt/ipm12).
  8. Rename the following persistent volume directories.
    /mnt/ipm12/dam to /mnt/ipm12/appdata
    /mnt/ipm12/appsrv to /mnt/ipm12/admin
    /mnt/ipm12/newui to /mnt/ipm12/personaui
  9. Use the following commands to create the new volume directories.
    $ mkdir /mnt/ipm12/elasticsearch-data
    $ mkdir /mnt/ipm12/restapi
  10. Set the permissions for the new default user with UID 5000 and GID 0.
  11. Use the following commands on all the worker nodes to assign ownership to the default user.
    adduser default --uid 5000 --gid 0
    chown -R 5000:0 /mnt/ipm12
    chmod 775 /mnt/ipm12
    If you get an error while adding the user, remove the existing user that has UID 5000.
    Note: If you are using multiple worker nodes and /mnt/ipm12/dam is shared between the nodes by using Network File System (NFS), you need to update the NFS configuration because the directory is now changed to /mnt/ipm12/appdata. You also need to make the directory structure changes on all the worker nodes.
  12. Set the MongoDB Version to 3.6 in the ipm_12.0.3_cr.yaml CRD file. According to the MongoDB documentation, there is no direct upgrade from MongoDB Version 3.4 to 4.0.22. You need to upgrade from Version 3.4 to 3.6, and then to 4.0.22.
  13. Verify that the storage class for all the persistent volumes is the same as mentioned in the volumes.yaml file.
  14. The replica count for MongoDB, Elasticsearch, IBM MQ, and Hazelcast must be 1. If you increase it to a value greater than 1, the services start failing.
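The directory changes in steps 8 and 9 can be sketched as follows. On a real worker node the base directory is /mnt/ipm12, as in the examples above; this sketch uses a temporary directory instead so that it is safe to try anywhere.

```shell
# Demonstration of the volume directory changes (steps 8 and 9).
# A temporary directory stands in for /mnt/ipm12 on a worker node.
IPM_BASE="$(mktemp -d)"
mkdir "$IPM_BASE/dam" "$IPM_BASE/appsrv" "$IPM_BASE/newui"   # pre-Fix Pack 3 layout

# Step 8: rename the existing persistent volume directories.
mv "$IPM_BASE/dam"    "$IPM_BASE/appdata"
mv "$IPM_BASE/appsrv" "$IPM_BASE/admin"
mv "$IPM_BASE/newui"  "$IPM_BASE/personaui"

# Step 9: create the directories for the new volumes.
mkdir "$IPM_BASE/elasticsearch-data" "$IPM_BASE/restapi"

ls "$IPM_BASE"   # admin  appdata  elasticsearch-data  personaui  restapi
```

On the worker nodes themselves you would follow this with the ownership and permission changes from steps 10 and 11 (chown -R 5000:0 and chmod 775), which require root and are therefore left out of this runnable sketch.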

Results

You can now deploy IBM Product Master 12.0 Fix Pack 3 release services.

What to do next

Upgrade your MongoDB Version.

  1. Edit the ipm_12.0.3_cr.yaml file, set the replica count to 0 for the MongoDB service, and apply the CRD by using the following command.
    $ kubectl apply -f ipm_12.0.3_cr.yaml
  2. Monitor the pods by using the following command.
    $ kubectl get pods
  3. After the MongoDB pod is deleted, edit the ipm_12.0.3_cr.yaml file again, change the MongoDB Version from 3.6 to 4.0.22, and set the replica count to 1.
  4. Apply the CRD again by using the following command.
    $ kubectl apply -f ipm_12.0.3_cr.yaml
Your deployment should now be running with the MongoDB Version 4.0.22.
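To confirm the upgrade, you can query the running server version in the same style as the commands used earlier in this procedure; the pod name and password are the same placeholders and must be replaced with your values.

```shell
# Placeholder pod name and password; replace with your values.
kubectl exec ipm-mongodb-xxxxxxxx -- mongo --username admin \
    --authenticationDatabase admin --password xxxxxxxx \
    --eval "db.version()"
```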