Upgrading Netcool Operations Insight on Red Hat OpenShift Container Platform offline with a bastion host

Use these instructions to upgrade an existing Netcool® Operations Insight® deployment from version 1.6.12 or version 1.6.11 to 1.6.13, on an offline Red Hat® OpenShift® Container Platform cluster, by using a bastion host. Before you upgrade, you must back up your deployment.

Before you begin

  • Before you upgrade, configure single sign-on and add Web GUI customization. For more information, see Configuring single sign-on and adding Web GUI customization with scripts.
    Note: The keytool path changed in version 1.6.13 from /home/netcool/app/was/java/bin/keytool to /home/netcool/app/was/java/8.0/bin/keytool.
  • Note: Upgrading from OwnNamespace mode to SingleNamespace mode is not supported. The SingleNamespace mode is supported only for cloud and hybrid deployments that are installed with the oc-ibm_pak plug-in and portable compute or portable storage devices.
  • Ensure that you complete all the steps in Preparing your cluster. Most of these steps were completed as part of your previous Netcool Operations Insight deployment.
  • Ensure that you have an adequately sized cluster. For more information, see Sizing for a Netcool Operations Insight on Red Hat OpenShift deployment.
  • Configure persistent storage for your deployment. Only version 1.6.12 or version 1.6.11 deployments with persistence enabled are supported for upgrade to version 1.6.13.
  • Note: Before you upgrade to version 1.6.13, remove the noi-root-ca secret, if it is present, by running the following command:
    oc delete secret noi-root-ca
  • Before you upgrade, save a backup copy of the cloud native analytics gateway configmap: ea-noi-layer-eanoigateway. For more information, see Preserving cloud native analytics gateway configmap customizations on upgrade.
    Note: To ensure that no customizations are lost in the upgrade process, complete this task for all customized configmaps: back up each manually configured configmap before you upgrade, and reapply the customizations after you upgrade.
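This kind of backup can be scripted. The following sketch writes each customized configmap to a timestamped YAML file; ea-noi-layer-eanoigateway is the configmap named above, and the helper name is illustrative. Run it while logged in to the project that contains your deployment.

```shell
# Sketch: back up customized configmaps to timestamped YAML files before
# the upgrade, so that their customizations can be reapplied afterward.
backup_file() {
  # Builds a backup file name such as ea-noi-layer-eanoigateway-20240101-120000.yaml
  echo "$1-$(date +%Y%m%d-%H%M%S).yaml"
}

# ea-noi-layer-eanoigateway is named in the step above; add any other
# configmaps that you customized manually.
for cm in ea-noi-layer-eanoigateway; do
  oc get configmap "$cm" -o yaml > "$(backup_file "$cm")" \
    || echo "backup of $cm failed" >&2
done
```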

All the necessary images for version 1.6.13 are either in the freely accessible operator repository (icr.io/cpopen), or in the IBM Entitled Registry (cp.icr.io). You need an entitlement key for the IBM Entitled Registry.

About this task

You can upgrade your deployment on an air-gapped Red Hat OpenShift Container Platform cluster that has no internet connectivity. Create an online bastion host that can download the Netcool Operations Insight CASE bundle from IBM CloudPaks and access the necessary images in the IBM Entitled Registry. Mirror the images to a registry on the Red Hat OpenShift Container Platform cluster. Then, upgrade the Netcool Operations Insight operator and instance on the cluster.

To upgrade from version 1.6.12 or 1.6.11 to version 1.6.13, complete the following steps.

Procedure

  1. Complete the steps in 1. Set up your mirroring environment.
  2. Complete the steps in 2. Set environment variables and download CASE files.
  3. Complete the steps in 3. Mirror images.

Upgrade the Netcool Operations Insight catalog and operator

  1. Complete the steps in 4. Install Netcool Operations Insight on Red Hat OpenShift Container Platform.

Upgrade the NOI instance

  1. Note: If you already set up access to the target registry, skip this step.
    Create the target-registry-secret by running the following command:
    oc create secret docker-registry target-registry-secret \
        --docker-server=$TARGET_REGISTRY \
        --docker-username=$TARGET_REGISTRY_USER \
        --docker-password=$TARGET_REGISTRY_PASSWORD \
        --namespace=$TARGET_NAMESPACE
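As a sketch, you can fail fast if any of the registry variables is unset before creating the secret. The variable names match the command above; the helper name is illustrative.

```shell
# Sketch: warn about unset registry variables before creating
# target-registry-secret, so the secret is not created with empty values.
require_env() {
  # Returns success only if the named variable is set to a non-empty value.
  eval "val=\${$1}"
  [ -n "$val" ]
}

for v in TARGET_REGISTRY TARGET_REGISTRY_USER TARGET_REGISTRY_PASSWORD TARGET_NAMESPACE; do
  require_env "$v" || echo "warning: $v is not set" >&2
done
```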
  2. Note: If you are upgrading from version 1.6.11 to version 1.6.13, complete this step.
    To avoid issues with CouchDB or Redis pods after the upgrade, complete the following steps.
    1. If your deployment has more than one CouchDB replica, for example a production-sized deployment, scale the CouchDB statefulset to zero:
      oc scale sts <release-name>-couchdb --replicas=0
    2. Scale the Redis statefulset to zero:
      oc scale sts <release-name>-ibm-redis-server --replicas=0
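The two scale-down commands can also be run in one loop. In this sketch, RELEASE_NAME is an assumed variable that holds your release name; "evtmanager" is only an example default.

```shell
# Sketch: scale the CouchDB and Redis statefulsets to zero in one loop.
# RELEASE_NAME is an assumed variable; "evtmanager" is only an example default.
RELEASE_NAME="${RELEASE_NAME:-evtmanager}"

for sts in couchdb ibm-redis-server; do
  oc scale sts "${RELEASE_NAME}-${sts}" --replicas=0 \
    || echo "failed to scale ${RELEASE_NAME}-${sts}" >&2
done
```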
  3. From the Red Hat OpenShift Container Platform OLM UI, go to Operators > Installed Operators and select your Project. Then select IBM Cloud Pak for AIOps Event Manager.
  4. Note: Complete this step if you are upgrading from version 1.6.11.
    Go to the All instances tab and select your instance. Edit the YAML.
    • Update the spec.version value (from 1.6.11) to spec.version: 1.6.13.
    Note: To avoid an imagepullbackoff error after an upgrade, edit the cemformation resource. For more information, see Migration job does not complete.
  5. Note: Complete this step if you are upgrading from version 1.6.12.
    Go to the All instances tab and select your instance. Edit the YAML.
    1. Update the spec.version value (from 1.6.12) to spec.version: 1.6.13.
    2. Add the following settings for the helmValuesCEM section.
      spec:
        helmValuesCEM:
          commonimages.brokers.image.digest: sha256:d40731e355dd6a28cef6224f788b5456c94df7556eba1babc51ce5e9ac2784da
          commonimages.brokers.image.tag: 1.15.0-20240307T210317Z-multi-arch-L-KBDX-JKT9JU
          commonimages.cemusers.image.digest: sha256:8c6a4f119270029982cac2206aaa433118d3d0f04738bbf220abf73c2de21798
          commonimages.cemusers.image.tag: 1.15.0-20240307T210317Z-multi-arch-L-KBDX-JKT9JU
          commonimages.channelservices.image.digest: sha256:3b64430b32787ae03ec88b409e8fd720d619b214ff2c4c1947bf26300139ee1a
          commonimages.channelservices.image.tag: 1.15.0-20240307T210317Z-multi-arch-L-KBDX-JKT9JU
          commonimages.eventanalyticsui.image.digest: sha256:bee1274a8fe0d3015425bdd13c09d9faec26885f343da6720e17f7a5c37b27eb
          commonimages.eventanalyticsui.image.tag: 1.15.0-20240307T210322Z-multi-arch-L-KBDX-JKT9JU
          commonimages.eventpreprocessor.image.digest: sha256:0066b6d6b28b1c70552c86e4c13aa752f865608fba368057e724d8ce98820b68
          commonimages.eventpreprocessor.image.tag: 1.15.0-20240307T210317Z-multi-arch-L-KBDX-JKT9JU
          commonimages.incidentprocessor.image.digest: sha256:4ef66e6932b362c56d2a7fbe5e789a364ef8ef960c8100f3bf4a09fe11a58bb0
          commonimages.incidentprocessor.image.tag: 1.15.0-20240307T210317Z-multi-arch-L-KBDX-JKT9JU
          commonimages.notificationprocessor.image.digest: sha256:d875654db5b60fd9727bddca4c4da8539d2fb20c872fc83496e2acfeaa2d28e2
          commonimages.notificationprocessor.image.tag: 1.15.0-20240307T210317Z-multi-arch-L-KBDX-JKT9JU
          commonimages.integrationcontroller.image.digest: sha256:7f82bad08d6c51fff865f74551466027e36bc39cbd7bbed9481a134881063165
          commonimages.integrationcontroller.image.tag: 1.15.0-20240307T210317Z-multi-arch-L-KBDX-JKT9JU
          commonimages.normalizer.image.digest: sha256:af550c0e3cd2d2c239556c9175198e9f5a4561999f059841c79a98c838e3801b
          commonimages.normalizer.image.tag: 1.15.0-20240307T210317Z-multi-arch-L-KBDX-JKT9JU
          commonimages.rba.rbs.image.digest: sha256:2e82106c4be36dddc03132d8e74eec11cc5de1624d3bf73a3a12d2cf29cbe5d8
          commonimages.rba.rbs.image.tag: 1.34.0-20240305124329-ubi8-minimal-L-KBDX-JKT9JU
          commonimages.rba.as.image.digest: sha256:937c4ac77690670e1eee409ae514d7c880a66b7e70b1cf1c55091e64cdd6e506
          commonimages.rba.as.image.tag: 1.34.0-20240301170623-ubi8-minimal-L-KBDX-JKT9JU
      Wait for all pods, except the CEM pods, to restart and all jobs to complete. Then, edit the YAML file and remove the helmValuesCEM section. The CEM pods then restart.
    Note: To avoid an imagepullbackoff error after an upgrade, edit the cemformation resource. For more information, see Migration job does not complete.
  6. Edit the Netcool Operations Insight properties to provide access to the target registry.
    1. Update spec.advanced.imagePullRepository so that it points to the target registry that you created.
    2. Set spec.entitlementSecret to the target registry secret.
  7. Select Save.
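Steps 6 and 7 can also be completed from the command line with oc patch instead of the OLM UI. The following is a sketch: the variable names are assumptions that match the earlier secret-creation step, the helper name is illustrative, and the exact repository value depends on how you mirrored your images.

```shell
# Sketch: set spec.advanced.imagePullRepository and spec.entitlementSecret
# with oc patch. Variable names are assumptions.
registry_patch() {
  # Builds the JSON patch for the image pull repository and entitlement secret.
  printf '[{"op": "replace", "path": "/spec/advanced/imagePullRepository", "value": "%s"}, {"op": "replace", "path": "/spec/entitlementSecret", "value": "%s"}]' "$1" "$2"
}

# The repository value here is an example; use the target registry that you
# mirrored your images to.
oc patch noi "$noi_instance_name" -n "$NAMESPACE" --type=json \
  -p="$(registry_patch "$TARGET_REGISTRY" target-registry-secret)" \
  || echo "oc patch failed" >&2
```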

What to do next

If the {{ .Release.Name }}-spark-shared-state PVC is still present after the upgrade, you can delete it. This PVC is not used in Netcool Operations Insight 1.6.13 and later.
  1. Check whether a shared spark PVC exists:
    oc get pvc | grep 'spark'
  2. Delete the {{ .Release.Name }}-spark-shared-state PVC resource if it exists:
    oc delete pvc {{ .Release.Name }}-spark-shared-state
    Where {{ .Release.Name }} is the release name of the PVC resource.
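The check and the delete can be combined so that the delete runs only when the PVC exists. In this sketch, RELEASE_NAME is an assumed variable that holds your release name; "evtmanager" is only an example.

```shell
# Sketch: delete the spark shared-state PVC only if it exists.
spark_pvc_name() {
  echo "$1-spark-shared-state"
}

pvc="$(spark_pvc_name "${RELEASE_NAME:-evtmanager}")"
if oc get pvc "$pvc" >/dev/null 2>&1; then
  oc delete pvc "$pvc"
fi
```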

After you upgrade your cloud deployment, update the cloud native analytics gateway configmap. For more information, see Preserving cloud native analytics gateway configmap customizations on upgrade.

To enable or disable an observer, use the oc patch command, as in the following examples:
oc patch noi $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/netDisco", "value": 'true' }]'
oc patch noi $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/aaionap", "value": 'true' }]'
oc patch noi $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/alm", "value": 'true' }]'
oc patch noi $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/ansibleawx", "value": 'true' }]'
oc patch noi $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/appdynamics", "value": 'true' }]'
oc patch noi $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/aws", "value": 'true' }]'
oc patch noi $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/azure", "value": 'true' }]'
oc patch noi $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/bigcloudfabric", "value": 'true' }]'
oc patch noi $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/bigfixinventory", "value": 'true' }]'
oc patch noi $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/cienablueplanet", "value": 'true' }]'
oc patch noi $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/ciscoaci", "value": 'true' }]'
oc patch noi $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/contrail", "value": 'true' }]'
oc patch noi $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/dns", "value": 'true' }]'
oc patch noi $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/docker", "value": 'true' }]'
oc patch noi $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/dynatrace", "value": 'true' }]'
oc patch noi $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/file", "value": 'true' }]'
oc patch noi $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/gitlab", "value": 'true' }]'
oc patch noi $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/googlecloud", "value": 'true' }]'
oc patch noi $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/hpnfvd", "value": 'true' }]'
oc patch noi $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/ibmcloud", "value": 'true' }]'
oc patch noi $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/itnm", "value": 'true' }]'
oc patch noi $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/jenkins", "value": 'true' }]'
oc patch noi $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/junipercso", "value": 'true' }]'
oc patch noi $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/kubernetes", "value": 'true' }]'
oc patch noi $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/newrelic", "value": 'true' }]'
oc patch noi $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/openstack", "value": 'true' }]'
oc patch noi $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/rancher", "value": 'true' }]'
oc patch noi $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/rest", "value": 'true' }]'
oc patch noi $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/sdconap", "value": 'true' }]'
oc patch noi $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/servicenow", "value": 'true' }]'
oc patch noi $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/sevone", "value": 'true' }]'
oc patch noi $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/taddm", "value": 'true' }]'
oc patch noi $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/viptela", "value": 'true' }]'
oc patch noi $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/vmvcenter", "value": 'true' }]'
oc patch noi $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/vmwarensx", "value": 'true' }]'
oc patch noi $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/zabbix", "value": 'true' }]'
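The per-observer commands above all follow the same pattern, so a set of observers can be patched in one loop. This is a sketch: the observer list is a subset of those above, and the helper name is illustrative. Note that netDisco and aaionap use the /spec/topology/<name> path directly, not /spec/topology/observers/<name>.

```shell
# Sketch: toggle a set of observers in one loop instead of one command each.
observer_patch() {
  # Builds the JSON patch that sets /spec/topology/observers/<name> to true or false.
  printf '[{"op": "replace", "path": "/spec/topology/observers/%s", "value": %s}]' "$1" "$2"
}

# Subset of the observers listed above; extend as needed.
for obs in aws azure kubernetes vmvcenter; do
  oc patch noi "$noi_instance_name" -n "$NAMESPACE" --type=json \
    -p="$(observer_patch "$obs" true)" \
    || echo "failed to patch observer $obs" >&2
done
```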

If you do not want to roll back to a previous version, remove the nasm-elasticsearch and common-datarouting pods from your full cloud 1.6.13 deployment. For more information, see Postupgrade task.