Upgrading Netcool Operations Insight on Red Hat OpenShift Container Platform with the Operator Lifecycle Manager (OLM) user interface

Use these instructions to upgrade an existing Netcool® Operations Insight® deployment from version 1.6.8 or version 1.6.7 to version 1.6.9, with the Red Hat® OpenShift® Container Platform Operator Lifecycle Manager (OLM) user interface (UI).

Before you begin

  • Ensure that you complete all the steps in Preparing your cluster. Most of these steps were completed as part of your previous Netcool Operations Insight deployment.
  • Ensure that you have an adequately sized cluster. For more information, see Sizing for a Netcool Operations Insight on Red Hat OpenShift deployment.
  • Configure persistent storage for your deployment. Only version 1.6.8 or version 1.6.7 deployments with persistence enabled are supported for upgrade to version 1.6.9.
    Note: Read-write-many storage volumes are required. Before you upgrade Netcool Operations Insight on Red Hat OpenShift, ensure that your storage provider supports ReadWriteMany (RWX) volumes.
  • Before you upgrade to version 1.6.9, if present, remove the noi-root-ca secret by running the following command:
    oc delete secret noi-root-ca
  • Before you upgrade to version 1.6.9, if present, revert any image overrides from the test fix of the previous release.
    1. Edit the custom resource (CR).
      oc edit noi <release-name>
      Where <release-name> is the release name, for example, evtmanager.
    2. Manually remove the tag, name, and digest entries of image overrides from the helmValuesNOI section of the YAML file.
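    The entries to remove look similar to the following sketch. The component and value names are illustrative; your CR lists the actual components that were overridden by the test fix.

    ```yaml
    # Illustrative sketch only - <component> stands for whichever component
    # was overridden by the test fix of the previous release.
    spec:
      helmValuesNOI:
        <component>:
          image:
            name: <override-name>       # remove this entry
            tag: <override-tag>         # remove this entry
            digest: <override-digest>   # remove this entry
    ```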

If you want to verify the origin of the catalog, then use the OLM UI and CASE upgrade method instead. For more information, see Upgrading a cloud deployment offline using CASE.

All the necessary images for version 1.6.9 are either in the freely accessible operator repository (icr.io/cpopen), or in the IBM Entitled Registry (cp.icr.io) for which you need an entitlement key.

To upgrade from version 1.6.8 or 1.6.7 to version 1.6.9, complete the following steps.

Procedure

Upgrade the catalog source

  1. From the Red Hat OpenShift Container Platform OLM UI, go to Administration > Cluster Settings. Then, go to the Configurations tab and select the OperatorHub configuration resource.
  2. Under the Sources tab, click the existing Netcool Operations Insight catalog source, ibm-operator-catalog.
  3. Confirm that the catalog source name and image for version 1.6.9, icr.io/cpopen/ibm-operator-catalog:latest, are specified in the catalog source YAML. If necessary, update the spec.image value to icr.io/cpopen/ibm-operator-catalog:latest and select Save.
    Note: When you installed version 1.6.8 or version 1.6.7, you specified icr.io/cpopen/ibm-operator-catalog:latest as the catalog source name and image. For more information, see step 1b in Installing Netcool Operations Insight with the Operator Lifecycle Manager (OLM) user interface for version 1.6.8, or step 1b in Installing Netcool Operations Insight with the Operator Lifecycle Manager (OLM) user interface for version 1.6.7.
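    The relevant part of the catalog source YAML looks similar to the following sketch. Only the fields that this step touches are shown; sourceType: grpc is the usual value for this catalog but is included here as an assumption.

    ```yaml
    apiVersion: operators.coreos.com/v1alpha1
    kind: CatalogSource
    metadata:
      name: ibm-operator-catalog
      namespace: openshift-marketplace
    spec:
      sourceType: grpc
      image: icr.io/cpopen/ibm-operator-catalog:latest
    ```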

Create a PostgreSQL subscription

  1. Create a subscription called cloud-native-postgresql-catalog-subscription.
    cat << EOF | oc apply -f -
    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: cloud-native-postgresql-catalog-subscription
      namespace: <namespace>
    spec:
      channel: stable-v1.18
      installPlanApproval: Automatic
      name: cloud-native-postgresql
      source: ibm-operator-catalog
      sourceNamespace: openshift-marketplace
    EOF
    
    Where <namespace> is the namespace that you specified when preparing your cluster. For more information, see Preparing your cluster.
    Note: After you create the PostgreSQL subscription, the following error is displayed: no operator group found that is managing this namespace. Ignore this error and proceed to the next step.
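    To confirm that the subscription resolved, you can query its installed cluster service version (CSV). This is a sketch: replace <namespace> with your namespace, and note that an empty result means the install plan has not completed yet.

    ```shell
    oc get subscription cloud-native-postgresql-catalog-subscription \
      -n <namespace> -o jsonpath='{.status.installedCSV}'
    ```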

Upgrade the Netcool Operations Insight operator

  1. To upgrade the operator from the OLM UI, go to Operators > Installed Operators > IBM Cloud Pak for Watson AIOps Event Manager. Go to the Subscription tab.
    • Select v1.13 in the Update channel section.
    Note: It takes a few minutes for IBM Cloud Pak for Watson AIOps Event Manager to install. When installed, ensure that the status of IBM Cloud Pak for Watson AIOps Event Manager is Succeeded before you proceed to the next steps.
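    You can also confirm the operator status from the command line rather than the OLM UI. This is a sketch; replace <namespace> with your namespace.

    ```shell
    oc get csv -n <namespace> \
      -o custom-columns='NAME:.metadata.name,PHASE:.status.phase'
    ```

    The IBM Cloud Pak for Watson AIOps Event Manager CSV must report Succeeded before you continue.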

Upgrade the Netcool Operations Insight instance

  1. From the Red Hat OpenShift Container Platform OLM UI, go to Operators > Installed Operators and select your project. Then, select IBM Cloud Pak for Watson AIOps Event Manager.
  2. Go to the All instances or NOI tab and select your instance. Edit the Netcool Operations Insight instance YAML.
    • Add the following settings for Postgresql, PostgresqlWal, and storageClassSharedSpark in the spec.persistence section of the YAML file.
      spec:
        persistence:
          storageClassPostgresql: <storage-class>
          storageSizePostgresql: <storage-size>
          storageClassPostgresqlWal: <storage-class>
          storageSizePostgresqlWal: <storage-size>
          storageClassSharedSpark: <read-write-many storage-class>
          storageSizeSharedSpark: <storage-size>
      
      Note: The storage is shared between the Spark pods, so the shared Spark storage class must support multi-node access.
      Note: For more information about storage sizes, see Sizing for a Netcool Operations Insight on Red Hat OpenShift deployment.
    • Add the following settings for edbPostgresImage, edbPostgresLicenseImage, and edbPostgresSubscriptionName in the spec.postgresql section of the YAML file.
      spec:
        postgresql:
          edbPostgresImage: cp.icr.io/cp/cpd/postgresql:13.8@sha256:91a88a30e9cd2dd5e5d1e8cfb588b12e3852939f777b9e9ddc6657d3bbc10367
          edbPostgresLicenseImage: cp.icr.io/cp/cpd/edb-postgres-license-provider@sha256:fd8339c382e1c5d69184d9c3f299a3da5c9a12a579e0db5e76e86d65be9190fd
          edbPostgresSubscriptionName: cloud-native-postgresql-catalog-subscription
      
    • Update the spec.version value (from 1.6.7 or 1.6.8) to spec.version: 1.6.9.
  3. Select Save and wait until all pods are restarted. You can monitor progress from the Red Hat OpenShift Container Platform UI.
    Note: If you are upgrading from version 1.6.7 to 1.6.9, delete the following statefulsets and deployment.
    oc delete sts <release-name>-spark-master -n $NAMESPACE
    oc delete sts <release-name>-spark-slave -n $NAMESPACE
    oc delete deployment <release-name>-grafana -n $NAMESPACE
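    To monitor the restart from the command line instead of the Red Hat OpenShift Container Platform UI, you can watch the pods in your namespace:

    ```shell
    oc get pods -n $NAMESPACE -w
    ```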
    
  4. Check if the netDisco observer is enabled or disabled by running the following command.
     oc get noi $noi_instance_name --no-headers -o custom-columns=":spec.topology.netDisco"
    If the output returns true, proceed to the next step.
    If the output returns <none>, run the following command to delete the persistent volume claim (PVC).
    oc delete pvc data-<release-name>-topology-elasticsearch-0
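    The check-and-delete logic of step 4 can be sketched as a small script. The netdisco value is hardcoded here so the branching can be shown without a cluster; on a live cluster it would come from the oc get noi command above.

    ```shell
    # On a live cluster, capture the value with:
    #   netdisco=$(oc get noi "$noi_instance_name" --no-headers -o custom-columns=":spec.topology.netDisco")
    # Hardcoded here to illustrate the branch taken when netDisco is not set.
    netdisco="<none>"

    if [ "$netdisco" = "true" ]; then
      echo "netDisco is enabled - keep the topology Elasticsearch PVC"
    else
      # On a live cluster: oc delete pvc data-<release-name>-topology-elasticsearch-0
      echo "netDisco is not enabled - delete the topology Elasticsearch PVC"
    fi
    ```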

What to do next

After you upgrade your cloud deployment, update the webgui statefulset. For more information, see Configuring Netcool Operations Insight on Red Hat OpenShift with LDAP MS Active Directory.

After you upgrade your cloud deployment, delete the following roles with the oc delete role command.
  • oc delete role <release-name>-cassandra-auth-secret-generator
  • oc delete role <release-name>-cassandra-role
Note: For geo-redundant cloud deployments, do not remove the <release-name>-cassandra-role role.
To enable or disable an observer, use the oc patch command, as in the following examples:
oc patch noi $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/netDisco", "value": 'true' }]'
oc patch noi $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/aaionap", "value": 'true' }]'
oc patch noi $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/alm", "value": 'true' }]'
oc patch noi $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/ansibleawx", "value": 'true' }]'
oc patch noi $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/appdynamics", "value": 'true' }]'
oc patch noi $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/aws", "value": 'true' }]'
oc patch noi $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/azure", "value": 'true' }]'
oc patch noi $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/bigcloudfabric", "value": 'true' }]'
oc patch noi $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/bigfixinventory", "value": 'true' }]'
oc patch noi $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/cienablueplanet", "value": 'true' }]'
oc patch noi $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/ciscoaci", "value": 'true' }]'
oc patch noi $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/contrail", "value": 'true' }]'
oc patch noi $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/dns", "value": 'true' }]'
oc patch noi $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/docker", "value": 'true' }]'
oc patch noi $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/dynatrace", "value": 'true' }]'
oc patch noi $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/file", "value": 'true' }]'
oc patch noi $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/gitlab", "value": 'true' }]'
oc patch noi $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/googlecloud", "value": 'true' }]'
oc patch noi $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/hpnfvd", "value": 'true' }]'
oc patch noi $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/ibmcloud", "value": 'true' }]'
oc patch noi $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/itnm", "value": 'true' }]'
oc patch noi $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/jenkins", "value": 'true' }]'
oc patch noi $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/junipercso", "value": 'true' }]'
oc patch noi $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/kubernetes", "value": 'true' }]'
oc patch noi $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/newrelic", "value": 'true' }]'
oc patch noi $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/openstack", "value": 'true' }]'
oc patch noi $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/rancher", "value": 'true' }]'
oc patch noi $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/rest", "value": 'true' }]'
oc patch noi $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/sdconap", "value": 'true' }]'
oc patch noi $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/servicenow", "value": 'true' }]'
oc patch noi $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/sevone", "value": 'true' }]'
oc patch noi $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/taddm", "value": 'true' }]'
oc patch noi $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/viptela", "value": 'true' }]'
oc patch noi $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/vmvcenter", "value": 'true' }]'
oc patch noi $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/vmwarensx", "value": 'true' }]'
oc patch noi $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/zabbix", "value": 'true' }]'
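A note on the quoting in these commands: the 'true' that appears inside the single-quoted -p argument is joined by shell quote concatenation, so oc receives an unquoted JSON boolean true rather than the string "true". You can verify this locally, without a cluster:

```shell
# Build the patch string exactly as the shell assembles it for oc.
p='[{"op": "replace", "path": "/spec/topology/netDisco", "value": 'true' }]'
echo "$p"
# prints: [{"op": "replace", "path": "/spec/topology/netDisco", "value": true }]
```

After patching, you can read the value back with oc get noi and a jsonpath expression, as in step 4 of the upgrade procedure, to confirm that the observer flag changed.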