Upgrading Netcool Operations Insight on Red Hat OpenShift Container Platform with the Operator Lifecycle Manager (OLM) user interface

Use these instructions to upgrade an existing Netcool® Operations Insight® deployment from version 1.6.9 or version 1.6.8 to version 1.6.10, with the Red Hat® OpenShift® Container Platform Operator Lifecycle Manager (OLM) user interface (UI).

Before you begin

  • Ensure that you complete all the steps in Preparing your cluster. Most of these steps were completed as part of your previous Netcool Operations Insight deployment.
  • Ensure that you have an adequately sized cluster. For more information, see Sizing for a Netcool Operations Insight on Red Hat OpenShift deployment.
  • Configure persistent storage for your deployment. Only version 1.6.9 or version 1.6.8 deployments with persistence enabled are supported for upgrade to version 1.6.10.
    Note: Read-write-many (RWX) storage volumes are required. Before you upgrade Netcool Operations Insight on Red Hat OpenShift, ensure that your storage provider supports ReadWriteMany (RWX) volumes.
  • Before you upgrade to version 1.6.10, if present, remove the noi-root-ca secret by running the following command:
    oc delete secret noi-root-ca
  • Before you upgrade to version 1.6.10, if present, reverse any image overrides from the test fix of the previous release.
    1. Edit the custom resource (CR).
      oc edit noi <release-name>
      Where <release-name> is the release name, for example, evtmanager.
    2. Manually remove the tag, name, and digest entries of image overrides from the helmValuesNOI section of the YAML file.
  • Before you upgrade, save a backup copy of the cloud native analytics gateway configmap, ea-noi-layer-eanoigateway, for example with the command that is shown after this list. For more information, see Preserving cloud native analytics gateway configmap customizations on upgrade.
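
A backup of the gateway configmap can be saved to a local file with a command similar to the following sketch, where <namespace> is the namespace that your deployment is installed in. In some deployments the configmap name is prefixed with the release name, so confirm the exact name first with oc get configmaps -n <namespace>.

oc get configmap ea-noi-layer-eanoigateway -n <namespace> -o yaml > ea-noi-layer-eanoigateway-backup.yaml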

If you want to verify the origin of the catalog, then use the OLM UI and CASE upgrade method instead. For more information, see Upgrading a cloud deployment offline using CASE.

All the necessary images for version 1.6.10 are either in the freely accessible operator repository (icr.io/cpopen), or in the IBM Entitled Registry (cp.icr.io) for which you need an entitlement key.

To upgrade from version 1.6.9 or 1.6.8 to version 1.6.10, complete the following steps.

Procedure

Upgrade the catalog source

  1. From the Red Hat OpenShift Container Platform OLM UI, go to Administration > Cluster Settings. Then, go to the Configurations tab and select the OperatorHub configuration resource.
  2. Under the Sources tab, click the existing Netcool Operations Insight catalog source, ibm-operator-catalog.
  3. Confirm that the catalog source name and image for version 1.6.10, icr.io/cpopen/ibm-operator-catalog:latest, are specified in the catalog source YAML. If necessary, update the spec.image value to icr.io/cpopen/ibm-operator-catalog:latest and select Save. You can also update the image from the command line, as shown in the example after these steps.
    Note: When you installed version 1.6.9 or version 1.6.8, you specified icr.io/cpopen/ibm-operator-catalog:latest as the catalog source name and image. For more information, see step 1b in Installing Netcool Operations Insight with the Operator Lifecycle Manager (OLM) user interface for version 1.6.9, or step 1b in Installing Netcool Operations Insight with the Operator Lifecycle Manager (OLM) user interface for version 1.6.8.
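
If you prefer the command line, the catalog source image can also be updated with an oc patch command similar to the following sketch. It assumes that the catalog source is named ibm-operator-catalog and is in the openshift-marketplace namespace, as described in the previous steps:

oc patch catalogsource ibm-operator-catalog -n openshift-marketplace --type merge -p '{"spec":{"image":"icr.io/cpopen/ibm-operator-catalog:latest"}}'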

Update or create a PostgreSQL subscription

  1. Update your PostgreSQL subscription or create a new one, depending on which version you are upgrading from.
    If you are upgrading from version 1.6.9, update the PostgreSQL subscription.
    1. Go to Home > Search.
    2. Select the project (namespace) that your NOI operator subscription is deployed in from the project list.
    3. Select SUB Subscription in the resources list. A list of subscriptions is displayed. Click the subscription that is called cloud-native-postgresql-catalog-subscription. A window that shows the details of the subscription is displayed.
    4. Update the channel to stable-v1.18. A command-line alternative is shown in the example after these steps.
    If you are upgrading from version 1.6.8, create a subscription that is called cloud-native-postgresql-catalog-subscription by running the following command:
    cat << EOF | oc apply -f -
    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: cloud-native-postgresql-catalog-subscription
      namespace: <namespace>
    spec:
      channel: stable-v1.18
      installPlanApproval: Automatic
      name: cloud-native-postgresql
      source: ibm-operator-catalog
      sourceNamespace: openshift-marketplace
    EOF
    
    Where <namespace> is the namespace that you specified when preparing your cluster. For more information, see Preparing your cluster.
    Note: After you create the PostgreSQL subscription, the following error is displayed: no operator group found that is managing this namespace. Ignore this error and proceed to the next step.
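
If you are upgrading from version 1.6.9 and prefer the command line, the subscription channel can also be updated with an oc patch command similar to the following sketch, where <namespace> is the namespace that your NOI operator subscription is deployed in:

oc patch subscription.operators.coreos.com cloud-native-postgresql-catalog-subscription -n <namespace> --type merge -p '{"spec":{"channel":"stable-v1.18"}}'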

Upgrade the Netcool Operations Insight operator

  1. To upgrade the operator from the OLM UI, go to Operators > Installed Operators > IBM Cloud Pak for AIOps Event Manager. Go to the Subscription tab and select v1.14 in the Update channel section.
    Note: It takes a few minutes for IBM Cloud Pak for AIOps Event Manager to install. When installed, ensure that the status of IBM Cloud Pak for AIOps Event Manager is Succeeded before you proceed to the next steps.
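
You can also check the operator status from the command line. For example, the following command lists the cluster service versions in the namespace that the operator is installed in; the IBM Cloud Pak for AIOps Event Manager entry must report the phase Succeeded before you continue:

oc get csv -n <namespace>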

Upgrade the Netcool Operations Insight instance

  1. To upgrade the Netcool Operations Insight instance from the OLM UI, go to Operators > Installed Operators and select your project. Then, select IBM Cloud Pak for AIOps Event Manager.
  2. Note: Complete this step if you are upgrading from version 1.6.8.
    Go to the All instances or NOI tab and select your instance. Edit the Netcool Operations Insight instance YAML.
    • Add the following settings for Postgresql, PostgresqlWal, and storageClassSharedSpark in the spec.persistence section of the YAML file.
      spec:
        persistence:
          storageClassPostgresql: <storage-class>
          storageSizePostgresql: <storage-size>
          storageClassPostgresqlWal: <storage-class>
          storageSizePostgresqlWal: <storage-size>
          storageClassSharedSpark: <read-write-many storage-class>
          storageSizeSharedSpark: <storage-size>
      
      Note: The Spark storage is shared between the Spark pods, so it must use a storage class that supports multi-node (ReadWriteMany) access.
      Note: For more information about storage sizes, see Sizing for a Netcool Operations Insight on Red Hat OpenShift deployment.
    • Add the following value for edbPostgresSubscriptionName in the spec.postgresql section of the YAML file.
      spec:
        postgresql:
          edbPostgresSubscriptionName: cloud-native-postgresql-catalog-subscription
      
    • Update the spec.version value (from 1.6.8) to spec.version: 1.6.10.
  3. Note: Complete this step if you are upgrading from version 1.6.9.
    Go to the All instances or NOI tab and select your instance. Edit the YAML. Update the spec.version value (from 1.6.9) to spec.version: 1.6.10.
    Important: If you are upgrading a trial deployment, use the YAML view and add the following code snippet.
    spec:
      postgresql:
        resources:
          limitsCPU: '1'
          limitsMemory: 1Gi
          requestsCPU: 500m
          requestsMemory: 1Gi
  4. Select Save and wait until all pods are restarted. You can monitor progress from the Red Hat OpenShift Container Platform UI or from the command line, as shown in the example after these steps.
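
If you are upgrading from version 1.6.9 and prefer the command line, the version can also be updated with an oc patch command similar to the following sketch, where <release-name> is the name of your instance, for example evtmanager, and <namespace> is the namespace that it is deployed in. If you are upgrading from version 1.6.8, edit the YAML as described in step 2 instead, because additional values must be added.

oc patch noi <release-name> -n <namespace> --type merge -p '{"spec":{"version":"1.6.10"}}'

You can then watch the pods restart with the following command:

oc get pods -n <namespace> -w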

What to do next

After you upgrade your cloud deployment, update the cloud native analytics gateway configmap. For more information, see Preserving cloud native analytics gateway configmap customizations on upgrade.
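
For example, you can reapply your saved customizations by editing the configmap directly, where <namespace> is the namespace that your deployment is installed in. As with the backup step, the configmap name in your cluster might include your release name as a prefix:

oc edit configmap ea-noi-layer-eanoigateway -n <namespace>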

To enable or disable an observer, use the oc patch command, as in the following examples, where $noi_instance_name is the name of your Netcool Operations Insight instance and $NAMESPACE is the namespace that it is deployed in. Set the value to true to enable an observer, or to false to disable it.
oc patch noi $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/netDisco", "value": 'true' }]'
oc patch noi $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/aaionap", "value": 'true' }]'
oc patch noi $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/alm", "value": 'true' }]'
oc patch noi $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/ansibleawx", "value": 'true' }]'
oc patch noi $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/appdynamics", "value": 'true' }]'
oc patch noi $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/aws", "value": 'true' }]'
oc patch noi $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/azure", "value": 'true' }]'
oc patch noi $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/bigcloudfabric", "value": 'true' }]'
oc patch noi $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/bigfixinventory", "value": 'true' }]'
oc patch noi $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/cienablueplanet", "value": 'true' }]'
oc patch noi $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/ciscoaci", "value": 'true' }]'
oc patch noi $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/contrail", "value": 'true' }]'
oc patch noi $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/dns", "value": 'true' }]'
oc patch noi $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/docker", "value": 'true' }]'	
oc patch noi $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/dynatrace", "value": 'true' }]'		
oc patch noi $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/file", "value": 'true' }]'
oc patch noi $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/gitlab", "value": 'true' }]'
oc patch noi $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/googlecloud", "value": 'true' }]'
oc patch noi $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/hpnfvd", "value": 'true' }]'
oc patch noi $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/ibmcloud", "value": 'true' }]'
oc patch noi $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/itnm", "value": 'true' }]'
oc patch noi $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/jenkins", "value": 'true' }]'
oc patch noi $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/junipercso", "value": 'true' }]'
oc patch noi $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/kubernetes", "value": 'true' }]'
oc patch noi $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/newrelic", "value": 'true' }]'
oc patch noi $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/openstack", "value": 'true' }]'
oc patch noi $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/rancher", "value": 'true' }]'	
oc patch noi $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/rest", "value": 'true' }]'						
oc patch noi $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/sdconap", "value": 'true' }]'
oc patch noi $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/servicenow", "value": 'true' }]'
oc patch noi $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/sevone", "value": 'true' }]'
oc patch noi $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/taddm", "value": 'true' }]'
oc patch noi $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/viptela", "value": 'true' }]'
oc patch noi $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/vmvcenter", "value": 'true' }]'
oc patch noi $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/vmwarensx", "value": 'true' }]'
oc patch noi $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/zabbix", "value": 'true' }]'