Upgrading a hybrid deployment with the Operator Lifecycle Manager (OLM) user interface

Use these instructions to upgrade an existing hybrid deployment from version 1.6.9 or version 1.6.8 to 1.6.10, with the Red Hat® OpenShift® Container Platform Operator Lifecycle Manager (OLM) user interface (UI).

Before you begin

  • Note: The SingleNamespace mode is supported only for a hybrid deployment with a portable device. Upgrading a SingleNamespace deployment is not supported.
  • Ensure that you complete all the steps in Preparing your cluster. Most of these steps were completed as part of your previous Netcool® Operations Insight® deployment.
  • Ensure that you have an adequately sized cluster. For more information, see Sizing for a hybrid deployment.
  • Configure persistent storage for your deployment. Only version 1.6.9 or version 1.6.8 deployments with persistence enabled are supported for upgrade to version 1.6.10.
    Note: ReadWriteMany (RWX) storage volumes are required. Before you upgrade Netcool Operations Insight on Red Hat OpenShift, ensure that your storage provider supports RWX volumes.
  • Before you upgrade to version 1.6.10, if present, remove the noi-root-ca secret by running the following command.
    oc delete secret noi-root-ca
  • Before you upgrade to version 1.6.10, if present, reverse any image overrides from the test fix of the previous release.
    1. Edit the custom resource (CR).
      oc edit noihybrid <release-name>
      Where <release-name> is the release name, for example, evtmanager.
    2. Manually remove the tag, name, and digest entries of image overrides from the helmValuesNOI section of the YAML file. (A hypothetical sketch of such entries follows this list.)
  • Before you upgrade, save a backup copy of the cloud native analytics gateway configmap, ea-noi-layer-eanoigateway. For more information, see Preserving cloud native analytics gateway configmap customizations on upgrade. (A backup command sketch follows this list.)
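
The following sketches support the last two items in this list. First, a hypothetical example of the image-override entries to remove from the helmValuesNOI section. The exact keys depend on the test fix that was applied, so treat these entries as placeholders only.

  helmValuesNOI:
    <component>.image.name: <test-fix-image-name>
    <component>.image.tag: <test-fix-image-tag>
    <component>.image.digest: <test-fix-image-digest>

Second, a minimal sketch of backing up the cloud native analytics gateway configmap, assuming that you are logged in to the cluster and that <namespace> is the namespace of your deployment.

  oc get configmap ea-noi-layer-eanoigateway -n <namespace> -o yaml > ea-noi-layer-eanoigateway-backup.yaml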

If you want to verify the origin of the catalog, then use the OLM UI and CASE installation method instead. For more information, see Upgrading hybrid Netcool Operations Insight on Red Hat OpenShift Container Platform with the oc ibm-pak plug-in and Container Application Software for Enterprises (CASE).

All the necessary images for version 1.6.10 are either in the freely accessible operator repository (icr.io/cpopen), or in the IBM® Entitled Registry (cp.icr.io). You need an entitlement key for the IBM Entitled Registry.
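
If you have not yet configured access to the IBM Entitled Registry on the cluster, a pull secret can be created from your entitlement key. A minimal sketch, assuming a secret named ibm-entitlement-key (a placeholder name) and <namespace> as the namespace of your deployment:

  oc create secret docker-registry ibm-entitlement-key --docker-server=cp.icr.io --docker-username=cp --docker-password=<entitlement-key> -n <namespace>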

To upgrade from version 1.6.9 or 1.6.8 to version 1.6.10, complete the following steps.

Procedure

Upgrade on-premises Operations Management

  1. Use IBM Installation Manager to upgrade on-premises Operations Management to version 1.6.10. For more information, see Upgrading and rolling back on premises.

Upgrade the catalog source

  1. From the Red Hat OpenShift Container Platform OLM UI, go to Administration > Cluster Settings. Then, go to the Configurations tab and select the OperatorHub configuration resource.
  2. Under the Sources tab, click the existing Netcool Operations Insight catalog source, ibm-operator-catalog.
  3. Confirm that the catalog source image for version 1.6.10, icr.io/cpopen/ibm-operator-catalog:latest, is specified in the catalog source YAML. If necessary, update the spec.image value to icr.io/cpopen/ibm-operator-catalog:latest and select Save.
    Note: When you installed version 1.6.9 or version 1.6.8, you specified ibm-operator-catalog as the catalog source name and icr.io/cpopen/ibm-operator-catalog:latest as the catalog source image. For more information, see step 1b in Installing cloud native components with the Operator Lifecycle Manager (OLM) user interface for version 1.6.9, or step 1b in Installing cloud native components with the Operator Lifecycle Manager (OLM) user interface for version 1.6.8.
  4. When you edit the YAML, ensure that the following lines are set within the spec.
    
      updateStrategy:
        registryPoll:
          interval: 45m
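    For reference, a minimal sketch of the complete catalog source with these lines in place. It assumes the standard ibm-operator-catalog source in the openshift-marketplace namespace; values such as displayName and publisher might differ in your cluster.

      apiVersion: operators.coreos.com/v1alpha1
      kind: CatalogSource
      metadata:
        name: ibm-operator-catalog
        namespace: openshift-marketplace
      spec:
        displayName: IBM Operator Catalog
        image: icr.io/cpopen/ibm-operator-catalog:latest
        publisher: IBM
        sourceType: grpc
        updateStrategy:
          registryPoll:
            interval: 45m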

Update or create a PostgreSQL subscription

  1. Update your PostgreSQL subscription or create a new one, depending on what version you are upgrading from.
    If you are upgrading from version 1.6.9, update the PostgreSQL subscription.
    1. Go to Home > Search.
    2. Select the project (namespace) that your NOI operator subscription is deployed in from the project list.
    3. Select SUB Subscription in the resources list. In the list of subscriptions that is displayed, click cloud-native-postgresql-catalog-subscription to open its details.
    4. Update the channel to stable-v1.18. (A command-line alternative is sketched at the end of this section.)
    If you are upgrading from version 1.6.8, create a subscription called cloud-native-postgresql-catalog-subscription by running the following command.
    cat << EOF | oc apply -f -
    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: cloud-native-postgresql-catalog-subscription
      namespace: <namespace>
    spec:
      channel: stable-v1.18
      installPlanApproval: Automatic
      name: cloud-native-postgresql
      source: ibm-operator-catalog
      sourceNamespace: openshift-marketplace
    EOF
    
    Where <namespace> is the namespace that you specified when preparing your cluster. For more information, see Preparing your cluster.
    Note: After you create the PostgreSQL subscription, the following error is displayed: no operator group found that is managing this namespace. Ignore this error and proceed to the next step.
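    If you prefer the command line to the OLM UI, the channel update for an upgrade from version 1.6.9 can also be applied with oc patch. A sketch, assuming that <namespace> is the namespace of your subscription:
    oc patch subscription cloud-native-postgresql-catalog-subscription -n <namespace> --type merge -p '{"spec":{"channel":"stable-v1.18"}}'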

Upgrade the Netcool Operations Insight operator

  1. To upgrade the operator from the OLM UI, go to Operators > Installed Operators > IBM Cloud Pak for AIOps Event Manager. Go to the Subscription tab and select v1.14 in the Update channel section.
    Note: It takes a few minutes for IBM Cloud Pak for AIOps Event Manager to install. When it is installed, ensure that the status of IBM Cloud Pak for AIOps Event Manager is Succeeded before you proceed to the next steps.
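    To confirm the operator status from the command line, you can list the cluster service versions (CSVs). A sketch, assuming that <namespace> is the namespace of the operator; the PHASE column reports Succeeded when the upgrade completes.
    oc get csv -n <namespace>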

Upgrade the Netcool Operations Insight instance

  1. To upgrade the Netcool Operations Insight instance, go to the OLM UI. Go to Operators > Installed Operators and select your Project. Then select IBM Cloud Pak for AIOps Event Manager.
  2. Note: Complete this step if you are upgrading from version 1.6.8.
    Go to the All instances or NOIHybrid tab and select your instance. Edit the Netcool Operations Insight instance YAML.
    • Add the following settings for Postgresql, PostgresqlWal, and storageClassSharedSpark in the spec.persistence section of the YAML file.
      spec:
        persistence:
          storageClassPostgresql: <storage-class>
          storageSizePostgresql: <storage-size>
          storageClassPostgresqlWal: <storage-class>
          storageSizePostgresqlWal: <storage-size>
          storageClassSharedSpark: <read-write-many storage-class>
          storageSizeSharedSpark: <storage-size>
      
      Note: Storage for the Spark pods is shared between them. The shared Spark storage class must support multi-node (ReadWriteMany) access.
      Note: For more information about storage sizes, see Sizing for a hybrid deployment.
    • Add the following value for edbPostgresSubscriptionName in the spec.postgresql section of the YAML file.
      spec:
        postgresql:
          edbPostgresSubscriptionName: cloud-native-postgresql-catalog-subscription
      
    • Update the spec.version value (from 1.6.8) to spec.version: 1.6.10.
  3. Note: Complete this step if you are upgrading from version 1.6.9.
    Go to the All instances or NOIHybrid tab and select your instance. Edit the YAML. Update the spec.version value (from 1.6.9) to spec.version: 1.6.10.
    Important: If you are upgrading a trial deployment, use the YAML view and add the following code snippet.
    spec:
      postgresql:
        resources:
          limitsCPU: '1'
          limitsMemory: 1Gi
          requestsCPU: 500m
          requestsMemory: 1Gi
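    In both upgrade paths, the version line in the instance YAML has the following shape after the edit.
    spec:
      version: 1.6.10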
  4. Select Save and wait until all pods are restarted. You can monitor progress from the Red Hat OpenShift Container Platform UI.
  5. For high-availability disaster recovery hybrid deployments, ensure that the following setting is added under the metadata section on both the primary and backup deployments:
    metadata.labels.managedByUser: "true"
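    In the YAML view, this setting has the following shape.
    metadata:
      labels:
        managedByUser: "true"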
  6. For high-availability disaster recovery hybrid deployments, console integrations are installed on only one of the common UIs, either the primary or the backup. Run the appropriate one of the following commands on the primary or backup cluster.
    Primary cluster
    oc set env deploy/<ReleaseName>-ibm-hdm-common-ui-uiserver DR__DEPLOYMENT__TYPE=primary
    Backup cluster
    oc set env deploy/<ReleaseName>-ibm-hdm-common-ui-uiserver DR__DEPLOYMENT__TYPE=backup

Upgrade the Netcool Hybrid Deployment Option Integration Kit

  1. Use Installation Manager to upgrade the Netcool Hybrid Deployment Option Integration Kit.
    1. Start Installation Manager in GUI mode with the following commands.
      cd IM_dir/eclipse
      ./IBMIM
      Where IM_dir is the Installation Manager Group installation directory, for example /home/netcool/IBM/InstallationManager/eclipse.
    2. From the main Installation Manager screen, select Update, and from the Update Packages window select Netcool Hybrid Deployment Option Integration Kit.
    3. Proceed through the windows, accept the license and the defaults, and enter the on-premises WebSphere® Application Server password.
      Note: When you upgrade the Hybrid Integration Kit, Installation Manager displays that the OAuth configuration is complete. However, you must reenter the Client Secret.
    4. On the window OAuth 2.0 Configuration, set Redirect URL to the URL of your cloud native components deployment. This URL is https://netcool-release_name.apps.fqdn/users/api/authprovider/v1/was/return
      Where
      • release_name is the name of your deployment, as specified by the value used for name (OLM UI Form view), or name in the metadata section of the noi.ibm.com_noihybrids_cr.yaml or noi.ibm.com_nois_cr.yaml files (YAML view).
      • fqdn is the cluster FQDN.
    5. On the OAuth 2.0 Configuration window, set Client ID and Client Secret to the values that were set for them in the secret release_name-was-oauth-cnea-secrets when you installed the cloud native components. Retrieve these values by running the following commands on your cloud native Netcool Operations Insight components deployment. (An alternative single-command sketch follows this step.)
      oc get secret release_name-was-oauth-cnea-secrets -o json -n namespace | grep client-secret | cut -d : -f2 | cut -d '"' -f2 | base64 -d;echo
      oc get secret release_name-was-oauth-cnea-secrets -o json -n namespace | grep client-id | cut -d : -f2 | cut -d '"' -f2 | base64 -d;echo
      Where
      • release_name is the name of your deployment, as specified by the value used for name (OLM UI Form view), or name in the metadata section of the noi.ibm.com_noihybrids_cr.yaml or noi.ibm.com_nois_cr.yaml files (YAML view).
      • namespace is the name of the namespace in which the cloud native components are installed.
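      As an alternative to the two grep pipelines, you can decode both values in one call with oc extract. A sketch, with the same substitutions for release_name and namespace:
      oc extract secret/release_name-was-oauth-cnea-secrets -n namespace --keys=client-id,client-secret --to=-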
    6. Select Next and Update.
    7. Create or replace the JAZZSM_HOME/profile/config/cells/JazzSMNode01Cell/oauth20/NetcoolOAuthProvider.xml file with the following content:
      <?xml version="1.0" encoding="UTF-8"?>
      <OAuthServiceConfiguration>

        <!-- Example parameters for JDBC database stores -->
        <parameter name="oauth20.client.provider.classname" type="cc" customizable="false">
          <value>com.ibm.ws.security.oauth20.plugins.db.CachedDBClientProvider</value>
        </parameter>
        <parameter name="oauth20.token.cache.classname" type="cc" customizable="false">
          <value>com.ibm.ws.security.oauth20.plugins.db.CachedDBTokenStore</value>
        </parameter>
        <parameter name="oauth20.client.cache.seconds" type="cc" customizable="true">
          <value>600</value>
        </parameter>
        <parameter name="oauthjdbc.JDBCProvider" type="ws" customizable="false">
          <value>jdbc/oauthProvider</value>
        </parameter>
        <parameter name="oauthjdbc.client.table" type="ws" customizable="false">
          <value>OAuthDBSchema.OAUTH20CLIENTCONFIG</value>
        </parameter>
        <parameter name="oauthjdbc.token.table" type="ws" customizable="false">
          <value>OAuthDBSchema.OAUTH20CACHE</value>
        </parameter>
        <parameter name="oauthjdbc.CleanupInterval" type="ws" customizable="true">
          <value>3600</value>
        </parameter>
        <parameter name="oauthjdbc.CleanupBatchSize" type="ws" customizable="true">
          <value>250</value>
        </parameter>
        <parameter name="oauthjdbc.AlternateSelectCountQuery" type="ws" customizable="false">
          <value>false</value>
        </parameter>
        <parameter name="oauth20.db.token.cache.jndi.tokens" type="ws" customizable="false">
          <value>services/cache/OAuth20DBTokenCache</value>
        </parameter>
        <parameter name="oauth20.db.token.cache.jndi.clients" type="ws" customizable="false">
          <value>services/cache/OAuth20DBClientCache</value>
        </parameter>

        <parameter name="oauth20.max.authorization.grant.lifetime.seconds" type="cc" customizable="true">
          <value>604800</value>
        </parameter>
        <parameter name="oauth20.code.lifetime.seconds" type="cc" customizable="true">
          <value>60</value>
        </parameter>
        <parameter name="oauth20.code.length" type="cc" customizable="true">
          <value>30</value>
        </parameter>
        <parameter name="oauth20.token.lifetime.seconds" type="cc" customizable="true">
          <value>3600</value>
        </parameter>
        <parameter name="oauth20.access.token.length" type="cc" customizable="true">
          <value>40</value>
        </parameter>
        <parameter name="oauth20.issue.refresh.token" type="cc" customizable="true">
          <value>true</value>
        </parameter>
        <parameter name="oauth20.refresh.token.length" type="cc" customizable="true">
          <value>50</value>
        </parameter>
        <parameter name="oauth20.access.tokentypehandler.classname" type="cc" customizable="false">
          <value>com.ibm.ws.security.oauth20.plugins.BaseTokenHandler</value>
        </parameter>
        <parameter name="oauth20.mediator.classnames" type="cc" customizable="false">
        </parameter>
        <parameter name="oauth20.allow.public.clients" type="cc" customizable="true">
          <value>true</value>
        </parameter>
        <parameter name="oauth20.grant.types.allowed" type="cc" customizable="false">
          <value>authorization_code</value>
          <value>refresh_token</value>
          <value>password</value>
        </parameter>
        <parameter name="oauth20.authorization.form.template" type="cc" customizable="true">
          <value>template.html</value>
        </parameter>
        <parameter name="oauth20.authorization.error.template" type="cc" customizable="true">
          <value></value>
        </parameter>
        <parameter name="oauth20.authorization.loginURL" type="cc" customizable="true">
          <value>login.jsp</value>
        </parameter>

        <!-- Optional audit handler, uncomment or add a plugin to enable
        <parameter name="oauth20.audithandler.classname" type="cc" customizable="true">
          <value>com.ibm.oauth.core.api.audit.XMLFileOAuthAuditHandler</value>
        </parameter>
        <parameter name="xmlFileAuditHandler.filename" type="cc" customizable="true">
          <value>D:\oauth20Audit.xml</value>
        </parameter>
        -->

        <!-- Parameters for TAI configuration. These can optionally be added as TAI custom properties instead, which gives more flexibility.
             Additional custom TAI properties can be added as parameters by specifying type="tai" -->
        <parameter name="filter" type="tai" customizable="true">
          <value>request-url%=ibm/console</value>
        </parameter>

        <parameter name="oauthOnly" type="tai" customizable="true">
          <value>false</value>
        </parameter>

        <parameter name="oauth20.autoauthorize.param" type="ws" customizable="false">
          <value>autoauthz</value>
        </parameter>
        <parameter name="oauth20.autoauthorize.clients" type="ws" customizable="true">
          <value>hdm-client-id</value>
          <value>debug-client-id</value>
        </parameter>

        <!-- Mediator for resource owner credential: optional mediator to validate resource owner credential against current active user registry -->
        <parameter name="oauth20.mediator.classnames" type="cc" customizable="true">
          <value>com.ibm.ws.security.oauth20.mediator.ResourceOwnerValidationMedidator</value>
        </parameter>

        <!-- Optional limit for the number of tokens a user/client/provider combination can be issued
        <parameter name="oauth20.token.userClientTokenLimit" type="ws" customizable="true">
          <value>100</value>
        </parameter>
        -->
      </OAuthServiceConfiguration>
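      Optionally, before you restart the server, confirm that the file is well-formed XML. A sketch, assuming that the xmllint utility is available on the host:
      xmllint --noout JAZZSM_HOME/profile/config/cells/JazzSMNode01Cell/oauth20/NetcoolOAuthProvider.xml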
  2. Restart Dashboard Application Services Hub on your on-premises installation by running the following commands.
    cd <JazzSM_WAS_Profile>/bin
    ./stopServer.sh server1 -username smadmin -password <smadmin password>
    ./startServer.sh server1
    Where:
    • <JazzSM_WAS_Profile> is the location of the application server profile that is used for Jazz® for Service Management. This location is usually /opt/IBM/JazzSM/profile.
    • <smadmin password> is the smadmin password.
    To obtain the smadmin password, run the following command.
    oc get secret <release_name>-was-secret -o json -n namespace | grep WAS_PASSWORD | cut -d : -f2 | cut -d '"' -f2 | base64 -d;echo
    Where:
    • <release_name> is the release name for the current cluster.
    • namespace is the namespace that Netcool Operations Insight is deployed into, which can be retrieved by using the oc project command.
    For more information, see Retrieving passwords from secrets.

What to do next

After you upgrade your hybrid deployment, update the cloud native analytics gateway configmap. For more information, see Preserving cloud native analytics gateway configmap customizations on upgrade.

To enable or disable an observer, use the oc patch command with the value set to true or false, as in the following examples, which enable each observer. A disable sketch follows the list.
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/netDisco", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/aaionap", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/alm", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/ansibleawx", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/appdynamics", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/aws", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/azure", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/bigcloudfabric", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/bigfixinventory", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/cienablueplanet", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/ciscoaci", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/contrail", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/dns", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/docker", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/dynatrace", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/file", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/gitlab", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/googlecloud", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/hpnfvd", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/ibmcloud", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/itnm", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/jenkins", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/junipercso", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/kubernetes", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/newrelic", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/openstack", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/rancher", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/rest", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/sdconap", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/servicenow", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/sevone", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/taddm", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/viptela", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/vmvcenter", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/vmwarensx", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/zabbix", "value": 'true' }]'
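To disable an observer, set the value to false instead. For example, the following sketch disables the Zabbix observer.
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/zabbix", "value": 'false' }]'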