Upgrading hybrid IBM Netcool Operations Insight on Red Hat OpenShift with the oc ibm-pak plug-in and Container Application Software for Enterprises (CASE)

Use these instructions to upgrade an existing Netcool® Operations Insight® deployment from version 1.6.11 or version 1.6.10 to 1.6.12, by using the Red Hat® OpenShift® Container Platform Operator Lifecycle Manager (OLM) user interface (UI) and CASE (Container Application Software for Enterprises). Before you upgrade, you must back up your deployment.

Before you begin

  • Note: Upgrading from OwnNamespace mode to SingleNamespace mode is not supported. SingleNamespace mode is supported only for cloud and hybrid deployments that are installed with the oc-ibm_pak plug-in and portable compute or portable storage devices.
  • Ensure that you complete all the steps in Preparing your cluster. Most of these steps were completed as part of your previous Netcool Operations Insight deployment.
  • Ensure that you have an adequately sized cluster. For more information, see Sizing for a hybrid deployment.
  • Configure persistent storage for your deployment. Only version 1.6.11 or version 1.6.10 deployments with persistence enabled are supported for upgrade to version 1.6.12.
  • Before you upgrade to version 1.6.12, if present, remove the noi-root-ca secret by running the following command.
    oc delete secret noi-root-ca
  • Before you upgrade, save a backup copy of the cloud native analytics gateway configmap: ea-noi-layer-eanoigateway. For more information, see Preserving cloud native analytics gateway configmap customizations on upgrade.
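    For example, the configmap can be saved to a file before you upgrade (a sketch; the namespace placeholder and backup file name are illustrative):

```shell
# Back up the cloud native analytics gateway configmap before upgrading.
# Replace <namespace> with the namespace of your cloud native components.
oc get configmap ea-noi-layer-eanoigateway -n <namespace> -o yaml > ea-noi-layer-eanoigateway-backup.yaml
```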

All the necessary images for version 1.6.12 are either in the freely accessible operator repository (icr.io/cpopen), or in the IBM® Entitled Registry (cp.icr.io). You need an entitlement key for the IBM Entitled Registry.

To upgrade from version 1.6.10 or version 1.6.11 to version 1.6.12, complete the following steps:

Procedure

Upgrade on-premises Operations Management

  1. Use IBM Installation Manager to upgrade on-premises Operations Management to version 1.6.12. For more information, see Upgrading and rolling back on premises.

Set environment variables

  1. Run the following commands to set environment variables.
    export CASE_NAME=<case-name>
    export CASE_VERSION=<case-version>
    export TARGET_NAMESPACE=<target-namespace>
    Example:
    export CASE_NAME=ibm-netcool-prod
    export CASE_VERSION=1.12.0
    export TARGET_NAMESPACE=noihybrid

Get the Netcool Operations Insight CASE

  1. Download and install version 1.11.2 or later of the IBM Catalog Management Plug-in for IBM Cloud® Paks from the IBM/ibm-pak-plugin repository. Version 1.10.0 or earlier is also supported; versions 1.11.0 and 1.11.1 are not supported. Extract the binary file by entering the following command:
    tar -xf oc-ibm_pak-linux-amd64.tar.gz
  2. Run the following command to move the file to the /usr/local/bin directory.
    mv oc-ibm_pak-linux-amd64 /usr/local/bin/oc-ibm_pak
    Note: If you are installing as a nonroot user, you must use sudo.
  3. Confirm that the oc ibm-pak plug-in is installed by running the following command:
    oc ibm-pak --help

    Expected result: The plug-in usage is displayed.

  4. Download the Netcool Operations Insight CASE bundle (ibm-netcool-prod) to your Red Hat OpenShift Container Platform cluster. Run the following command.
    oc ibm-pak get $CASE_NAME --version $CASE_VERSION
    Note: If you want to install the previous 1.6.11 version, specify --version 1.11.0 in the oc ibm-pak get command.
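    To confirm that the CASE downloaded, you can list the extracted files on disk (a sketch; the path matches the --inputDir value that is used later in this procedure):

```shell
# The CASE bundle is saved under ~/.ibm-pak by default.
ls $HOME/.ibm-pak/data/cases/$CASE_NAME/$CASE_VERSION
```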
  5. Check that the CASE repository URL is pointing to the default https://github.com/IBM/cloud-pak/raw/master/repo/case/ location by running the oc ibm-pak config command.

    Example output:

    Repository Config
    
    Name                        CASE Repo URL
    ----                        -------------
    IBM Cloud-Pak Github Repo * https://github.com/IBM/cloud-pak/raw/master/repo/case/
    
    If the repository is not pointing to the default location (asterisk indicates default URL), then run the following command.
    oc ibm-pak config repo 'IBM Cloud-Pak Github Repo' --enable
    If the URL is not displayed, then add the repository by running the following command.
    oc ibm-pak config repo 'IBM Cloud-Pak Github Repo' --url https://github.com/IBM/cloud-pak/raw/master/repo/case/

Install the Netcool Operations Insight catalog and upgrade the operator

  1. Install the catalog source and set the recursive flag and input directory.
    oc ibm-pak launch \
    $CASE_NAME \
    --version $CASE_VERSION \
    --namespace openshift-marketplace \
    --inventory noiOperatorSetup \
    --action install-catalog \
    --args "--recursive --inputDir $HOME/.ibm-pak/data/cases/$CASE_NAME/$CASE_VERSION"
    
  2. Upgrade the IBM Netcool Operations Insight operator by using CASE. Run the following command.
    export CASE_INVENTORY_SETUP=noiOperatorSetup
    oc ibm-pak launch \
    $CASE_NAME \
    --version $CASE_VERSION \
    --namespace $TARGET_NAMESPACE \
    --inventory $CASE_INVENTORY_SETUP \
    --action install-operator --args "--recursive --inputDir $HOME/.ibm-pak/data/cases/$CASE_NAME/$CASE_VERSION"
    

Upgrade the Netcool Operations Insight instance

  1. To avoid issues with CouchDB or Redis pods after upgrade, complete the following steps.
    If your deployment has more than one CouchDB replica, for example a production size deployment, scale the CouchDB statefulset to zero.
    oc scale sts <release-name>-couchdb --replicas=0
    Scale the Redis statefulset to zero.
    oc scale sts <release-name>-ibm-redis-server --replicas=0
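    Before you continue, you can confirm that both statefulsets scaled down to zero (a hedged check, assuming the default statefulset names shown above):

```shell
# Both statefulsets should report 0/0 ready replicas before you proceed.
oc get sts <release-name>-couchdb <release-name>-ibm-redis-server
```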
  2. Upgrade the Netcool Operations Insight instance by going to the OLM UI. Go to Operators > Installed Operators and select your project. Then select IBM Cloud Pak for AIOps Event Manager.
  3. Note: Complete this step if you are upgrading from version 1.6.10.
    Go to the All instances tab and select your instance. Edit the YAML. Update the spec.version value (from 1.6.10) to spec.version: 1.6.12.
  4. Note: Complete this step if you are upgrading from version 1.6.11.
    Go to the All instances tab and select your instance. Edit the YAML. Update the spec.version value (from 1.6.11) to spec.version: 1.6.12.
  5. Select Save.
  6. For high-availability disaster recovery hybrid deployments, ensure that the following setting is added under the metadata section on both the primary and backup deployments:
    metadata.labels.managedByUser: "true"
  7. For high-availability disaster recovery hybrid deployments, console integrations are only installed on one of the common UIs, either the primary or the backup. Run one of the following commands on the primary or backup cluster.
    Primary cluster
    oc set env deploy/<ReleaseName>-ibm-hdm-common-ui-uiserver DR__DEPLOYMENT__TYPE=primary
    Backup cluster
    oc set env deploy/<ReleaseName>-ibm-hdm-common-ui-uiserver DR__DEPLOYMENT__TYPE=backup
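    You can verify that the variable was applied by listing the deployment's environment (a sketch using the oc set env --list option):

```shell
# Confirm that DR__DEPLOYMENT__TYPE is set on the UI server deployment.
oc set env deploy/<ReleaseName>-ibm-hdm-common-ui-uiserver --list | grep DR__DEPLOYMENT__TYPE
```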

Upgrade the Netcool Hybrid Deployment Option Integration Kit

  1. Use Installation Manager to upgrade the Netcool Hybrid Deployment Option Integration Kit.
    1. Start Installation Manager in GUI mode with the following commands.
      cd IM_dir/eclipse
      ./IBMIM
      Where IM_dir is the Installation Manager Group installation directory, for example /home/netcool/IBM/InstallationManager/eclipse.
    2. From the main Installation Manager screen, select Update, and from the Update Packages window select Netcool Hybrid Deployment Option Integration Kit.
    3. Proceed through the windows, accept the license and the defaults, and enter the on-premises WebSphere® Application Server password.
      Note: When you upgrade the Hybrid Integration Kit, Installation Manager indicates that the OAuth configuration is complete. However, you must re-enter the Client Secret.
    4. On the window OAuth 2.0 Configuration, set Redirect URL to the URL of your cloud native components deployment. This URL is https://netcool-release_name.apps.fqdn/users/api/authprovider/v1/was/return
      Where
      • release_name is the name of your deployment, as specified by the value used for name (OLM UI Form view), or name in the metadata section of the noi.ibm.com_noihybrids_cr.yaml or noi.ibm.com_nois_cr.yaml files (YAML view).
      • fqdn is the cluster FQDN.
    5. On the window OAuth 2.0 Configuration, set Client ID and Client Secret to the values that were set for them in secret release_name-was-oauth-cnea-secrets when you installed the cloud native components. Retrieve these values by running the following commands on your cloud native Netcool Operations Insight components deployment.
      oc get secret release_name-was-oauth-cnea-secrets -o json -n namespace | grep client-secret | cut -d : -f2 | cut -d '"' -f2 | base64 -d; echo
      oc get secret release_name-was-oauth-cnea-secrets -o json -n namespace | grep client-id | cut -d : -f2 | cut -d '"' -f2 | base64 -d; echo
      Where
      • release_name is the name of your deployment, as specified by the value used for name (OLM UI Form view), or name in the metadata section of the noi.ibm.com_noihybrids_cr.yaml or noi.ibm.com_nois_cr.yaml files (YAML view).
      • namespace is the name of the namespace in which the cloud native components are installed.
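      As an alternative to the grep pipeline, the same values can be read with a jsonpath query (a sketch; the client-id and client-secret key names are taken from the commands above):

```shell
oc get secret release_name-was-oauth-cnea-secrets -n namespace -o jsonpath='{.data.client-id}' | base64 -d; echo
oc get secret release_name-was-oauth-cnea-secrets -n namespace -o jsonpath='{.data.client-secret}' | base64 -d; echo
```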
    6. Select Next and Update.
    7. Create or replace the JAZZSM_HOME/profile/config/cells/JazzSMNode01Cell/oauth20/NetcoolOAuthProvider.xml file with the following content:
      <?xml version="1.0" encoding="UTF-8"?>
      <OAuthServiceConfiguration>
      
        <!-- Example parameters for JDBC database stores -->
                <parameter name="oauth20.client.provider.classname" type="cc" customizable="false">
                  <value>com.ibm.ws.security.oauth20.plugins.db.CachedDBClientProvider</value>
                </parameter>
                <parameter name="oauth20.token.cache.classname" type="cc" customizable="false">
                  <value>com.ibm.ws.security.oauth20.plugins.db.CachedDBTokenStore</value>
                </parameter>
            <parameter name="oauth20.client.cache.seconds" type="cc" customizable="true">
                  <value>600</value>
                </parameter>
                <parameter name="oauthjdbc.JDBCProvider" type="ws" customizable="false">
                  <value>jdbc/oauthProvider</value>
                </parameter>
                <parameter name="oauthjdbc.client.table" type="ws" customizable="false">
                  <value>OAuthDBSchema.OAUTH20CLIENTCONFIG</value>
                </parameter>
                <parameter name="oauthjdbc.token.table" type="ws" customizable="false">
                  <value>OAuthDBSchema.OAUTH20CACHE</value>
                </parameter>
                <parameter name="oauthjdbc.CleanupInterval" type="ws" customizable="true">
                  <value>3600</value>
                </parameter>
                <parameter name="oauthjdbc.CleanupBatchSize" type="ws" customizable="true">
                  <value>250</value>
                </parameter>
            <parameter name="oauthjdbc.AlternateSelectCountQuery" type="ws" customizable="false">
              <value>false</value>
            </parameter>
                <parameter name="oauth20.db.token.cache.jndi.tokens" type="ws" customizable="false">
                  <value>services/cache/OAuth20DBTokenCache</value>
                </parameter>
                <parameter name="oauth20.db.token.cache.jndi.clients" type="ws" customizable="false">
                  <value>services/cache/OAuth20DBClientCache</value>
                </parameter>
        
        <parameter name="oauth20.max.authorization.grant.lifetime.seconds" type="cc" customizable="true">
          <value>604800</value>
        </parameter>
        <parameter name="oauth20.code.lifetime.seconds" type="cc" customizable="true">
          <value>60</value>
        </parameter>
        <parameter name="oauth20.code.length" type="cc" customizable="true">
          <value>30</value>
        </parameter>
        <parameter name="oauth20.token.lifetime.seconds" type="cc" customizable="true">
          <value>3600</value>
        </parameter>
        <parameter name="oauth20.access.token.length" type="cc" customizable="true">
          <value>40</value>
        </parameter>
        <parameter name="oauth20.issue.refresh.token" type="cc" customizable="true">
          <value>true</value>
        </parameter>
        <parameter name="oauth20.refresh.token.length" type="cc" customizable="true">
          <value>50</value>
        </parameter>
        <parameter name="oauth20.access.tokentypehandler.classname" type="cc" customizable="false">
          <value>com.ibm.ws.security.oauth20.plugins.BaseTokenHandler</value>
        </parameter>
        <parameter name="oauth20.mediator.classnames" type="cc" customizable="false">
        </parameter>
        <parameter name="oauth20.allow.public.clients" type="cc" customizable="true">
          <value>true</value>
        </parameter>
        <parameter name="oauth20.grant.types.allowed" type="cc" customizable="false">
          <value>authorization_code</value>
          <value>refresh_token</value>
          <value>password</value>
        </parameter>
        <parameter name="oauth20.authorization.form.template" type="cc" customizable="true">
          <value>template.html</value>
        </parameter>
        <parameter name="oauth20.authorization.error.template" type="cc" customizable="true">
          <value></value>
        </parameter>
        <parameter name="oauth20.authorization.loginURL" type="cc" customizable="true">
          <value>login.jsp</value>
        </parameter>
        <!-- Optional audit handler, uncomment or add a plugin to enable
                <parameter name="oauth20.audithandler.classname" type="cc" customizable="true">
                      <value>com.ibm.oauth.core.api.audit.XMLFileOAuthAuditHandler</value>
                </parameter>
                <parameter name="xmlFileAuditHandler.filename" type="cc" customizable="true">
                      <value>D:\oauth20Audit.xml</value>
                </parameter>
        -->
      
        <!-- Parameters for TAI configuration. These can optionally be added as TAI Custom properties instead, which gives more flexibility.
                        Additional custom TAI properties can be added as parameters by specifying type="tai"
        -->
                <parameter name="filter" type="tai" customizable="true">
            <value>request-url%=ibm/console</value>
                </parameter>
      
                <parameter name="oauthOnly" type="tai" customizable="true">
                  <value>false</value>
                </parameter>
      
              <parameter name="oauth20.autoauthorize.param" type="ws" customizable="false">
                  <value>autoauthz</value>
                </parameter>
              <parameter name="oauth20.autoauthorize.clients" type="ws" customizable="true">
                  <value>hdm-client-id</value>
            <value>debug-client-id</value>
                </parameter>
      
        <!-- mediator for resource owner credential: optional mediator to validate resource owner credential against current active user registry>
        -->
          <parameter name="oauth20.mediator.classnames" type="cc" customizable="true">
                  <value>com.ibm.ws.security.oauth20.mediator.ResourceOwnerValidationMedidator</value>
          </parameter>
      
         <!-- optional limit for the number of tokens a user/client/provider combination can be issued
             <parameter name="oauth20.token.userClientTokenLimit" type="ws" customizable="true">
                  <value>100</value>
            </parameter>
         -->
      </OAuthServiceConfiguration>
  2. Restart Dashboard Application Services Hub on your on-premises installation by using the following commands.
    cd <JazzSM_WAS_Profile>/bin
    ./stopServer.sh server1 -username smadmin -password <smadmin password>
    ./startServer.sh server1
    Where:
    • <JazzSM_WAS_Profile> is the location of the application server profile that is used for Jazz® for Service Management. This location is usually /opt/IBM/JazzSM/profile.
    • <smadmin password> is the smadmin password.
    To obtain the smadmin password, run the following command.
    oc get secret <release_name>-was-secret -o json -n namespace | grep WAS_PASSWORD | cut -d : -f2 | cut -d '"' -f2 | base64 -d;echo
    Where:
    • <release_name> is the release name for the current cluster.
    • namespace is the namespace that Netcool Operations Insight is deployed into, which can be retrieved by using the oc project command.
    For more information, see Retrieving passwords from secrets.

What to do next

After you upgrade, you can delete the ibm-hdm-analytics-dev-aidl-ca configmap. This configmap is not used in Netcool Operations Insight 1.6.12 and later.
  1. Remove the ibm-hdm-analytics-dev-aidl-ca configmap.
    oc delete configmap ibm-hdm-analytics-dev-aidl-ca
If the {{ .Release.Name }}-spark-shared-state PVC is still present after the upgrade, you can delete it. This PVC is not used in Netcool Operations Insight 1.6.12 and later.
  1. Check whether a shared spark PVC exists:
    oc get pvc | grep 'spark'
  2. Delete the {{ .Release.Name }}-spark-shared-state PVC resource if it is included:
    oc delete pvc {{ .Release.Name }}-spark-shared-state
    Where {{ .Release.Name }} is the release name of the PVC resource.

After you upgrade your hybrid deployment, update the cloud native analytics gateway configmap. For more information, see Preserving cloud native analytics gateway configmap customizations on upgrade.

To enable or disable an observer, use the oc patch command, as in the following examples. Set the value to true to enable an observer, or to false to disable it:
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/netDisco", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/aaionap", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/alm", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/ansibleawx", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/appdynamics", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/aws", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/azure", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/bigcloudfabric", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/bigfixinventory", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/cienablueplanet", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/ciscoaci", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/contrail", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/dns", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/docker", "value": 'true' }]'	
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/dynatrace", "value": 'true' }]'		
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/file", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/gitlab", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/googlecloud", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/hpnfvd", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/ibmcloud", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/itnm", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/jenkins", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/junipercso", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/kubernetes", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/newrelic", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/openstack", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/rancher", "value": 'true' }]'	
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/rest", "value": 'true' }]'						
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/sdconap", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/servicenow", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/sevone", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/taddm", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/viptela", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/vmvcenter", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/vmwarensx", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/zabbix", "value": 'true' }]'
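After patching, you can review the resulting observer settings in the custom resource (a sketch; the field path matches the patches above):

```shell
# Print the observer flags from the noihybrid custom resource.
oc get noihybrid $noi_instance_name -n $NAMESPACE -o jsonpath='{.spec.topology.observers}'; echo
```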