Upgrading a hybrid deployment offline with a bastion host

Use these instructions to upgrade an existing hybrid deployment from version 1.6.10 or version 1.6.9 to 1.6.11, on an offline Red Hat® OpenShift® Container Platform cluster, by using a bastion host. Before you upgrade, you must back up your deployment.

Before you begin

  • Note: Upgrading from OwnNamespace mode to SingleNamespace mode is not supported. SingleNamespace mode is supported only for cloud and hybrid deployments that are installed with the oc-ibm_pak plug-in and portable compute or portable storage devices.
  • Ensure that you complete all the steps in Preparing your cluster. Most of these steps were completed as part of your previous Netcool® Operations Insight® deployment.
  • Ensure that you have an adequately sized cluster. For more information, see Sizing for a hybrid deployment.
  • Configure persistent storage for your deployment. Only version 1.6.10 or version 1.6.9 deployments with persistence enabled are supported for upgrade to version 1.6.11.
  • Before you upgrade to version 1.6.11, if present, remove the noi-root-ca secret by running the following command.
    oc delete secret noi-root-ca
  • Before you upgrade to version 1.6.11, if present, reverse any image overrides from the test fix of the previous release.
    1. Edit the custom resource (CR).
      oc edit noihybrid <release-name>
      Where <release-name> is the release name, for example, evtmanager.
    2. Manually remove the tag, name, and digest entries of image overrides from the helmValuesNOI section of the YAML file.
  • Before you upgrade, save a backup copy of the cloud native analytics gateway configmap: ea-noi-layer-eanoigateway. For more information, see Preserving cloud native analytics gateway configmap customizations on upgrade.
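The pre-upgrade housekeeping in this list can be scripted. The following is a minimal sketch, not from the product documentation; it assumes that you are logged in to the cluster with the deployment's namespace selected, and the backup file name is an example.

```shell
#!/bin/sh
# Sketch of the pre-upgrade housekeeping steps above. Assumes an active
# oc login with the deployment's namespace selected.
BACKUP_FILE="ea-noi-layer-eanoigateway.backup.yaml"   # example local file name

if command -v oc >/dev/null 2>&1; then
  # Remove the noi-root-ca secret only if it is present.
  oc get secret noi-root-ca >/dev/null 2>&1 && oc delete secret noi-root-ca
  # Save a backup copy of the cloud native analytics gateway configmap.
  oc get configmap ea-noi-layer-eanoigateway -o yaml > "$BACKUP_FILE"
fi
```

Any image overrides from a previous test fix still need to be reversed by hand in the CR, as described above.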

All the necessary images for version 1.6.11 are either in the freely accessible operator repository (icr.io/cpopen), or in the IBM® Entitled Registry (cp.icr.io). You need an entitlement key for the IBM Entitled Registry.

To upgrade from version 1.6.10 or 1.6.9 to version 1.6.11, complete the following steps.

Procedure

Upgrade on-premises Operations Management

  1. Use IBM Installation Manager to upgrade on-premises Operations Management to version 1.6.11. For more information, see Upgrading and rolling back on premises.

Upgrade Netcool Operations Insight on OpenShift

  1. Complete the steps in 1. Set up your mirroring environment.
  2. Complete the steps in 2. Set environment variables and download CASE files.
  3. Complete the steps in 3. Mirror images.

Upgrade the Netcool Operations Insight catalog and operator

  1. Complete the steps in 4. Install Netcool Operations Insight on Red Hat OpenShift Container Platform.

Upgrade the Netcool Operations Insight instance

  1. Upgrade the Netcool Operations Insight instance from the OLM UI. Go to Operators > Installed Operators and select your Project. Then select IBM Cloud Pak for AIOps Event Manager.
  2. Note: Complete this step only if you are upgrading from version 1.6.9.
    Go to the All instances tab and select your instance. Edit the YAML and update the spec.version value from 1.6.9 to 1.6.11.
  3. Note: Complete this step only if you are upgrading from version 1.6.10.
    Go to the All instances tab and select your instance. Edit the YAML and update the spec.version value from 1.6.10 to 1.6.11.
  4. Select Save.
  5. For high-availability disaster recovery hybrid deployments, ensure that the following setting is added under the metadata section on both the primary and backup deployments:
    metadata.labels.managedByUser: "true"
  6. For high-availability disaster recovery hybrid deployments, console integrations are only installed on one of the common UIs, either the primary or the backup. Run one of the following commands on the primary or backup cluster.
    Primary cluster
    oc set env deploy/<ReleaseName>-ibm-hdm-common-ui-uiserver DR__DEPLOYMENT__TYPE=primary
    Backup cluster
    oc set env deploy/<ReleaseName>-ibm-hdm-common-ui-uiserver DR__DEPLOYMENT__TYPE=backup
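The spec.version update in the steps above can also be applied from the command line with oc patch instead of the OLM UI YAML editor. This is a sketch under the assumption that your instance uses the example release name evtmanager from this topic; substitute your own release name and namespace.

```shell
#!/bin/sh
# Sketch: bump spec.version on the noihybrid instance with a merge patch,
# as an alternative to editing the YAML in the OLM UI.
RELEASE_NAME="evtmanager"     # example release name from this topic
TARGET_VERSION="1.6.11"
PATCH="{\"spec\":{\"version\":\"$TARGET_VERSION\"}}"

if command -v oc >/dev/null 2>&1; then
  oc patch noihybrid "$RELEASE_NAME" --type='merge' -p "$PATCH"
fi
```

After the patch is applied, the operator reconciles the instance to the new version, just as it does after saving the YAML in the console.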

Upgrade the Netcool Hybrid Deployment Option Integration Kit

  1. Use Installation Manager to upgrade the Netcool Hybrid Deployment Option Integration Kit.
    1. Start Installation Manager in GUI mode with the following commands.
      cd IM_dir/eclipse
      ./IBMIM
      Where IM_dir is the Installation Manager Group installation directory, for example /home/netcool/IBM/InstallationManager/eclipse.
    2. From the main Installation Manager screen, select Update, and from the Update Packages window select Netcool Hybrid Deployment Option Integration Kit.
    3. Proceed through the windows, accept the license and the defaults, and enter the on-premises WebSphere® Application Server password.
      Note: When you upgrade the Hybrid Integration Kit, Installation Manager indicates that the OAuth configuration is complete. However, you must still reenter the Client Secret.
    4. On the OAuth 2.0 Configuration window, set Redirect URL to the URL of your cloud native components deployment. This URL has the form https://netcool-release_name.apps.fqdn/users/api/authprovider/v1/was/return
      Where
      • release_name is the name of your deployment, as specified by the value used for name (OLM UI Form view), or name in the metadata section of the noi.ibm.com_noihybrids_cr.yaml or noi.ibm.com_nois_cr.yaml files (YAML view).
      • fqdn is the cluster FQDN.
    5. On the OAuth 2.0 Configuration window, set Client ID and Client Secret to the values that were set in the release_name-was-oauth-cnea-secrets secret when you installed the cloud native components. Retrieve these values by running the following commands on your cloud native Netcool Operations Insight components deployment.
      oc get secret release_name-was-oauth-cnea-secrets -o json -n namespace| grep client-secret | cut -d : -f2 | cut -d '"' -f2 | base64 -d;echo
      oc get secret release_name-was-oauth-cnea-secrets -o json -n namespace | grep client-id | cut -d : -f2 | cut -d '"' -f2 | base64 -d;echo
      Where
      • release_name is the name of your deployment, as specified by the value used for name (OLM UI Form view), or name in the metadata section of the noi.ibm.com_noihybrids_cr.yaml or noi.ibm.com_nois_cr.yaml files (YAML view).
      • namespace is the name of the namespace in which the cloud native components are installed.
    6. Select Next and Update.
    7. Create or replace the JAZZSM_HOME/profile/config/cells/JazzSMNode01Cell/oauth20/NetcoolOAuthProvider.xml file with the following content:
      <?xml version="1.0" encoding="UTF-8"?>
      <OAuthServiceConfiguration>

        <!-- Example parameters for JDBC database stores -->
        <parameter name="oauth20.client.provider.classname" type="cc" customizable="false">
          <value>com.ibm.ws.security.oauth20.plugins.db.CachedDBClientProvider</value>
        </parameter>
        <parameter name="oauth20.token.cache.classname" type="cc" customizable="false">
          <value>com.ibm.ws.security.oauth20.plugins.db.CachedDBTokenStore</value>
        </parameter>
        <parameter name="oauth20.client.cache.seconds" type="cc" customizable="true">
          <value>600</value>
        </parameter>
        <parameter name="oauthjdbc.JDBCProvider" type="ws" customizable="false">
          <value>jdbc/oauthProvider</value>
        </parameter>
        <parameter name="oauthjdbc.client.table" type="ws" customizable="false">
          <value>OAuthDBSchema.OAUTH20CLIENTCONFIG</value>
        </parameter>
        <parameter name="oauthjdbc.token.table" type="ws" customizable="false">
          <value>OAuthDBSchema.OAUTH20CACHE</value>
        </parameter>
        <parameter name="oauthjdbc.CleanupInterval" type="ws" customizable="true">
          <value>3600</value>
        </parameter>
        <parameter name="oauthjdbc.CleanupBatchSize" type="ws" customizable="true">
          <value>250</value>
        </parameter>
        <parameter name="oauthjdbc.AlternateSelectCountQuery" type="ws" customizable="false">
          <value>false</value>
        </parameter>
        <parameter name="oauth20.db.token.cache.jndi.tokens" type="ws" customizable="false">
          <value>services/cache/OAuth20DBTokenCache</value>
        </parameter>
        <parameter name="oauth20.db.token.cache.jndi.clients" type="ws" customizable="false">
          <value>services/cache/OAuth20DBClientCache</value>
        </parameter>

        <parameter name="oauth20.max.authorization.grant.lifetime.seconds" type="cc" customizable="true">
          <value>604800</value>
        </parameter>
        <parameter name="oauth20.code.lifetime.seconds" type="cc" customizable="true">
          <value>60</value>
        </parameter>
        <parameter name="oauth20.code.length" type="cc" customizable="true">
          <value>30</value>
        </parameter>
        <parameter name="oauth20.token.lifetime.seconds" type="cc" customizable="true">
          <value>3600</value>
        </parameter>
        <parameter name="oauth20.access.token.length" type="cc" customizable="true">
          <value>40</value>
        </parameter>
        <parameter name="oauth20.issue.refresh.token" type="cc" customizable="true">
          <value>true</value>
        </parameter>
        <parameter name="oauth20.refresh.token.length" type="cc" customizable="true">
          <value>50</value>
        </parameter>
        <parameter name="oauth20.access.tokentypehandler.classname" type="cc" customizable="false">
          <value>com.ibm.ws.security.oauth20.plugins.BaseTokenHandler</value>
        </parameter>
        <parameter name="oauth20.mediator.classnames" type="cc" customizable="false">
        </parameter>
        <parameter name="oauth20.allow.public.clients" type="cc" customizable="true">
          <value>true</value>
        </parameter>
        <parameter name="oauth20.grant.types.allowed" type="cc" customizable="false">
          <value>authorization_code</value>
          <value>refresh_token</value>
          <value>password</value>
        </parameter>
        <parameter name="oauth20.authorization.form.template" type="cc" customizable="true">
          <value>template.html</value>
        </parameter>
        <parameter name="oauth20.authorization.error.template" type="cc" customizable="true">
          <value></value>
        </parameter>
        <parameter name="oauth20.authorization.loginURL" type="cc" customizable="true">
          <value>login.jsp</value>
        </parameter>

        <!-- Optional audit handler, uncomment or add a plugin to enable
        <parameter name="oauth20.audithandler.classname" type="cc" customizable="true">
          <value>com.ibm.oauth.core.api.audit.XMLFileOAuthAuditHandler</value>
        </parameter>
        <parameter name="xmlFileAuditHandler.filename" type="cc" customizable="true">
          <value>D:\oauth20Audit.xml</value>
        </parameter>
        -->

        <!-- Parameters for TAI configuration. These can optionally be added as TAI custom
             properties instead, which gives more flexibility. Additional custom TAI properties
             can be added as parameters by specifying type="tai". -->
        <parameter name="filter" type="tai" customizable="true">
          <value>request-url%=ibm/console</value>
        </parameter>
        <parameter name="oauthOnly" type="tai" customizable="true">
          <value>false</value>
        </parameter>
        <parameter name="oauth20.autoauthorize.param" type="ws" customizable="false">
          <value>autoauthz</value>
        </parameter>
        <parameter name="oauth20.autoauthorize.clients" type="ws" customizable="true">
          <value>hdm-client-id</value>
          <value>debug-client-id</value>
        </parameter>

        <!-- Mediator for resource owner credential: optional mediator to validate the resource
             owner credential against the current active user registry. -->
        <parameter name="oauth20.mediator.classnames" type="cc" customizable="true">
          <value>com.ibm.ws.security.oauth20.mediator.ResourceOwnerValidationMedidator</value>
        </parameter>

        <!-- Optional limit for the number of tokens a user/client/provider combination can be issued
        <parameter name="oauth20.token.userClientTokenLimit" type="ws" customizable="true">
          <value>100</value>
        </parameter>
        -->
      </OAuthServiceConfiguration>
  2. Restart Dashboard Application Services Hub on your on-premises installation by running the following commands.
    cd <JazzSM_WAS_Profile>/bin
    ./stopServer.sh server1 -username smadmin -password <smadmin password>
    ./startServer.sh server1
    Where:
    • <JazzSM_WAS_Profile> is the location of the application server profile that is used for Jazz® for Service Management. This location is usually /opt/IBM/JazzSM/profile.
    • <smadmin password> is the smadmin password.
    To obtain the smadmin password, run the following command.
    oc get secret <release_name>-was-secret -o json -n <namespace> | grep WAS_PASSWORD | cut -d : -f2 | cut -d '"' -f2 | base64 -d;echo
    Where:
    • <release_name> is the release name for the current cluster.
    • <namespace> is the namespace that Netcool Operations Insight is deployed into, which you can retrieve by running the oc project command.
    For more information, see Retrieving passwords from secrets.
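The grep and cut pipelines in the steps above can also be expressed with oc's jsonpath output, which reads a single key from a secret directly. This is a sketch, not from the product documentation; it reuses the release_name and namespace placeholders from the steps above, which you substitute with your own values.

```shell
#!/bin/sh
# Sketch: read one base64-encoded key from a secret with -o jsonpath.
# $1 = secret name, $2 = data key, $3 = namespace
get_secret_key() {
  oc get secret "$1" -n "$3" -o jsonpath="{.data.$2}" | base64 -d; echo
}

if command -v oc >/dev/null 2>&1; then
  # Same values as retrieved by the grep/cut pipelines above.
  get_secret_key release_name-was-oauth-cnea-secrets client-id namespace
  get_secret_key release_name-was-oauth-cnea-secrets client-secret namespace
  get_secret_key release_name-was-secret WAS_PASSWORD namespace
fi
```

Reading a single key with jsonpath avoids the fragile text matching of grep against JSON output.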

What to do next

If the {{ .Release.Name }}-spark-shared-state PVC is still present after the upgrade, you can delete it. This PVC is not used in Netcool Operations Insight 1.6.11 and later.
  1. Check whether a shared spark PVC exists:
    oc get pvc | grep 'spark'
  2. Delete the {{ .Release.Name }}-spark-shared-state PVC resource if it exists:
    oc delete pvc {{ .Release.Name }}-spark-shared-state
    Where {{ .Release.Name }} is the release name of the PVC resource.

After you upgrade your hybrid deployment, update the cloud native analytics gateway configmap. For more information, see Preserving cloud native analytics gateway configmap customizations on upgrade.

To enable or disable an observer, use the oc patch command, as in the following examples:
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/netDisco", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/aaionap", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/alm", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/ansibleawx", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/appdynamics", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/aws", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/azure", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/bigcloudfabric", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/bigfixinventory", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/cienablueplanet", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/ciscoaci", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/contrail", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/dns", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/docker", "value": 'true' }]'	
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/dynatrace", "value": 'true' }]'		
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/file", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/gitlab", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/googlecloud", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/hpnfvd", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/ibmcloud", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/itnm", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/jenkins", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/junipercso", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/kubernetes", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/newrelic", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/openstack", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/rancher", "value": 'true' }]'	
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/rest", "value": 'true' }]'						
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/sdconap", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/servicenow", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/sevone", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/taddm", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/viptela", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/vmvcenter", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/vmwarensx", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/zabbix", "value": 'true' }]'
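To toggle several observers in one pass, the individual oc patch commands above can be wrapped in a loop. This is a sketch, not from the product documentation; it assumes $noi_instance_name and $NAMESPACE are already set as in the examples above, and the observer list shown is illustrative.

```shell
#!/bin/sh
# Sketch: enable a set of observers with one loop instead of a separate
# oc patch invocation per observer. Assumes $noi_instance_name and
# $NAMESPACE are set as in the examples above.
build_patch() {
  # Build the JSON patch body for one observer path.
  printf '[{"op": "replace", "path": "/spec/topology/observers/%s", "value": true }]' "$1"
}

if command -v oc >/dev/null 2>&1; then
  for obs in aws azure kubernetes; do   # example list; use your own observers
    oc patch noihybrid "$noi_instance_name" -n "$NAMESPACE" \
      --type='json' -p="$(build_patch "$obs")"
  done
fi
```

To disable an observer instead, change the value in the patch body from true to false.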