Installing cloud native components with the Operator Lifecycle Manager (OLM) user interface

Use these instructions to install the cloud native Netcool® Operations Insight® components for a hybrid deployment, using the Red Hat® OpenShift® Container Platform Operator Lifecycle Manager (OLM) user interface (UI).

Before you begin

Ensure that you have completed all the steps in Planning.

Online installations of Netcool Operations Insight on Red Hat OpenShift components can be run entirely as a nonroot user and do not require users to have sudo access.

The operator images for Netcool Operations Insight on Red Hat OpenShift are in the freely accessible operator repository (icr.io/cpopen), and the operand images are in the IBM® Entitled Registry (cp.icr.io), for which you require an entitlement key.
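If the entitlement key is not yet configured as a pull secret on your cluster, a typical command is sketched below for reference. The secret name ibm-entitlement-key and the target namespace are illustrative assumptions; use the naming and namespace from your Preparing your cluster steps.
  oc create secret docker-registry ibm-entitlement-key \
    --docker-server=cp.icr.io \
    --docker-username=cp \
    --docker-password=<entitlement-key> \
    --namespace=<your-namespace>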

If you want to verify the origin of the catalog, then use the OLM UI and CASE install method instead. For more information, see Installing cloud native components with the oc ibm-pak plug-in, the Operator Lifecycle Manager (OLM) UI, and CASE (Container Application Software for Enterprises).

For more information about the OLM, see Operator Lifecycle Manager (OLM) external link in the Red Hat OpenShift Container Platform documentation.

About this task

Follow these steps to install cloud native components in a hybrid deployment.

Procedure

Create a catalog source for noi

  1. You can create a catalog source with the Red Hat OpenShift console or the Red Hat OpenShift CLI.
    Red Hat OpenShift console: If you want to create a catalog source with the Red Hat OpenShift console, complete the following steps:
    1. Log in to your Red Hat OpenShift cluster's console.
    2. Add the catalog source.
      The ibm-operator-catalog CatalogSource object can be configured to automatically poll for a newer version, and if one is available, to retrieve it. This configuration triggers an automatic update of your deployment. Polling for updates is enabled by configuring the polling attribute, spec.updateStrategy.registryPoll.
      Note: ibm-operator-catalog also contains the catalogs for other IBM Cloud Paks. If multiple Cloud Paks are installed on your cluster, then an automatic update is configured for all of them.

      Click the plus icon in the upper right of the window to open the Import YAML dialog box, paste in one of the following code blocks, and then click Create.

      If you want to disable automatic updates, use this YAML to disable catalog polling:
      apiVersion: operators.coreos.com/v1alpha1
      kind: CatalogSource
      metadata:
        name: ibm-operator-catalog
        namespace: openshift-marketplace
      spec:
        displayName: ibm-operator-catalog
        publisher: IBM Content
        sourceType: grpc
        image: icr.io/cpopen/ibm-operator-catalog:latest
      If you want to enable automatic updates, use this YAML:
      apiVersion: operators.coreos.com/v1alpha1
      kind: CatalogSource
      metadata:
        name: ibm-operator-catalog
        namespace: openshift-marketplace
      spec:
        displayName: ibm-operator-catalog
        publisher: IBM Content
        sourceType: grpc
        image: icr.io/cpopen/ibm-operator-catalog:latest
        updateStrategy:
          registryPoll:
            interval: 45m
    3. Go to Administration > Cluster Settings. Under Global Configuration > OperatorHub > Sources, verify that the ibm-operator-catalog CatalogSource object is present.
    Red Hat OpenShift CLI:
    If you want to create a catalog source with the Red Hat OpenShift CLI, complete the following steps:
    1. Add the catalog source.
      The ibm-operator-catalog CatalogSource object can be configured to automatically poll for a newer version, and if one is available, to retrieve it. This configuration triggers an automatic update of your deployment. Polling for updates is enabled by configuring the polling attribute, spec.updateStrategy.registryPoll.
      Note: ibm-operator-catalog also contains the catalogs for other IBM Cloud Paks. If multiple Cloud Paks are installed on your cluster, then an automatic update is configured for all of them.
      If you want to disable automatic updates, run the following command to disable catalog polling:
      cat << EOF | oc apply -f -
      apiVersion: operators.coreos.com/v1alpha1
      kind: CatalogSource
      metadata:
        name: ibm-operator-catalog
        namespace: openshift-marketplace
      spec:
        displayName: ibm-operator-catalog
        publisher: IBM Content
        sourceType: grpc
        image: icr.io/cpopen/ibm-operator-catalog:latest
      EOF
      If you want to enable automatic updates, run the following command:
      cat << EOF | oc apply -f -
      apiVersion: operators.coreos.com/v1alpha1
      kind: CatalogSource
      metadata:
        name: ibm-operator-catalog
        namespace: openshift-marketplace
      spec:
        displayName: ibm-operator-catalog
        publisher: IBM Content
        sourceType: grpc
        image: icr.io/cpopen/ibm-operator-catalog:latest
        updateStrategy:
          registryPoll:
            interval: 45m
      EOF
    2. Verify that the ibm-operator-catalog CatalogSource object is present, and is returned by the following command.
      oc get CatalogSources ibm-operator-catalog -n openshift-marketplace
      Example output:
      NAME                   DISPLAY                TYPE   PUBLISHER     AGE
      ibm-operator-catalog   ibm-operator-catalog   grpc   IBM Content   4h13m
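      Optionally, confirm that the catalog is serving content before you continue. The following check is not required, but a lastObservedState of READY indicates that the catalog source is ready:
      oc get catalogsource ibm-operator-catalog -n openshift-marketplace -o jsonpath='{.status.connectionState.lastObservedState}'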

Install the Netcool Operations Insight Operator

  1. Go to Operators > OperatorHub, and then search for and select IBM Cloud Pak for Watson™ AIOps Event Manager and click Install.
    Note: Ensure that IBM Cloud Pak for Watson AIOps Event Manager is selected. Do not select Netcool Operations Insight.
  2. Select v1.12 in the Update channel section.
    Note: If you want to install the previous 1.6.7 version, select v1.11. For more information, see the 1.6.7 documentation: Installing cloud native components with the Operator Lifecycle Manager (OLM) user interface.
  3. Select A specific namespace on the cluster as your installation mode and select the namespace that you created in Preparing your cluster to install the operator into. Do not use namespaces that are owned by Kubernetes or Red Hat OpenShift, such as kube-system or default.
  4. Click Install.
  5. Go to Operators > Installed Operators, and view IBM Cloud Pak for Watson AIOps Event Manager. It takes a few minutes to install. Ensure that the status of the installed IBM Cloud Pak for Watson AIOps Event Manager is Succeeded before continuing.
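For reference only: installing the operator from OperatorHub creates an OLM Subscription, and the single-namespace installation mode also requires an OperatorGroup, which the console creates for you. A roughly equivalent CLI sketch follows. The package name noi, the object names, and the namespace placeholder are assumptions; verify the package name against your catalog, for example with oc get packagemanifests -n openshift-marketplace.
  cat << EOF | oc apply -f -
  apiVersion: operators.coreos.com/v1
  kind: OperatorGroup
  metadata:
    name: noi-operator-group
    namespace: <your-namespace>
  spec:
    targetNamespaces:
      - <your-namespace>
  ---
  apiVersion: operators.coreos.com/v1alpha1
  kind: Subscription
  metadata:
    name: noi-subscription
    namespace: <your-namespace>
  spec:
    channel: v1.12
    name: noi
    source: ibm-operator-catalog
    sourceNamespace: openshift-marketplace
    installPlanApproval: Automatic
  EOF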

Create a Netcool Operations Insight instance for a hybrid deployment

  1. From the Red Hat OpenShift Container Platform OLM UI, navigate to Operators > Installed Operators, and select IBM Cloud Pak for Watson AIOps Event Manager. Under Provided APIs > NOIHybrid, select Create Instance.
  2. From the Red Hat OpenShift Container Platform OLM UI, use the YAML or the Form view to configure the properties for the cloud native Netcool Operations Insight components deployment. For more information about configurable properties for a hybrid deployment, see Hybrid operator properties. A minimal YAML sketch is shown after this procedure.
    CAUTION:
    Ensure that the name of the Netcool Operations Insight instance does not exceed 10 characters.
  3. Click Create.
  4. Under the All Instances tab, a Netcool Operations Insight hybrid instance appears.
    Navigate to Operators > Installed Operators > IBM Cloud Pak for Watson AIOps Event Manager > All Instances and check that the status of your Netcool Operations Insight instance is Phase: OK. This status means that IBM Cloud Pak for Watson AIOps Event Manager has started and is now starting up the various pods.
    To monitor the status of the installation, see Monitoring installation progress.
    Note:
    • Changing an existing deployment from a Trial deployment type to a Production deployment type is not supported.
    • Changing an instance's deployment parameters in the Form view is not supported post deployment.
    • If you update custom secrets in the OLM console, the crypto key is corrupted, and the command to encrypt passwords does not work. Update custom secrets only with the CLI. For more information about storing a certificate as a secret, see https://www.ibm.com/docs/en/SS9LQB_1.1.17/LoadingData/t_asm_obs_configuringsecurity.html external link
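The following is a minimal, illustrative sketch of what the YAML view for a NOIHybrid instance might contain. The apiVersion, the instance name evtmanager, the namespace placeholder, and the spec fields shown are assumptions for illustration only; take the full set of properties and their supported values from Hybrid operator properties for your release.
  apiVersion: noi.ibm.com/v1beta1
  kind: NOIHybrid
  metadata:
    name: evtmanager
    namespace: <your-namespace>
  spec:
    license:
      accept: true
    deploymentType: trial
After you create the instance, you can watch the pods start with a command such as oc get pods -n <your-namespace> -w, or follow Monitoring installation progress.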

What to do next

To enable or disable an observer after installation, use the oc patch command, as in the following examples. Set the value to true to enable an observer, or to false to disable it.
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/netDisco", "value": true }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/aaionap", "value": true }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/alm", "value": true }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/ansibleawx", "value": true }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/appdynamics", "value": true }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/aws", "value": true }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/azure", "value": true }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/bigcloudfabric", "value": true }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/bigfixinventory", "value": true }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/cienablueplanet", "value": true }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/ciscoaci", "value": true }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/contrail", "value": true }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/dns", "value": true }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/docker", "value": true }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/dynatrace", "value": true }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/file", "value": true }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/gitlab", "value": true }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/googlecloud", "value": true }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/hpnfvd", "value": true }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/ibmcloud", "value": true }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/itnm", "value": true }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/jenkins", "value": true }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/junipercso", "value": true }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/kubernetes", "value": true }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/newrelic", "value": true }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/openstack", "value": true }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/rancher", "value": true }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/rest", "value": true }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/sdconap", "value": true }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/servicenow", "value": true }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/sevone", "value": true }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/taddm", "value": true }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/viptela", "value": true }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/vmvcenter", "value": true }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/vmwarensx", "value": true }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/zabbix", "value": true }]'
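To confirm that a change was applied, read the value back with the oc get command; for example, for the kubernetes observer:
oc get noihybrid $noi_instance_name -n $NAMESPACE -o jsonpath='{.spec.topology.observers.kubernetes}'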