Installing cloud native components with the Operator Lifecycle Manager (OLM) user interface

Use these instructions to install the cloud native components for a hybrid deployment that uses the Red Hat® OpenShift® Container Platform Operator Lifecycle Manager (OLM) user interface (UI).

Before you begin

Ensure that you complete all the steps in Planning.

Online installations of Netcool Operations Insight on Red Hat OpenShift components can be run entirely as a nonroot user and do not require sudo access.

The operator images for Netcool Operations Insight on Red Hat OpenShift are in the freely accessible operator repository (icr.io/cpopen), and the operand images are in the IBM Entitled Registry (cp.icr.io), for which you require an entitlement key.
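The entitlement key for cp.icr.io is typically stored in the cluster as a docker-registry pull secret. The following is a hedged sketch only: the secret name ibm-entitlement-key and the namespace noi are assumptions, the key value is a placeholder, and the command is echoed for review rather than executed.

```shell
# Sketch: build (but do not run) the pull-secret command for the
# IBM Entitled Registry. Secret name and namespace are assumptions;
# replace the placeholder key with your own entitlement key.
NAMESPACE=noi
ENTITLEMENT_KEY="your-entitlement-key-here"

cmd="oc create secret docker-registry ibm-entitlement-key \
  --docker-server=cp.icr.io \
  --docker-username=cp \
  --docker-password=$ENTITLEMENT_KEY \
  -n $NAMESPACE"
echo "$cmd"
```

Remove the echo indirection and run the oc command directly once the values are correct for your cluster.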

If you want to verify the origin of the catalog, then use the OLM UI and CASE installation method instead. For more information, see Installing cloud native components with the oc ibm-pak plug-in, the Operator Lifecycle Manager (OLM) UI, and CASE (Container Application Software for Enterprises).

For more information about the OLM, see Operator Lifecycle Manager (OLM) external link in the Red Hat OpenShift Container Platform documentation.

Red Hat OpenShift Container Platform requires a user with cluster-admin privileges for the operations in this procedure.

Allow access to the following sites and ports:
• icr.io, cp.icr.io, dd0.icr.io, dd2.icr.io, dd4.icr.io, dd6.icr.io
  Allow access to these hosts on port 443 to enable access to the IBM Cloud Container Registry.

• dd1-icr.ibm-zh.com, dd3-icr.ibm-zh.com, dd5-icr.ibm-zh.com, dd7-icr.ibm-zh.com
  If you are located in China, also allow access to these hosts on port 443.

• github.com
  GitHub houses IBM Cloud Pak tools and scripts.

• redhat.com
  Red Hat OpenShift registries that are required for Red Hat OpenShift Container Platform, and for Red Hat OpenShift Container Platform upgrades.
For more information, see Configuring your firewall for OpenShift Container Platform external link in the Red Hat OpenShift Container Platform documentation.
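A quick way to confirm the firewall rules is to test TCP connectivity to each registry host on port 443. This sketch only reports reachability; the host list mirrors the table above, and the China-region dd*-icr.ibm-zh.com hosts can be appended to it if needed.

```shell
# Sketch: probe port 443 on each required host using bash's /dev/tcp,
# so no extra tools are needed. Reports reachability; changes nothing.
hosts="icr.io cp.icr.io dd0.icr.io dd2.icr.io dd4.icr.io dd6.icr.io github.com"
report=""
for h in $hosts; do
  if timeout 5 bash -c "exec 3<>/dev/tcp/$h/443" 2>/dev/null; then
    status="reachable"
  else
    status="NOT reachable"
  fi
  report="$report$h: $status
"
done
printf '%s' "$report"
```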

About this task

Follow these steps to install cloud native components in a hybrid deployment.

Procedure

Create a catalog source for Netcool Operations Insight

  1. You can create the catalog source with the Red Hat OpenShift console or the Red Hat OpenShift CLI.
    Red Hat OpenShift console: If you want to create the catalog source with the Red Hat OpenShift console, complete the following steps:
    1. Log in to your Red Hat OpenShift cluster's console.
    2. Add the ibm-operator-catalog catalog source.
      The ibm-operator-catalog CatalogSource object can be configured to automatically poll for a newer version, and if one is available, to retrieve it. This configuration triggers an automatic update of your deployment. Polling for updates is enabled by configuring the polling attribute: spec.updateStrategy.registryPoll.
      Note: ibm-operator-catalog also contains the catalogs for other CloudPaks. If multiple CloudPaks are installed on your cluster, then an automatic update is configured for all of them.

      Click the plus icon in the upper right of the window to open the Import YAML dialog box, paste in one of the following code blocks, and then click Create.

      If you want to disable automatic updates, use this YAML to disable catalog polling for the ibm-operator-catalog catalog source:
      apiVersion: operators.coreos.com/v1alpha1
      kind: CatalogSource
      metadata:
        name: ibm-operator-catalog
        namespace: openshift-marketplace
      spec:
        displayName: ibm-operator-catalog
        publisher: IBM
        sourceType: grpc
        image: icr.io/cpopen/ibm-operator-catalog:latest
      If you want to enable automatic updates, use this YAML to enable catalog polling for the ibm-operator-catalog catalog source:
      apiVersion: operators.coreos.com/v1alpha1
      kind: CatalogSource
      metadata:
        name: ibm-operator-catalog
        namespace: openshift-marketplace
      spec:
        displayName: ibm-operator-catalog
        publisher: IBM
        sourceType: grpc
        image: icr.io/cpopen/ibm-operator-catalog:latest
        updateStrategy:
          registryPoll:
            interval: 45m
    3. Go to Administration > Cluster Settings. Under Global Configuration > OperatorHub > Sources, verify that the ibm-operator-catalog CatalogSource object is present.
    Red Hat OpenShift CLI:
    If you want to create the catalog source with the Red Hat OpenShift CLI, complete the following steps:
    1. Add the ibm-operator-catalog catalog source.
      The ibm-operator-catalog CatalogSource object can be configured to automatically poll for a newer version, and if one is available, to retrieve it. This configuration triggers an automatic update of your deployment. Polling for updates is enabled by configuring the polling attribute: spec.updateStrategy.registryPoll.
      Note: ibm-operator-catalog also contains the catalogs for other CloudPaks. If multiple CloudPaks are installed on your cluster, then an automatic update is configured for all of them.
      If you want to disable automatic updates, run the following command to disable catalog polling for the ibm-operator-catalog catalog source:
      cat << EOF | oc apply -f -
      apiVersion: operators.coreos.com/v1alpha1
      kind: CatalogSource
      metadata:
        name: ibm-operator-catalog
        namespace: openshift-marketplace
      spec:
        displayName: ibm-operator-catalog
        publisher: IBM Content
        sourceType: grpc
        image: icr.io/cpopen/ibm-operator-catalog:latest
      EOF
      If you want to enable automatic updates, run the following command to enable catalog polling for the ibm-operator-catalog catalog source:
      cat << EOF | oc apply -f -
      apiVersion: operators.coreos.com/v1alpha1
      kind: CatalogSource
      metadata:
        name: ibm-operator-catalog
        namespace: openshift-marketplace
      spec:
        displayName: ibm-operator-catalog
        publisher: IBM Content
        sourceType: grpc
        image: icr.io/cpopen/ibm-operator-catalog:latest
        updateStrategy:
          registryPoll:
            interval: 45m
      EOF
    2. Verify that the ibm-operator-catalog CatalogSource object is present, and is returned by the following command.
      oc get CatalogSources ibm-operator-catalog -n openshift-marketplace
      Example output:
      oc get CatalogSources ibm-operator-catalog -n openshift-marketplace
      NAME                   DISPLAY                TYPE   PUBLISHER   AGE
      ibm-operator-catalog   ibm-operator-catalog   grpc   IBM         94s
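Beyond checking that the object exists, the catalog's gRPC connection state shows whether the index image was actually pulled. A sketch, assuming you are logged in with oc; the jsonpath command is echoed here rather than executed.

```shell
# Sketch: print the command that reports the catalog's connection state.
# Run it against your cluster until it prints READY.
check="oc get catalogsource ibm-operator-catalog -n openshift-marketplace \
  -o jsonpath='{.status.connectionState.lastObservedState}'"
echo "$check"
```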
      

Create a PostgreSQL subscription

  1. You can create the PostgreSQL subscription with the Red Hat OpenShift console or the Red Hat OpenShift CLI.
    Red Hat OpenShift console: If you want to create the PostgreSQL subscription with the Red Hat OpenShift console, complete the following steps:
    1. Log in to your Red Hat OpenShift cluster's console.
    2. Add the PostgreSQL subscription.
      Click the plus icon in the upper right of the window to open the Import YAML dialog box, paste the following code block, and then click Create.
      apiVersion: operators.coreos.com/v1alpha1
      kind: Subscription
      metadata:
        name: cloud-native-postgresql-catalog-subscription
        namespace: <namespace>
      spec:
        channel: stable-v1.18
        installPlanApproval: Automatic
        name: cloud-native-postgresql
        source: ibm-operator-catalog
        sourceNamespace: openshift-marketplace
      
      Where <namespace> is the namespace that you specified when you prepared your cluster. For more information, see Preparing your cluster.
    Red Hat OpenShift CLI:
    If you want to create the PostgreSQL subscription with the Red Hat OpenShift CLI, complete the following steps:
    1. Create a subscription called cloud-native-postgresql-catalog-subscription.
      cat << EOF | oc apply -f -
      apiVersion: operators.coreos.com/v1alpha1
      kind: Subscription
      metadata:
        name: cloud-native-postgresql-catalog-subscription
        namespace: <namespace>
      spec:
        channel: stable-v1.18
        installPlanApproval: Automatic
        name: cloud-native-postgresql
        source: ibm-operator-catalog
        sourceNamespace: openshift-marketplace
      EOF
      
      Where <namespace> is the namespace that you specified when you prepared your cluster. For more information, see Preparing your cluster.
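After applying either subscription, you can confirm that OLM resolved it to an installed ClusterServiceVersion. A sketch assuming the namespace noi; the command is echoed for inspection rather than executed.

```shell
# Sketch: print the command that reports which CSV the PostgreSQL
# subscription resolved to. Namespace "noi" is an assumption.
NAMESPACE=noi
cmd="oc get subscription cloud-native-postgresql-catalog-subscription \
  -n $NAMESPACE -o jsonpath='{.status.installedCSV}'"
echo "$cmd"
# When the subscription is healthy, the command prints the resolved
# ClusterServiceVersion name.
```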

Install the Netcool Operations Insight Operator

  1. Go to Operators > OperatorHub, search for IBM Cloud Pak for AIOps Event Manager, select it, and then click Install.
    Note: Ensure that IBM Cloud Pak for AIOps Event Manager is selected. Do not select Netcool Operations Insight.
  2. Select v1.17 in the Update channel section.
    Note: If you want to install the previous 1.6.12 version, select v1.16. For more information, see the 1.6.12 documentation: Installing cloud native components with the Operator Lifecycle Manager (OLM) user interface
  3. Select A specific namespace on the cluster as your installation mode and select the namespace that you created in Preparing your cluster to install the operator into. Do not use namespaces that are owned by Kubernetes or Red Hat OpenShift, such as kube-system or default.
  4. Click Install.
  5. Go to Operators > Installed Operators, and view IBM Cloud Pak for AIOps Event Manager. It takes a few minutes to install. Ensure that the status of the installed IBM Cloud Pak for AIOps Event Manager is Succeeded before you continue.
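The same Succeeded check can be made from the CLI by listing ClusterServiceVersion phases. A sketch assuming the namespace noi; the command is echoed rather than run.

```shell
# Sketch: print the command that lists operator CSVs and their phases.
# Wait until the Event Manager CSV shows PHASE Succeeded.
NAMESPACE=noi
cmd="oc get csv -n $NAMESPACE \
  -o custom-columns=NAME:.metadata.name,PHASE:.status.phase"
echo "$cmd"
```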

Create a Netcool Operations Insight instance for a hybrid deployment

  1. From the Red Hat OpenShift Container Platform OLM UI, go to Operators > Installed Operators, and select IBM Cloud Pak for AIOps Event Manager. Under Provided APIs > NOIHybrid, select Create Instance.
  2. From the Red Hat OpenShift Container Platform OLM UI, use the YAML or the Form view to configure the properties for the cloud native components deployment. For more information about properties that you can configure for a hybrid deployment, see Hybrid operator properties.
    CAUTION:
    Ensure that the name of the Netcool Operations Insight instance does not exceed 10 characters.
    Enter the following values:
    • Name: Specify the name that you want your Netcool Operations Insight instance to be called.
    • License: Expand the License section and read the agreement. Toggle the License Acceptance switch to True to accept the license.
    • Size: Select the size that you require for your Netcool Operations Insight installation.
    • storageClass: Specify the storage class. Check which storage classes are configured on your cluster by using the oc get sc command. For more information about storage, see Storage.

  3. Select Create.
  4. Under the All Instances tab, a Netcool Operations Insight hybrid instance appears.
    Go to Operators > Installed Operators and check that the status of your Netcool Operations Insight instance is Phase: OK. To find the instance, click IBM Cloud Pak for AIOps Event Manager > All Instances. This status means that IBM Cloud Pak for AIOps Event Manager started and is starting up the various pods.
    To monitor the status of the installation, see Monitoring installation progress.
    Note:
    • Changing an existing deployment from a Trial deployment type to a Production deployment type is not supported.
    • Changing an instance's deployment parameters in the Form view is not supported post deployment.
    • If you update custom secrets in the OLM console, the crypto key is corrupted, and the command to encrypt passwords does not work. Update custom secrets only with the CLI. For more information about storing a certificate as a secret, see Configuring observer security external link.
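Consistent with the advice to prefer the CLI, the instance status can also be read with oc. The instance name evtmanager and namespace noi are assumptions, the exact status field may vary by release, and the command is only echoed here.

```shell
# Sketch: print the command that reads the hybrid instance's status phase.
# Instance name and namespace are assumptions; adjust for your deployment.
NAMESPACE=noi
noi_instance_name=evtmanager
cmd="oc get noihybrid $noi_instance_name -n $NAMESPACE \
  -o jsonpath='{.status.phase}'"
echo "$cmd"
```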

What to do next

To enable or disable an observer after installation, use the oc patch command, as in the following examples. Set "value" to true to enable an observer, or to false to disable it:
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/datadog", "value": true }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/netDisco", "value": true }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/aaionap", "value": true }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/alm", "value": true }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/ansibleawx", "value": true }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/appdynamics", "value": true }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/aws", "value": true }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/azure", "value": true }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/bigcloudfabric", "value": true }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/bigfixinventory", "value": true }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/cienablueplanet", "value": true }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/ciscoaci", "value": true }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/contrail", "value": true }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/dns", "value": true }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/docker", "value": true }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/dynatrace", "value": true }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/file", "value": true }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/gitlab", "value": true }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/googlecloud", "value": true }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/hpnfvd", "value": true }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/ibmcloud", "value": true }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/itnm", "value": true }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/jenkins", "value": true }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/junipercso", "value": true }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/kubernetes", "value": true }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/newrelic", "value": true }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/openstack", "value": true }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/rancher", "value": true }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/rest", "value": true }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/sdconap", "value": true }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/servicenow", "value": true }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/sevone", "value": true }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/taddm", "value": true }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/viptela", "value": true }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/vmvcenter", "value": true }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/vmwarensx", "value": true }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/zabbix", "value": true }]'
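When several observers need to be toggled, the commands above can be generated in a loop instead of maintained by hand. A sketch: the instance name evtmanager, the namespace noi, and the observer selection are assumptions, and the commands are echoed as a preview rather than executed.

```shell
# Sketch: generate one oc patch command per observer to enable.
# Instance name, namespace, and observer list are assumptions.
NAMESPACE=noi
noi_instance_name=evtmanager
cmds=""
for observer in datadog kubernetes vmvcenter; do
  patch="[{\"op\": \"replace\", \"path\": \"/spec/topology/observers/$observer\", \"value\": true }]"
  cmd="oc patch noihybrid $noi_instance_name -n $NAMESPACE --type=json -p '$patch'"
  echo "$cmd"   # preview only; run the printed commands to apply them
  cmds="$cmds$cmd
"
done
```

To disable observers instead, change "value": true to "value": false in the generated patch.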