Installing Event Manager in an air-gapped environment (offline) with the oc-ibm_pak plug-in and a bastion host

If your cluster is not connected to the internet, you can deploy a production installation of Event Manager (also called IBM® Netcool® Operations Insight® on Red Hat® OpenShift®) on your cluster by using a bastion host.

You can also deploy Event Manager in an air-gapped environment by using a portable compute device or a portable storage device. For more information, see Installing Event Manager in an air-gapped environment (offline) with the oc-ibm_pak plug-in and a portable compute or portable storage device.

Before you begin

Review the Preparing your cluster documentation. Your environment must meet the system requirements.

If you want to install Event Manager as a nonroot user, you must review the information in Install commands that require root or sudo access.

Note: The following procedure is based on a Red Hat OpenShift Container Platform 4.10 environment and includes links for that version. If your environment uses a different supported version of Red Hat OpenShift Container Platform, ensure that you follow the Red Hat OpenShift documentation for that version.

From a high level, an air-gapped installation of Event Manager consists of four steps:

1. Set up your mirroring environment

Before you install any component in an air-gapped environment, you must set up a host that can connect to the internet to configure your mirroring environment. To set up your mirroring environment, complete the following steps:

1.1. Prerequisites

No matter what medium you choose for your air-gapped installation, you must satisfy the following prerequisites:

  • A Red Hat OpenShift Container Platform cluster must be installed.
  • You must have cluster administrator access to the Red Hat OpenShift Container Platform cluster.
  • A Docker V2 type registry must be available and accessible from the Red Hat OpenShift Container Platform cluster nodes. For more information, see 1.2. Set up a target registry.
  • Access to the following sites and ports:
    • *.docker.io and *.docker.com: For more information about specific sites to allow access to, see Docker Hub Hosts for Firewalls and HTTP Proxy Servers
    • icr.io:443 for IBM Entitled Registry and Event Manager catalog source
    • quay.io:443 for Event Manager catalog and images
    • github.com for CASEs and tools
    • redhat.com for Red Hat OpenShift Container Platform upgrades
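
To confirm that your mirroring host can reach these sites, you can run a quick connectivity check before you start. The following sketch is illustrative; it assumes that curl is installed, and it uses registry-1.docker.io as one representative Docker Hub host because the wildcard entries cannot be tested directly:

    for site in registry-1.docker.io icr.io quay.io github.com redhat.com; do
      # Any HTTP response proves that a connection can be established
      if curl -sS --connect-timeout 10 -o /dev/null https://$site; then
        echo "OK: $site"
      else
        echo "FAILED: $site"
      fi
    done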

1.2. Set up a target registry

You must use a local Docker type registry to store all of your images in your intranet network. Many customers have one or more centralized, corporate registry servers to store production container images. If a local registry is not available, you must install and configure a production-grade registry. To access your registries during an air-gapped installation, you must use an account with a username and password for a user who can both write to and read from the target local registry, which must be accessible from the Red Hat OpenShift cluster nodes. The registry must meet the following requirements:
  • Supports Docker Manifest V2, Schema 2.
  • Supports multi-architecture images.
  • Is accessible from the Red Hat OpenShift Container Platform cluster nodes.
  • Allows path separators in the image name.
  • You have the username and password for a user who can read from and write to the registry.

For more information about creating a registry, see the Red Hat documentation.

Note: Do not use the Red Hat OpenShift image registry as your local, intranet registry. The Red Hat OpenShift registry does not support multi-architecture images or path separators in the image name.
After you create the internal Docker type registry, you must configure the registry:
  1. Create the registry namespaces. If you are using Podman or Docker for your internal registry, you can skip this step. Podman and Docker create these namespaces for you when you mirror the images.

    A registry namespace is the first component in the image name. For example, in the image name, icr.io/cpopen/myproduct, the namespace portion of the image name is cpopen.

    If you are using different registry providers, you must create separate registry namespaces for each registry source. These namespaces are the locations that the contents of the CASE file are pushed into when you run your CASE commands.

    The following registry namespaces might be used by the CASE command and must be created:
    • cp namespace to store the IBM images from the cp.icr.io/cp repository. The cp namespace is for the images in the IBM Entitled Registry that require a product entitlement key and credentials to pull.
    • cpopen namespace to store the operator-related IBM images from the icr.io/cpopen repository. The cpopen namespace is for publicly available images that are hosted by IBM that don't require credentials to pull.
    • opencloudio namespace to store the images from quay.io/opencloudio. The opencloudio namespace is for select IBM open source component images that are available on quay.io.
    • openshift4 namespace to store the Red Hat images from registry.redhat.io/openshift4 that require a pull secret.
  2. Verify that each namespace meets the following requirements:
    • Supports auto-repository creation.
    • Has credentials of a user who can write and create repositories. The external host uses these credentials.
    • Has credentials of a user who can read all repositories. The Red Hat OpenShift Container Platform cluster uses these credentials.
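
After a container tool is available on your host (see 1.3. Prepare a host), you can spot-check that the registry and its credentials work by pushing a small test image and pulling it back. The following sketch is illustrative only: <target_registry> is the host and port of your target registry, and the mirror-test repository name is hypothetical:

    podman login <target_registry> -u <username> -p <password>
    # Use a small, publicly pullable base image purely as a test payload
    podman pull registry.access.redhat.com/ubi8/ubi-minimal
    podman tag registry.access.redhat.com/ubi8/ubi-minimal <target_registry>/cp/mirror-test:latest
    # Pushing confirms write access to the cp namespace; pulling confirms read access
    podman push <target_registry>/cp/mirror-test:latest
    podman pull <target_registry>/cp/mirror-test:latest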

1.3. Prepare a host

You must prepare a bastion host that can connect to the internet and to the air-gapped network with access to the Red Hat OpenShift cluster and the target registry. Your host must be on a Linux® x86_64 or Mac platform with any operating system that the IBM Cloud Pak® CLI and the Red Hat OpenShift CLI support. If you are on a Windows platform, you must run the actions in a Linux® x86_64 VM or from a Windows Subsystem for Linux (WSL) terminal.

Complete the following steps on your host:

  1. Install Docker or Podman. One of these tools is needed for container management.
    yum check-update
    yum install docker
    Note: Docker is not included with or supported by Red Hat for Red Hat Enterprise Linux (RHEL) 8. The Podman container engine replaced Docker as the preferred, maintained, and supported container runtime of choice for Red Hat Enterprise Linux 8 systems. For more information, see Building, running, and managing containers in the Red Hat documentation.
  2. Install the oc OpenShift Container Platform CLI tool. For more information, see OpenShift Container Platform CLI tools.
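
To confirm that the container tool and the OpenShift CLI are available on the host, you can run version checks such as the following (use docker --version instead if Docker is your container tool):

    podman --version
    oc version --client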

1.4. Install the IBM Catalog Management Plug-in for IBM Cloud Paks

Install the IBM Catalog Management Plug-in for IBM Cloud® Paks:
  1. Download and install the most recent version of IBM Catalog Management Plug-in for IBM Cloud Paks for your host operating system from github.com/IBM/ibm-pak-plugin.
  2. Run the following command to extract the files:
    tar -xf oc-ibm_pak-linux-amd64.tar.gz
  3. Run the following command to move the file to the /usr/local/bin directory:
    mv oc-ibm_pak-linux-amd64 /usr/local/bin/oc-ibm_pak
    Note: If you are installing as a nonroot user, you must use sudo.
  4. Confirm that oc-ibm_pak is installed by running the following command:
    oc-ibm_pak --help

    Expected result: The plug-in usage is displayed.

2. Set environment variables and download CASE files

Set environment variables on the bastion host, and connect to the internet so that you can download the Event Manager CASE files.
Note: Save a copy of your environment variable values to a file that you can use as a reference when you are completing your air-gapped installation tasks.
  1. Create the following environment variables.
    export CASE_NAME=ibm-netcool-prod
    export CASE_VERSION=1.7.0
    export CASE_INVENTORY_SETUP=noiOperatorSetup
    export TARGET_REGISTRY_HOST=<IP_or_FQDN_of_TARGET_registry>
    export TARGET_REGISTRY_PORT=<port_number_of_TARGET_registry>
    export TARGET_REGISTRY=$TARGET_REGISTRY_HOST:$TARGET_REGISTRY_PORT
    export TARGET_REGISTRY_USER=<username>
    export TARGET_REGISTRY_PASSWORD=<password>
    export TARGET_NAMESPACE=<namespace>

    The target registry is the registry that the Event Manager images are mirrored to and that the Red Hat OpenShift cluster pulls from, as set up in 1.2. Set up a target registry.

    If your bastion host must connect to the internet through a proxy, then also set the following environment variables.
    export https_proxy=http://proxy-server-hostname:port
    export http_proxy=http://proxy-server-hostname:port
    

    For more information, see Setting up proxy environment variables.

  2. Connect your host to the internet and disconnect it from the local air-gapped network.
  3. Check that the CASE repository URL is pointing to the default https://github.com/IBM/cloud-pak/raw/master/repo/case/ location by running the oc-ibm_pak config command.

    Example output:

    Repository Config
    
    Name                        CASE Repo URL
    ----                        -------------
    IBM Cloud-Pak Github Repo * https://github.com/IBM/cloud-pak/raw/master/repo/case/
    

    If the repository is not pointing to the default location (asterisk indicates default URL), then run the following command:

    oc-ibm_pak config repo 'IBM Cloud-Pak Github Repo' --enable

    If the URL is not displayed, then add the repository by running the following command:

    oc-ibm_pak config repo 'IBM Cloud-Pak Github Repo' --url https://github.com/IBM/cloud-pak/raw/master/repo/case/
  4. Download the Event Manager installer and image inventory to your host:
    oc-ibm_pak get $CASE_NAME --version $CASE_VERSION
    Note: If you want to install the previous 1.6.6 version, specify --version 1.6.0 in the oc-ibm_pak get command. If you do not specify the CASE version, then the latest CASE is downloaded.
    The CASE is downloaded to the ~/.ibm-pak/data/cases/$CASE_NAME/$CASE_VERSION directory. The log files are available at ~/.ibm-pak/logs/oc-ibm_pak.log.
    Note: The root directory that is used by ibm-pak-plugin is the ~/.ibm-pak directory. If required, the root directory can be configured by setting the IBMPAK_HOME environment variable. The log files are then available in the $IBMPAK_HOME/.ibm-pak/logs/oc-ibm_pak.log file.

    Your host is now configured and you are ready to mirror your images.
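
Before you move on, you can confirm the download by listing the CASE directory. The files present should match the example directory structure that is shown in section 3.1:

    ls ~/.ibm-pak/data/cases/$CASE_NAME/$CASE_VERSION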

3. Mirror images

Complete the following steps to mirror the Event Manager images from your host to the target registry in your air-gapped environment.

Note: If you want to install subsequent updates to your air-gapped environment, you must do a CASE save to get the image list when you make updates.

3.1. Generate mirror manifests

  1. Run the following command to generate mirror manifests to be used when mirroring the images to the target registry:
    oc-ibm_pak generate mirror-manifests $CASE_NAME $TARGET_REGISTRY --version $CASE_VERSION

    The preceding command generates the images-mapping.txt and image-content-source-policy.yaml files in the ~/.ibm-pak/data/mirror/$CASE_NAME/$CASE_VERSION directory.

    You can use the following command to list all the images that are mirrored and the publicly accessible registries from which those images are pulled:
    oc-ibm_pak describe $CASE_NAME --version $CASE_VERSION --list-mirror-images

    Tip: Note down the Registries found section at the end of the output from the preceding command. You must log in to those registries so that the images can be pulled and mirrored to your local target registry. See the next steps on authentication.
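
Each line in the images-mapping.txt file maps a source image to its destination repository in the source=destination format that the oc image mirror command accepts. Using the placeholder conventions of this document, an entry has the following shape:

    cp.icr.io/cp/noi/<image_name>@sha256:<digest>=<target_registry>/cp/noi/<image_name>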

Example ~/.ibm-pak directory structure

The ~/.ibm-pak directory structure is built over time as you save CASEs and mirror images. The following tree shows an example of the ~/.ibm-pak directory structure with CASE version 1.7.0.

[root@bastion ~]# tree ~/.ibm-pak
.
├── config
│   └── config.yaml
├── data
│   ├── cases
│   │   └── ibm-netcool-prod
│   │       └── 1.7.0
│   │           ├── caseDependencyMapping.csv
│   │           ├── charts
│   │           ├── ibm-netcool-prod-1.7.0-airgap-metadata.yaml
│   │           ├── ibm-netcool-prod-1.7.0-charts.csv
│   │           ├── ibm-netcool-prod-1.7.0-images.csv
│   │           ├── ibm-netcool-prod-1.7.0.tgz
│   │           └── resourceIndexes
│   │               └── ibm-netcool-prod-resourcesIndex.yaml
│   └── mirror
│       └── ibm-netcool-prod
│           └── 1.7.0
│               ├── catalog-sources-linux-amd64.yaml
│               ├── image-content-source-policy.yaml
│               └── images-mapping.txt
└── logs
    └── oc-ibm_pak.log

11 directories, 11 files

3.2. Authenticate the registries

Log in to the registries to generate an authentication file, which contains the registry credentials, and then create an environment variable that has the location of the authentication file. This file is used later to enable the oc image mirror command to pull the images from the IBM Entitled Registry, and push them to the target registry.

  1. Get the authentication credentials for the IBM Entitled Registry.
    1. To obtain the entitlement key that is assigned to your IBMid, log in to MyIBM Container Software Library with the IBMid and password details that are associated with the entitled software.
    2. In the Entitlement keys section, select Copy key to copy the entitlement key.
  2. Run the following command to create an environment variable, which contains your entitlement key.
    export ENTITLED_REGISTRY_PASSWORD=<key>
    Where <key> is the entitlement key that you copied in the previous step.
  3. Store the authentication credentials for the IBM Entitled Registry and the target registry.
    If you are using Podman, run the following commands:
    podman login cp.icr.io -u cp -p $ENTITLED_REGISTRY_PASSWORD
    podman login $TARGET_REGISTRY -u $TARGET_REGISTRY_USER -p $TARGET_REGISTRY_PASSWORD
    export REGISTRY_AUTH_FILE=${XDG_RUNTIME_DIR}/containers/auth.json
    unset ENTITLED_REGISTRY_PASSWORD
    
    Note: The authentication file is usually at ${XDG_RUNTIME_DIR}/containers/auth.json. For more information, see the Options section in the Podman documentation.
    If you are using Docker, run the following commands:
    docker login cp.icr.io -u cp -p $ENTITLED_REGISTRY_PASSWORD
    docker login $TARGET_REGISTRY -u $TARGET_REGISTRY_USER -p $TARGET_REGISTRY_PASSWORD
    export REGISTRY_AUTH_FILE=$HOME/.docker/config.json
    unset ENTITLED_REGISTRY_PASSWORD
    
    Note: The authentication file is usually at $HOME/.docker/config.json. For more information, see the Docker documentation.
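
To confirm that the authentication file now contains credentials for both registries, you can inspect its auths keys. This sketch assumes that the jq tool is installed; the output should include cp.icr.io and your target registry host:

    jq '.auths | keys' $REGISTRY_AUTH_FILE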

3.3. Mirror images to final location

Complete these steps on your host that is connected to both the local Docker or Podman registry and the Red Hat OpenShift Container Platform cluster:

  1. Mirror images to the TARGET_REGISTRY:
    nohup oc image mirror \
    -f ~/.ibm-pak/data/mirror/$CASE_NAME/$CASE_VERSION/images-mapping.txt \
    -a $REGISTRY_AUTH_FILE \
    --filter-by-os '.*' \
    --insecure \
    --skip-multiple-scopes \
    --max-per-registry=1 \
    --continue-on-error=true > my-mirror-progress.txt 2>&1 &

    If you want to monitor the progress of the mirroring, run the following command:

    tail -f my-mirror-progress.txt
    Note: If an error occurs during mirroring, the mirror command can be rerun.
  2. Update the global image pull secret for your Red Hat OpenShift cluster. Follow the steps in Updating the global cluster pull secret. The steps enable your cluster to have proper authentication credentials in place to pull images from your TARGET_REGISTRY as specified in the image-content-source-policy.yaml file, which you apply to your cluster in the next step.
  3. Create the ImageContentSourcePolicy.

    Important: Before you run the command in this step, you must be logged in to your Red Hat OpenShift cluster.

    Using the oc login command, log in to the Red Hat OpenShift cluster of your final location. You can identify your specific oc login command by clicking the user menu in the Red Hat OpenShift console, then clicking Copy Login Command.

    Run the following command to create ImageContentSourcePolicy.

    oc apply -f ~/.ibm-pak/data/mirror/$CASE_NAME/$CASE_VERSION/image-content-source-policy.yaml

    Applying this file might cause your cluster nodes to drain and restart sequentially.

  4. Verify that the ImageContentSourcePolicy resource is created.
    oc get imageContentSourcePolicy
  5. Verify your cluster node status.
    oc get MachineConfigPool -w

    Important: After the ImageContentSourcePolicy and global image pull secret are applied, the configuration of your nodes is updated sequentially. Wait until all MachineConfigPools are updated before you proceed to the next step.

  6. Complete the steps for registry access.

    Insecure registry: If your offline registry is insecure, you must add the target registry to the cluster insecureRegistries list. Complete the following steps for registry access:

    • Log in as kubeadmin with the following command:
      oc login -u kubeadmin -p <password> <server's REST API URL>
    • Run the following command on your bastion server to add the target registry to the cluster's insecureRegistries list:
      oc patch image.config.openshift.io/cluster --type=merge -p '{"spec":{"registrySources":{"insecureRegistries":["'${TARGET_REGISTRY}'"]}}}'

    Secure registry: If your offline registry is secure, complete the following steps for registry access. You can add certificate authorities (CAs) to the cluster for use when pushing and pulling images. You must have cluster administrator privileges and access to the public certificates of the registry, usually a hostname/ca.crt file in the /etc/docker/certs.d/ directory.

    • Log in as kubeadmin with the following command:
      oc login -u kubeadmin -p <password> <server's REST API URL>
    • Create a configmap in the openshift-config namespace that contains the trusted certificates for the registries that use self-signed certificates. For each CA file, ensure that the key in the configmap is the hostname of the registry, in the hostname[..port] format. To create the certificate authorities registry configmap, run the following command from your local registry server:
      oc create configmap registry-cas -n openshift-config \
      --from-file=$TARGET_REGISTRY_HOST..$TARGET_REGISTRY_PORT=/etc/docker/certs.d/$TARGET_REGISTRY_HOST:$TARGET_REGISTRY_PORT/ca.crt
    • Update the cluster image configuration:
      oc patch image.config.openshift.io/cluster --patch '{"spec":{"additionalTrustedCA":{"name":"registry-cas"}}}' --type=merge
  7. Verify your cluster node status.
    oc get MachineConfigPool -w

    After the ImageContentSourcePolicy and global image pull secret are applied, the configuration of your nodes is updated sequentially. Wait until all MachineConfigPools are updated.

  8. Configure a network policy for the Event Manager routes.

    Some Red Hat OpenShift clusters have extra network policies that are configured to secure pod communication traffic, which can block external traffic and communication between projects (namespaces). Extra configuration is then required to configure a network policy for the Event Manager routes and allow external traffic to reach the routes.

    Run the following command, which labels the default namespace to allow traffic if endpointPublishingStrategy.type in your ingress controller is set to HostNetwork.

    if [ $(oc get ingresscontroller default -n openshift-ingress-operator -o jsonpath='{.status.endpointPublishingStrategy.type}') = "HostNetwork" ]; then oc patch namespace default --type=json -p '[{"op":"add","path":"/metadata/labels","value":{"network.openshift.io/policy-group":"ingress"}}]'; fi
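
After the mirroring steps are complete, you can spot-check that an image is resolvable from the target registry. The following sketch assumes that your oc client provides the oc image info command; substitute a real entry from your images-mapping.txt file for the placeholders:

    oc image info -a $REGISTRY_AUTH_FILE --insecure $TARGET_REGISTRY/cp/noi/<image_name>@sha256:<digest>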

4. Install Netcool Operations Insight on Red Hat OpenShift Container Platform

Now that your images are mirrored to your air-gapped environment, you can deploy Event Manager to that environment. When you mirrored your environment, you created a parallel offline version of everything that you needed to install an operator into Red Hat OpenShift Container Platform. To install Event Manager, complete the following steps:

4.1. Create the catalog source

Important: Before you run the oc-ibm_pak launch command, you must be logged in to your cluster.

Using the oc login command, log in to the Red Hat OpenShift Container Platform cluster of your final location. You can identify your specific oc login command by clicking the user menu in the Red Hat OpenShift Container Platform console, then clicking Copy Login Command.

  1. Create and configure a catalog source.

    export CASE_INVENTORY_SETUP=noiOperatorSetup
    oc-ibm_pak launch \
    $CASE_NAME \
    --version $CASE_VERSION \
    --action install-catalog \
    --inventory $CASE_INVENTORY_SETUP \
    --namespace openshift-marketplace \
    --args "--registry $TARGET_REGISTRY/cpopen --recursive \
    --inputDir ~/.ibm-pak/data/cases/$CASE_NAME/$CASE_VERSION"
  2. Verify that the CatalogSource was installed.
    oc get pods -n openshift-marketplace
    oc get catalogsource -n openshift-marketplace
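
The catalog source must reach a READY connection state before the operator can be installed. A minimal sketch of how you might check this, where <catalog_source_name> is the name shown by the preceding oc get catalogsource command:

    oc get catalogsource <catalog_source_name> -n openshift-marketplace \
      -o jsonpath='{.status.connectionState.lastObservedState}'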

4.2. Create the operator

  1. Create the operator by running the following command:
    oc-ibm_pak launch \
    $CASE_NAME \
    --version $CASE_VERSION \
    --namespace $TARGET_NAMESPACE \
    --inventory $CASE_INVENTORY_SETUP \
    --args "--registry $TARGET_REGISTRY" \
    --action install-operator
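
To verify that the operator installed successfully, you can check the cluster service version (CSV) and the operator pods in the target namespace; the CSV should report a Succeeded phase. A minimal sketch:

    oc get csv -n $TARGET_NAMESPACE
    oc get pods -n $TARGET_NAMESPACE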

4.3. Create the target-registry-secret

  1. Create the target-registry-secret by running the following command:
    oc create secret docker-registry target-registry-secret \
        --docker-server=$TARGET_REGISTRY \
        --docker-username=$TARGET_REGISTRY_USER \
        --docker-password=$TARGET_REGISTRY_PASSWORD \
        --namespace=$TARGET_NAMESPACE
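
To confirm that the secret was created with the expected registry, you can decode its .dockerconfigjson payload. This sketch assumes that the jq tool is installed; the output should include your target registry host:

    oc get secret target-registry-secret -n $TARGET_NAMESPACE \
      -o jsonpath='{.data.\.dockerconfigjson}' | base64 -d | jq '.auths | keys'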

4.4. Create an air-gapped instance of Event Manager

  1. From the Red Hat OpenShift OLM UI, go to Operators > Installed Operators, and select IBM Cloud Pak for Watson™ AIOps Event Manager. Under Provided APIs > NOIHybrid, select Create Instance.
  2. From the Red Hat OpenShift OLM UI, use the YAML view or the Form view to configure the properties for the Event Manager deployment. For more information about configurable properties for a hybrid deployment, see Hybrid operator properties.
    CAUTION:
    Ensure that the name of the Netcool Operations Insight instance does not exceed 10 characters.
    Enter the following values:
    • Name: Specify the name that you want your Netcool Operations Insight instance to be called.
    • License: Expand the License section and read the agreement. Toggle the License Acceptance switch to True to accept the license.
    • Size: Select the size that you require for your Netcool Operations Insight installation.
    • storageClass: Specify the storage class. Check which storage classes are configured on your cluster by using the oc get sc command. For more information about storage, see Hybrid storage.

  3. Edit the Event Manager properties to provide access to the target registry. Set spec.entitlementSecret to the target registry secret.
  4. Select Create.
  5. Under the All Instances tab, an Event Manager instance appears. To monitor the status of the installation, see Monitoring cloud installation progress.

    Navigate to Operators > Installed Operators, click Netcool Operations Insight > All Instances, and check that the status of your Netcool Operations Insight instance is Phase: OK. This status means that IBM Cloud Pak for Watson AIOps Event Manager has started and is now in the process of starting up the various pods.

    Note:
    • Changing an existing deployment from a Trial deployment type to a Production deployment type is not supported.
    • Changing an instance's deployment parameters in the Form view is not supported post deployment.
    • If you update custom secrets in the OLM console, the crypto key is corrupted and the command to encrypt passwords does not work. Update custom secrets only with the CLI. For more information about storing a certificate as a secret, see https://www.ibm.com/docs/en/SS9LQB_1.1.16/LoadingData/t_asm_obs_configuringsecurity.html
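
In addition to watching the OLM UI, you can monitor the instance from the command line. This sketch assumes that the NOIHybrid custom resource follows the usual convention of a lowercased singular resource name; adjust the resource name if oc api-resources reports a different one:

    oc get noihybrid -n $TARGET_NAMESPACE -w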