Installing Event Manager in an air-gapped environment (offline) with the oc ibm-pak plug-in and a portable compute or portable storage device

If your cluster is not connected to the internet, you can deploy a nonproduction installation of Event Manager (also called IBM® Netcool® Operations Insight® on Red Hat® OpenShift®) on your cluster by using a portable compute or portable storage device.

You can also deploy Event Manager in an air-gapped environment by using a bastion host. For more information, see Installing Event Manager in an air-gapped environment (offline) with the oc ibm-pak plug-in and a bastion host.

Before you begin

Review the Preparing documentation. Your environment must meet the system requirements.

If you want to install Event Manager as a nonroot user, you must review the information in Install commands that require root or sudo access.

Important: The following procedure is based on a Red Hat OpenShift Container Platform 4.10 environment and includes links for that version. If your environment uses a different supported version of Red Hat OpenShift Container Platform, ensure that you follow the Red Hat OpenShift documentation for that version.

At a high level, an air-gapped installation of Event Manager consists of four steps:

1. Set up your mirroring environment

Before you install any component in an air-gapped environment, you must set up a host that can connect to the internet to configure your mirroring environment. To set up your mirroring environment, complete the following steps:

1.1. Prerequisites

No matter what medium you choose for your air-gapped installation, you must satisfy the following prerequisites:

  • A Red Hat OpenShift Container Platform cluster must be installed.
  • You must have cluster administrator access to the Red Hat OpenShift Container Platform cluster.
  • A Docker V2 type registry must be available and accessible from the Red Hat OpenShift Container Platform cluster nodes. For more information, see 1.2. Set up a target registry.
  • Access to the following sites and ports:
    • *.docker.io and *.docker.com: For more information about specific sites to allow access to, see Docker Hub Hosts for Firewalls and HTTP Proxy Servers external icon.
    • icr.io:443 for IBM Entitled Registry and Event Manager catalog source
    • quay.io:443 for Event Manager catalog and images
    • github.com for CASEs and tools
    • redhat.com for Red Hat OpenShift Container Platform upgrades

1.2. Set up a target registry

You must use a local Docker type registry to store all of your images in your intranet network. Many organizations have one or more centralized, corporate registry servers to store production container images. If a local registry is not available, you must install and configure a production-grade registry. To access your registries during an air-gapped installation, you must use an account with read and write access: the username and password are for a user who can write to the target local registry from the host, and read from the target local registry on the Red Hat OpenShift cluster nodes. You must create such a registry, and the registry must meet the following requirements:
  • Supports Docker Manifest V2, Schema 2 external link.
  • Supports multi-architecture images.
  • Is accessible from the Red Hat OpenShift Container Platform cluster nodes.
  • Allows path separators in the image name.
  • You have the username and password for a user who can read from and write to the registry.

For more information about creating a registry, see the Red Hat documentation external link.

Note: Do not use the Red Hat OpenShift image registry as your local, intranet registry. The Red Hat OpenShift registry does not support multi-architecture images or path separators in the image name.
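If you need a registry only to test this flow, one common approach is to run a standalone Docker V2 registry with Podman. The following commands are an illustrative sketch, not a supported production setup; the directory paths, port, and the htpasswd utility (from the httpd-tools package) are assumptions:

```shell
# Sketch only: a throwaway local Docker V2 registry for testing the
# mirroring flow. Not production-grade. Assumes podman and htpasswd
# (httpd-tools) are installed, and the TARGET_REGISTRY_* variables are set.
mkdir -p /opt/registry/auth /opt/registry/data
htpasswd -bBc /opt/registry/auth/htpasswd "$TARGET_REGISTRY_USER" "$TARGET_REGISTRY_PASSWORD"
podman run -d --name mirror-registry \
  -p 5000:5000 \
  -v /opt/registry/data:/var/lib/registry:z \
  -v /opt/registry/auth:/auth:z \
  -e REGISTRY_AUTH=htpasswd \
  -e REGISTRY_AUTH_HTPASSWD_REALM="Registry Realm" \
  -e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd \
  docker.io/library/registry:2
```

A registry started this way serves plain HTTP unless you also mount TLS certificates, so it would count as an insecure registry in the cluster configuration steps later in this procedure.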
After you create the internal Docker type registry, you must configure the registry:
  1. Create the registry namespaces. If you are using Podman or Docker for your internal registry, you can skip this step because the registry creates these namespaces for you when you mirror the images.

    A registry namespace is the first component in the image name. For example, in the image name, icr.io/cpopen/myproduct, the namespace portion of the image name is cpopen.

    If you are using different registry providers, you must create a separate registry namespace for each registry source. The contents of the CASE file are pushed into these namespaces when you run your CASE commands.

    The following registry namespaces might be used by the CASE command and must be created:
    • cp namespace to store the IBM images from the cp.icr.io/cp repository. The cp namespace is for the images in the IBM Entitled Registry that require a product entitlement key and credentials to pull.
    • cpopen namespace to store the operator-related IBM images from the icr.io/cpopen repository. The cpopen namespace is for publicly available images that are hosted by IBM that don't require credentials to pull.
    • opencloudio namespace to store the images from quay.io/opencloudio. The opencloudio namespace is for select IBM open source component images that are available on quay.io external link.
    • openshift4 namespace to store the Red Hat images from registry.redhat.io/openshift4 that require a pull secret.
  2. Verify that each namespace meets the following requirements:
    • Supports auto-repository creation.
    • Has credentials of a user who can write and create repositories. The external host uses these credentials.
    • Has credentials of a user who can read all repositories. The Red Hat OpenShift Container Platform cluster uses these credentials.
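As an illustration of how the namespace portion relates to the image name, the following sketch (with an example image reference) splits the name into its registry host and namespace components:

```shell
# Illustrative only: split an example image name into its parts.
image="icr.io/cpopen/myproduct"
host="${image%%/*}"        # registry host: icr.io
rest="${image#*/}"         # cpopen/myproduct
namespace="${rest%%/*}"    # registry namespace: cpopen
echo "host=$host namespace=$namespace"
```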

1.3. Prepare a host

You must prepare a host that can connect to the internet and to the air-gapped network, with access to the Red Hat OpenShift cluster and the target registry. Your host must be a Linux® x86_64 or Mac platform with any operating system that the IBM Cloud Pak® CLI and the Red Hat OpenShift CLI support. If you are on a Windows platform, you must run the actions in a Linux x86_64 VM or from a Windows Subsystem for Linux (WSL) terminal.

Complete the following steps on your host:

  1. Install Docker or Podman. One of these tools is needed for container management.
    yum check-update
    yum install docker
    Note: Docker is not shipped or supported by Red Hat for Red Hat Enterprise Linux (RHEL) 8. The Podman container engine replaced Docker as the preferred, maintained, and supported container runtime of choice for Red Hat Enterprise Linux 8 systems. For more information, see Building, running, and managing containers external icon in the Red Hat documentation.
  2. Install the oc OpenShift Container Platform CLI tool. For more information, see OpenShift Container Platform CLI tools.

1.4. Install the IBM Catalog Management Plug-in for IBM Cloud Paks

Install the IBM Catalog Management Plug-in for IBM Cloud® Paks:
  1. Download and install the most recent version of IBM Catalog Management Plug-in for IBM Cloud Paks for your host operating system from github.com/IBM/ibm-pak-plugin.
  2. Run the following command to extract the files:
    tar -xf oc-ibm_pak-linux-amd64.tar.gz
  3. Run the following command to move the file to the /usr/local/bin directory:
    mv oc-ibm_pak-linux-amd64 /usr/local/bin/oc-ibm_pak
    Note: If you are installing as a nonroot user, you must use sudo.
  4. Confirm that oc ibm-pak is installed by running the following command:
    oc ibm-pak --help

    Expected result: The plug-in usage is displayed.

2. Set environment variables and download CASE files

If your bastion host, portable compute device, or portable storage device must connect to the internet through a proxy, you must set environment variables on the machine that accesses the internet with the proxy server. For more information, see Setting up proxy environment variables external icon.

Before mirroring your images, you can set the environment variables on your mirroring device, and connect to the internet so that you can download the corresponding CASE files. To finish preparing your host, complete the following steps:

Note: Save a copy of your environment variable values to a text editor. You can use that file as a reference to cut and paste from when you complete your air-gapped installation tasks.
  1. Create the following environment variables with the installer image name and the version.
    export CASE_NAME=ibm-netcool-prod
    export CASE_VERSION=1.6.0
    export CASE_INVENTORY_SETUP=noiOperatorSetup
    export TARGET_REGISTRY_HOST=<IP_or_FQDN_of_TARGET_registry>
    export TARGET_REGISTRY_PORT=<port_number_of_TARGET_registry>
    export TARGET_REGISTRY=$TARGET_REGISTRY_HOST:$TARGET_REGISTRY_PORT
    export TARGET_REGISTRY_USER=<username>
    export TARGET_REGISTRY_PASSWORD=<password>
    export TARGET_NAMESPACE=<namespace>
  2. Connect your host to the internet and disconnect it from the local air-gapped network.
  3. Check that the CASE repository URL is pointing to the default https://github.com/IBM/cloud-pak/raw/master/repo/case/ location by running the oc ibm-pak config command.

    Example output:

    Repository Config
    
    Name                        CASE Repo URL
    ----                        -------------
    IBM Cloud-Pak Github Repo * https://github.com/IBM/cloud-pak/raw/master/repo/case/
    
    If the repository is not pointing to the default location (asterisk indicates default URL), then run the following command:
    oc ibm-pak config repo 'IBM Cloud-Pak Github Repo' --enable
    If the URL is not displayed, then add the repository by running the following command:
    oc ibm-pak config repo 'IBM Cloud-Pak Github Repo' --url https://github.com/IBM/cloud-pak/raw/master/repo/case/
  4. Download the Event Manager installer and image inventory to your host.
    Note: If you want to install the previous version of Event Manager, version 1.6.5, specify its CASE version, --version 1.5.0, in the oc ibm-pak get command.

    Tip: If you do not specify the CASE version, it downloads the latest CASE.

    oc ibm-pak get $CASE_NAME --version $CASE_VERSION

    By default, the root directory that is used by the plug-in is the ~/.ibm-pak directory. This default means that the preceding command downloads the CASE to the ~/.ibm-pak/data/cases/$CASE_NAME/$CASE_VERSION directory. You can configure this root directory by setting the IBMPAK_HOME environment variable. If IBMPAK_HOME is set, the preceding command downloads the CASE to $IBMPAK_HOME/.ibm-pak/data/cases/$CASE_NAME/$CASE_VERSION.

    The log files are available at $IBMPAK_HOME/.ibm-pak/logs/oc-ibm_pak.log.

    Your host is now configured and you are ready to mirror your images.
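Before you start mirroring, it can help to confirm that the environment variables from the earlier step are still set in your current shell. The check_vars helper below is a hypothetical sketch; the variable names match those defined above:

```shell
# Minimal sketch: fail fast if a required mirroring variable is unset or empty.
check_vars() {
  local missing=0 var
  for var in CASE_NAME CASE_VERSION TARGET_REGISTRY TARGET_NAMESPACE; do
    if [ -z "${!var}" ]; then
      echo "Required variable $var is not set" >&2
      missing=1
    fi
  done
  return $missing
}
if check_vars; then echo "All required variables are set"; fi
```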

3. Mirror images

The process of mirroring images copies each image from the internet to your host, and then onto your air-gapped environment. After you mirror your images, you can configure your cluster and complete the air-gapped installation.

Note:

  • If you want to install subsequent updates to your air-gapped environment, you must do a CASE save to get the image list when you make updates.

Complete the following steps to mirror your images from your host to your air-gapped environment:

3.1. Generate mirror manifests

  1. Run the following command to generate mirror manifests to be used when mirroring the image to the target registry:
oc ibm-pak generate mirror-manifests $CASE_NAME file://cpfs --version $CASE_VERSION --final-registry $TARGET_REGISTRY/cpfs

The command generates the following files at ~/.ibm-pak/data/mirror/$CASE_NAME/$CASE_VERSION:

  • images-mapping-to-filesystem.txt
  • images-mapping-from-filesystem.txt
  • image-content-source-policy.yaml
The following list describes the arguments:

  • file://cpfs: Indicates to the plug-in that images must first be mirrored to a local file system, in the cpfs directory, when the oc image mirror command is run against the images-mapping-to-filesystem.txt file. For more information, see Mirror images to the file system external icon in the IBM Cloud Pak foundational services documentation.
  • $TARGET_REGISTRY/cpfs (passed to --final-registry): Generates an image-mapping file that is used by the oc image mirror command to mirror the images to the TARGET_REGISTRY in the cpfs namespace.

For example, if the CASE that you are installing needs the quay.io/opencloudio/ibm-events-kafka-2.6.0@sha256:10d422dddd29ff19c87066fc6510eee05f5fa4ff608b87a06e898b3b6a3a13c7 image, then based on the value of $TARGET_REGISTRY/cpfs, its final URL in your target registry is $TARGET_REGISTRY/cpfs/opencloudio/ibm-events-kafka-2.6.0. Note the new namespace of cpfs in the URL.

If you specify only $TARGET_REGISTRY to --final-registry, then the image URL is $TARGET_REGISTRY/opencloudio/ibm-events-kafka-2.6.0.

Therefore, any extra path beyond your target registry in the --final-registry argument changes the namespace path. The namespace path can be multi-level if your target registry supports it.
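The URL rewriting performed for --final-registry amounts to simple string manipulation. The following sketch, with illustrative registry and image values, shows how the final URL is derived:

```shell
# Illustrative values only; substitute your own registry and image names.
TARGET_REGISTRY="registry.example.com:5000"
src="quay.io/opencloudio/ibm-events-kafka-2.6.0"
path="${src#*/}"                      # strip the source registry host
final="$TARGET_REGISTRY/cpfs/$path"   # prepend target registry and cpfs namespace
echo "$final"
```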

Tip: If you do not know the value of the final registry where the images are mirrored, you can provide the placeholder value TARGET_REGISTRY. For example, oc ibm-pak generate mirror-manifests $CASE_NAME file://cpfs --version $CASE_VERSION --final-registry TARGET_REGISTRY. Note that TARGET_REGISTRY, used without any environment variable expansion, is just a plain string that you replace later with the actual image registry URL when it is known to you.

Recommended: You can use the following command to list all the images that are mirrored and the publicly accessible registries from which those images are pulled:

oc ibm-pak describe $CASE_NAME --version $CASE_VERSION --list-mirror-images

Tip: Note down the Registries found section at the end of output from the preceding command. You must log in to those registries so that the images can be pulled and mirrored to your local target registry. See the next steps on authentication.

3.2. Authenticating the registry

Log in to the registries to generate an authentication file containing the registry credentials, and then create an environment variable that has the location of the authentication file. This file is used later to enable the oc image mirror command to pull the images from the IBM Entitled Registry, and push them to the target registry.

  1. Get the authentication credentials for the IBM Entitled Registry.
    1. To obtain the entitlement key that is assigned to your IBMid, log in to MyIBM Container Software Library external link with the IBMid and password details that are associated with the entitled software.
    2. In the Entitlement keys section, select Copy key to copy the entitlement key.
  2. Run the following command to create an environment variable containing your entitlement key.
    export ENTITLED_REGISTRY_PASSWORD=<key>
    Where <key> is the entitlement key that you copied in the previous step.
  3. Store the authentication credentials for the IBM Entitled Registry and the target registry.
    If you are using Podman, run the following commands:
    podman login cp.icr.io -u cp -p $ENTITLED_REGISTRY_PASSWORD
    podman login $TARGET_REGISTRY -u $TARGET_REGISTRY_USER -p $TARGET_REGISTRY_PASSWORD
    export REGISTRY_AUTH_FILE=${XDG_RUNTIME_DIR}/containers/auth.json
    unset ENTITLED_REGISTRY_PASSWORD
    
    Note: The authentication file is usually at ${XDG_RUNTIME_DIR}/containers/auth.json. For more information, see the Options section in the Podman documentation external link.
    If you are using Docker, run the following commands:
    docker login cp.icr.io -u cp -p $ENTITLED_REGISTRY_PASSWORD
    docker login $TARGET_REGISTRY -u $TARGET_REGISTRY_USER -p $TARGET_REGISTRY_PASSWORD
    export REGISTRY_AUTH_FILE=$HOME/.docker/config.json
    unset ENTITLED_REGISTRY_PASSWORD
    
    Note: The authentication file is usually at $HOME/.docker/config.json. For more information, see the Docker documentation external link.
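As an optional sanity check, you can confirm that the authentication file contains entries for both registries before you start mirroring. The check_auth helper below is a hypothetical sketch that searches the JSON file for the registry hostnames:

```shell
# Sketch: check that the auth file mentions each registry host before mirroring.
check_auth() {
  local authfile="$1" host
  shift
  for host in "$@"; do
    if grep -qF "\"$host\"" "$authfile"; then
      echo "credentials present for $host"
    else
      echo "credentials missing for $host" >&2
      return 1
    fi
  done
}
# Usage (with the variables set earlier):
# check_auth "$REGISTRY_AUTH_FILE" cp.icr.io "$TARGET_REGISTRY"
```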

3.3. Mirror images to the file system

Complete these steps to mirror the images from the internet to a file system on your portable device.

  1. Create an environment variable to store the location of the file system where the images are to be stored.
    export IMAGE_PATH=<image-path>
    Where <image-path> is the directory where you want the images to be stored.
  2. Run the following command to mirror the images from the IBM Entitled Registry to the file system.
    nohup oc image mirror \
    -f ~/.ibm-pak/data/mirror/$CASE_NAME/$CASE_VERSION/images-mapping-to-filesystem.txt \
    -a $REGISTRY_AUTH_FILE \
    --filter-by-os '.*' \
    --insecure \
    --skip-multiple-scopes \
    --dir "$IMAGE_PATH" \
    --max-per-registry=1 > my-mirror-progress.txt 2>&1 &
    The nohup UNIX command is used to ensure that the mirroring process continues, even if there is a loss of network connection, and redirection of output to a file provides improved monitoring and error visibility.
    Run the following command if you want to see the progress of the mirroring:
    tail -f my-mirror-progress.txt
    Note: If an error occurs during mirroring, the mirror command can be re-run.
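Because the mirroring process runs in the background, it is worth scanning the progress file for failures after it completes. The scan_mirror_log helper below is a hypothetical sketch; the error strings it matches are assumptions about what oc image mirror may print:

```shell
# Sketch: flag common failure strings captured in the mirroring log.
scan_mirror_log() {
  local log="$1"
  if grep -iE 'error|unable to|failed' "$log"; then
    echo "errors found in $log; re-run the oc image mirror command" >&2
    return 1
  fi
  echo "no errors found in $log"
}
# Usage:
# scan_mirror_log my-mirror-progress.txt
```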

3.4. Set up the file system in the air-gapped environment

  1. Copy files to the air-gapped environment (portable storage device only)

    If you are using a portable storage device, you must copy the files from the portable storage device to a local compute device in the air-gapped environment that has access to the target registry. If you are using a portable compute device, then these items are already present, and you can proceed to the next step.

    Copy the following items to your local compute device:
    • the file system that is located at $IMAGE_PATH, which you specified earlier
    • the ~/.ibm-pak directory
  2. Disconnect the device that has your file system (the portable compute device or the local compute device) from the internet and connect it to the air-gapped environment.
  3. Ensure that environment variables are set on the device in the air-gapped environment that has access to the target registry.
    If you are using a portable storage device, then set the following environment variables on your local compute device within the air-gapped environment.
    export CASE_NAME=ibm-netcool-prod
    export CASE_VERSION=1.6.0
    export TARGET_REGISTRY_HOST=<IP_or_FQDN_of_TARGET_registry>
    export TARGET_REGISTRY_PORT=<port_number_of_TARGET_registry>
    export TARGET_REGISTRY=$TARGET_REGISTRY_HOST:$TARGET_REGISTRY_PORT
    export TARGET_REGISTRY_USER=<username>
    export TARGET_REGISTRY_PASSWORD=<password>
    export IMAGE_PATH=<image_path>
    If you are using a portable compute device that you restarted since mirroring the images, your environment variables are lost, and you must set them on your portable compute device again.
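As an optional check before mirroring onward, you can verify that the copied file system contains mirrored content. The check_image_path helper below is a hypothetical sketch; it assumes that oc image mirror wrote a v2 directory layout under $IMAGE_PATH, which may vary by version:

```shell
# Rough sketch: check that the file system holds mirrored content
# (assumes a v2/ directory layout under the image path).
check_image_path() {
  local path="$1"
  if [ -d "$path/v2" ] && [ -n "$(ls -A "$path/v2")" ]; then
    echo "mirrored content found under $path/v2"
  else
    echo "no mirrored content found under $path" >&2
    return 1
  fi
}
# Usage:
# check_image_path "$IMAGE_PATH"
```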

3.5. Authenticate with the target registry

Authenticate with the target registry in the air-gapped environment that you will be mirroring the images into.

If you are using Podman, run the following commands:
podman login $TARGET_REGISTRY -u $TARGET_REGISTRY_USER -p $TARGET_REGISTRY_PASSWORD
export REGISTRY_AUTH_FILE=${XDG_RUNTIME_DIR}/containers/auth.json
Note: The authentication file is usually at ${XDG_RUNTIME_DIR}/containers/auth.json. For more information, see the Options section in the Podman documentation external icon.
If you are using Docker, run the following commands:
docker login $TARGET_REGISTRY -u $TARGET_REGISTRY_USER -p $TARGET_REGISTRY_PASSWORD
export REGISTRY_AUTH_FILE=$HOME/.docker/config.json
Note: The authentication file is usually at $HOME/.docker/config.json. For more information, see the Docker documentation external icon.

3.6. Mirror the images to the target registry from the file system

Complete the steps in this section on the device that has your file system (the portable compute device or the local compute device) to copy the images from the file system to the $TARGET_REGISTRY. Your device with the file system must be connected to both the target registry and the Red Hat OpenShift cluster.

Run the following command to copy the images referenced in the images-mapping-from-filesystem.txt file from the $IMAGE_PATH file system to the final target registry.
nohup oc image mirror \
-f ~/.ibm-pak/data/mirror/$CASE_NAME/$CASE_VERSION/images-mapping-from-filesystem.txt \
--from-dir "$IMAGE_PATH" \
-a $REGISTRY_AUTH_FILE \
--filter-by-os '.*' \
--insecure  \
--skip-multiple-scopes \
--max-per-registry=1 > my-mirror-progress2.txt 2>&1 &

The UNIX command nohup is used to ensure that the mirroring process continues even if there is a loss of network connection, and redirection of output to a file provides improved monitoring and error visibility.

Run the following command if you want to see the progress of the mirroring:
tail -f my-mirror-progress2.txt
Note: If an error occurs during mirroring, the mirror command can be re-run.

3.7. Configure the cluster

  1. Log in to your Red Hat OpenShift cluster.

    You can identify your specific oc login command by clicking the user menu in the upper right corner of the Red Hat OpenShift console, and then clicking Copy Login Command.

    Example:
    oc login <server> -u <cluster username> -p <cluster pass>
  2. Update the global image pull secret for your Red Hat OpenShift cluster.

    Follow the steps in the Red Hat OpenShift documentation topic Updating the global cluster pull secret external icon.

    These steps enable your cluster to have authentication credentials in place to pull images from your TARGET_REGISTRY as specified in the image-content-source-policy.yaml file, which you will apply to your cluster in the next step.

  3. Create the ImageContentSourcePolicy.
    Run the following command to create an ImageContentSourcePolicy (ICSP).
    oc apply -f  ~/.ibm-pak/data/mirror/$CASE_NAME/$CASE_VERSION/image-content-source-policy.yaml
  4. Verify that the ImageContentSourcePolicy resource is created.
    oc get imagecontentsourcepolicy
    Example output, showing a newly created ibm-netcool-prod ICSP.
    oc get imagecontentsourcepolicy
    NAME                      AGE
    ibm-netcool-prod          95s
    redhat-operator-index-0   5d18h
  5. Complete the steps for registry access.

    Insecure registry: If your offline registry is insecure, you must add the target registry to the cluster insecureRegistries list. Complete the following steps for registry access:

    • Log in as kubeadmin with the following command:
      oc login -u kubeadmin -p <password> <server's REST API URL>
    • Run the following command to add the target registry to the cluster insecureRegistries list:
      oc patch image.config.openshift.io/cluster --type=merge -p '{"spec":{"registrySources":{"insecureRegistries":["'${TARGET_REGISTRY}'"]}}}'

    Secure registry: If your offline registry is secure, complete the following steps for registry access. You can add certificate authorities (CAs) to the cluster for use when pushing and pulling images. You must have cluster administrator privileges, and you must have access to the public certificates of the registry, usually a hostname/ca.crt file in the /etc/docker/certs.d/ directory.

    • Log in as kubeadmin with the following command:
      oc login -u kubeadmin -p <password> <server's REST API URL>
    • Create a configmap in the openshift-config namespace containing the trusted certificates for the registries that use self-signed certificates. For each CA file, ensure that the key in the configmap is the hostname of the registry, in the hostname[..port] format. To create the certificate authorities registry configmap, run the following command from your local registry server:
      oc create configmap registry-cas -n openshift-config \
      --from-file=$TARGET_REGISTRY_HOST..$TARGET_REGISTRY_PORT=/etc/docker/certs.d/myregistry.corp.com:$TARGET_REGISTRY_PORT/ca.crt
    • Update the cluster image configuration:
      oc patch image.config.openshift.io/cluster --patch '{"spec":{"additionalTrustedCA":{"name":"registry-cas"}}}' --type=merge
  6. Verify your cluster node status.
    oc get MachineConfigPool -w
    Note: After the ImageContentSourcePolicy and global image pull secret are applied, the configuration of your nodes is updated sequentially. Wait until all of the MachineConfigPools are updated before you proceed to the next step.
  7. Configure a network policy for the Event Manager routes.

    Some Red Hat OpenShift clusters have extra network policies that are configured to secure pod communication traffic, which can block external traffic and communication between projects (namespaces). Extra configuration is then required to configure a network policy for the Event Manager routes and allow external traffic to reach the routes.

    Run the following command to update endpointPublishingStrategy.type in your ingresscontroller if it is set to HostNetwork, to allow traffic.
    if [ $(oc get ingresscontroller default -n openshift-ingress-operator -o jsonpath='{.status.endpointPublishingStrategy.type}') = "HostNetwork" ]; then oc patch namespace default --type=json -p '[{"op":"add","path":"/metadata/labels","value":{"network.openshift.io/policy-group":"ingress"}}]'; fi
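The hostname[..port] key format that the registry-cas configmap in step 5 expects can be illustrated with plain string interpolation (example host and port values only):

```shell
# Example values; substitute your own registry host and port.
TARGET_REGISTRY_HOST="myregistry.corp.com"
TARGET_REGISTRY_PORT="5000"
CA_KEY="${TARGET_REGISTRY_HOST}..${TARGET_REGISTRY_PORT}"
echo "$CA_KEY"   # myregistry.corp.com..5000
```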

4. Install IBM Netcool Operations Insight on Red Hat OpenShift Container Platform

Now that your images are mirrored to your air-gapped environment, you can deploy Event Manager to that environment. When you mirrored your environment, you created a parallel offline version of everything that you needed to install an operator into Red Hat OpenShift Container Platform. To install Event Manager, complete the following steps:

4.1 Create the catalog source

Important: Before you run the oc ibm-pak launch command, you must be logged in to your cluster.

Using the oc login command, log in to the Red Hat OpenShift Container Platform cluster of your final location. You can identify your specific oc login command by clicking the user drop-down menu in the Red Hat OpenShift Container Platform console, then clicking Copy Login Command.

  1. Create and configure the catalog sources:
    export CASE_INVENTORY_SETUP=noiOperatorSetup
    oc ibm-pak launch \
    $CASE_NAME \
    --version $CASE_VERSION \
    --action install-catalog \
    --inventory $CASE_INVENTORY_SETUP \
    --namespace openshift-marketplace \
    --args "--registry $TARGET_REGISTRY/cpfs/cpopen --recursive \
    --inputDir ~/.ibm-pak/data/cases/$CASE_NAME/$CASE_VERSION"
  2. Verify that the CatalogSource is installed:
    oc get pods -n openshift-marketplace
    oc get catalogsource -n openshift-marketplace

4.2. Create the operator

  1. Create the operator by running the following command:
    oc ibm-pak launch \
    $CASE_NAME \
    --version $CASE_VERSION \
    --namespace $TARGET_NAMESPACE \
    --inventory $CASE_INVENTORY_SETUP \
    --args "--registry $TARGET_REGISTRY" \
    --action install-operator

4.3. Create the target-registry-secret

  1. Create the target-registry-secret by running the following command:
    oc create secret docker-registry target-registry-secret \
        --docker-server=$TARGET_REGISTRY \
        --docker-username=$TARGET_REGISTRY_USER \
        --docker-password=$TARGET_REGISTRY_PASSWORD \
        --namespace=$TARGET_NAMESPACE

4.4. Create an air-gapped instance of Event Manager

  1. From the Red Hat OpenShift OLM UI, go to Operators > Installed Operators, and select IBM Cloud Pak for Watson™ AIOps Event Manager. Under Provided APIs > NOI, select Create Instance.
  2. From the Red Hat OpenShift OLM UI, use the YAML view or the Form view to configure the properties for the Event Manager deployment. For more information about configurable properties for a cloud-only deployment, see Cloud operator properties.
    CAUTION:
    Ensure that the name of the Event Manager instance does not exceed 10 characters.
  3. Edit the Event Manager properties to provide access to the target registry. Set spec.entitlementSecret to the target registry secret.
  4. Select Create.
  5. Under the All Instances tab, an Event Manager instance appears. To monitor the status of the installation, see Monitoring cloud installation progress.

    Go to Operators > Installed Operators and click Netcool Operations Insight > All Instances to check that the status of your Netcool Operations Insight instance is Phase: OK. This status means that IBM Cloud Pak for Watson AIOps Event Manager started and is now in the process of starting up the various pods.

    Note:
    • Changing an existing deployment from a Trial deployment type to a Production deployment type is not supported.
    • Changing an instance's deployment parameters in the Form view is not supported post deployment.
    • If you update custom secrets in the OLM console, the crypto key is corrupted and the command to encrypt passwords does not work. Update custom secrets only with the CLI. For more information about storing a certificate as a secret, see https://www.ibm.com/docs/en/SS9LQB_1.1.15/LoadingData/t_asm_obs_configuringsecurity.html external link