Installing IBM Netcool Operations Insight on Red Hat OpenShift in an air-gapped environment (offline) with the oc-ibm_pak plug-in and a portable compute or portable storage device
If your cluster is not connected to the internet, you can deploy an installation of IBM® Netcool® Operations Insight® on Red Hat® OpenShift® on your cluster by using a portable compute or portable storage device. You can deploy in SingleNamespace mode or OwnNamespace mode.
You can also deploy IBM Netcool Operations Insight on Red Hat OpenShift in an air-gapped environment by using a bastion host. For more information, see Installing IBM Netcool Operations Insight on Red Hat OpenShift in an air-gapped environment (offline) with the oc-ibm_pak plug-in and a bastion host.
Before you begin
- You can deploy in SingleNamespace mode or OwnNamespace mode.
- Allow access to quay.io:443 for the Netcool Operations Insight on Red Hat OpenShift catalog and images. The quay.io/openshift-release-dev/ocp-v4.0-art-dev image is required and must be present in the cluster to install the NOI operator.
- Review the Preparing your cluster documentation. Your environment must meet the system requirements.
If you want to install IBM Netcool Operations Insight on Red Hat OpenShift as a nonroot user, you must review the information in Install commands that require root or sudo access.
Important: The following procedure is based on a Red Hat OpenShift Container Platform 4.14 environment and includes links for that version. If your environment uses a different supported version of Red Hat OpenShift Container Platform, ensure that you follow the Red Hat OpenShift Container Platform documentation for that version.
You must have cluster-admin privileges for the following operations:
- Create the ImageContentSourcePolicy
- Verify that the ImageContentSourcePolicy is created
- Insecure registry access and secure registry access
- Verify your cluster node status
- Update endpointPublishingStrategy.type in your ingresscontroller to allow traffic
- Create the catalog source (SingleNamespace mode)
- Create the operator
OwnNamespace mode
In OwnNamespace mode, multiple instances of the operator are allowed per cluster. One operator per namespace is supported.
SingleNamespace mode
In SingleNamespace mode, only a single instance of the catalog source on the cluster and a single instance of noihybrid on the cluster are supported. From the command line, the operators are deployed in one namespace and the instance of Netcool Operations Insight is deployed in a different namespace. To enable or disable a feature or observer after installation, edit the Netcool Operations Insight instance by running the following command:
oc edit noihybrid <noi-instance-name> -n $TARGET_NAMESPACE
Where:
- $TARGET_NAMESPACE is the target namespace, for example, netcool-operand.
- namespace: $NAMESPACE is the namespace where the operators are deployed, for example netcool-operator.
- targetNamespaces: - $TARGET_NAMESPACE is the namespace where the operands are deployed, which is an instance of noihybrid, for example netcool-operand.
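For SingleNamespace mode, the operator and operand namespaces must exist before you create the catalog source and operator (see step 4.1). A minimal sketch, assuming the example names netcool-operator and netcool-operand that are used throughout this topic:
# Namespace for the operators
oc create namespace netcool-operator
# Namespace for the operands, that is, the noihybrid instance
oc create namespace netcool-operand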
Procedure
From a high level, an air-gapped installation of IBM Netcool Operations Insight on Red Hat OpenShift consists of the following steps:
1. Set up your mirroring environment
Before you install any component in an air-gapped environment, you must set up a host that can be connected to the internet to complete configuring your mirroring environment. To set up your mirroring environment, complete the following steps:
1.1. Prerequisites
No matter what medium you choose for your air-gapped installation, you must satisfy the following prerequisites:
- A Red Hat OpenShift Container Platform cluster must be installed.
- You must have cluster administrator access to the Red Hat OpenShift Container Platform cluster.
- A Docker V2 type registry must be available and accessible from the Red Hat OpenShift Container Platform cluster nodes. For more information, see 1.2. Set up a target registry.
- Access to the following sites and ports:

Site | Description
---|---
icr.io, cp.icr.io, dd0.icr.io, dd2.icr.io, dd4.icr.io*, dd6.icr.io* | Allow access to these hosts on port 443 to enable access to the IBM Cloud Container Registry and CASE OCI artifact catalog source. Note: dd4.icr.io and dd6.icr.io can be used as placeholders until they are available.
dd1-icr.ibm-zh.com, dd3-icr.ibm-zh.com, dd5-icr.ibm-zh.com*, dd7-icr.ibm-zh.com* | If you are located in China, also allow access to these hosts on port 443. Note: dd5-icr.ibm-zh.com and dd7-icr.ibm-zh.com can be used as placeholders until they are available.
github.com | GitHub houses CASE files, IBM Cloud Pak® tools, and scripts.
redhat.com | Red Hat OpenShift Container Platform registries that are required for Red Hat OpenShift Container Platform, and for Red Hat OpenShift Container Platform upgrades. For more information, see Configuring your firewall for OpenShift Container Platform in the Red Hat OpenShift Container Platform documentation.
1.2. Set up a target registry
Your target registry must meet the following requirements:
- Supports Docker Manifest V2, Schema 2.
- Supports multi-architecture images.
- Is accessible from the Red Hat OpenShift Container Platform cluster nodes.
- Allows path separators in the image name.
You must also have the username and password for a user who can read from and write to the registry.
For more information about creating a registry, see the Mirroring images for a disconnected installation using the oc-mirror plugin topic in the Red Hat OpenShift Container Platform documentation.
- Create the registry namespaces. If you are using Podman or Docker for your internal registry, you can skip this step. Docker creates these namespaces for you when you mirror the images.
A registry namespace is the first component in the image name. For example, in the image name icr.io/cpopen/myproduct, the namespace portion of the image name is cpopen.
If you are using different registry providers, you must create separate registry namespaces for each registry source. These namespaces are used as the location that the contents of the CASE file are pushed into when you run your CASE commands.
The CASE command can use the following registry namespaces, which must be created:
  - cp: Namespace to store the IBM images from the cp.icr.io/cp repository. The cp namespace is for the images in the IBM Entitled Registry that require a product entitlement key and credentials to pull.
  - cpopen: Namespace to store the operator-related IBM images from the icr.io/cpopen repository. The cpopen namespace is for publicly available images that IBM hosts, which don't require credentials to pull.
  - opencloudio: Namespace to store the images from quay.io/opencloudio. The opencloudio namespace is for select IBM open source component images that are available on quay.io.
  - openshift4: Namespace to store the Red Hat images from redhat.io/openshift4 that require a pull secret.
- Verify that each namespace meets the following requirements:
  - Supports auto-repository creation.
  - Has credentials of a user who can write and create repositories. The external host uses these credentials.
  - Has credentials of a user who can read all repositories. The Red Hat OpenShift Container Platform cluster uses these credentials.
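As an illustration of how these registry namespaces appear in practice, the following hypothetical mapping shows where a source image lands after mirroring. The <image> and <tag> values are placeholders, and the extra cpfs path reflects the final registry that this topic's mirror manifests use (see step 3.1):
# Illustration only: source image name -> mirrored image name
#   cp.icr.io/cp/noi/<image>:<tag>   ->  $TARGET_REGISTRY/cpfs/cp/noi/<image>:<tag>
#   icr.io/cpopen/<image>:<tag>      ->  $TARGET_REGISTRY/cpfs/cpopen/<image>:<tag>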
1.3. Prepare a host
Prepare a host that can connect to the internet and to the air-gapped network, with access to the Red Hat OpenShift Container Platform cluster and the target registry. Your host must be on a Linux® x86_64 or Mac platform with any operating system that the IBM Cloud Pak® CLI and the Red Hat OpenShift Container Platform CLI support. If you are on a Windows platform, you must run the actions in a Linux x86_64 VM or from a Windows Subsystem for Linux (WSL) terminal.
Complete the following steps on your host:
- Install Docker or Podman. One of these tools is needed for container management.
  - To install Podman, see the Podman Installation Instructions.
  - To install Docker (for example, on Red Hat® Enterprise Linux®), run the following commands:
yum check-update
yum install docker
Note: Docker is not shipped or supported by Red Hat for Red Hat Enterprise Linux (RHEL) 8. The Podman container engine replaced Docker as the preferred, maintained, and supported container runtime of choice for Red Hat Enterprise Linux 8 systems. For more information, see Building, running, and managing containers in the Red Hat documentation.
- Install the oc OpenShift Container Platform CLI tool. For more information, see OpenShift Container Platform CLI tools.
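A quick sanity check that the container tool and the oc CLI are available, assuming they were installed as described above:
podman --version        # or: docker --version
oc version --client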
1.4. Install the IBM Catalog Management Plug-in for IBM Cloud Paks
- Download and install version 1.11.2 or later of IBM Catalog Management Plug-in for IBM Cloud Paks for your host operating system from github.com/IBM/ibm-pak-plugin. Version 1.10.0 or earlier is also supported. Versions 1.11.0 and 1.11.1 are not supported.
- Run the following command to extract the
files:
tar -xf oc-ibm_pak-linux-amd64.tar.gz
- Run the following command to move the file to the /usr/local/bin
directory:
mv oc-ibm_pak-linux-amd64 /usr/local/bin/oc-ibm_pak
Note: If you are installing as a nonroot user, you must use sudo.
- Confirm that oc-ibm_pak is installed by running the following command:
oc-ibm_pak --help
Expected result: The plug-in usage is displayed.
2. Set environment variables and download CASE files
If your bastion host or portable device must connect to the internet through a proxy, set environment variables on the machine that accesses the internet with the proxy server. For more information, see Setting up proxy environment variables.
Before mirroring your images, you can set the environment variables on your mirroring device, and connect to the internet so that you can download the corresponding CASE files. To finish preparing your host, complete the following steps:
- Create the following environment variables with the installer image name and the version:
export CASE_NAME=ibm-netcool-prod
export CASE_VERSION=1.11.0
export CASE_INVENTORY_SETUP=noiOperatorSetup
export TARGET_REGISTRY_HOST=<IP_or_FQDN_of_TARGET_registry>
export TARGET_REGISTRY_PORT=<port_number_of_TARGET_registry>
export TARGET_REGISTRY=$TARGET_REGISTRY_HOST:$TARGET_REGISTRY_PORT
export TARGET_REGISTRY_USER=<username>
export TARGET_REGISTRY_PASSWORD=<password>
export TARGET_NAMESPACE=<namespace>
- Connect your host to the internet and disconnect it from the local air-gapped network.
- Check that the CASE repository URL is pointing to the default https://github.com/IBM/cloud-pak/raw/master/repo/case/ location by running the oc-ibm_pak config command. Example output:
Repository Config
Name                        CASE Repo URL
----                        -------------
IBM Cloud-Pak Github Repo * https://github.com/IBM/cloud-pak/raw/master/repo/case/
The asterisk indicates the default URL. If the repository is not pointing to the default location, run the following command:
oc-ibm_pak config repo 'IBM Cloud-Pak Github Repo' --url https://github.com/IBM/cloud-pak/raw/master/repo/case/
If the URL is not displayed, enable the repository by running the following command:
oc-ibm_pak config repo 'IBM Cloud-Pak Github Repo' --enable
- Download the IBM Netcool Operations Insight on Red Hat OpenShift installer and image inventory to your host:
oc-ibm_pak get $CASE_NAME --version $CASE_VERSION
Note: If you want to install the previous 1.6.10 version, specify --version 1.10.0 in the oc-ibm_pak get command.
Tip: If you do not specify the CASE version, the latest CASE is downloaded.
By default, the root directory that is used by the plug-in is the ~/.ibm-pak directory. This means that the preceding command downloads the CASE under the ~/.ibm-pak/data/cases/$CASE_NAME/$CASE_VERSION directory. You can configure this root directory by setting the IBMPAK_HOME environment variable. If IBMPAK_HOME is set, the preceding command downloads the CASE under $IBMPAK_HOME/.ibm-pak/data/cases/$CASE_NAME/$CASE_VERSION.
The log files are available at $IBMPAK_HOME/.ibm-pak/logs/oc-ibm_pak.log.
Your host is now configured and you are ready to mirror your images.
- Optional: The CASE bundle contains a list of all images for your deployment. You can remove any unwanted images from the images.csv file. After you download the CASE bundle, edit the images.csv file to control which images you want to mirror to your air-gapped environment. For example, you might want to remove some observer images, as shown in the sketch that follows. For a list of images for the optional AppDisco and NetDisco extensions, see Advanced Agile Discovery (AppDisco) and Network Manager (NetDisco) images.
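A minimal sketch of trimming the image list. The grep pattern is a hypothetical example of one observer image name, and the CSV path follows the default ~/.ibm-pak layout shown later in this topic:
CSV=~/.ibm-pak/data/cases/$CASE_NAME/$CASE_VERSION/$CASE_NAME-$CASE_VERSION-images.csv
cp "$CSV" "$CSV.bak"                             # keep a backup of the full image list
grep -v 'zabbix-observer' "$CSV.bak" > "$CSV"    # drop one observer image (hypothetical name)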
3. Mirror images
The process of mirroring images takes the image from the internet to your host, then effectively copies that image on to your air-gapped environment. After you mirror your images, you can configure your cluster and complete air-gapped installation.
Note:
- If you want to install subsequent updates to your air-gapped environment, you must do a CASE save to get the image list when you make updates.
Complete the following steps to mirror your images from your host to your air-gapped environment:
3.1. Generate mirror manifests
- Run the following command to generate mirror manifests to be used when mirroring the image to the target registry:
oc-ibm_pak generate mirror-manifests $CASE_NAME file://cpfs --version $CASE_VERSION --final-registry $TARGET_REGISTRY/cpfs
The command generates the following files at
~/.ibm-pak/data/mirror/$CASE_NAME/$CASE_VERSION
:
images-mapping-to-filesystem.txt
images-mapping-from-filesystem.txt
image-content-source-policy.yaml
Argument | Description
---|---
file://cpfs | This argument mirrors images to a local file system in the cpfs directory, when the oc image mirror command is run against the images-mapping-to-filesystem.txt file. For more information, see Mirror images to the file system in the IBM Cloud Pak foundational services documentation.
$TARGET_REGISTRY/cpfs to --final-registry | This argument generates the images-mapping-from-filesystem.txt file, which the oc image mirror command uses to mirror the images from the file system to the final target registry, under the extra cpfs path in that registry.
Tip: If you do not know the value of the final registry where the images are mirrored, you can provide a placeholder value of TARGET_REGISTRY. For example:
oc-ibm_pak generate mirror-manifests $CASE_NAME file://cpfs --version $CASE_VERSION --final-registry TARGET_REGISTRY
Note that TARGET_REGISTRY, used without any environment variable expansion, is just a plain string that is replaced later with the actual image registry URL when it is known to you.
Recommended: Run the following command to list all the images that are mirrored and the publicly accessible registries from which those images are pulled:
oc-ibm_pak describe $CASE_NAME --version $CASE_VERSION --list-mirror-images
Tip: Note down the Registries found section at the end of the output from the preceding command. Log in to those registries so that the images can be pulled and mirrored to your local target registry. See the next steps on authentication.
Example ~/.ibm-pak directory structure
The ~/.ibm-pak directory structure is built over time as you save CASEs and mirror images. The following tree shows an example of the ~/.ibm-pak directory structure with CASE version 1.11.0:
[root@bastion ~]# tree ~/.ibm-pak
.
├── config
│ └── config.yaml
├── data
│ ├── cases
│ │ └── ibm-netcool-prod
│ │ └── 1.11.0
│ │ ├── caseDependencyMapping.csv
│ │ ├── charts
│ │ ├── component-set-config.yaml
│ │ ├── ibm-cloud-native-postgresql-4.17.0-airgap-metadata.yaml
│ │ ├── ibm-cloud-native-postgresql-4.17.0-charts.csv
│ │ ├── ibm-cloud-native-postgresql-4.17.0-images.csv
│ │ ├── ibm-cloud-native-postgresql-4.17.0.tgz
│ │ ├── ibm-netcool-prod-1.11.0-airgap-metadata.yaml
│ │ ├── ibm-netcool-prod-1.11.0-charts.csv
│ │ ├── ibm-netcool-prod-1.11.0-images.csv
│ │ ├── ibm-netcool-prod-1.11.0.tgz
│ │ └── resourceIndexes
│ │ ├── ibm-cloud-native-postgresql-resourcesIndex.yaml
│ │ └── ibm-netcool-prod-resourcesIndex.yaml
│ ├── mirror
│ │ └── ibm-netcool-prod
│ │ └── 1.11.0
│ │ ├── catalog-sources-linux-amd64.yaml
│ │ ├── catalog-sources.yaml
│ │ ├── image-content-source-policy.yaml
│ │ ├── images-mapping-from-filesystem.txt
│ │ └── images-mapping-to-filesystem.txt
│ ├── online
│ └── publish
├── logs
│ └── oc-ibm_pak.log
└── oc-mirror-storage
3.2. Authenticating the registry
Log in to the registries to generate an authentication file that contains the registry credentials, and then create an environment variable that has the location of the authentication file. This file is used later to enable the oc image mirror command to pull the images from the IBM Entitled Registry and push them to the target registry.
- Get the authentication credentials for the IBM Entitled Registry.
  - To obtain the entitlement key that is assigned to your IBMid, log in to MyIBM Container Software Library with the IBMid and password details that are associated with the entitled software.
  - In the Entitlement keys section, select Copy key to copy the entitlement key.
  - Run the following command to create an environment variable that contains your entitlement key:
export ENTITLED_REGISTRY_PASSWORD=<key>
Where <key> is the entitlement key that you copied in the previous step.
- Store the authentication credentials for the IBM Entitled Registry and the target registry.
If you are using Podman, run the following commands:
podman login cp.icr.io -u cp -p $ENTITLED_REGISTRY_PASSWORD
podman login $TARGET_REGISTRY -u $TARGET_REGISTRY_USER -p $TARGET_REGISTRY_PASSWORD
export REGISTRY_AUTH_FILE=${XDG_RUNTIME_DIR}/containers/auth.json
unset ENTITLED_REGISTRY_PASSWORD
Note: The authentication file is usually at ${XDG_RUNTIME_DIR}/containers/auth.json. For more information, see the Options section in the Podman documentation.
If you are using Docker, run the following commands:
docker login cp.icr.io -u cp -p $ENTITLED_REGISTRY_PASSWORD
docker login $TARGET_REGISTRY -u $TARGET_REGISTRY_USER -p $TARGET_REGISTRY_PASSWORD
export REGISTRY_AUTH_FILE=$HOME/.docker/config.json
unset ENTITLED_REGISTRY_PASSWORD
Note: The authentication file is usually at $HOME/.docker/config.json. For more information, see the Docker documentation.
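As a quick check that both logins landed in the same authentication file, you can list the recorded registry entries. This sketch assumes the jq utility is installed on your host:
# Expect cp.icr.io and your target registry in the output.
jq -r '.auths | keys[]' "$REGISTRY_AUTH_FILE"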
3.3. Mirror images to the file system
Complete these steps to mirror the images from the internet to a file system on your portable device.
- Create an environment variable to store the location of the file system where the images are to be stored:
export IMAGE_PATH=<image-path>
Where <image-path> is the directory where you want the images to be stored.
- Run the following command to mirror the images from the IBM Entitled Registry to the file system. The nohup UNIX command is used to ensure that the mirroring process continues, even if there is a loss of network connection. Redirection of output to a file provides improved monitoring and error visibility.
nohup oc image mirror \
  -f ~/.ibm-pak/data/mirror/$CASE_NAME/$CASE_VERSION/images-mapping-to-filesystem.txt \
  -a $REGISTRY_AUTH_FILE \
  --filter-by-os '.*' \
  --insecure \
  --skip-multiple-scopes \
  --dir "$IMAGE_PATH" \
  --max-per-registry=1 > my-mirror-progress.txt 2>&1 &
Run the following command if you want to see the progress of the mirroring:
tail -f my-mirror-progress.txt
Note: If an error occurs during mirroring, the mirror command can be rerun.
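After the mirroring process completes, a simple hedged check of the progress file for failures. The grep pattern is a rough heuristic, not an exhaustive error filter:
grep -iE 'error|unable|failed' my-mirror-progress.txt || echo "no errors found in the mirroring log"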
3.4. Set up the file system in the air-gapped environment
- Copy files to the air-gapped environment (portable storage device only)
If you are using a portable storage device, copy the files from the portable device to a local compute device, which has access to the target registry. If you are using a portable compute device, these items are already present and you can proceed to the next step.
Copy the following items to your local compute device:- The file system that is located at $IMAGE_PATH, which you specified earlier.
- The ~/.ibm-pak directory.
- Disconnect the device that has your file system (the portable compute device or the local compute device) from the internet, and connect it to the air-gapped environment.
- Ensure that environment variables are set on the device in the air-gapped environment that has access to the target registry.
If you are using a portable storage device, set the following environment variables on your local compute device within the air-gapped environment.
If you are using a portable compute device that you restarted since mirroring the images, your environment variables are lost and you must set them on your portable compute device again.
export CASE_NAME=ibm-netcool-prod
export CASE_VERSION=1.11.0
export TARGET_REGISTRY_HOST=<IP_or_FQDN_of_TARGET_registry>
export TARGET_REGISTRY_PORT=<port_number_of_TARGET_registry>
export TARGET_REGISTRY=$TARGET_REGISTRY_HOST:$TARGET_REGISTRY_PORT
export TARGET_REGISTRY_USER=<username>
export TARGET_REGISTRY_PASSWORD=<password>
export IMAGE_PATH=<image_path>
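A brief sanity check that the copied artifacts are where the later steps expect them, assuming you copied both the image file system and the ~/.ibm-pak directory:
du -sh "$IMAGE_PATH"                                  # the mirrored image file system
ls ~/.ibm-pak/data/mirror/$CASE_NAME/$CASE_VERSION    # expect the images-mapping-*.txt files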
3.5. Authenticate with the target registry
Authenticate with the target registry in the air-gapped environment that you will be mirroring the images into.
If you are using Podman, run the following commands:
podman login $TARGET_REGISTRY -u $TARGET_REGISTRY_USER -p $TARGET_REGISTRY_PASSWORD
export REGISTRY_AUTH_FILE=${XDG_RUNTIME_DIR}/containers/auth.json
Note: The authentication file is usually at ${XDG_RUNTIME_DIR}/containers/auth.json. For more information, see the Options section in the Podman documentation.
If you are using Docker, run the following commands:
docker login $TARGET_REGISTRY -u $TARGET_REGISTRY_USER -p $TARGET_REGISTRY_PASSWORD
export REGISTRY_AUTH_FILE=$HOME/.docker/config.json
Note: The authentication file is usually at $HOME/.docker/config.json. For more information, see the Docker documentation.

3.6. Mirror the images to the target registry from the file system
Complete the steps in this section on the device that has your file system (the portable compute device or the local compute device) to copy the images from the file system to the $TARGET_REGISTRY. Your device with the file system must be connected to both the target registry and the Red Hat OpenShift Container Platform cluster.
- Run the following command to copy the images that are referenced in the images-mapping-from-filesystem.txt file from the $IMAGE_PATH file system to the final target registry:
nohup oc image mirror \
-f ~/.ibm-pak/data/mirror/$CASE_NAME/$CASE_VERSION/images-mapping-from-filesystem.txt \
--from-dir "$IMAGE_PATH" \
-a $REGISTRY_AUTH_FILE \
--filter-by-os '.*' \
--insecure \
--skip-multiple-scopes \
--max-per-registry=1 > my-mirror-progress2.txt 2>&1 &
The UNIX command nohup is used to ensure that the mirroring process continues even if there is a loss of network connection, and redirection of output to a file provides improved monitoring and error visibility.
Run the following command if you want to see the progress of the mirroring:
tail -f my-mirror-progress2.txt
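An optional spot check that images actually arrived, using the standard Docker V2 registry API. The -k option is shown on the assumption that the registry might use a self-signed certificate:
curl -k -u "$TARGET_REGISTRY_USER:$TARGET_REGISTRY_PASSWORD" https://$TARGET_REGISTRY/v2/_catalog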
3.7. Configure the cluster
- Log in to your Red Hat OpenShift Container Platform cluster.
You can identify your specific oc login command by clicking the user menu in the upper left corner of the Red Hat OpenShift Container Platform console, and then clicking Copy Login Command. Example:
oc login <server> -u <cluster username> -p <cluster pass>
- Update the global image pull secret for your Red Hat OpenShift Container Platform cluster. Follow the steps in the Red Hat OpenShift Container Platform documentation topic Updating the global cluster pull secret. These steps enable your cluster to have authentication credentials in place to pull images from your TARGET_REGISTRY, as specified in the image-content-source-policy.yaml file, which you will apply to your cluster in the next step.
- Create the ImageContentSourcePolicy. Run the following command to create an ImageContentSourcePolicy (ICSP):
oc apply -f ~/.ibm-pak/data/mirror/$CASE_NAME/$CASE_VERSION/image-content-source-policy.yaml
- Verify that the ImageContentSourcePolicy resource is created:
oc get imagecontentsourcepolicy
Example output, showing a newly created ibm-netcool-prod ICSP:
NAME                      AGE
ibm-netcool-prod          95s
redhat-operator-index-0   5d18h
- Complete the steps for registry access.
Insecure registry: If you use an insecure registry, you must add the target registry to the cluster insecureRegistries list. If your offline registry is insecure, complete the following steps for registry access:
  - Log in as kubeadmin with the following command:
oc login -u kubeadmin -p <password> <server's REST API URL>
  - Run the following command to add the target registry to the cluster insecureRegistries list:
oc patch image.config.openshift.io/cluster --type=merge -p '{"spec":{"registrySources":{"insecureRegistries":["'${TARGET_REGISTRY}'"]}}}'
Secure registry: If your offline registry is secure, complete the following steps for registry access. You can add certificate authorities (CA) to the cluster for use when pushing and pulling images. You must have cluster administrator privileges. You must have access to the public certificates of the registry, usually a hostname/ca.crt file located in the /etc/docker/certs.d/ directory.
  - Log in as kubeadmin with the following command:
oc login -u kubeadmin -p <password> <server's REST API URL>
  - Create a configmap in the openshift-config namespace containing the trusted certificates for the registries that use self-signed certificates. For each CA file, ensure that the key in the configmap is the hostname of the registry, in the hostname[..port] format. To create the certificate authorities registry configmap, run the following command from your local registry server:
oc create configmap registry-cas -n openshift-config --from-file=$TARGET_REGISTRY_HOST..$TARGET_REGISTRY_PORT=/etc/docker/certs.d/myregistry.corp.com:$TARGET_REGISTRY_PORT/ca.crt
  - Update the cluster image configuration:
oc patch image.config.openshift.io/cluster --patch '{"spec":{"additionalTrustedCA":{"name":"registry-cas"}}}' --type=merge
- Verify your cluster node status:
oc get MachineConfigPool -w
Note: After the ImageContentSourcePolicy and global image pull secret are applied, the configuration of your nodes is updated sequentially. Wait until all the MachineConfigPools are updated before you proceed to the next step. For a way to block until the update completes, see the sketch after this list.
are updated before you proceed to the next step. - Configure a network policy for the IBM Netcool Operations Insight on Red Hat OpenShift routes.
Some Red Hat OpenShift Container Platform clusters have extra network policies that are configured to secure pod communication traffic, which can block external traffic and communication between projects (namespaces). Extra configuration is then required to configure a network policy for the IBM Netcool Operations Insight on Red Hat OpenShift routes and allow external traffic to reach the routes.
Run the following command to updateendpointPublishingStrategy.type
in youringresscontroller
if it is set toHostNetwork
, to allow traffic.if [ $(oc get ingresscontroller default -n openshift-ingress-operator -o jsonpath='{.status.endpointPublishingStrategy.type}') = "HostNetwork" ]; then oc patch namespace default --type=json -p '[{"op":"add","path":"/metadata/labels","value":{"network.openshift.io/policy-group":"ingress"}}]'; fi
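As referenced in the node-status step above, a minimal sketch that blocks until every MachineConfigPool reports the Updated condition. The 30-minute timeout is an assumption; adjust it for your cluster:
oc wait machineconfigpool --all --for=condition=Updated --timeout=30m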
4. Install IBM Netcool Operations Insight on Red Hat OpenShift Container Platform
After your images are mirrored to your air-gapped environment, you can deploy IBM Netcool Operations Insight on Red Hat OpenShift to that environment. When you mirrored your environment, you created a parallel offline version of everything that you needed to install an operator into Red Hat OpenShift Container Platform. To install IBM Netcool Operations Insight on Red Hat OpenShift, complete the following steps:
4.1 Create the catalog source
Important: Before you run the oc-ibm_pak launch command, you must be logged in to your cluster. Using the oc login command, log in to the Red Hat OpenShift Container Platform cluster of your final location. You can identify your specific oc login command by clicking the user drop-down menu in the Red Hat OpenShift Container Platform console, then clicking Copy Login Command.
- Set the project (namespace) to install the catalog, as in the following example. You can also set the project to a custom namespace.
export CATALOG_NAMESPACE=openshift-marketplace
Note: For SingleNamespace mode, only a single instance of the catalog source on the cluster and a single instance of noihybrid on the cluster are supported.
- Create and configure the catalog sources:
export CASE_INVENTORY_SETUP=noiOperatorSetup
oc-ibm_pak launch \
  $CASE_NAME \
  --version $CASE_VERSION \
  --action install-catalog \
  --inventory $CASE_INVENTORY_SETUP \
  --namespace openshift-marketplace \
  --args "--registry $TARGET_REGISTRY/cpfs --recursive \
  --inputDir ~/.ibm-pak/data/cases/$CASE_NAME/$CASE_VERSION"
- Verify that the CatalogSource is installed:
oc get pods -n openshift-marketplace
oc get catalogsource -n openshift-marketplace
- SingleNamespace mode: When the catalog source is created, choose the namespace where you want to deploy the NOI operator and the NOI operands. Ensure that these namespaces are already created. Run the following command to create the ibm-noi-catalog-group operator group:
cat << EOF | oc create -f -
apiVersion: operators.coreos.com/v1alpha2
kind: OperatorGroup
metadata:
  name: ibm-noi-catalog-group
  namespace: $NAMESPACE
spec:
  targetNamespaces:
  - $TARGET_NAMESPACE
EOF
Note: namespace: $NAMESPACE is the namespace where the operators are deployed, for example netcool-operator. targetNamespaces: - $TARGET_NAMESPACE is the namespace where the operands are deployed, which is an instance of noihybrid, for example netcool-operand.
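A quick hedged check that the catalog source is ready and the operator group landed in the operator namespace:
oc get catalogsource -n $CATALOG_NAMESPACE    # the NOI catalog source should be listed
oc get operatorgroup -n $NAMESPACE            # expect ibm-noi-catalog-group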
4.2. Create the operator
- Create the operator by running the following command:
oc-ibm_pak launch \
  $CASE_NAME \
  --version $CASE_VERSION \
  --namespace $NAMESPACE \
  --inventory $CASE_INVENTORY_SETUP \
  --action install-operator \
  --args "--registry $TARGET_REGISTRY --recursive \
  --inputDir $IBMPAK_HOME/.ibm-pak/data/cases/$CASE_NAME/$CASE_VERSION"
Where $NAMESPACE is the custom namespace to be used for your deployment. For SingleNamespace mode, set netcool-operand as the namespace.
4.3. Create the target-registry-secret
- Create the target-registry-secret by running the following command:
oc create secret docker-registry target-registry-secret \
  --docker-server=$TARGET_REGISTRY \
  --docker-username=$TARGET_REGISTRY_USER \
  --docker-password=$TARGET_REGISTRY_PASSWORD \
  --namespace=$TARGET_NAMESPACE
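To confirm that the pull secret exists in the operand namespace before you create the instance, a simple check:
oc get secret target-registry-secret -n $TARGET_NAMESPACE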
4.4. Create an air-gapped instance of IBM Netcool Operations Insight on Red Hat OpenShift
- From the Red Hat OpenShift Container Platform OLM UI, go to the installed operators view and select IBM Cloud Pak for AIOps Event Manager. Under the operator's provided APIs, select Create Instance.
- From the Red Hat OpenShift Container Platform OLM UI, use the YAML view or the Form view to configure the properties for the IBM Netcool Operations Insight on Red Hat OpenShift deployment. For more information about configurable properties for a hybrid deployment, see Hybrid operator properties.
CAUTION: Ensure that the name of the Netcool Operations Insight instance does not exceed 10 characters.
Enter the following values:
  - Name: Specify the name that you want your Netcool Operations Insight instance to be called.
  - License: Expand the License section and read the agreement. Toggle the License Acceptance switch to True to accept the license.
  - Size: Select the size that you require for your Netcool Operations Insight installation.
  - storageClass: Specify the storage class. Check which storage classes are configured on your cluster by using the oc get sc command. For more information about storage, see Hybrid storage.
- Edit the IBM Netcool Operations Insight on Red Hat OpenShift properties to provide access to the target registry. Set spec.entitlementSecret to the target registry secret.
- Select Create.
- Under the All Instances tab, an IBM Netcool Operations Insight on Red Hat OpenShift instance appears.
To monitor the status of the installation, see Monitoring installation progress.
Go to the All Instances tab and check that the status of your Netcool Operations Insight instance is Phase: OK. This status means that IBM Cloud Pak for AIOps Event Manager started and is starting up the various pods.
Note:
- Changing an existing deployment from a Trial deployment type to a Production deployment type is not supported.
- Changing an instance's deployment parameters in the Form view is not supported post deployment.
- If you update custom secrets in the OLM console, the crypto key is corrupted and the command to encrypt passwords does not work. Update custom secrets only with the CLI. For more information about storing a certificate as a secret, see Configuring observer job security.
- For a SingleNamespace mode deployment, disable the healthcron cronjob. Run the following command, where <release-name> is the name of the hybrid release:
oc patch cronjob <release-name>-healthcron -p '{"spec":{"suspend": true}}'
What to do next
To enable a feature or observer after installation, run the relevant command from the following list, where $noi_instance_name is the name of your Netcool Operations Insight instance and $NAMESPACE is the namespace where the instance is deployed:
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/datadog", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/netDisco", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/aaionap", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/alm", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/ansibleawx", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/appdynamics", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/aws", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/azure", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/bigcloudfabric", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/bigfixinventory", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/cienablueplanet", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/ciscoaci", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/contrail", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/dns", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/docker", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/dynatrace", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/file", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/gitlab", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/googlecloud", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/hpnfvd", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/ibmcloud", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/itnm", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/jenkins", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/junipercso", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/kubernetes", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/newrelic", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/openstack", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/rancher", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/rest", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/sdconap", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/servicenow", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/sevone", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/taddm", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/viptela", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/vmvcenter", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/vmwarensx", "value": 'true' }]'
oc patch noihybrid $noi_instance_name -n $NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/topology/observers/zabbix", "value": 'true' }]'
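A hedged verification sketch: after patching, confirm that the flag was applied to the instance spec. The datadog observer is used as the example here; substitute the path that you patched:
oc get noihybrid $noi_instance_name -n $NAMESPACE -o jsonpath='{.spec.topology.observers.datadog}'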