Installing your IBM Cloud Pak by mirroring images directly to a private container registry
If your OpenShift® Container Platform cluster is not connected to the internet, you can install your IBM Cloud Pak® by mirroring the software images to a private container registry that is accessible from the cluster.
This type of restricted network environment is also known as an air-gapped environment. You can use a host (also called a bastion host) that is connected to the internet and to the private container registry to mirror the images from the IBM Cloud® Container Registry. This mirroring action is known as connected mirroring.
Overview
1. Set up your mirroring environment
1.1 Prerequisites
- Install OpenShift Container Platform on your cluster.
  For the supported OpenShift Container Platform versions, see Software requirements.
  For information about how to install OpenShift Container Platform, see Installing Red Hat software.
- Configure an LDAP connection for your Red Hat OpenShift cluster. After you connect an LDAP directory to your cluster, you can add users and groups from your LDAP directory into your cluster.
  For information about how to connect to your LDAP directory, see Configuring an LDAP connection in the IBM Cloud Paks documentation.
  If you want to use SAML or OIDC, they are available with IBM Cloud Pak foundational services. For more information, see IBM Cloud Pak foundational services Authentication types.
  If you want to use object-based access control (OBAC), see Managing object groups.
- Configure storage on your cluster.
- A Docker V2 container registry must be available and accessible from the OpenShift Container Platform cluster nodes. This registry is used to mirror the images to their final location. For more information, see 1.3 Prepare the private container registry.
- Provide access to the following sites and ports:
  - *.docker.io and *.docker.com. For more information about specific sites to allow access to, see Docker Hub Hosts for Firewalls and HTTP Proxy Servers.
  - icr.io, cp.icr.io, dd0.icr.io, dd2.icr.io, dd4.icr.io, and dd6.icr.io for IBM Cloud Container Registry, CASE OCI artifact, and IBM Cloud Pak foundational services catalog sources. If you are located in China, you must also allow the following hosts: dd1-icr.ibm-zh.com, dd3-icr.ibm-zh.com, dd5-icr.ibm-zh.com, and dd7-icr.ibm-zh.com.
  - *.quay.io and quay.io, or all of quay.io, cdn.quay.io, cdn01.quay.io, cdn02.quay.io, and cdn03.quay.io, on ports 443 and 80, for the IBM Cloud Pak foundational services catalog and images.
  - github.com for CASEs and tools.
  - redhat.com. For more information about specific sites to allow access to, see Configuring your firewall (Red Hat OpenShift Container Platform 4.16).
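Before you start mirroring, you can confirm that the bastion host can reach the required sites. The following is a minimal sketch that checks a representative subset of the hosts listed above; adjust the host list and any proxy settings for your environment.
for host in icr.io cp.icr.io quay.io github.com; do
  # curl exits non-zero if a TLS connection cannot be established
  if curl -sS -o /dev/null --connect-timeout 10 "https://${host}"; then
    echo "reachable:   ${host}"
  else
    echo "unreachable: ${host}"
  fi
done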
1.2 Prepare a host for mirroring the images
Prepare a bastion host that can access the Red Hat OpenShift cluster, the private container registry, and the internet. The bastion host must be on a Linux x86_64 platform, or any operating system that the Red Hat OpenShift CLI supports. The bastion host locale must be set to English.
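For example, on a Red Hat Enterprise Linux bastion host you can check and set the locale as follows. This is a minimal sketch; the exact commands depend on your operating system, and en_US.UTF-8 is only an example English locale.
# Show the current locale settings
locale
# Set an English locale (systemd-based distributions such as RHEL)
sudo localectl set-locale LANG=en_US.UTF-8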
The following table shows the software that you must install on the bastion host to mirror the IBM Cloud Pak images.
Software | Purpose |
---|---|
Docker | Container management |
Podman | Container management |
Red Hat® OpenShift CLI (oc) | Red Hat OpenShift Container Platform administration |
oc ibm-pak | IBM Catalog Management Plug-in for IBM Cloud Paks |
Skopeo | Working with container images and registries |
OpenSSL | Validating certificates when you run the installation scripts |
Apache httpd-tools | Creating an account when you run the air-gapped install scripts |
- Install Docker or Podman.
  - To install Docker (for example, on Red Hat Enterprise Linux®), run the following commands:
    yum check-update
    yum install docker
  - To install Podman, see Installing Podman.
- Install the oc Red Hat OpenShift CLI tool. See Getting started with the OpenShift CLI (Red Hat OpenShift Container Platform 4.16). If you are using a different version of Red Hat OpenShift, select the appropriate version on the Red Hat OpenShift documentation page.
- Install the IBM Catalog Management Plug-in by completing the following steps:
  - Download the latest version of the plug-in from this location: IBM Catalog Management plug-in.
  - Install the plug-in by running the following commands:
    tar zxvf oc-ibm_pak-linux-amd64.tar.gz
    chmod 775 oc-ibm_pak-linux-amd64
    mv oc-ibm_pak-linux-amd64 /usr/local/bin/oc-ibm_pak
  - To confirm that the ibm-pak plug-in is installed, run the following command:
    oc ibm-pak
- Install the skopeo CLI 1.0.0 or later. See Installing Skopeo.
- Install OpenSSL. For more information about which version to install, see Software requirements.
- Install httpd-tools:
  yum install httpd-tools
- Download jq from this location: jq version 1.6.
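After you install the tools, you can quickly confirm that they are all available on the bastion host. The following is a minimal sketch; htpasswd is provided by the httpd-tools package, and the list assumes that you installed Podman rather than Docker.
for cmd in oc podman skopeo openssl htpasswd jq; do
  if command -v "$cmd" >/dev/null 2>&1; then
    echo "found:   $cmd"
  else
    echo "missing: $cmd"
  fi
done
# The ibm-pak plug-in is invoked through oc, so verify it separately
oc ibm-pak --help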
1.3 Prepare the private container registry
A local Docker registry must be used to store your images in your air-gapped environment. You might have one or more centralized, corporate registry servers to store production container images. If a local registry is not already available, a production-grade registry must be installed and configured. To access your registries during an air-gapped installation, use an account that can write to the target local registry. To access your registries at runtime, use an account that can read from the target local registry.
The registry must meet the following requirements:
- Supports Docker Manifest V2, Schema 2.
- Is accessible from both the bastion host and your Red Hat OpenShift cluster nodes.
- Allows path separators in the image name.
- Has the credentials of a user who can write to the target registry from the bastion host.
- Has the credentials of a user who can read from the target registry that is on the Red Hat OpenShift cluster nodes.
- Has sufficient storage to hold all the software images that are transferred to the local registry.
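To confirm that the registry is reachable and answers on the Docker Registry V2 API, you can query it from the bastion host. This is a minimal sketch; myregistry.com:5000 and the credentials are placeholders for your own registry.
# Add -k if the registry uses a self-signed certificate
curl -u <registry_user>:<registry_password> https://myregistry.com:5000/v2/_catalog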
2. Set environment variables and download CASE files
There is one CASE package for each product component and dependency in IBM Cloud Pak for Network Automation Orchestration Manager. The CASE packages contain metadata about the component, including the container images that are needed to deploy the component and information about its dependencies. Each CASE package also contains the scripts that are needed to mirror images to the private container registry, and to configure the target cluster to use the private registry as a mirror.
Before you can mirror the images to your private container registry, you must download the CASE packages for the software that you plan to install.
If your bastion host connects to the internet through a proxy server, set the following environment variables on the host:
export https_proxy=http://proxy-server-hostname:port
export http_proxy=http://proxy-server-hostname:port
For example:
export https_proxy=http://server.proxy.xyz.com:5018
export http_proxy=http://server.proxy.xyz.com:5018
- Create the following environment variables with the installer image name and the image inventory on your bastion host:
  export CASE_NAME=ibm-tnc-orchestration
  export CASE_VERSION=2.7.6
  export CASE_ARCHIVE=${CASE_NAME}-${CASE_VERSION}.tgz
  export CASE_INVENTORY_SETUP=orchestrationOperatorSetup
- Connect your host to the internet and disconnect it from the local air-gapped network.
- Download the image inventory for your IBM Cloud Pak to your host by running the following command:
  oc ibm-pak get $CASE_NAME --version $CASE_VERSION
  Tip: Specify the CASE version. Otherwise, the latest version is downloaded.
- Verify that the images are downloaded successfully by running the following command:
  oc ibm-pak list --downloaded
  If the images are downloaded successfully, a message like this is shown:
  * Downloaded CASEs *
  CASE                    Current Version   Latest Version
  ibm-tnc-orchestration   2.7.6             2.7.6
3. Mirror the images to your private container registry
3.1 Generate mirror manifests
- Define the $TARGET_REGISTRY environment variable by running the following command:
  export TARGET_REGISTRY=<target-registry>
  The <target-registry> refers to the private container registry (hostname and port) where your images are mirrored to and accessed by the oc cluster. For example:
  export TARGET_REGISTRY=myregistry.com:5000
- Generate mirror manifests by running the following command:
  oc ibm-pak generate mirror-manifests $CASE_NAME $TARGET_REGISTRY --version $CASE_VERSION
  If the mirror manifests are generated successfully, a message like this is shown:
  Generating mirror manifests of CASE: ibm-tnc-orchestration, version: 2.7.6 is complete
  The following files are generated at ~/.ibm-pak/data/mirror/$CASE_NAME/$CASE_VERSION:
  - catalog-sources.yaml
  - catalog-sources-linux-<arch>.yaml (if there are architecture-specific catalog sources)
  - image-content-source-policy.yaml
  - images-mapping.txt
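Before you start the mirroring, it can be useful to review the generated mapping file to see how many images will be copied and where each image is mapped in the target registry. This is an optional sketch that uses standard shell tools.
wc -l ~/.ibm-pak/data/mirror/$CASE_NAME/$CASE_VERSION/images-mapping.txt
head -n 5 ~/.ibm-pak/data/mirror/$CASE_NAME/$CASE_VERSION/images-mapping.txt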
3.2 Authenticate the registries
The IBM Cloud Container Registry (also known as the cp.icr.io registry) is the source registry that stores the software container images. The target registry is the private container registry where the images are mirrored to.
Complete the following steps to authenticate to the source and target registries:
- Export the path to the file that stores the authentication credentials. For example:
export REGISTRY_AUTH_FILE=~/.ibm-pak/auth.json
- Authenticate to the cp.icr.io registry:
  podman login cp.icr.io
  The username is cp and the password is your entitlement key for the IBM Cloud Container Registry. For more information about how to generate your entitlement key, see Obtaining an entitlement key.
- Authenticate to the target registry where your images are mirrored to. Use an account that can write images to the target registry. For example:
  podman login myregistry.com:5000
- Authenticate to the docker.io and quay.io registries:
  podman login docker.io
  podman login quay.io
- Verify that your credentials are added to the credentials file by running the following command:
  cat $REGISTRY_AUTH_FILE
  If your credentials are set, the output looks like this:
  {
    "auths": {
      "myregistry.com:5000": {
        "auth": "Y.................................................WI="
      },
      "cp.icr.io": {
        "auth": "Y3A6ZX........................................GM="
      },
      "docker.io": {
        "auth": "am..................................................="
      },
      "quay.io": {
        "auth": "b3...................................................lW"
      }
    }
  }
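If you prefer not to print the full file, which contains authentication tokens, you can list only the registry hostnames. This is an optional sketch that assumes jq is installed on the bastion host.
jq -r '.auths | keys[]' "$REGISTRY_AUTH_FILE"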
3.3 Mirror the images
- Copy the IBM Cloud Pak images to your local Docker registry by running the following command:
  nohup oc image mirror \
    -f ~/.ibm-pak/data/mirror/$CASE_NAME/$CASE_VERSION/images-mapping.txt \
    -a $REGISTRY_AUTH_FILE \
    --filter-by-os '.*' \
    --insecure \
    --skip-multiple-scopes \
    --max-per-registry=1 \
    --continue-on-error=true > my-mirror-progress.txt 2>&1 &
  Depending on the speed of your network connection, it might take several hours to mirror all the images. The time can vary depending on conditions in the registry's network.
  The continue-on-error parameter is used so that the command continues to mirror the images even when an error occurs. You can run the oc image mirror --help command to see all the command options.
  Tip: You can view the progress and check for errors by running the following command:
  tail -f my-mirror-progress.txt
  If you see errors that indicate a network or bandwidth problem, you can run the nohup oc image mirror command again. When you run the command again, the command attempts to copy only those images that are not already copied.
- Copy the ocp-v4.0-art-dev images from quay.io to your local Docker registry.
  Important: If your deployment of Red Hat OpenShift Container Platform is managed by third party software, you might not need to complete this step. For example, your deployment might be managed by Red Hat software.
  To copy the images to your local Docker registry, complete the following steps:
  - Log in to your OpenShift Container Platform cluster by using the oc CLI.
  - Identify your OpenShift Container Platform server version:
    oc version
- Check that a folder exists for your version in the Red Hat OpenShift mirror registry.
- Create an environment variable with the version. For example:
export OCP_RELEASE="4.12.0"
  - Create an environment variable with the local Docker registry host and port information. For example:
    export LOCAL_DOCKER_REGISTRY="myregistry.com:5000"
  - Download the OpenShift Container Platform client release file:
    wget https://mirror.openshift.com/pub/openshift-v4/clients/ocp/$OCP_RELEASE/release.txt
  - Gather the ocp-v4.0-art-dev image versions for your OpenShift Container Platform release:
    image_shas=`cat release.txt | awk -F"sha256:" '{print $2 }' | sort -u`
  - Log in to quay.io by running the following commands:
    export QUAY_USERNAME=$(oc get secret/pull-secret -n openshift-config -o go-template='{{index .data ".dockerconfigjson" | base64decode}}' | jq -r '.auths."quay.io".auth' | base64 -d | awk -F: '{print $1}')
    export QUAY_PASSWORD=$(oc get secret/pull-secret -n openshift-config -o go-template='{{index .data ".dockerconfigjson" | base64decode}}' | jq -r '.auths."quay.io".auth' | base64 -d | awk -F: '{print $2}')
    podman login quay.io -u $QUAY_USERNAME --password-stdin <<< $QUAY_PASSWORD
    If the login is successful, the following output is shown: Login Succeeded.
  - Copy the images to your local Docker registry:
    for sha in $image_shas; do
      skopeo copy --all --dest-tls-verify=false --src-tls-verify=false \
        docker://quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:$sha \
        docker://${LOCAL_DOCKER_REGISTRY}/openshift-release-dev/ocp-v4.0-art-dev@sha256:$sha \
        --preserve-digests
    done
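After the mirroring completes, you can spot-check that an image arrived in the target registry by inspecting it with Skopeo. This is a minimal sketch; the repository and tag are placeholders, so substitute a real reference from images-mapping.txt.
skopeo inspect --tls-verify=false \
  --authfile "$REGISTRY_AUTH_FILE" \
  docker://${TARGET_REGISTRY}/cp/<repository>:<tag>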
3.4 Configure the cluster
- Log in to your OpenShift Container Platform cluster by using the oc CLI.
- Update the global image pull secret for your Red Hat OpenShift cluster.
  To configure your cluster with the authentication credentials to pull images from the private container registry, follow the steps in Updating the global cluster pull secret (Red Hat OpenShift Container Platform 4.16).
  The mirrors are specified in the image-content-source-policy.yaml file. Ensure that the global pull secret includes credentials for each of the mirrors that are specified in the following example configuration.
  In this example, myregistry.com:5000 is the hostname and port of the private container registry.
  apiVersion: operator.openshift.io/v1alpha1
  kind: ImageContentSourcePolicy
  metadata:
    name: ibm-tnc-orchestration
  spec:
    repositoryDigestMirrors:
    - mirrors:
      - myregistry.com:5000/cp
      source: cp.icr.io/cp
    - mirrors:
      - myregistry.com:5000/cpopen
      source: icr.io/cpopen
    - mirrors:
      - myregistry.com:5000/opencloudio
      source: quay.io/opencloudio
    - mirrors:
      - myregistry.com:5000/openshift-release-dev
      source: quay.io/openshift-release-dev
- Configure an ImageContentSourcePolicy for the images that are listed in the component CASEs. Create the ImageContentSourcePolicy by running the following command:
  oc apply -f ~/.ibm-pak/data/mirror/$CASE_NAME/$CASE_VERSION/image-content-source-policy.yaml
- Verify that the ImageContentSourcePolicy resource is created by running the following command:
  oc get imageContentSourcePolicy
- Verify your cluster node status and wait for all the nodes to be restarted before proceeding. Run the following command:
  oc get MachineConfigPool -w
  After the ImageContentSourcePolicy and the global image pull secret are applied, the configuration of your nodes is updated sequentially. Wait until all MachineConfigPool resources are in the UPDATED=True status before proceeding. A scripted alternative to watching the output is shown in the sketch after this list.
- Create the namespace where you install IBM Cloud Pak for Network Automation. For example, the following commands create a namespace that is called cp4na:
  export NAMESPACE="cp4na"
  oc new-project ${NAMESPACE}
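As an alternative to watching the oc get MachineConfigPool -w output in step 4, you can block until the node update completes. This is a minimal sketch; adjust the timeout to suit the size of your cluster.
oc wait machineconfigpool --all --for=condition=Updated --timeout=60m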
4. Install and configure a certificate manager
Certificate managers manage the lifecycles of digital certificates, such as TLS and SSL certificates. To ensure the stability of network services, certificate managers automatically issue, renew, and deploy these certificates to endpoints. Before you install IBM Cloud Pak for Network Automation, you must install a certificate manager.
The recommended certificate manager for the IBM Cloud Pak is IBM Certificate Manager. For information about how to install IBM Certificate Manager, see Installing IBM Cert Manager. For information about how to configure IBM Certificate Manager resources after installation, including the steps to customize the hardware profile, see Configuring IBM Cert Manager.
Alternatively, you can install the cert-manager operator for Red Hat OpenShift Container Platform. For more information, see Installing the cert-manager Operator for Red Hat OpenShift (Red Hat OpenShift Container Platform 4.16).
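Whichever certificate manager you choose, you can confirm that it is running before you continue. This is a minimal sketch; the pod names and namespace vary by certificate manager and version.
oc get pods --all-namespaces | grep -i cert-manager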
5. Install IBM Cloud Pak for Network Automation
Now that your images are mirrored to your air-gapped environment, you can deploy IBM Cloud Pak for Network Automation to that environment.
5.1 Configure your environment
Complete the following steps on the bastion host:
- Disable the default Operator Sources.
By default, the cluster is configured with operator catalogs that rely on remote content. Run the following commands to disable the catalogs:
  oc patch OperatorHub cluster --type json \
    -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]'
  Important: If your deployment of Red Hat OpenShift Container Platform is managed by third party software, you might not need to complete this step. For example, your deployment might be managed by Red Hat software.
- Extract the files from the CASE package by running the following command:
  tar -xvf ~/.ibm-pak/data/cases/ibm-tnc-orchestration/2.7.6/$CASE_ARCHIVE
  The following directory is created: /root/ibm-tnc-orchestration
- Create a trusted X.509 certificate to connect to the target registry.
  - If you are using a secure local registry, run the following command:
    airgap.sh cluster add-ca-cert --registry ${LOCAL_DOCKER_REGISTRY}
    Important: If your deployment of Red Hat OpenShift Container Platform is managed by third party software, you might not need to run this airgap.sh command. For example, your deployment might be managed by Red Hat software.
  - If you are using an insecure local registry, add the local registry to the insecureRegistries list for your cluster:
    oc patch image.config.openshift.io/cluster --type=merge \
      -p '{"spec":{"registrySources":{"insecureRegistries":["'${LOCAL_DOCKER_REGISTRY}'"]}}}'
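You can confirm that the registry settings were applied by reviewing the cluster image configuration. This is an optional sketch.
oc get image.config.openshift.io/cluster -o yaml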
5.2 Expose metrics for Prometheus
Prometheus is a monitoring and alerting toolkit and is deployed by default on Red Hat OpenShift Container Platform clusters. Before the orchestration metrics can be collected and stored in Prometheus, you must expose the metric endpoints to Prometheus.
To enable Prometheus to collect the metrics, you must deploy the cluster-monitoring-config and user-workload-monitoring-config configmaps, as documented in the following steps.
- Expose metrics for Prometheus with the OpenShift Container Platform console.
- Click the Import YAML plus (+) icon in the console toolbar to open the Import YAML page.
- Paste the following text:
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: cluster-monitoring-config
      namespace: openshift-monitoring
    data:
      config.yaml: |
        enableUserWorkload: true
    ---
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: user-workload-monitoring-config
      namespace: openshift-user-workload-monitoring
    data:
      config.yaml: |
- Click Create.
- Expose metrics for Prometheus with the Red Hat OpenShift CLI.
- Create a YAML file and add the following configuration information:
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: cluster-monitoring-config
      namespace: openshift-monitoring
    data:
      config.yaml: |
        enableUserWorkload: true
    ---
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: user-workload-monitoring-config
      namespace: openshift-user-workload-monitoring
    data:
      config.yaml: |
- Deploy the configmaps by running the following command:
oc apply -f <filename>
- Create a YAML file and add the following configuration information:
For more information about how to configure monitoring for your cluster, see Enabling monitoring for user-defined projects (Red Hat OpenShift Container Platform 4.16).
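After the configmaps are applied, you can confirm that the user workload monitoring stack starts. This is a minimal sketch; the exact pod names vary by OpenShift Container Platform version.
oc get pods -n openshift-user-workload-monitoring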
5.3 Add the catalog sources
The following catalog sources are added to the openshift-marketplace namespace:
- IBM Automation Foundation
- IBM Operator Catalog
To add the catalog sources, complete the following steps:
- Log in to your Red Hat OpenShift cluster as a cluster administrator:
  oc login <cluster host:port> --username=<cluster admin user> --password=<cluster admin password>
- Run the following command to add the catalog sources:
  oc ibm-pak launch $CASE_NAME \
    --version $CASE_VERSION \
    --inventory $CASE_INVENTORY_SETUP \
    --action install-catalog \
    --namespace $NAMESPACE \
    --args "--registry ${LOCAL_DOCKER_REGISTRY} --recursive --inputDir ~/.ibm-pak/data/cases/"
- Verify that the catalog sources are added by running the following command:
  oc get CatalogSources opencloud-operators ibm-tnc-orchestration-catalog \
    -n openshift-marketplace
- Verify that the ibm-tnc-orchestration-catalog is READY:
  oc get CatalogSource ibm-tnc-orchestration-catalog -n openshift-marketplace -o jsonpath='{.status.connectionState.lastObservedState}'
  It might take several minutes before the catalog source is ready. If the command does not return READY, wait a few minutes and try to verify the status again.
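If you want to wait for the catalog source from a script rather than checking manually, you can poll the same jsonpath until it returns READY. This is a minimal sketch that assumes a 30-second retry interval.
until [ "$(oc get CatalogSource ibm-tnc-orchestration-catalog -n openshift-marketplace \
  -o jsonpath='{.status.connectionState.lastObservedState}')" = "READY" ]; do
  echo "Waiting for the catalog source to become READY..."
  sleep 30
done
echo "The catalog source is READY"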
5.4 Install the operator
The Cloud Pak operator is installed by using the Operator Lifecycle Manager (OLM) in Red Hat OpenShift Container Platform.
- Run the following command to install the operator:
  oc ibm-pak launch $CASE_NAME \
    --version $CASE_VERSION \
    --action install-operator \
    --inventory $CASE_INVENTORY_SETUP \
    --namespace $NAMESPACE \
    --args "--registry ${LOCAL_DOCKER_REGISTRY} --inputDir ~/.ibm-pak/data/cases/"
- Verify that the operator is installed by running the following command:
oc get pod -n ${NAMESPACE} -l app.kubernetes.io/component=tnc-orchestration-operator
When the Status value is Running, the operator is installed and running.
5.5 Create the instance with a custom resource
- Create a YAML file and add the custom resource (CR).
The following example CR is used to create an instance of IBM Cloud Pak for Network Automation Orchestration Manager. The license acceptance must be specified. All other keys that are not specified in the CR, such as cpu, memory, and replicas, use the default settings.
Important:The default CR settings for CPU, memory, and replicas are for a starter instance.
  apiVersion: tnc.ibm.com/v1beta1
  kind: Orchestration
  metadata:
    name: <instance_name>
    namespace: <namespace>
  spec:
    license:
      accept: <license_acceptance>
    version: 2.7.6
    featureconfig:
      siteplanner: true
      logging: true
    advanced:
      imagePullPolicy: "Always"
      podSettings:
        zookeeperlocks:
          replicas: 3
      storage:
        kafka:
          storageClassName: rook-ceph-block
        zookeeper:
          storageClassName: rook-ceph-block
        zookeeperlocks:
          storageClassName: rook-ceph-block
Where:
- <instance_name> is the name that you want your instance of IBM Cloud Pak for Network Automation Orchestration Manager to be called.
- <namespace> is the namespace that you created in the step Create the namespace.
- <license_acceptance>: Set to true to accept the license.
Customizing the CR for your environment: In some scenarios, you might want to customize the CR. For example:
- For production environments, you might want to increase the settings for CPU, memory, and replicas for the microservices. For example, you might want to increase the number of pod replicas to three for the Ishtar microservice.
- If the storage class is not set in the CR, the default storage class that is set on the cluster is used. However, if the default storage class is not set on the cluster, you must set the storage class in the CR. For more information about setting the storage class, see Custom resource structure and settings.
- Site Planner is not installed by default. To install this component when the instance is created, set the following key in the custom resource to true:
  featureconfig:
    siteplanner: true
- Application logging is enabled automatically when you install the Cloud Pak. To disable application logging during an installation, set the spec.featureconfig.logging attribute to false as follows:
  spec:
    featureconfig:
      logging: false
- In the CR, in the zenBlock and zenFile storage configuration, set the storage classes.
  - Set the zenBlock.storageClassName attribute to the name of a block storage class that is present on your cluster, such as rook-ceph-block or lvms-vg1.
  - Set the zenFile.storageClassName attribute to the name of a file storage class that is present on your cluster, such as rook-cephfs or lvms-vg1.
- To enable multitenancy mode, set the spec.advanced.multitenant attribute to true. If you enable multitenancy mode and later disable it, the data that users created while they were in tenants is no longer available to them.
attribute to true. If you enable multitenancy mode, then later disable it, the data that is created by users when they were in tenants is no longer available to the users. - Customize the OpenSearch settings.OpenSearch is used to store and index application log data and is installed automatically when you install the Cloud Pak. You can update the default OpenSearch settings, such as the index name, the number of index replicas, and the number of primary index shards, in the
spec.advanced.opensearch
subsection.Important: You cannot easily change the number of primary shards for an index that already contains data. Therefore, configure the number of shards that you need, based on your storage requirements, before you install the Cloud Pak. For more information, see Custom resources. - Create the instance by running the following command, replacing
<filename>
with the file that you created in step 1:oc apply -f <filename>
The instance might take some time to create.
- Run the following command to verify that your instance is successfully created:
  oc get orchestration -n ${NAMESPACE}
  When the Status value is Ready, your instance is created.
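To follow the instance creation as it progresses, you can watch the resource and, if it does not reach Ready, inspect its conditions. This is a minimal sketch; <instance_name> is the name that you set in the CR.
# Watch until the Status column reports Ready; press Ctrl+C to stop
oc get orchestration -n ${NAMESPACE} -w
# If the instance does not become Ready, review the detailed status conditions
oc describe orchestration <instance_name> -n ${NAMESPACE}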
What to do next
- Configure multitenancy
- You can enable multitenancy and configure tenant administrators and users after you install IBM Cloud Pak for Network Automation. For more information, see Configuring multitenancy.
- Log in to IBM Automation
- Log in to the IBM Automation UI to access IBM Cloud Pak for Network Automation features, such as the orchestration and Site Planner components.
- Deploy resource drivers
- Before you can use the orchestration component to automate your lifecycle processes, you must deploy the resource drivers. Resource drivers run lifecycle and operation requests from the orchestration component. See Resource drivers.