Installing Infrastructure Automation in an air-gapped environment (offline) using a portable device

If your cluster is not connected to the internet, you can install Infrastructure Automation on your Red Hat® OpenShift® Container Platform cluster by using a portable compute device such as a laptop, or a portable storage device such as a USB device.

In this scenario, your air-gapped (offline) environment has a target registry, and a Red Hat OpenShift cluster on which Infrastructure Automation is to be installed. Infrastructure Automation images are mirrored from the internet to a file system on a portable compute device or a portable storage device. The portable device is then disconnected from the internet and connected to the offline environment, where the images are loaded into the target registry. Infrastructure Automation can then be installed in the offline environment by using the target registry.

Before you begin

Important: The following procedure is based on a Red Hat OpenShift 4.14 environment and includes links for that version. If your environment uses a different supported version of Red Hat OpenShift, ensure that you follow the Red Hat OpenShift documentation for that version.

Installation procedure

  1. Set up the mirroring environment
  2. Download the CASE
  3. Mirror images
  4. Configure storage
  5. Install Infrastructure Automation

1. Set up the mirroring environment

Prerequisites

Allow access to the following sites and ports:

Table 1. Sites and ports that must be accessible
Site Description
icr.io
cp.icr.io
dd0.icr.io
dd2.icr.io
dd4.icr.io
dd6.icr.io
Allow access to these hosts on port 443 to enable access to the IBM Cloud Container Registry, CASE OCI artifact, and IBM Cloud Pak® foundational services catalog source.
dd1-icr.ibm-zh.com
dd3-icr.ibm-zh.com
dd5-icr.ibm-zh.com
dd7-icr.ibm-zh.com
If you are located in China, also allow access to these hosts on port 443.
github.com GitHub hosts the CASE files, and IBM Cloud Pak tools and scripts.
redhat.com Red Hat registries that are required for Red Hat OpenShift, and for Red Hat OpenShift upgrades.
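Before you begin mirroring, you can verify outbound connectivity to these hosts from the device that performs the download. The following sketch uses bash's /dev/tcp feature and the coreutils timeout command; the host list matches the table above (add the China hosts if they apply to you):

```shell
# Check that each required host is reachable on port 443.
HOSTS="icr.io cp.icr.io dd0.icr.io dd2.icr.io dd4.icr.io dd6.icr.io github.com redhat.com"

reachable() {
  # Open a TCP connection to $1:443, giving up after 3 seconds.
  timeout 3 bash -c "exec 3<>/dev/tcp/$1/443" 2>/dev/null
}

for h in $HOSTS; do
  if reachable "$h"; then echo "OK   $h"; else echo "FAIL $h"; fi
done
```

Any host that reports FAIL must be opened in your firewall or proxy before the download steps can succeed.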

1.1 Download documentation for offline access

Download the following documents that you might need to access during your Infrastructure Automation installation, and copy them to your air-gapped environment.

  1. IBM Cloud Pak for AIOps 4.6.1 documentation

    Download the IBM Cloud Pak for AIOps 4.6.1 PDF (this documentation) so that you can access it offline.

  2. Red Hat OpenShift documentation

    The Red Hat OpenShift documentation can be downloaded for offline access from Red Hat. The Security and compliance, Installing, CLI Tools, Images, and Operators sections are referenced by this documentation.

1.2 Install and configure Red Hat OpenShift

Infrastructure Automation requires Red Hat OpenShift to be installed and running on your target cluster. You must have administrative access to your Red Hat OpenShift cluster.

For more information about the supported versions of Red Hat OpenShift, see Supported Red Hat OpenShift Container Platform versions.

  1. Install Red Hat OpenShift by using the instructions in the Red Hat OpenShift documentation. Information about installing a cluster in a restricted network is given in Mirroring images for a disconnected installation.

  2. Install the Red Hat OpenShift command line interface (oc) on your cluster's boot node and run oc login. For more information, see the instructions in Getting started with the Red Hat OpenShift CLI.

  3. Ensure that the clocks on your Red Hat OpenShift cluster are synchronized. Each Red Hat OpenShift node in the cluster must have access to an NTP server, which the nodes use to synchronize their clocks. Infrastructure Automation runs on Red Hat OpenShift and has the same requirement: discrepancies between the clocks on the Red Hat OpenShift nodes can cause Infrastructure Automation to experience operational issues. See the Red Hat OpenShift documentation for information about how to use a MachineConfig custom resource to configure chrony to connect to your NTP servers.

  4. Optionally configure a custom certificate for Infrastructure Automation to use. You can use either of the following methods:
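For the NTP synchronization in step 3, the Red Hat OpenShift documentation describes applying a MachineConfig that writes /etc/chrony.conf on each node. A minimal sketch for worker nodes follows; ntp.example.com is a placeholder for your own NTP server, and the contents field is the URL-encoded chrony configuration:

```yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 99-worker-chrony
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
        - path: /etc/chrony.conf
          mode: 420
          overwrite: true
          contents:
            # URL-encoded form of:
            #   server ntp.example.com iburst
            #   driftfile /var/lib/chrony/drift
            #   makestep 1.0 3
            #   rtcsync
            source: data:,server%20ntp.example.com%20iburst%0Adriftfile%20%2Fvar%2Flib%2Fchrony%2Fdrift%0Amakestep%201.0%203%0Artcsync%0A
```

Apply the resource with oc apply -f, and repeat with the master role to cover control-plane nodes.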

1.3 Set up a target registry

You must have a local Docker-type production-grade registry available in the air-gapped environment to store the Infrastructure Automation images. The registry must meet the following requirements:

  • Supports Docker Manifest V2, Schema 2.
  • Supports multi-architecture images.
  • Is accessible from the Red Hat OpenShift cluster nodes.
  • Allows path separators in the image name.
  • Provides a username and password for a user who can read from and write to the registry.
  • Has 87 GB of storage to hold all of the software that is to be transferred to the target registry.

If you do not already have a suitable production-grade registry available, then you must install and configure one. For more information, see About the mirror registry in the Red Hat OpenShift documentation.

Important: Do not use the Red Hat OpenShift image registry as your target registry. The Red Hat OpenShift registry does not support multi-architecture images or path separators in the image name.
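You can confirm that your target registry answers the Docker Registry HTTP API V2 before you start mirroring. The following sketch assumes that TARGET_REGISTRY is set as described in section 2 (registry.example.com:5000 is a placeholder used when it is not); an HTTP 200 or 401 response indicates a V2 registry:

```shell
# Probe the /v2/ endpoint of the target registry.
TARGET_REGISTRY="${TARGET_REGISTRY:-registry.example.com:5000}"
V2_URL="https://${TARGET_REGISTRY}/v2/"

# -k skips TLS verification, matching the --insecure flag that the
# oc image mirror commands use later in this procedure.
code=$(curl -ks -o /dev/null -w '%{http_code}' "$V2_URL")
case "$code" in
  200|401) echo "V2 registry detected at $V2_URL" ;;
  *)       echo "unexpected response '$code' from $V2_URL" ;;
esac
```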

1.4 Prepare a host

Prepare your host as follows. The host is either a portable compute device or, if you are using a portable storage device, a compute device that the storage device is connected to.

You must be able to connect your host to the internet. Your host must be on a Linux® x86_64 or Mac platform with any operating system that the IBM Cloud Pak CLI and the Red Hat OpenShift CLI support. If you are on a Windows® platform, you must run the actions in a Linux® x86_64 VM or from a Windows Subsystem for Linux® (WSL) terminal.

Your portable device and any intermediary devices must have 87 GB of storage to hold all the software that is to be transferred to the local target registry.
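A quick way to confirm that a device has enough space is to compare the free space at the download path against the 87 GB requirement. This sketch assumes that IMAGE_PATH points at the directory you will mirror into (it falls back to /tmp here purely for illustration):

```shell
# Verify at least 87 GB free at $IMAGE_PATH (POSIX df output, KiB blocks).
IMAGE_PATH="${IMAGE_PATH:-/tmp}"
required_kb=$((87 * 1024 * 1024))
avail_kb=$(df -Pk "$IMAGE_PATH" | awk 'NR==2 {print $4}')

if [ "$avail_kb" -ge "$required_kb" ]; then
  echo "sufficient space at $IMAGE_PATH"
else
  echo "need 87 GB free at $IMAGE_PATH, only $((avail_kb / 1024 / 1024)) GB available"
fi
```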

Complete the following steps on your host.

  1. Install Podman. For installation instructions, see the Podman Installation Instructions.

    Note: Docker is not shipped or supported for Red Hat Enterprise Linux (RHEL) 8 and RHEL 9. Podman replaced Docker as the preferred, maintained, and supported container runtime for RHEL 8 and 9 systems. For more information, see Running containers without Docker in the Red Hat documentation.

  2. Install the Red Hat OpenShift CLI tool, oc.

    oc is required for Red Hat OpenShift management. For more information, see Getting started with the Red Hat OpenShift CLI in the Red Hat OpenShift documentation.

1.5 Install the IBM Catalog Management Plug-in for IBM Cloud Pak®

The IBM Catalog Management Plug-in for IBM Cloud Pak (ibm-pak-plugin) is used for the deployment of IBM Cloud Paks® in a disconnected environment. It simplifies the process for discovering required IBM product images and uses standard tools for registry and cluster access. The ibm-pak-plugin also extends the Red Hat OpenShift CLI (oc) capability to streamline the process of delivering installation images to the IBM Cloud Pak in an air-gapped environment.

  1. Download and install the most recent version of the ibm-pak-plugin for your host operating system from github.com/IBM.

  2. Run the following command to extract the files.

    tar -xf oc-ibm_pak-linux-amd64.tar.gz
    
  3. Run the following command to move the file to the /usr/local/bin directory.

    mv oc-ibm_pak-linux-amd64 /usr/local/bin/oc-ibm_pak
    

    Note: If you are installing as a non-root user, you must use sudo.

  4. Confirm that ibm-pak-plugin is installed by running the following command.

    oc ibm-pak --help
    

    Expected result: The ibm-pak-plugin usage is displayed.

2. Download the CASE

Set environment variables on the portable device, and connect it to the internet so that you can download the Infrastructure Automation CASE files.

Note: Save a copy of your environment variable values to a file that you can use as a reference when you are completing your air-gapped installation tasks.

  1. Create the following environment variables.

    export IA_CASE_NAME=ibm-ia-installer
    export IA_CASE_VERSION=1.10.1
    export IA_CASE_INVENTORY_SETUP=ibmInfrastructureAutomationOperatorSetup
    export TARGET_REGISTRY_HOST=<IP_or_FQDN_of_target_registry>
    export TARGET_REGISTRY_PORT=<port_number_of_target_registry>
    export TARGET_REGISTRY=$TARGET_REGISTRY_HOST:$TARGET_REGISTRY_PORT
    export TARGET_REGISTRY_USER=<username>
    export TARGET_REGISTRY_PASSWORD=<password>
    

    The target registry is the registry that the Infrastructure Automation images are mirrored to, and accessed from by the Red Hat OpenShift cluster, as set up in 1.3 Set up a target registry.

    If your portable device must connect to the internet through a proxy, then also set the following environment variables.

    export https_proxy=http://proxy-server-hostname:port
    export http_proxy=http://proxy-server-hostname:port
    
  2. Connect your portable device to the internet and disconnect it from the air-gapped environment.

  3. Download the Infrastructure Automation installer and image inventory to your portable device.

    oc ibm-pak get ${IA_CASE_NAME} --version ${IA_CASE_VERSION}
    

    The CASE is downloaded to ~/.ibm-pak/data/cases/$IA_CASE_NAME/$IA_CASE_VERSION. The log files are available at ~/.ibm-pak/logs/oc-ibm_pak.log.

    Note: If you do not specify the CASE version, then the latest CASE is downloaded. The root directory that is used by the ibm-pak-plugin is ~/.ibm-pak. If required, the root directory can be configured by setting the IBMPAK_HOME environment variable.

3. Mirror images

Complete the following steps to mirror the Infrastructure Automation, IBM Cloud Pak foundational services Cert Manager, and IBM Cloud Pak foundational services License Service images from the internet to the file system on your portable device, and then from the file system to the target registry in the air-gapped environment.

3.1 Generate mirror manifests

Run the following command to generate mirror manifests to be used when mirroring the images to the target registry.

oc ibm-pak generate mirror-manifests ${IA_CASE_NAME} file://cpfs --version ${IA_CASE_VERSION} --final-registry ${TARGET_REGISTRY}/cpfs
Table 2. Parameter description
Argument Description
file://cpfs Specifies the path extension where images are mirrored to. Images are mirrored to $IMAGE_PATH/cpfs when the oc image mirror command is run with images-mapping-to-filesystem.txt. For more information, see Mirror the images to the file system.
$TARGET_REGISTRY/cpfs This argument generates an image-mapping file that is used by the oc image mirror commands to mirror images to the TARGET_REGISTRY in the cpfs namespace. For example, if the CASE that you are installing needs the image quay.io/opencloudio/ibm-events-kafka-2.6.0@sha256:10d422dddd29ff19c87066fc6510eee05f5fa4ff608b87a06e898b3b6a3a13c7, then its final URL in your target registry is $TARGET_REGISTRY/cpfs/opencloudio/ibm-events-kafka-2.6.0. Note the new cpfs namespace in the URL. The namespace path can be multi-level if your target registry supports it.

The command generates the following files at ~/.ibm-pak/data/mirror/$CASE_NAME/$CASE_VERSION:

  • images-mapping-to-filesystem.txt
  • images-mapping-from-filesystem.txt
  • image-content-source-policy.yaml
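Each line of the two mapping files pairs a source image reference with its destination, in the src=dst format that oc image mirror accepts. The sketch below fabricates a single hypothetical mapping line to show the shape of the file and how to count the images to be mirrored:

```shell
# Count src=dst mappings in a mirror-manifest mapping file.
count_mappings() {
  grep -c '=' "$1"
}

# Hypothetical example of one line of images-mapping-to-filesystem.txt;
# the image name and digest are illustrative only.
printf '%s\n' \
  'cp.icr.io/cp/example-image@sha256:0123abc=file://cpfs/cp/example-image' \
  > /tmp/demo-mapping.txt

echo "images to mirror: $(count_mappings /tmp/demo-mapping.txt)"
```

Running count_mappings against the real images-mapping-to-filesystem.txt gives a rough idea of how long the mirroring in section 3.3 will take.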

3.2 Authenticate with the IBM Entitled Registry

Log in to the IBM Entitled Registry to generate an authentication file containing the IBM Entitled Registry credentials, and then create an environment variable that has the location of the authentication file. This file is used later to enable the oc image mirror command to pull the images from the IBM Entitled Registry.

  1. Get the authentication credentials for the IBM Entitled Registry.

    1. To obtain the entitlement key that is assigned to your IBMid, log in to MyIBM Container Software Library Opens in a new tab with the IBMid and password details that are associated with the entitled software.

    2. In the Entitlement keys section, select Copy key to copy the entitlement key.

  2. Run the following command to create an environment variable that contains your entitlement key.

    export ENTITLED_REGISTRY_PASSWORD=<key>
    

    Where <key> is the entitlement key that you copied in the previous step.

  3. Store the authentication credentials for the IBM Entitled Registry.

    Run the following command:

    podman login cp.icr.io -u cp -p ${ENTITLED_REGISTRY_PASSWORD}
    export REGISTRY_AUTH_FILE=${XDG_RUNTIME_DIR}/containers/auth.json
    unset ENTITLED_REGISTRY_PASSWORD
    

    Note: The authentication file is usually at ${XDG_RUNTIME_DIR}/containers/auth.json. For more information, see the Options section in the Podman documentationOpens in a new tab.
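After the podman login, the authentication file contains a single entry for cp.icr.io, in roughly the following shape. The auth value is the base64 encoding of cp:<your-entitlement-key>; the value shown here is a placeholder:

```json
{
  "auths": {
    "cp.icr.io": {
      "auth": "<base64 of cp:your-entitlement-key>"
    }
  }
}
```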

3.3 Mirror the images to the file system

Complete these steps to mirror the images from the internet to a file system on your portable device.

  1. Create an environment variable to store the location of the file system where the images are to be stored.

    export IMAGE_PATH=<image-path>
    

    Where <image-path> is the directory where you want the images to be stored.

  2. Run the following command to mirror the images from the IBM Entitled Registry to the file system.

    nohup oc image mirror \
    -f ~/.ibm-pak/data/mirror/${IA_CASE_NAME}/${IA_CASE_VERSION}/images-mapping-to-filesystem.txt \
    -a ${REGISTRY_AUTH_FILE} \
    --filter-by-os '.*' \
    --insecure \
    --skip-multiple-scopes \
    --dir "${IMAGE_PATH}" \
    --max-per-registry=1 > my-mirror-progress.txt 2>&1 &
    

    The UNIX® command nohup ensures that the mirroring process continues even if the terminal session is lost, and redirecting the output to a file provides improved monitoring and error visibility.

    Run the following command if you want to see the progress of the mirroring:

    tail -f my-mirror-progress.txt
    

    Note: If an error occurs during mirroring, the mirror command can be rerun.
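When the mirroring completes, it is worth scanning the progress log for error lines before moving on. The error text varies, so the pattern below is only an illustrative guess; the sketch demonstrates the check against a fabricated clean log:

```shell
# Return 0 (clean) when a mirror progress log contains no error lines.
scan_mirror_log() {
  ! grep -qiE 'error|unable to retrieve' "$1"
}

# Demonstrate against a fabricated log file.
printf 'info: planning mirror\ninfo: done\n' > /tmp/demo-progress.txt
if scan_mirror_log /tmp/demo-progress.txt; then
  echo "log is clean"
else
  echo "errors found -- rerun the oc image mirror command"
fi
```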

3.4 Set up the file system in the air-gapped environment

  1. Copy files to the air-gapped environment (portable storage device only)

    If you are using a portable storage device, you must copy the files from the portable storage device to a local compute device in the air-gapped environment that has access to the target registry. If you are using a portable compute device, then these items are already present, and you can proceed to the next step.

    Copy the following items to your local compute device:

    • The file system that is located at $IMAGE_PATH, which you specified earlier
    • The ~/.ibm-pak directory
  2. Disconnect the device that has your file system (the portable compute device or the local compute device) from the internet, and connect it to the air-gapped environment.

  3. Ensure that environment variables are set on the device in the air-gapped environment that has access to the target registry.

    If you are using a portable storage device, then set the following environment variables on your local compute device within the air-gapped environment.

    If you are using a portable compute device that you restarted since mirroring the images, then your environment variables were lost, and you must set the following environment variables on your portable compute device again.

    export IA_CASE_NAME=ibm-ia-installer
    export IA_CASE_VERSION=1.10.1
    export IA_CASE_INVENTORY_SETUP=ibmInfrastructureAutomationOperatorSetup
    export TARGET_REGISTRY_HOST=<IP_or_FQDN_of_target_registry>
    export TARGET_REGISTRY_PORT=<port_number_of_target_registry>
    export TARGET_REGISTRY=${TARGET_REGISTRY_HOST}:${TARGET_REGISTRY_PORT}
    export TARGET_REGISTRY_USER=<username>
    export TARGET_REGISTRY_PASSWORD=<password>
    export IMAGE_PATH=<image_path>
    

3.5 Authenticate with the target registry

Authenticate with the target registry in the air-gapped environment that you are mirroring the images into.

Run the following command:

podman login ${TARGET_REGISTRY} -u ${TARGET_REGISTRY_USER} -p ${TARGET_REGISTRY_PASSWORD}
export REGISTRY_AUTH_FILE=${XDG_RUNTIME_DIR}/containers/auth.json

Note: The authentication file is usually at ${XDG_RUNTIME_DIR}/containers/auth.json. For more information, see the Options section in the Podman documentation.

3.6 Mirror the images to the target registry from the file system

Complete the steps in this section on the device that has your file system (the portable compute device or the local compute device) to copy the images from the file system to the $TARGET_REGISTRY. Your device with the file system must be connected to both the target registry and the Red Hat OpenShift cluster.

Run the following command to copy the images that are referenced in images-mapping-from-filesystem.txt from the $IMAGE_PATH file system to the final target registry.

nohup oc image mirror \
-f ~/.ibm-pak/data/mirror/${IA_CASE_NAME}/${IA_CASE_VERSION}/images-mapping-from-filesystem.txt \
--from-dir "${IMAGE_PATH}" \
-a ${REGISTRY_AUTH_FILE} \
--filter-by-os '.*' \
--insecure  \
--skip-multiple-scopes \
--max-per-registry=1 > my-mirror-progress2.txt 2>&1 &

The UNIX command nohup ensures that the mirroring process continues even if the terminal session is lost, and redirecting the output to a file provides improved monitoring and error visibility.

Run the following command if you want to see the progress of the mirroring:

tail -f my-mirror-progress2.txt

Note: If an error occurs during mirroring, the mirror command can be rerun.

3.7 Configure the cluster

  1. Log in to your Red Hat OpenShift cluster.

    You can identify your specific oc login command by clicking the user menu in the upper right corner of the Red Hat OpenShift console, and then clicking Copy Login Command.

    Example:

    oc login <server> -u <cluster username> -p <cluster pass>
    
  2. Update the global image pull secret for your Red Hat OpenShift cluster.

    Follow the steps in the Red Hat OpenShift documentation topic Updating the global cluster pull secret.

    These steps enable your cluster to have authentication credentials in place to pull images from your TARGET_REGISTRY as specified in the image-content-source-policy.yaml, which you will apply to your cluster in the next step.

  3. Create the ImageContentSourcePolicy.

    Run the following command to create an ImageContentSourcePolicy (ICSP).

    oc apply -f ~/.ibm-pak/data/mirror/${IA_CASE_NAME}/${IA_CASE_VERSION}/image-content-source-policy.yaml
    
  4. Verify your cluster node status.

    oc get MachineConfigPool -w
    

    Important: After the ImageContentSourcePolicy and the global image pull secret are applied, the configuration of your nodes is updated sequentially. Wait until all of the MachineConfigPools are updated before proceeding to the next step.

  5. (Optional) If you use an insecure registry, you must add the target registry to the cluster's insecureRegistries list.

    oc patch image.config.openshift.io/cluster --type=merge \
    -p '{"spec":{"registrySources":{"insecureRegistries":["'${TARGET_REGISTRY}'"]}}}'
    

4. Configure storage

The storage configuration must satisfy your sizing requirements. Two storage classes are needed for installing Infrastructure Automation. For more information, see Storage.

5. Install Infrastructure Automation

Now that the images are mirrored to your air-gapped environment, you can deploy Infrastructure Automation to that environment. To install Infrastructure Automation, complete the following steps.

5.1 Create a custom project (namespace)

Run the following command to create a project (namespace) called cp4aiops to deploy Infrastructure Automation into.

oc create namespace cp4aiops

5.2 Create the catalog source

  1. Run the following command:

    oc apply -f ~/.ibm-pak/data/mirror/$IA_CASE_NAME/$IA_CASE_VERSION/catalog-sources.yaml
    
  2. Run the following command to verify that you have all the required catalog sources created.

    oc get pods -n openshift-marketplace
    oc get catalogsource -n openshift-marketplace
    

    The output must include:

    cam-install-operator-controller-manager-catalog
    cloud-native-postgresql-catalog
    ibm-cert-manager-catalog
    ibm-infra-management-install-operator-catalog
    ibm-infrastructure-automation-operator-catalog
    ibm-licensing-catalog
    opencloud-operators
    
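Rather than comparing the output by eye, you can script the verification. The following sketch assumes you are logged in to the cluster; it reports any of the expected catalog sources that are missing from openshift-marketplace:

```shell
# Expected catalog sources, as listed above.
expected_sources="cam-install-operator-controller-manager-catalog
cloud-native-postgresql-catalog
ibm-cert-manager-catalog
ibm-infra-management-install-operator-catalog
ibm-infrastructure-automation-operator-catalog
ibm-licensing-catalog
opencloud-operators"

check_catalog_sources() {
  for cs in $expected_sources; do
    oc get catalogsource "$cs" -n openshift-marketplace >/dev/null 2>&1 \
      || echo "missing: $cs"
  done
}
```

check_catalog_sources prints nothing when all seven catalog sources exist.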

5.3 Install Cert Manager

Skip this step if you already have a certificate manager installed on the Red Hat OpenShift cluster that you are installing Infrastructure Automation on. If you do not have a certificate manager then you must install one. The IBM Cloud Pak® foundational services Cert Manager is recommended, and can be installed with the following steps.

Note: If you are installing Infrastructure Automation and IBM Cloud Pak for AIOps on the same Red Hat OpenShift cluster, then you already installed a certificate manager as part of the IBM Cloud Pak for AIOps installation process. If you are installing Infrastructure Automation and IBM Cloud Pak for AIOps on different clusters, then you must ensure that a certificate manager is installed on each cluster.

For more information about IBM Cloud Pak® foundational services Cert Manager hardware requirements, see IBM Certificate Manager (cert-manager) hardware requirements in the IBM Cloud Pak foundational services documentation.

  1. Run the following command to create the resource definitions that you need:

    cat << EOF | oc apply -f -
    apiVersion: v1
    kind: Namespace
    metadata:
      name: ibm-cert-manager
    ---
    apiVersion: operators.coreos.com/v1
    kind: OperatorGroup
    metadata:
      name: ibm-cert-manager-operator-group
      namespace: ibm-cert-manager
    ---
    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: ibm-cert-manager-operator
      namespace: ibm-cert-manager
    spec:
      channel: v4.2
      installPlanApproval: Automatic
      name: ibm-cert-manager-operator
      source: ibm-cert-manager-catalog
      sourceNamespace: openshift-marketplace
    EOF
    
  2. Run the following command to ensure that the IBM Cloud Pak foundational services Cert Manager pods have a STATUS of Running before proceeding to the next step.

    oc -n ibm-cert-manager get pods
    

    Example output for a successful IBM Cloud Pak foundational services Cert Manager installation:

    NAME                                        READY   STATUS    RESTARTS   AGE
    cert-manager-cainjector-674854c49d-vstq4    1/1     Running   0          8d
    cert-manager-controller-646d4bd6fd-zwmqm    1/1     Running   0          8d
    cert-manager-webhook-8598787c8-s4lkt        1/1     Running   0          8d
    ibm-cert-manager-operator-c96957695-dkxnm   1/1     Running   0          8d
    
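Instead of polling oc get pods manually, you can block until every pod in the namespace is Ready. A sketch, assuming you are logged in to the cluster; the 10-minute timeout is an arbitrary example:

```shell
# Wait until all pods in a namespace report the Ready condition.
wait_for_pods() {
  oc -n "$1" wait pods --all --for=condition=Ready --timeout=10m
}
```

For example, run wait_for_pods ibm-cert-manager here, and wait_for_pods ibm-licensing after you create the License Service subscription in 5.4.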

5.4 Install the License Service

Skip this step if the IBM Cloud Pak foundational services License Service is already installed on the Red Hat OpenShift cluster that you are installing Infrastructure Automation on.

Note: If you are installing Infrastructure Automation and IBM Cloud Pak for AIOps on the same Red Hat OpenShift cluster, then you already installed the IBM Cloud Pak foundational services License Service as part of the IBM Cloud Pak for AIOps installation process. If you are installing Infrastructure Automation and IBM Cloud Pak for AIOps on different clusters, then you must ensure that IBM Cloud Pak foundational services License Service is installed on each cluster.

Infrastructure Automation requires the installation of the IBM Cloud Pak foundational services License Service. You must install the IBM Cloud Pak foundational services License Service on the Red Hat OpenShift cluster that you are installing Infrastructure Automation on.

  1. Run the following command to create the resource definitions that you need:

    cat << EOF | oc apply -f -
    apiVersion: v1
    kind: Namespace
    metadata:
      name: ibm-licensing
    ---
    apiVersion: operators.coreos.com/v1
    kind: OperatorGroup
    metadata:
      name: ibm-licensing-operator-group
      namespace: ibm-licensing
    spec:
      targetNamespaces:
      - ibm-licensing
    ---
    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: ibm-licensing-operator-app
      namespace: ibm-licensing
    spec:
      channel: v4.2
      installPlanApproval: Automatic
      name: ibm-licensing-operator-app
      source: ibm-licensing-catalog
      sourceNamespace: openshift-marketplace
    EOF
    
  2. Run the following command to ensure that the IBM Cloud Pak foundational services License Server pods have a STATUS of Running before proceeding to the next step.

    oc -n ibm-licensing get pods
    

    Example output for a successful IBM Cloud Pak foundational services License Service installation:

    NAME                                              READY   STATUS    RESTARTS   AGE
    ibm-licensing-operator-db4cd746c-xzmlf            1/1     Running   0          8d
    ibm-licensing-service-instance-596b99588f-76cc5   1/1     Running   0          8d
    

For more information about the IBM Cloud Pak® foundational services License Service, see License Service in the IBM Cloud Pak foundational services documentation.

5.5 Install Infrastructure Automation

If you want to install Infrastructure Automation with IBM Cloud Pak for AIOps, follow step 4 onwards in Online installation of Infrastructure Automation for use with IBM Cloud Pak for AIOps (CLI).

If you want to install stand-alone Infrastructure Automation, follow step 8 onwards in Online installation of Infrastructure Automation (CLI).