Installing with a bastion host
You can use a bastion host to perform an air-gapped installation of the latest version of API Connect 10.0.1.x-eus on Red Hat OpenShift Container Platform (OCP) when your cluster has no internet connectivity.
Before you begin
This task must be performed by a Red Hat OpenShift administrator.
Complete the prerequisite tasks to prepare for deploying API Connect.
About this task
If your cluster is not connected to the internet, you can mirror product images to a registry in your network-restricted environment by using a bastion host. A bastion host has access to both the public internet and the network-restricted environment where the target clusters reside. You can fetch product images from the internet and push them to a local registry; then you can pull the images from the local registry to the target cluster for installation.
Procedure
- Set up the mirroring environment.
- Prepare the target cluster:
- Deploy a supported version of Red Hat OpenShift Container Platform (OCP) as a cluster.
For information, see Table 2 "API Connect and OpenShift Container Platform (OCP) compatibility matrix" in IBM API Connect Version 10 software product compatibility requirements.
- Configure storage on the cluster and make sure that it is available.
- Prepare the bastion host:
You must be able to connect your bastion host to the internet and to the restricted network environment (with access to the Red Hat OpenShift Container Platform (OCP) cluster and the local registry) at the same time. Your host must be on a Linux x86_64 or Mac platform with any operating system that the Red Hat OpenShift Client supports (in Windows, execute the actions in a Linux x86_64 VM or from a Windows Subsystem for Linux terminal).
- Ensure that the sites and ports listed in Table 1 can be reached from the bastion host:
Table 1. Sites that must be reached from the bastion host

| Site | Description |
| --- | --- |
| icr.io:443 | IBM entitled registry |
| quay.io:443 | Local API Connect image repository |
| github.com | CASE files and tools |
| redhat.com | Red Hat OpenShift Container Platform (OCP) upgrades |

- On the bastion host, install either Docker or Podman (not both).
Docker and Podman are used for managing containers; you only need to install one of these applications.
- To install Docker (for example, on Red Hat Enterprise Linux), run the following commands:
yum check-update
yum install docker
- To install Podman, see the Podman installation instructions. For example, on Red Hat Enterprise Linux 9, install Podman with the following command:
yum install podman
- Install the Red Hat OpenShift Client tool (oc) as explained in Getting started with the OpenShift CLI.
The oc tool is used for managing Red Hat OpenShift resources in the cluster.
- Download the IBM Catalog Management Plug-in for IBM Cloud Paks version 1.1.0 or later from GitHub.
The ibm-pak plug-in enables you to access hosted product images, and to run oc ibm-pak commands against the cluster. To confirm that ibm-pak is installed, run the following command and verify that the response lists the command usage:
oc ibm-pak --help
- Set up a local image registry and credentials.
The local Docker registry stores the mirrored images in your network-restricted environment.
- Install a registry, or get access to an existing registry.
You might already have access to one or more centralized, corporate registry servers to store the API Connect images. If not, then you must install and configure a production-grade registry before proceeding.
The registry product that you use must meet the following requirements:
- Supports multi-architecture images through Docker Manifest V2, Schema 2. For details, see Docker Manifest V2, Schema 2.
- Is accessible from the Red Hat OpenShift Container Platform cluster nodes
- Allows path separators in the image name
Note: Do not use the Red Hat OpenShift image registry as your local registry because it does not support multi-architecture images or path separators in the image name.
- Configure the registry to meet the following requirements:
- Supports auto-repository creation
- Has sufficient storage to hold all of the software that is to be transferred
- Has the credentials of a user who can create and write to repositories (the mirroring process uses these credentials)
- Has the credentials of a user who can read all repositories (the Red Hat OpenShift Container Platform cluster uses these credentials)
To access your registries during an air-gapped installation, use an account that can write to the target local registry. To access your registries during runtime, use an account that can read from the target local registry.
- Set environment variables and download the CASE file.
- Create the following environment variables with the installer image name and the image
inventory on your host:
export CASE_NAME=ibm-apiconnect
export CASE_VERSION=2.1.17
export ARCH=amd64
For information on API Connect CASE versions and their corresponding operators and operands, see Operator, operand, and CASE version.
- Connect your host to the internet (it does not need to be connected to the network-restricted environment at this time).
- Download the CASE file to your host:
oc ibm-pak get $CASE_NAME --version $CASE_VERSION
If you omit the --version parameter, the command downloads the latest version.
- Mirror the images.
The process of mirroring images pulls the images from the internet and pushes them to your local registry. After mirroring your images, you can configure your cluster and pull the images to it before installing API Connect.
- Generate mirror manifests.
- Define the environment variable $TARGET_REGISTRY by running the following command:
export TARGET_REGISTRY=<target-registry>
Replace <target-registry> with the IP address (or host name) and port of the local registry; for example: 172.16.0.10:5000. If you want the images to use a specific namespace within the target registry, you can specify it here; for example: 172.16.0.10:5000/registry_ns.
- Generate mirror manifests by running the following command:
oc ibm-pak generate mirror-manifests $CASE_NAME --version $CASE_VERSION $TARGET_REGISTRY
If you need to filter for a specific image group, add the --filter <image_group> parameter to the command.
The generate command creates the following files at ~/.ibm-pak/data/mirror/$CASE_NAME/$CASE_VERSION:
- catalog-sources.yaml
- catalog-sources-linux-<arch>.yaml (if there are architecture-specific catalog sources)
- image-content-source-policy.yaml
- images-mapping.txt
The files are used when mirroring the images to the TARGET_REGISTRY.
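As a rough illustration of the mapping file's contents, each line of images-mapping.txt pairs a source image with its target location in the local registry, in "source=target" form. The following sketch uses a made-up repository path and tag, not values from an actual CASE:

```shell
# Illustrative only: shows the "source=target" shape of one line in
# images-mapping.txt. The repository path and tag are hypothetical placeholders.
TARGET_REGISTRY="172.16.0.10:5000"
src="cp.icr.io/cp/apic/example-operator:1.0.0"
# The mapping keeps the repository path and swaps in the local registry host:
dst="${TARGET_REGISTRY}/cp/apic/example-operator:1.0.0"
mapping="${src}=${dst}"
echo "$mapping"
```

The oc image mirror command used later reads lines of this form and copies each source image to its target.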
- Obtain an entitlement key for the entitled registry where the images are hosted:
- Log in to the IBM Container Library.
- In the Container software library, select Get entitlement key.
- In the "Access your container software" section, click Copy key.
- Copy the key to a safe location; you will use it to log in to cp.icr.io in the next step.
- Authenticate with the entitled registry where the images are hosted.
The image pull secret allows you to authenticate with the entitled registry and access product images.
- Run the following command to export the path to the file that will store the authentication
credentials that are generated on a Podman or Docker
login:
export REGISTRY_AUTH_FILE=$HOME/.docker/config.json
The authentication file is typically located at $HOME/.docker/config.json on Linux or %USERPROFILE%/.docker/config.json on Windows.
- Log in to the cp.icr.io registry with Podman or Docker; for example:
podman login cp.icr.io
Use cp as the username and your entitlement key as the password.
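After a successful login, Podman or Docker records the credentials in the file that REGISTRY_AUTH_FILE points to. A minimal sketch of that file's structure (the auth value is a placeholder, not a real credential):

```json
{
  "auths": {
    "cp.icr.io": {
      "auth": "<base64 of cp:your-entitlement-key>"
    }
  }
}
```

When you log in to the local registry in the next step, a similar entry for the target registry is added alongside this one.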
- Authenticate with the local registry.
Log in to the local registry using an account that can write images to that registry; for example:
podman login $TARGET_REGISTRY
If the registry is insecure, add the --tls-verify=false flag to the command.
- Update the CASE manifest to correctly reference the DataPower Operator image.
Files for the DataPower Operator are now hosted on icr.io; however, the CASE manifest still refers to docker.io as the image host. To work around this issue, see "Airgap install failure due to 'unable to retrieve source image docker.io'" in the DataPower documentation and update the manifest as instructed. After the manifest is updated, continue to the next step in this procedure.
- Mirror the product images.
- Connect the bastion host to both the internet and the restricted-network environment that contains the local registry.
- Run the following command to copy the images to the local registry:
oc image mirror \
  -f ~/.ibm-pak/data/mirror/$CASE_NAME/$CASE_VERSION/images-mapping.txt \
  -a $REGISTRY_AUTH_FILE \
  --filter-by-os '.*' \
  --skip-multiple-scopes \
  --max-per-registry=1
Note: If the local registry is not secured by TLS, or the certificate presented by the local registry is not trusted by your device, add the --insecure option to the command.
There might be a slight delay before you see a response to the command.
- Configure the target cluster.
Now that images have been mirrored to the local registry, the target cluster must be configured to pull the images from it. Complete the following steps to configure the cluster's global pull secret with the local registry's credentials and then instruct the cluster to pull the images from the local registry.
- Log in to your Red Hat OpenShift Container Platform
cluster:
oc login <openshift_url> -u <username> -p <password> -n <namespace>
- Update the global image pull secret for the cluster as
explained in the Red Hat OpenShift Container Platform
documentation.
Updating the image pull secret provides the cluster with the credentials needed for pulling images from your local registry.
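Conceptually, the update adds an auths entry for your local registry to the cluster's .dockerconfigjson pull secret, as described in the Red Hat documentation. An illustrative fragment (the host and credential are placeholders; the auth value is the base64 encoding of user:password):

```json
{
  "auths": {
    "172.16.0.10:5000": {
      "auth": "<base64 of username:password>"
    }
  }
}
```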
- Create the ImageContentSourcePolicy, which instructs the cluster to pull the images from your local registry:
oc apply -f ~/.ibm-pak/data/mirror/$CASE_NAME/$CASE_VERSION/image-content-source-policy.yaml
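The generated file is a standard ImageContentSourcePolicy resource. A minimal illustrative example, assuming a local registry at 172.16.0.10:5000 (the name and repository paths are placeholders; apply the generated file, not this sketch):

```yaml
apiVersion: operator.openshift.io/v1alpha1
kind: ImageContentSourcePolicy
metadata:
  name: ibm-apiconnect
spec:
  repositoryDigestMirrors:
    - mirrors:
        - 172.16.0.10:5000/cp/apic   # local registry mirror
      source: cp.icr.io/cp/apic      # original source repository
```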
- Verify that the ImageContentSourcePolicy resource was
created:
oc get imageContentSourcePolicy
- Verify your cluster node status:
oc get MachineConfigPool -w
Wait for all nodes to be updated before proceeding to the next step.
- Apply the catalog sources.
Now that you have mirrored images to the target cluster, apply the catalog sources.
In the following steps, replace <Architecture> with either amd64, s390x, or ppc64le as appropriate for your environment.
- Export the variables for the command line to use:
export CASE_NAME=ibm-apiconnect
export CASE_VERSION=2.1.17
export ARCH=amd64
- Generate the catalog sources and save them in another directory in case you need to
replicate this installation in the future.
- Get the catalog source:
cat ~/.ibm-pak/data/mirror/${CASE_NAME}/${CASE_VERSION}/catalog-sources.yaml
- Get any architecture-specific catalog sources that you need to back up as well:
cat ~/.ibm-pak/data/mirror/${CASE_NAME}/${CASE_VERSION}/catalog-sources-linux-${ARCH}.yaml
You can also navigate to the directory in your file browser to copy these artifacts into files that you can keep for re-use or for pipelines.
- Apply the catalog sources to the cluster.
- Apply the universal catalog sources:
oc apply -f ~/.ibm-pak/data/mirror/${CASE_NAME}/${CASE_VERSION}/catalog-sources.yaml
- Apply any architecture-specific catalog sources:
oc apply -f ~/.ibm-pak/data/mirror/${CASE_NAME}/${CASE_VERSION}/catalog-sources-linux-${ARCH}.yaml
- Confirm that the catalog sources were created in the openshift-marketplace namespace:
oc get catalogsource -n openshift-marketplace
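For reference, the entries in the generated catalog-sources.yaml follow the standard CatalogSource shape, pointing at the catalog image that you mirrored to your local registry. An illustrative sketch (the name and image path are placeholders; always apply the generated file):

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: ibm-apiconnect-catalog
  namespace: openshift-marketplace
spec:
  sourceType: grpc
  image: 172.16.0.10:5000/cpopen/ibm-apiconnect-catalog@sha256:<digest>
  displayName: IBM APIConnect catalog
  publisher: IBM
```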
- Create the namespace where you will install API Connect.
- Specify the namespace where you want to install the operator:
export NAMESPACE=<APIC-namespace>
The namespace where you install API Connect must meet the following requirements:
- Red Hat OpenShift Container Platform (OCP): Only one top-level CR (APIConnectCluster) can be deployed in each namespace.
- Cloud Pak for Integration: Only one API Connect capability can be deployed in each namespace.
- The following namespaces cannot be used to install API Connect because Red Hat OpenShift Container Platform (OCP) restricts the use of default namespaces for installing non-cluster services:
default
kube-system
kube-public
openshift-node
openshift-infra
openshift
- Create a new namespace for installing the operator:
oc new-project $NAMESPACE
- Create an OperatorGroup:
- Create a YAML file called apiconnect-operator-group.yaml similar to the following example, replacing <APIC-namespace> with your new namespace:
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: ibm-apiconnect-operatorgroup
spec:
  targetNamespaces:
  - <APIC-namespace>
- Add the new operator group to your namespace:
oc apply -f apiconnect-operator-group.yaml -n ${NAMESPACE}
- Create a Subscription for the IBM APIConnect operator.
- Create a YAML file called apic-sub.yaml similar to the following example:
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: ibm-apiconnect
spec:
  channel: v2.1.11-eus
  name: ibm-apiconnect
  source: ibm-apiconnect-catalog
  sourceNamespace: openshift-marketplace
- Apply the new subscription to your namespace:
oc apply -f apic-sub.yaml -n ${NAMESPACE}
- Install API Connect (the operand).
- Create a YAML file to use for deploying the top-level APIConnectCluster CR. Use the template that applies to your deployment (non-production or production).
Note: The values shown in the following examples might not be suitable for your deployment. For information on the license, profile, and version settings, as well as additional configuration settings, see API Connect configuration settings.
- Example CR settings for a one replica deployment:
apiVersion: apiconnect.ibm.com/v1beta1
kind: APIConnectCluster
metadata:
  labels:
    app.kubernetes.io/instance: apiconnect
    app.kubernetes.io/managed-by: ibm-apiconnect
    app.kubernetes.io/name: apiconnect-minimum
  name: <name_of_your_instance>
  namespace: <APIC-namespace>
spec:
  license:
    accept: true
    license: L-GVEN-GFUPVE
    metric: PROCESSOR_VALUE_UNIT
    use: nonproduction
  profile: n1xc17.m48
  version: 10.0.1.12-eus
  storageClassName: <default-storage-class>
- Example CR settings for a three replica deployment:
apiVersion: apiconnect.ibm.com/v1beta1
kind: APIConnectCluster
metadata:
  labels:
    app.kubernetes.io/instance: apiconnect
    app.kubernetes.io/managed-by: ibm-apiconnect
    app.kubernetes.io/name: apiconnect-production
  name: <name_of_your_instance>
  namespace: <APIC-namespace>
spec:
  license:
    accept: true
    license: L-GVEN-GFUPVE
    metric: PROCESSOR_VALUE_UNIT
    use: production
  profile: n3xc16.m48
  version: 10.0.1.12-eus
  storageClassName: <default-storage-class>
- Apply the YAML file:
oc apply -f <your_yaml_file>
- To verify that your API Connect cluster is successfully installed, run the following command:
oc get apic -n <APIC-namespace>
- Verify that you can log in to the API Connect Cloud Manager UI:
To determine the location for logging in, view all the endpoints:
oc get routes -n <APIC-namespace>
- Locate the mgmt-admin-apic endpoint, and access the Cloud Manager UI.
- Log in as the API Connect administrator.
When you install with the top-level CR, the password is auto-generated. To get the password:
oc get secret -n <APIC-namespace> | grep mgmt-admin-pass
oc get secret -n <APIC-namespace> <secret_name_from_previous_command> -o jsonpath="{.data.password}" | base64 -d && echo
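The jsonpath output is base64-encoded, which is why the command pipes it through base64 -d. A self-contained illustration of that decoding step, using a made-up encoded value rather than a real secret:

```shell
# Hypothetical base64 value; in practice this comes from the secret's
# .data.password field returned by the jsonpath query above.
encoded="cGFzc3dvcmQxMjM="
password=$(printf '%s' "$encoded" | base64 -d)
echo "$password"   # prints password123
```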
- Optional: Increase the timeout settings for the API Connect management endpoints, particularly for large deployments. See Configuring timeouts for management endpoints on OpenShift or Cloud Pak for Integration.
What to do next
When you finish installing API Connect, prepare your deployment for disaster recovery so that your data can be restored in the event of an emergency.