Deprecated: Installing with a bastion host using cloudctl
You can use a bastion host to perform an air-gapped installation of IBM® API Connect on OpenShift Container Platform (OCP) when your cluster has no internet connectivity.
Before you begin
Ensure that your environment is ready for API Connect by preparing for installation.
About this task
A bastion host is a machine that has connectivity both to the internet and to the cluster in the restricted environment.
- This task requires you to use Red Hat Skopeo for moving container images. Skopeo is not available for Microsoft Windows, so you cannot perform this task on Windows.
- Don't use the tilde character (~) within double quotation marks in any command, because the tilde doesn't expand and your commands might fail.
Procedure
- Prepare a local Docker registry for use with the cluster in the restricted environment.
A local Docker registry is used to store all images in your restricted environment. You must create such a registry and ensure that it meets the following requirements:
- Supports Docker Manifest V2, Schema 2.
The internal Red Hat OpenShift registry is not compliant with Docker Manifest V2, Schema 2, so it is not suitable for use as a private registry for restricted environments.
- Is accessible from your OpenShift cluster nodes.
- Has the username and password of a user who can write to the target registry from the internal host.
- Has the username and password of a user who can read from the target registry that is on the OpenShift cluster nodes.
- Allows path separators in the image name.
Verify that you have the following credentials available:
- The credentials of a user who can write and create repositories. The internal host uses these credentials.
- The credentials of a user who can read all repositories. The OpenShift cluster uses these credentials.
- Prepare a bastion host that can be connected to the internet.
The host must be on a Linux x86_64 platform, or any operating system that the IBM Cloud Pak CLI, the OpenShift CLI, and Red Hat Skopeo support.
Complete the following steps to set up your bastion host:
- Install OpenSSL version 1.1.1 or higher.
- Install Docker or Podman:
- To install Docker (for example, on Red Hat® Enterprise Linux®), run the following commands:
yum check-update
yum install docker
- To install Podman, see Podman Installation Instructions.
- Install httpd-tools by running the following command:
yum install httpd-tools
- Install the IBM Cloud Pak CLI by completing the following steps:
Install the latest version of the binary file for your platform. For more information, see cloud-pak-cli.
- Download the binary file by running the following command:
wget https://github.com/IBM/cloud-pak-cli/releases/latest/download/<binary_file_name>
For example:
wget https://github.com/IBM/cloud-pak-cli/releases/latest/download/cloudctl-linux-amd64.tar.gz
- Extract the binary file by running the following command:
tar -xf <binary_file_name>
- Run the following commands to modify and move the file:
chmod 755 <file_name>
mv <file_name> /usr/local/bin/cloudctl
- Confirm that cloudctl is installed by running the following command:
cloudctl --help
The cloudctl usage is displayed.
- Install the oc OpenShift Container Platform CLI tool.
For more information, see Getting started with the CLI in the Red Hat OpenShift documentation.
- Install Red Hat Skopeo CLI version 1.0.0 or higher.
For more information, see Installing Skopeo from packages.
- Run the following command to create a directory that serves as the offline store.
The following example creates a directory called "offline", which is used in the subsequent steps.
mkdir $HOME/offline
Note: This offline store must be persistent to avoid transferring data more than once. The persistence also helps to run the mirroring process multiple times or on a schedule.
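Because the offline store must persist across repeated mirroring runs, an idempotent creation is the safest habit. A minimal sketch (the `offline` directory name matches the example above):

```shell
# -p makes repeated runs a no-op, so the directory (and any data
# already mirrored into it) survives re-runs of the procedure.
mkdir -p "$HOME/offline"
test -d "$HOME/offline" && echo "offline store ready"
```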
- Log in to the cluster as a cluster administrator.
Following is an example command to log in to the OpenShift Container Platform cluster:
oc login <cluster_host:port> --username=<cluster_admin_user> --password=<cluster_admin_password>
- Create a namespace where you will install API Connect.
Create an environment variable with a namespace, and then create the namespace. For example:
export NAMESPACE=<APIC-namespace>
oc create namespace $NAMESPACE
where <APIC-namespace> is the namespace where you will install API Connect.
The namespace where you install API Connect must meet the following requirements:
- OpenShift restricts the use of default namespaces for installing non-cluster services. The following namespaces cannot be used to install API Connect: default, kube-system, kube-public, openshift-node, openshift-infra, openshift.
- OpenShift: Only one top-level CR (APIConnectCluster) can be deployed in each namespace.
- Cloud Pak for Integration (Platform Navigator): Only one API Connect capability can be deployed in each namespace.
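As an illustrative sketch (plain shell, not part of the official procedure), you can guard against the restricted namespace names before creating the project:

```shell
# Refuse the default namespaces that cannot host API Connect.
NAMESPACE=apic   # example value; substitute your own namespace
case "$NAMESPACE" in
  default|kube-system|kube-public|openshift-node|openshift-infra|openshift)
    echo "restricted: choose a different namespace" ;;
  *)
    echo "ok: $NAMESPACE" ;;
esac
```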
- On the bastion host, create environment variables for the installer and image inventory.
Create the following environment variables with the installer image name and the image inventory:
export CASE_NAME=ibm-apiconnect
export CASE_VERSION=4.0.4
export CASE_ARCHIVE=$CASE_NAME-$CASE_VERSION.tgz
export CASE_INVENTORY_SETUP=apiconnectOperatorSetup
export OFFLINEDIR=$HOME/offline
export OFFLINEDIR_ARCHIVE=offline.tgz
export CASE_REMOTE_PATH=https://github.com/IBM/cloud-pak/raw/master/repo/case/$CASE_NAME/$CASE_VERSION/$CASE_ARCHIVE
export CASE_LOCAL_PATH=$OFFLINEDIR/$CASE_ARCHIVE
export BASTION_DOCKER_REGISTRY_HOST=localhost
export BASTION_DOCKER_REGISTRY_PORT=443
export BASTION_DOCKER_REGISTRY=$BASTION_DOCKER_REGISTRY_HOST:$BASTION_DOCKER_REGISTRY_PORT
export BASTION_DOCKER_REGISTRY_USER=username
export BASTION_DOCKER_REGISTRY_PASSWORD=password
export BASTION_DOCKER_REGISTRY_PATH=$OFFLINEDIR/imageregistry
Note: Set CASE_VERSION to the value for your API Connect release. See Operator, operand, and CASE versions.
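To sanity-check how the derived variables compose before you start mirroring, you can echo them; with the example values shown above they resolve as follows (an illustrative sketch, not an official step):

```shell
# Recreate the derived values from the example settings above.
export CASE_NAME=ibm-apiconnect
export CASE_VERSION=4.0.4
export CASE_ARCHIVE=$CASE_NAME-$CASE_VERSION.tgz
export BASTION_DOCKER_REGISTRY_HOST=localhost
export BASTION_DOCKER_REGISTRY_PORT=443
export BASTION_DOCKER_REGISTRY=$BASTION_DOCKER_REGISTRY_HOST:$BASTION_DOCKER_REGISTRY_PORT
echo "$CASE_ARCHIVE"             # ibm-apiconnect-4.0.4.tgz
echo "$BASTION_DOCKER_REGISTRY"  # localhost:443
```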
- Download the API Connect installer and image inventory by running the following command:
cloudctl case save \
  --case $CASE_REMOTE_PATH \
  --outputdir $OFFLINEDIR
- Mirror the images from the ICR (source) registry to the bastion (destination) registry.
- Store the credentials for the ICR (source) registry.
The following command stores and caches the IBM Entitled Registry credentials in a file on your file system in the $HOME/.airgap/secrets location:
cloudctl case launch \
  --case $OFFLINEDIR/$CASE_ARCHIVE \
  --inventory $CASE_INVENTORY_SETUP \
  --action configure-creds-airgap \
  --namespace $NAMESPACE \
  --args "--registry cp.icr.io --user cp --pass <entitlement-key> --inputDir $OFFLINEDIR"
- Store the credentials for the bastion (destination) registry.
The following command stores and caches the Docker registry credentials in a file on your file system in the $HOME/.airgap/secrets location:
cloudctl case launch \
  --case $CASE_LOCAL_PATH \
  --inventory $CASE_INVENTORY_SETUP \
  --action configure-creds-airgap \
  --args "--registry $BASTION_DOCKER_REGISTRY --user $BASTION_DOCKER_REGISTRY_USER --pass $BASTION_DOCKER_REGISTRY_PASSWORD"
- Start the Docker registry service on the bastion host.
- Initialize the Docker registry by running the following command:
cloudctl case launch \
  --case $CASE_LOCAL_PATH \
  --inventory $CASE_INVENTORY_SETUP \
  --action init-registry \
  --args "--registry $BASTION_DOCKER_REGISTRY_HOST --user $BASTION_DOCKER_REGISTRY_USER --pass $BASTION_DOCKER_REGISTRY_PASSWORD --dir $BASTION_DOCKER_REGISTRY_PATH"
- Start the Docker registry by running the following command:
cloudctl case launch \
  --case $CASE_LOCAL_PATH \
  --inventory $CASE_INVENTORY_SETUP \
  --action start-registry \
  --args "--registry $BASTION_DOCKER_REGISTRY_HOST --port $BASTION_DOCKER_REGISTRY_PORT --user $BASTION_DOCKER_REGISTRY_USER --pass $BASTION_DOCKER_REGISTRY_PASSWORD --dir $BASTION_DOCKER_REGISTRY_PATH"
- If you use an insecure registry, add the local registry to the cluster insecureRegistries list.
oc patch image.config.openshift.io/cluster --type=merge -p '{"spec":{"registrySources":{"insecureRegistries":["'$BASTION_DOCKER_REGISTRY'"]}}}'
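The quoting in that patch is easy to get wrong: the single-quoted JSON is closed, the shell variable is spliced in, and the JSON is reopened. A sketch of how the patch string resolves (registry.example.com:443 is a made-up value):

```shell
BASTION_DOCKER_REGISTRY=registry.example.com:443  # illustrative value only
# Close the single quote, splice the variable, reopen the single quote.
PATCH='{"spec":{"registrySources":{"insecureRegistries":["'$BASTION_DOCKER_REGISTRY'"]}}}'
echo "$PATCH"
# {"spec":{"registrySources":{"insecureRegistries":["registry.example.com:443"]}}}
```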
- Mirror the images to the registry on the bastion host.
cloudctl case launch \
  --case $CASE_LOCAL_PATH \
  --inventory $CASE_INVENTORY_SETUP \
  --action mirror-images \
  --args "--registry $BASTION_DOCKER_REGISTRY --inputDir $OFFLINEDIR"
Note: If your Docker registry location has a subdirectory, do not modify $BASTION_DOCKER_REGISTRY to include it. Instead, add the argument --nsPrefix to the --args statement and specify the subdirectory; for example:
--args "--registry $BASTION_DOCKER_REGISTRY --inputDir $OFFLINEDIR --nsPrefix apiconnect/"
- If your deployment is running on an s390x architecture, re-tag the EDB image to apic:
skopeo copy --all \
  docker://$BASTION_DOCKER_REGISTRY/cp/cpd/postgresql:12.11-4.8.0 \
  docker://$BASTION_DOCKER_REGISTRY/cp/apic/ibm-apiconnect-management-edb-postgresql:12.11-4.8.0 \
  --src-creds $BASTION_DOCKER_REGISTRY_USER:$BASTION_DOCKER_REGISTRY_PASSWORD \
  --dest-creds $BASTION_DOCKER_REGISTRY_USER:$BASTION_DOCKER_REGISTRY_PASSWORD
If you are using an insecure registry, add the --tls-verify=false option.
- Optional: Save the Docker registry image that you stored on the bastion host.
If your air-gapped network doesn’t have a Docker registry image, you can save the image on the bastion host and copy it later to the host in your air-gapped environment.
docker save docker.io/library/registry:2.6 -o $BASTION_DOCKER_REGISTRY_PATH/registry-image.tar
- Configure the image pull secret and verify node status.
- Reconfigure the bastion registry environment variables to be relative to your OpenShift cluster.
export BASTION_DOCKER_REGISTRY_HOST=<IP_or_FQDN_of_bastion_docker_registry>
export BASTION_DOCKER_REGISTRY_PORT=443
export BASTION_DOCKER_REGISTRY=$BASTION_DOCKER_REGISTRY_HOST:$BASTION_DOCKER_REGISTRY_PORT
export BASTION_DOCKER_REGISTRY_USER=username
export BASTION_DOCKER_REGISTRY_PASSWORD=password
- Store the credentials for the bastion (destination) registry.
cloudctl case launch \
  --case $CASE_LOCAL_PATH \
  --inventory $CASE_INVENTORY_SETUP \
  --action configure-creds-airgap \
  --args "--registry $BASTION_DOCKER_REGISTRY --user $BASTION_DOCKER_REGISTRY_USER --pass $BASTION_DOCKER_REGISTRY_PASSWORD"
- Configure a global image pull secret and ImageContentSourcePolicy.
cloudctl case launch \
  --case $CASE_LOCAL_PATH \
  --namespace $NAMESPACE \
  --inventory $CASE_INVENTORY_SETUP \
  --action configure-cluster-airgap \
  --args "--registry $BASTION_DOCKER_REGISTRY --inputDir $OFFLINEDIR"
- Verify that the ImageContentSourcePolicy resource is created.
oc get imageContentSourcePolicy
- Optional: If you use an insecure registry, you must add the local registry to the cluster insecureRegistries list.
oc patch image.config.openshift.io/cluster --type=merge -p '{"spec":{"registrySources":{"insecureRegistries":["'$BASTION_DOCKER_REGISTRY'"]}}}'
- Verify your cluster node status.
oc get nodes
After the ImageContentSourcePolicy and global image pull secret are applied, you might see the node status as Ready, Scheduling, or Disabled. Wait until all the nodes show a Ready status.
- Create the CatalogSource.
API Connect can be installed by adding the CatalogSource for the mirrored operators to your cluster and then using OLM to install the operators.
- Install the catalog by running the following command:
This command adds the CatalogSource for the components to your cluster, so the cluster can access them from the private registry:
cloudctl case launch \
  --case $OFFLINEDIR/$CASE_ARCHIVE \
  --inventory $CASE_INVENTORY_SETUP \
  --action install-catalog \
  --namespace $NAMESPACE \
  --args "--registry $BASTION_DOCKER_REGISTRY --inputDir $OFFLINEDIR --recursive"
- Verify that the CatalogSource for the API Connect installer operator is created by running the following commands:
oc get pods -n openshift-marketplace
oc get catalogsource -n openshift-marketplace
Known issue: In version 10.0.4.0-ifix2, the install-catalog action installs the CatalogSource with the public host (icr.io) still attached to the image name. To fix this, patch the ibm-apiconnect-catalog CatalogSource with the following command:
oc patch catalogsource ibm-apiconnect-catalog -n openshift-marketplace --type=merge -p='{"spec":{"image":"'${BASTION_DOCKER_REGISTRY}'/cpopen/ibm-apiconnect-catalog:latest-amd64"}}'
- Install the API Connect Operator by running the following command:
cloudctl case launch \
  --case $OFFLINEDIR/$CASE_ARCHIVE \
  --inventory $CASE_INVENTORY_SETUP \
  --action install-operator \
  --namespace $NAMESPACE
Note: There is a known issue on Kubernetes version 1.19.4 or higher that can cause the DataPower operator to fail to start. In this case, the DataPower operator pods can fail to schedule and display the status message: no nodes match pod topology spread constraints (missing required label). For example:
0/15 nodes are available: 12 node(s) didn't match pod topology spread constraints (missing required label), 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.
You can work around the issue by completing the following steps:
- Uninstall the DataPower operator deployment.
- Download the DataPower CASE bundle with DataPower operator v1.2.1 or later. These versions fix the limitation. Follow these instructions: https://ibm.github.io/datapower-operator-doc/install/case.
You can download the DataPower CASE bundle tar file from the following location: https://github.com/IBM/cloud-pak/raw/master/repo/case/ibm-datapower-operator-cp4i/1.2.1
- Install API Connect (the operand).
API Connect provides one top-level CR that includes all of the API Connect subsystems (Management, Developer Portal, Analytics, and Gateway Service).
Attention: Before installing API Connect, you can customize the configuration of the Analytics subsystem. Review the information in Planning your analytics deployment to choose configuration options, and then configure the settings as explained in Installing the Analytics subsystem. Remember that OpenShift uses a single, top-level CR for installation, so you will add your configuration settings to the spec.analytics section of the CR.
- Create a YAML file to use for deploying the top-level CR. Use the template that applies to your deployment (non-production or production).
Note: The values shown in the following examples might not be suitable for your deployment. For information on the license, profile, and version settings, as well as additional configuration settings, see API Connect configuration settings.
- Example CR settings for a one replica deployment:
apiVersion: apiconnect.ibm.com/v1beta1
kind: APIConnectCluster
metadata:
  labels:
    app.kubernetes.io/instance: apiconnect
    app.kubernetes.io/managed-by: ibm-apiconnect
    app.kubernetes.io/name: apiconnect-minimum
  name: <name_of_your_instance>
  namespace: <APIC-namespace>
spec:
  license:
    accept: true
    license: L-GVEN-GFUPVE
    metric: PROCESSOR_VALUE_UNIT
    use: nonproduction
  profile: n1xc17.m48
  version: 10.0.5.3
  storageClassName: <default-storage-class>
- Example CR settings for a three replica deployment:
apiVersion: apiconnect.ibm.com/v1beta1
kind: APIConnectCluster
metadata:
  labels:
    app.kubernetes.io/instance: apiconnect
    app.kubernetes.io/managed-by: ibm-apiconnect
    app.kubernetes.io/name: apiconnect-production
  name: <name_of_your_instance>
  namespace: <APIC-namespace>
spec:
  license:
    accept: true
    license: L-GVEN-GFUPVE
    metric: PROCESSOR_VALUE_UNIT
    use: production
  profile: n3xc16.m48
  version: 10.0.5.3
  storageClassName: <default-storage-class>
- Apply the YAML file:
oc apply -f <your_yaml_file>
- To verify your API Connect cluster is successfully installed, run the following command:
oc get apic -n <APIC-namespace>
- Verify that you can log in to the API Connect Cloud Manager UI:
To determine the location for logging in, view all the endpoints:
oc get routes -n <APIC-namespace>
- Locate the mgmt-admin-apic endpoint, and access the Cloud Manager UI.
- Log in as the API Connect administrator.
When you install with the top-level CR, the password is auto-generated. To get the password:
oc get secret -n <APIC-namespace> | grep mgmt-admin-pass
oc get secret -n <APIC-namespace> <secret_name_from_previous_command> -o jsonpath="{.data.password}" | base64 -d && echo
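The final stage of that pipeline decodes the base64-encoded secret value. A self-contained sketch with a sample encoded value (cGFzc3dvcmQ= is illustrative, not a real secret):

```shell
# Decode a sample base64 value the same way the pipeline above does;
# the trailing `echo` adds the newline that base64 -d omits.
SAMPLE_ENCODED="cGFzc3dvcmQ="   # base64 for the sample string "password"
printf '%s' "$SAMPLE_ENCODED" | base64 -d && echo
# password
```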
- Optional: Configure additional features related to inter-subsystem communication security, such as CA verification and JWT security: Enable additional features post-install.
- Optional: Increase the timeout settings for the API Connect management endpoints, particularly for large deployments. See Configuring timeouts for management endpoints.
What to do next
When you finish installing API Connect, prepare your deployment for disaster recovery so that your data can be restored in the event of an emergency.