Deprecated: Installing with a portable computer using cloudctl
You can use a portable computing device, such as a laptop, to install API Connect on OpenShift when your cluster has no internet connectivity.
Before you begin
Ensure that your environment is ready for API Connect by preparing for installation.
About this task
You can store the product code and images on a portable computing device, such as a laptop, and transfer them to a local, air-gapped network. By doing so, you can install API Connect in your air-gapped environment without using a bastion host.
Procedure
- Install an OpenShift Container Platform cluster.
- Prepare a Docker registry.
A local Docker registry is used to store all images in your restricted environment. You must create such a registry and ensure that it meets the following requirements:
- Supports Docker Manifest V2, Schema 2.
The internal Red Hat OpenShift registry is not compliant with Docker Manifest V2, Schema 2, so it is not suitable for use as a private registry for restricted environments.
- Is accessible from your OpenShift cluster nodes.
- Has the username and password of a user who can write to the target registry from the internal host.
- Has the username and password of a user who can read from the target registry that is on the OpenShift cluster nodes.
- Allows path separators in the image name.
An example of a simple registry is included in Mirroring images for a disconnected installation in the OpenShift documentation.
Verify that you:
- Have the credentials of a user who can write and create repositories. The internal host uses these credentials.
- Have the credentials of a user who can read all repositories. The OpenShift cluster uses these credentials.
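If you do not already have a compliant registry, the following is a minimal sketch of one way to run a self-hosted registry with htpasswd authentication, using the open source registry:2 image. The paths, port, and credentials are placeholders, and TLS certificate configuration (which a production registry needs) is omitted:
# Create an htpasswd file; registry:2 requires bcrypt (-B) entries.
mkdir -p /opt/registry/auth
htpasswd -Bbn <registry_user> <registry_password> > /opt/registry/auth/htpasswd
# Run the registry container with basic authentication enabled.
docker run -d --name local-registry -p 5000:5000 \
  -v /opt/registry/auth:/auth \
  -e REGISTRY_AUTH=htpasswd \
  -e "REGISTRY_AUTH_HTPASSWD_REALM=Registry Realm" \
  -e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd \
  registry:2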
- Prepare the portable host.
Prepare a portable host that can be connected to the internet.
- The external host must be on a Linux x86_64 platform, or any operating system that the IBM Cloud Pak CLI and the OpenShift CLI support.
- The external host locale must be set to English.
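For example, you can inspect the current locale and set an English locale for the session (en_US.UTF-8 is one common choice; any English locale works):
locale
export LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8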
The portable device must have sufficient storage to hold all of the software that is to be transferred to the local Docker registry.
Complete these steps on your external host:
- Install OpenSSL version 1.1.1 or higher.
- Install Docker or Podman.
- To install Docker (for example, on Red Hat® Enterprise Linux®), run these commands:
yum check-update
yum install docker
- To install Podman, see Podman Installation Instructions.
- Install httpd-tools:
yum install httpd-tools
- Install the IBM Cloud Pak CLI. Install the latest version of the binary file for your platform.
For more information, see cloud-pak-cli.
- Download the binary file.
wget https://github.com/IBM/cloud-pak-cli/releases/latest/download/<binary_file_name>
For example:
wget https://github.com/IBM/cloud-pak-cli/releases/latest/download/cloudctl-linux-amd64.tar.gz
- Extract the binary file.
tar -xf <binary_file_name>
- Run the following commands to modify and move the file.
chmod 755 <file_name>
mv <file_name> /usr/local/bin/cloudctl
- Confirm that cloudctl is installed:
cloudctl --help
The cloudctl usage is displayed.
- Install the oc OpenShift Container Platform CLI tool. For more information, see OpenShift Container Platform CLI tools.
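One common way to obtain the oc binary on Linux x86_64 is to download the client tarball from the OpenShift mirror site; the URL below fetches the latest stable client and is shown as an example only (pick the version that matches your cluster):
wget https://mirror.openshift.com/pub/openshift-v4/clients/ocp/stable/openshift-client-linux.tar.gz
tar -xf openshift-client-linux.tar.gz oc
mv oc /usr/local/bin/oc
oc version --client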
- Install the skopeo CLI version 1.0.0 or higher. For more information, see Installing skopeo from packages.
- Create a directory that serves as the offline store.
Following is an example directory, which is used in the subsequent steps.
mkdir $HOME/offline
Note: This offline store must be persistent to avoid transferring data more than once. The persistence also helps to run the mirroring process multiple times or on a schedule.
- Log in to the OpenShift Container Platform cluster as a cluster administrator; for example:
oc login <cluster_host:port> --username=<cluster_admin_user> --password=<cluster_admin_password>
- Create a namespace where you will install API Connect.
Create an environment variable with a namespace, and then create the namespace. For example:
export NAMESPACE=<APIC-namespace>
oc create namespace $NAMESPACE
where <APIC-namespace> is the namespace where you will install API Connect.
The namespace where you install API Connect must meet the following requirements:
- OpenShift restricts the use of default namespaces for installing non-cluster services. The following namespaces cannot be used to install API Connect: default, kube-system, kube-public, openshift-node, openshift-infra, openshift.
. - OpenShift: Only one top-level CR (APIConnectCluster) can be deployed in each namespace.
- Cloud Pak for Integration (Platform Navigator): Only one API Connect capability can be deployed in each namespace.
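If you script this step, a small guard such as the following sketch (not part of the product tooling) can reject a restricted namespace name before the namespace is created:
case "$NAMESPACE" in
  default|kube-system|kube-public|openshift-node|openshift-infra|openshift)
    # Restricted namespaces cannot host API Connect.
    echo "error: $NAMESPACE cannot be used to install API Connect" >&2
    ;;
  *)
    oc create namespace "$NAMESPACE"
    ;;
esac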
- Create environment variables for the installer and image inventory.
On your portable host, create the following environment variables with the installer image name and the image inventory. Set CASE_VERSION to the value for your API Connect release. See Operator, operand, and CASE version.
For example:
export CASE_NAME=ibm-apiconnect
export CASE_VERSION=2.1.17
export CASE_ARCHIVE=$CASE_NAME-$CASE_VERSION.tgz
export CASE_INVENTORY_SETUP=apiconnectOperatorSetup
export OFFLINEDIR=$HOME/offline
export OFFLINEDIR_ARCHIVE=offline.tgz
export CASE_REMOTE_PATH=https://github.com/IBM/cloud-pak/raw/master/repo/case/$CASE_NAME/$CASE_VERSION/$CASE_ARCHIVE
export CASE_LOCAL_PATH=$OFFLINEDIR/$CASE_ARCHIVE
export PORTABLE_DOCKER_REGISTRY_HOST=localhost
export PORTABLE_DOCKER_REGISTRY_PORT=443
export PORTABLE_DOCKER_REGISTRY=$PORTABLE_DOCKER_REGISTRY_HOST:$PORTABLE_DOCKER_REGISTRY_PORT
export PORTABLE_DOCKER_REGISTRY_USER=username
export PORTABLE_DOCKER_REGISTRY_PASSWORD=password
export PORTABLE_DOCKER_REGISTRY_PATH=$OFFLINEDIR/imageregistry
- Connect the portable host to the internet.
Connect the portable host to the internet and disconnect it from the local, air-gapped network.
- Download the API Connect installer and image inventory:
Download the installer and image inventory to the portable registry host.
cloudctl case save \
  --case $CASE_REMOTE_PATH \
  --outputdir $OFFLINEDIR
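To confirm the download, list the offline store; you should see the CASE archive (for example, ibm-apiconnect-2.1.17.tgz) together with the downloaded image inventory files (exact file names vary by CASE version):
ls -l $OFFLINEDIR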
- Mirror the images from the ICR (source) registry to the portable host's (destination) registry.
- Store the credentials for the ICR (source) registry.
The following command stores and caches the IBM Entitled Registry credentials in a file on your file system in the $HOME/.airgap/secrets location:
cloudctl case launch \
  --case $OFFLINEDIR/$CASE_ARCHIVE \
  --inventory $CASE_INVENTORY_SETUP \
  --action configure-creds-airgap \
  --namespace $NAMESPACE \
  --args "--registry cp.icr.io --user cp --pass <entitlement key> --inputDir $OFFLINEDIR"
- Store the credentials for the portable host's (destination) registry.
The following command stores and caches the registry credentials in a file on your file system in the $HOME/.airgap/secrets location:
cloudctl case launch \
  --case $CASE_LOCAL_PATH \
  --inventory $CASE_INVENTORY_SETUP \
  --action configure-creds-airgap \
  --args "--registry $PORTABLE_DOCKER_REGISTRY --user $PORTABLE_DOCKER_REGISTRY_USER --pass $PORTABLE_DOCKER_REGISTRY_PASSWORD"
- Run a Docker registry service on localhost.
- Initialize the Docker registry.
cloudctl case launch \
  --case $CASE_LOCAL_PATH \
  --inventory $CASE_INVENTORY_SETUP \
  --action init-registry \
  --args "--registry $PORTABLE_DOCKER_REGISTRY_HOST --user $PORTABLE_DOCKER_REGISTRY_USER --pass $PORTABLE_DOCKER_REGISTRY_PASSWORD --dir $PORTABLE_DOCKER_REGISTRY_PATH"
- Start the Docker registry.
cloudctl case launch \
  --case $CASE_LOCAL_PATH \
  --inventory $CASE_INVENTORY_SETUP \
  --action start-registry \
  --args "--registry $PORTABLE_DOCKER_REGISTRY_HOST --port $PORTABLE_DOCKER_REGISTRY_PORT --user $PORTABLE_DOCKER_REGISTRY_USER --pass $PORTABLE_DOCKER_REGISTRY_PASSWORD --dir $PORTABLE_DOCKER_REGISTRY_PATH"
- Mirror the images to the registry on the portable host.
cloudctl case launch \
  --case $CASE_LOCAL_PATH \
  --inventory $CASE_INVENTORY_SETUP \
  --action mirror-images \
  --args "--registry $PORTABLE_DOCKER_REGISTRY --inputDir $OFFLINEDIR"
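Optionally, confirm that the images arrived by querying the registry's v2 catalog API. The -k flag assumes a self-signed certificate; omit it if your registry has a trusted certificate:
curl -k -u $PORTABLE_DOCKER_REGISTRY_USER:$PORTABLE_DOCKER_REGISTRY_PASSWORD \
  https://$PORTABLE_DOCKER_REGISTRY/v2/_catalog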
- Optional: Save the Docker registry image.
If your air-gapped network doesn’t have a Docker registry image, you can save the image on the portable device and copy it later to the host in your air-gapped environment.
docker save docker.io/library/registry:2.6 -o $PORTABLE_DOCKER_REGISTRY_PATH/registry-image.tar
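Later, on a host inside the air-gapped environment, restore the saved image with docker load; for example:
docker load -i $PORTABLE_DOCKER_REGISTRY_PATH/registry-image.tar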
- Connect the portable host to the air-gapped network.
Connect the portable host to the air-gapped network and disconnect it from the internet.
- Mirror the images and configure the cluster.
Note: Don't use the tilde ~ within double quotation marks in any command. For example, don't use:
--args "--registry <registry> --user <registry_userid> --pass <registry_password> --inputDir ~/offline"
The tilde doesn't expand, and your commands might fail.
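Instead, write the path with $HOME or as an absolute path, both of which the shell expands inside double quotation marks; for example:
--args "--registry <registry> --user <registry_userid> --pass <registry_password> --inputDir $HOME/offline"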
- Create environment variables with the local Docker registry connection information.
export CASE_NAME=ibm-apiconnect
export CASE_VERSION=2.1.17
export CASE_ARCHIVE=$CASE_NAME-$CASE_VERSION.tgz
export CASE_INVENTORY_SETUP=apiconnectOperatorSetup
export OFFLINEDIR=$HOME/offline
export OFFLINEDIR_ARCHIVE=offline.tgz
export CASE_REMOTE_PATH=https://github.com/IBM/cloud-pak/raw/master/repo/case/$CASE_NAME/$CASE_VERSION/$CASE_ARCHIVE
export CASE_LOCAL_PATH=$OFFLINEDIR/$CASE_ARCHIVE
export LOCAL_DOCKER_REGISTRY_HOST=<IP_or_FQDN_of_local_docker_registry>
export LOCAL_DOCKER_REGISTRY_PORT=443
export LOCAL_DOCKER_REGISTRY=$LOCAL_DOCKER_REGISTRY_HOST:$LOCAL_DOCKER_REGISTRY_PORT
export LOCAL_DOCKER_REGISTRY_USER=<username>
export LOCAL_DOCKER_REGISTRY_PASSWORD=<password>
- Set up registry credentials for mirroring.
Store the credentials of the registry that is running on the internal host (created in the previous step).
cloudctl case launch \
  --case $CASE_LOCAL_PATH \
  --inventory $CASE_INVENTORY_SETUP \
  --action configure-creds-airgap \
  --args "--registry $LOCAL_DOCKER_REGISTRY --user $LOCAL_DOCKER_REGISTRY_USER --pass $LOCAL_DOCKER_REGISTRY_PASSWORD"
- Configure a global image pull secret and ImageContentSourcePolicy.
cloudctl case launch \
  --case $CASE_LOCAL_PATH \
  --namespace $NAMESPACE \
  --inventory $CASE_INVENTORY_SETUP \
  --action configure-cluster-airgap \
  --args "--registry $LOCAL_DOCKER_REGISTRY --inputDir $OFFLINEDIR"
Note: In OpenShift Container Platform version 4.6, this step restarts all cluster nodes. The cluster resources might be unavailable until the new pull secret is applied.
- Verify that the ImageContentSourcePolicy resource is created.
oc get imageContentSourcePolicy
- Optional: If you use an insecure registry, you must add the local registry to the cluster insecureRegistries list.
oc patch image.config.openshift.io/cluster --type=merge -p '{"spec":{"registrySources":{"insecureRegistries":["'$LOCAL_DOCKER_REGISTRY'"]}}}'
- Verify your cluster node status.
oc get nodes
After the ImageContentSourcePolicy and global image pull secret are applied, you might see the node status as Ready, Scheduling, or Disabled. Wait until all the nodes show a Ready status.
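Instead of polling oc get nodes manually, you can block until every node reports Ready; this oc wait form is a convenience sketch (adjust the timeout to your cluster size):
oc wait node --all --for=condition=Ready --timeout=30m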
- Create the CatalogSource.
API Connect can be installed by adding the CatalogSource for the mirrored operators to your cluster and using OLM to install the operators.
- Install the catalog.
This command adds the CatalogSource for the components to your cluster, so the cluster can access them from the private registry:
cloudctl case launch \
  --case $OFFLINEDIR/$CASE_ARCHIVE \
  --inventory $CASE_INVENTORY_SETUP \
  --action install-catalog \
  --namespace $NAMESPACE \
  --args "--registry $LOCAL_DOCKER_REGISTRY --inputDir $OFFLINEDIR --recursive"
- Verify that the CatalogSource for the API Connect installer operator is created.
oc get pods -n openshift-marketplace
oc get catalogsource -n openshift-marketplace
Known issue: In version 10.0.1.6, the install-catalog action installs the CatalogSource with the public host (icr.io) still attached to the image name. To fix this, patch the ibm-apiconnect-catalog CatalogSource with the following command:
oc patch catalogsource ibm-apiconnect-catalog -n openshift-marketplace --type=merge -p='{"spec":{"image":"'${LOCAL_DOCKER_REGISTRY}'/cpopen/ibm-apiconnect-catalog:latest-amd64"}}'
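As a further check, you can inspect the catalog source's connection state, which the Operator Lifecycle Manager sets to READY once the catalog pod is serving:
oc get catalogsource ibm-apiconnect-catalog -n openshift-marketplace \
  -o jsonpath='{.status.connectionState.lastObservedState}'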
- Install the API Connect Operator.
- Run the following command to install the operator:
cloudctl case launch \
  --case $OFFLINEDIR/$CASE_ARCHIVE \
  --inventory $CASE_INVENTORY_SETUP \
  --action install-operator \
  --namespace $NAMESPACE
Known issue: If you are installing API Connect version 10.0.1.6-eus, 10.0.1.6-ifix1-eus, 10.0.1.7-eus, or 10.0.1.8-eus, you might encounter the following error while installing the operator:
Message: unpack job not completed: Unpack pod(openshift-marketplace/e9f169cee8bffacf9ab35d276a48b7207d9606e2b7a0a8087bc58b4ff7tx22l) container(pull) is pending. Reason: ImagePullBackOff, Message: Back-off pulling image "ibmcom/ibm-apiconnect-operator-bundle@sha256:ef0ce455270189c37a5dc0500219061959c041f88110f601f6e7bf8072df4943" Reason: JobIncomplete
Resolve the error by completing the following steps to update the ImageContentSourcePolicy for your deployment:
- Log in to the OpenShift cluster UI as an administrator of your cluster.
- Click Search > Resources and search for ICSP.
- In the list of ICSPs, click ibm-apiconnect to edit it.
- In the ibm-apiconnect ICSP, click the YAML tab.
- In the spec.repositoryDigestMirrors section, locate the - mirrors: subsection that contains source: docker.io/ibmcom.
- Add a new mirror ending with /ibmcom to the section, as in the following example:
- mirrors:
  - <AIRGAP_REGISTRY_ADDRESS>/ibmcom
  - <AIRGAP_REGISTRY_ADDRESS>/cpopen
  source: docker.io/ibmcom
- If the job does not automatically continue, uninstall and reinstall the API Connect operator.
- Verify that the operator pods are running correctly:
oc get pods -n $NAMESPACE
The response looks like the following example:
NAME                                                     READY   STATUS    RESTARTS   AGE
datapower-operator-58745ffd96-fj9hz                      1/1     Running   0          9m21s
datapower-operator-conversion-webhook-566f565cd8-rzmfz   1/1     Running   0          8m43s
ibm-apiconnect-76847dbb67-7b5t5                          1/1     Running   0          9m15s
ibm-common-service-operator-6f45487b59-8mg4w             1/1     Running   0          9m18s
Note: There is a known issue on Kubernetes version 1.19.4 or higher that can cause the DataPower operator to fail to start. In this case, the DataPower Operator pods can fail to schedule and display the status message: no nodes match pod topology spread constraints (missing required label). For example:
0/15 nodes are available: 12 node(s) didn't match pod topology spread constraints (missing required label), 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.
You can work around the issue by completing the following steps:
- Uninstall the DataPower operator deployment with the following command:
cloudctl case launch \
  --case $OFFLINEDIR/$CASE_ARCHIVE \
  --inventory datapowerOperator \
  --action=uninstallOperator \
  --tolerance 1 \
  --namespace $NAMESPACE
- Download the DataPower case bundle with DataPower operator v1.2.1 or later with the following command:
cloudctl case save \
  --case <path-to-operator-tgz> \
  --outputdir $OFFLINEDIR/$CASE_ARCHIVE
You can download the DataPower CASE bundle tar file from the following location: https://github.com/IBM/cloud-pak/raw/master/repo/case/ibm-datapower-operator-cp4i/1.2.1.
For more information on downloading the operator, see https://ibm.github.io/datapower-operator-doc/install/case.
- Install the newer operator with the following command:
cloudctl case launch \
  --case $OFFLINEDIR/$CASE_ARCHIVE \
  --inventory datapowerOperator \
  --action install-operator \
  --namespace $NAMESPACE
- Install the subsystems (operands).
API Connect provides one top-level CR that includes all of the API Connect subsystems (Management, Developer Portal, Analytics, and Gateway Service).
- Create a YAML file to use for deploying the top-level CR. Use the template that applies to your deployment (non-production or production).
For information on the license, profile, and version settings, as well as additional configuration settings, see API Connect configuration settings.
- Non-production
apiVersion: apiconnect.ibm.com/v1beta1
kind: APIConnectCluster
metadata:
  labels:
    app.kubernetes.io/instance: apiconnect
    app.kubernetes.io/managed-by: ibm-apiconnect
    app.kubernetes.io/name: apiconnect-minimum
  name: <name_of_your_instance>
  namespace: <APIC-namespace>
spec:
  license:
    accept: true
    use: nonproduction
  profile: n3xc4.m16
  version: 10.0.1.12-eus
  storageClassName: <default-storage-class>
- Production
apiVersion: apiconnect.ibm.com/v1beta1
kind: APIConnectCluster
metadata:
  labels:
    app.kubernetes.io/instance: apiconnect
    app.kubernetes.io/managed-by: ibm-apiconnect
    app.kubernetes.io/name: apiconnect-production
  name: <name_of_your_instance>
  namespace: <APIC-namespace>
spec:
  license:
    accept: true
    use: production
  profile: n12xc4.m12
  version: 10.0.1.12-eus
  storageClassName: <default-storage-class>
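The storageClassName value must name a storage class that exists on your cluster. To list the available classes and see which one is annotated as the default:
oc get storageclass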
- Apply the YAML file:
oc apply -f <your_yaml_file>
- To verify that your API Connect cluster is successfully installed, run:
oc get apic -n <APIC-namespace>
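Deployment takes some time. One way to watch progress is to list the top-level CR until its status shows Ready; the exact status columns vary by release, so treat this as a general sketch:
oc get apiconnectcluster -n <APIC-namespace>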
- Verify that you can log in to the API Connect Cloud Manager UI:
To determine the location for logging in, view all the endpoints:
oc get routes -n <APIC-namespace>
- Locate the mgmt-admin-apic endpoint, and access the Cloud Manager UI.
endpoint, and access the Cloud Manager UI. - Login as admin.
When you install with the top-level CR, the password is auto-generated. To get the password:
oc get secret -n <APIC-namespace> | grep mgmt-admin-pass oc get secret -n <APIC-namespace> <secret_name_from_previous command> -o jsonpath="{.data.password}" | base64 -d && echo
- Optional: Increase the timeout settings for management endpoints, particularly for large deployments. See Configuring timeouts for management endpoints on OpenShift or Cloud Pak for Integration.
What to do next
When you finish installing API Connect, prepare your deployment for disaster recovery so that your data can be restored in the event of an emergency.