Upgrading with a portable computer or storage device

Use a portable computer or portable storage device to perform an air-gapped upgrade of IBM® API Connect on Red Hat OpenShift Container Platform (OCP) using either the top-level APIConnectCluster CR or individual subsystem CRs. First the operator is updated, and then the operands (IBM API Connect itself).

Before you begin

  • If you are upgrading an installation online (connected to the internet), see Upgrading on OpenShift in an online environment.
  • The upgrade procedure requires you to use Red Hat Skopeo for moving container images. Skopeo is not available for Microsoft Windows, so you cannot perform this task using a Windows host.
  • If you are upgrading to a version of API Connect that supports a newer version of Red Hat OpenShift, complete the API Connect upgrade before upgrading Red Hat OpenShift.
  • Upgrading from 10.0.5.2 or earlier: If you did not verify that your Portal customizations are compatible with Drupal 10, do that now.

    In API Connect 10.0.5.3, the Developer Portal moved from Drupal 9 to Drupal 10 (which also requires PHP 8.1). The upgrade tooling updates your Developer Portal sites; however, if you have any custom modules or themes, it is your responsibility to ensure that they are compatible with Drupal 10 and PHP 8.1 before you start the upgrade. For guidance, review the Guidelines on upgrading your Developer Portal from Drupal 9 to Drupal 10.
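
    If you maintain a development copy of your site where Composer and Drush are available, one generic Drupal technique for finding incompatibilities is the community upgrade_status module; a sketch (the module, the commands, and the module name my_custom_module are assumptions, not part of the API Connect tooling):

      composer require --dev drupal/upgrade_status
      drush pm:enable upgrade_status
      drush upgrade_status:analyze my_custom_module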

About this task

  • The Gateway subsystem remains available during the upgrade of the Management, Portal, and Analytics subsystems.
  • Do not use the tilde character (~) inside double quotation marks in any command; the tilde does not expand there, and your commands might fail.

Procedure

  1. Ensure that you have completed all of the steps in Preparing to upgrade on OpenShift, including reviewing the Upgrade considerations on OpenShift.

    Do not attempt an upgrade until you have reviewed the considerations and prepared your deployment.

  2. Set up the mirroring environment.
    1. Prepare the target cluster:
    2. Prepare the portable device:

      You must be able to connect your portable device to the internet and to the restricted network environment (with access to the Red Hat OpenShift Container Platform (OCP) cluster and the local registry). The portable device must be a Linux x86_64 or Mac computer running an operating system that the Red Hat OpenShift Client supports. (On Windows, run the actions in a Linux x86_64 VM or from a Windows Subsystem for Linux terminal.)

      1. Ensure that the portable device has sufficient storage to hold all of the software that is to be transferred to the local registry.
      2. On the portable device, install either Docker or Podman (not both).

        Docker and Podman are used for managing containers; you only need to install one of these applications.

        • To install Docker (for example, on Red Hat Enterprise Linux), run the following commands:
          yum check-update
          yum install docker
        • To install Podman, see the Podman installation instructions.
          For example, on Red Hat Enterprise Linux 9, install Podman with the following command:
          yum install podman
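        Whichever tool you installed, you can confirm that it is on the PATH by checking its version; for example:
          podman --version   # or: docker --version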
      3. Install the Red Hat OpenShift Client tool (oc) as explained in Getting started with the OpenShift CLI.

        The oc tool is used for managing Red Hat OpenShift resources in the cluster.

      4. Download the IBM Catalog Management Plug-in for IBM Cloud Paks version 1.1.0 or later from GitHub.
        The ibm-pak plug-in enables you to access hosted product images, and to run oc ibm-pak commands against the cluster. To confirm that ibm-pak is installed, run the following command and verify that the response lists the command usage:
        oc ibm-pak --help
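        If you have not yet installed the plug-in binary, on Linux you might install it as follows (the downloaded file name is an assumption; check the release page for the correct asset for your platform):
        chmod +x oc-ibm_pak-linux-amd64
        # oc discovers plug-ins that are named oc-<name> on the PATH
        sudo mv oc-ibm_pak-linux-amd64 /usr/local/bin/oc-ibm_pak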
    3. Set up a local image registry and credentials.

      The local Docker registry stores the mirrored images in your network-restricted environment.

      1. Install a registry, or get access to an existing registry.

        You might already have access to one or more centralized, corporate registry servers to store the API Connect images. If not, then you must install and configure a production-grade registry before proceeding.

        The registry product that you use must meet the following requirements:
        • Supports multi-architecture images through Docker Manifest V2, Schema 2

          For details, see Docker Manifest V2, Schema 2.

        • Is accessible from the Red Hat OpenShift Container Platform cluster nodes
        • Allows path separators in the image name
        Note: Do not use the Red Hat OpenShift image registry as your local registry because it does not support multi-architecture images or path separators in the image name.
      2. Configure the registry to meet the following requirements:
        • Supports auto-repository creation
        • Has sufficient storage to hold all of the software that is to be transferred
        • Has the credentials of a user who can create and write to repositories (the mirroring process uses these credentials)
        • Has the credentials of a user who can read all repositories (the Red Hat OpenShift Container Platform cluster uses these credentials)

      To access your registries during an air-gapped installation, use an account that can write to the target local registry. To access your registries during runtime, use an account that can read from the target local registry.

  3. Set environment variables and download CASE files.

    Create environment variables to use while mirroring images, connect to the internet, and download the API Connect CASE files.

    1. Create the following environment variables with the installer image name and the image inventory on your portable device:
      export CASE_NAME=ibm-apiconnect
      export CASE_VERSION=4.0.4
      export ARCH=amd64

      For information on API Connect CASE versions and their corresponding operators and operands, see Operator, operand, and CASE versions.

    2. Connect your portable device to the internet (it does not need to be connected to the network-restricted environment at this time).
    3. Download the CASE file to your portable device:
      oc ibm-pak get $CASE_NAME --version $CASE_VERSION

      If you omit the --version parameter, the command downloads the latest version.
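
      By default, ibm-pak saves the downloaded CASE files under ~/.ibm-pak/data/cases. To confirm the download, you can list that directory; for example:
      ls ~/.ibm-pak/data/cases/$CASE_NAME/$CASE_VERSION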

  4. Mirror the images.

    The process of mirroring images pulls the images from the internet and pushes them to your local registry. After mirroring your images, you can configure your cluster and pull the images to it before installing API Connect.

    1. Generate mirror manifests.
      1. Define the environment variable $TARGET_REGISTRY by running the following command:
        export TARGET_REGISTRY=<target-registry>
        Replace <target-registry> with the IP address (or host name) and port of the local registry; for example: 172.16.0.10:5000. If you want the images to use a specific namespace within the target registry, you can specify it here; for example: 172.16.0.10:5000/registry_ns.
      2. Generate mirror manifests by running the following command:
        oc ibm-pak generate mirror-manifests $CASE_NAME file://integration --version $CASE_VERSION --final-registry $TARGET_REGISTRY

        If you need to filter for a specific image group, add the parameter --filter <image_group> to this command.

      The generate command creates the following files at ~/.ibm-pak/data/mirror/$CASE_NAME/$CASE_VERSION:
      • catalog-sources.yaml
      • catalog-sources-linux-<arch>.yaml (if there are architecture-specific catalog sources)
      • image-content-source-policy.yaml
      • images-mapping-to-filesystem.txt
      • images-mapping-from-filesystem.txt

      The files are used when mirroring the images to the TARGET_REGISTRY.
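
      Each mapping file lists one source=destination image pair per line. To spot-check what will be mirrored, you can inspect the first few entries; for example:
      head -3 ~/.ibm-pak/data/mirror/$CASE_NAME/$CASE_VERSION/images-mapping-to-filesystem.txt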

    2. Obtain an entitlement key for the entitled registry where the images are hosted:
      1. Log in to the IBM Container Library.
      2. In the Container software library, select Get entitlement key.
      3. In the "Access your container software" section, click Copy key.
      4. Copy the key to a safe location; you will use it to log in to cp.icr.io in the next step.
    3. Authenticate with the entitled registry where the images are hosted.

      The image pull secret allows you to authenticate with the entitled registry and access product images.

      1. Run the following command to export the path to the file that will store the authentication credentials that are generated on a Podman or Docker login:
        export REGISTRY_AUTH_FILE=$HOME/.docker/config.json

        The authentication file is typically located at $HOME/.docker/config.json on Linux or %USERPROFILE%/.docker/config.json on Windows.

      2. Log in to the cp.icr.io registry with Podman or Docker; for example:
        podman login cp.icr.io

        Use cp as the username and your entitlement key as the password.
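
        To avoid pasting the key interactively, you can pipe it from an environment variable; a sketch, assuming you exported the key as IBM_ENTITLEMENT_KEY:
        echo "$IBM_ENTITLEMENT_KEY" | podman login cp.icr.io --username cp --password-stdin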

    4. Update the API Connect CASE manifest to correctly reference the DataPower Operator image.

      Files for the DataPower Operator are now hosted on icr.io; however, the CASE manifest still refers to docker.io as the image host. To work around this issue, visit Airgap install failure due to 'unable to retrieve source image docker.io' in the DataPower documentation and update the manifest as instructed. The manifest for API Connect (which uses the DataPower Operator) is stored in ~/.ibm-pak/data/mirror/$CASE_NAME/$CASE_VERSION.

      After the manifest is updated, continue to the next step in this procedure.

    5. Mirror the images from the internet to the portable device.
      1. Define the environment variable $IMAGE_PATH by running the following command:
        export IMAGE_PATH=<image-path>
        where <image-path> indicates where the files will be stored on the portable device's file system.
      2. Mirror the images to the portable device:
        oc image mirror \
          -f ~/.ibm-pak/data/mirror/$CASE_NAME/$CASE_VERSION/images-mapping-to-filesystem.txt \
          --filter-by-os '.*' \
          -a $REGISTRY_AUTH_FILE \
          --skip-multiple-scopes \
          --max-per-registry=1 \
          --dir "$IMAGE_PATH"

        There might be a slight delay before you see a response to the command.
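
        When the command completes, the images are stored on the portable device in a Docker V2 directory layout (typically a v2 subdirectory). You can confirm that content was written; for example:
        ls "$IMAGE_PATH"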

    6. Move the portable device to the restricted-network environment.
      The procedure depends on the type of device that you are using:

      If you are using a portable computer, disconnect the device from the internet and connect it to the restricted-network environment. The same environment variables can be used.

      If you are using portable storage, complete the following steps:
      1. Transfer the following files to a device in the restricted-network environment (an example copy command follows these steps):
        • The ~/.ibm-pak directory.
        • The contents of the <image-path> that you specified in the previous step.
      2. Create the same environment variables as on the original device; for example:
        export CASE_NAME=ibm-apiconnect
        export CASE_VERSION=4.0.4
        export ARCH=amd64
        export REGISTRY_AUTH_FILE=$HOME/.docker/config.json
        export IMAGE_PATH=<image-path>
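
      For example, you might copy the artifacts with rsync; a sketch, assuming the portable storage is mounted at /mnt/usb (the mount point is an assumption):
        # on the internet-connected device: copy the CASE data and the mirrored images
        rsync -a ~/.ibm-pak/ /mnt/usb/ibm-pak/
        rsync -a "$IMAGE_PATH"/ /mnt/usb/images/
        # on the restricted-network device: restore both to the same paths
        rsync -a /mnt/usb/ibm-pak/ ~/.ibm-pak/
        rsync -a /mnt/usb/images/ "$IMAGE_PATH"/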
    7. Authenticate with the local registry.

      Log in to the local registry using an account that can write images to that registry; for example:

      podman login $TARGET_REGISTRY

      If the registry is insecure, add the following flag to the command: --tls-verify=false.

    8. Mirror the product images to the target registry.
      1. If you are using a portable computer, connect it to the restricted-network environment that contains the local registry.

        If you are using portable storage, you already transferred files to a device within the restricted-network environment.

      2. Run the following command to copy the images to the local registry:
        oc image mirror \
          -f ~/.ibm-pak/data/mirror/$CASE_NAME/$CASE_VERSION/images-mapping-from-filesystem.txt \
          -a $REGISTRY_AUTH_FILE \
          --filter-by-os '.*' \
          --skip-multiple-scopes \
          --max-per-registry=1 \
          --from-dir "$IMAGE_PATH"
        Note: If the local registry is not secured by TLS, or the certificate presented by the local registry is not trusted by your device, add the --insecure option to the command.

        There might be a slight delay before you see a response to the command.

    9. Configure the target cluster.

      Now that images have been mirrored to the local registry, the target cluster must be configured to pull the images from it. Complete the following steps to configure the cluster's global pull secret with the local registry's credentials and then instruct the cluster to pull the images from the local registry.

      1. Log in to your Red Hat OpenShift Container Platform cluster:
        oc login <openshift_url> -u <username> -p <password> -n <namespace>
      2. Update the global image pull secret for the cluster as explained in the Red Hat OpenShift Container Platform documentation.

        Updating the image pull secret provides the cluster with the credentials needed for pulling images from your local registry.
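
        A minimal sketch of that flow, following the extract, merge, and re-upload pattern from the Red Hat documentation (verify the commands against the documentation for your OCP version):
        # extract the current global pull secret to a local .dockerconfigjson file
        oc extract secret/pull-secret -n openshift-config --confirm --to=.
        # merge the local registry credentials into the extracted file
        podman login --authfile .dockerconfigjson $TARGET_REGISTRY
        # upload the merged secret back to the cluster
        oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=.dockerconfigjson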

      3. Create the ImageContentSourcePolicy, which instructs the cluster to pull the images from your local registry:
        oc apply -f ~/.ibm-pak/data/mirror/$CASE_NAME/$CASE_VERSION/image-content-source-policy.yaml
      4. Verify that each ImageContentSourcePolicy resource was created:
        oc get imageContentSourcePolicy
      5. Verify your cluster node status:
        oc get MachineConfigPool -w

        Wait until every MachineConfigPool reports UPDATED as True (and UPDATING as False) before proceeding to the next step.

  5. Apply the catalog sources.

    Now that you have mirrored images to the target cluster, apply the catalog sources.

    In the following steps, the ARCH variable must be set to amd64, s390x, or ppc64le, as appropriate for your environment.

    1. Export the variables for the command line to use:
      export CASE_NAME=ibm-apiconnect
      export CASE_VERSION=4.0.4
      export ARCH=amd64
    2. Generate the catalog source and save it in another directory in case you need to replicate this installation in the future.
      1. Get the catalog source:
        cat ~/.ibm-pak/data/mirror/${CASE_NAME}/${CASE_VERSION}/catalog-sources.yaml
      2. Get any architecture-specific catalog sources that you need to back up as well:
        cat ~/.ibm-pak/data/mirror/${CASE_NAME}/${CASE_VERSION}/catalog-sources-linux-${ARCH}.yaml

      You can also navigate to the directory in your file browser to copy these artifacts into files that you can keep for re-use or for pipelines.
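
      For example, to keep copies with your other installation artifacts (the backup directory name is arbitrary):
      mkdir -p ~/case-backups
      cp ~/.ibm-pak/data/mirror/${CASE_NAME}/${CASE_VERSION}/catalog-sources*.yaml ~/case-backups/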

    3. Apply the catalog sources to the cluster.
      1. Apply the universal catalog sources:
        oc apply -f ~/.ibm-pak/data/mirror/${CASE_NAME}/${CASE_VERSION}/catalog-sources.yaml
      2. Apply any architecture-specific catalog source:
        oc apply -f ~/.ibm-pak/data/mirror/${CASE_NAME}/${CASE_VERSION}/catalog-sources-linux-${ARCH}.yaml
      3. Confirm that the catalog source was created in the openshift-marketplace namespace:
        oc get catalogsource -n openshift-marketplace
  6. Update the operator channel.
    1. Open the Red Hat OpenShift web console and click Operators > Installed Operators > IBM API Connect > Subscriptions.
    2. Change the channel to the new version (v3.3), which triggers an upgrade of the API Connect operator.
      If you are upgrading from 10.0.4-ifix3 and the API Connect operator does not begin its upgrade within a few minutes, perform the following workaround to delete the ibm-ai-wmltraining subscription and its associated CSV (ClusterServiceVersion):
      1. Run the following command to get the name of the subscription:
        oc get subscription -n <APIC_namespace> --no-headers=true | grep ibm-ai-wmltraining | awk '{print $1}'
      2. Run the following command to delete the subscription:
        oc delete subscription <subscription-name> -n <APIC_namespace>
      3. Run the following command to get the name of the csv:
        oc get csv --no-headers=true -n <APIC_namespace> | grep ibm-ai-wmltraining | awk '{print $1}'
      4. Run the following command to delete the csv:
        oc delete csv <csv-name> -n <APIC_namespace>

      Deleting the subscription and csv triggers the API Connect operator upgrade.

    When the API Connect operator is updated, the new operator pod starts automatically.
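
    If you prefer the CLI to the web console, you can make the same channel change with oc patch; a sketch, assuming the subscription is named ibm-apiconnect (confirm the name with oc get subscription -n <APIC_namespace>):
      oc patch subscription ibm-apiconnect -n <APIC_namespace> \
        --type merge --patch '{"spec":{"channel":"v3.3"}}'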

  7. Verify that the API Connect operator was updated by completing the following steps:
    1. Get the name of the pod that hosts the operator by running the following command:
      oc get po -n <APIC_namespace> | grep apiconnect
      The response looks like the following example:
      ibm-apiconnect-7bdb795465-8f7rm                                   1/1     Running     0          4m23s
    2. Get the API Connect version deployed on that pod by running the following command:
      oc describe po <ibm-apiconnect-operator-podname> -n <APIC_namespace> | grep -i productversion
      The response looks like the following example:
      productVersion: 10.0.5.3
  8. If using a top-level CR: Update the top-level APIConnectCluster CR:

    The apiconnectcluster CR looks like the following example:

    apiVersion: apiconnect.ibm.com/v1beta1
    kind: APIConnectCluster
    metadata:
      labels:
        app.kubernetes.io/instance: apiconnect
        app.kubernetes.io/managed-by: ibm-apiconnect
        app.kubernetes.io/name: apiconnect-production
      name: prod
      namespace: <APIC_namespace>
    spec:
      allowUpgrade: true
      license:
        accept: true
        use: production
        license: L-GVEN-GFUPVE
      profile: n12xc4.m12
      version: 10.0.5.3
      storageClassName: rook-ceph-block
    1. Edit the apiconnectcluster CR by running the following command:
      oc -n <APIC_namespace> edit apiconnectcluster
    2. If upgrading from v10.0.4-ifix3, or upgrading from v10.0.1.7-eus (or higher): In the spec section, add a new allowUpgrade attribute and set it to true:
      spec:
        allowUpgrade: true

      The allowUpgrade attribute enables the upgrade to 10.0.5.x. Because the upgrade deletes your analytics data, the attribute is required to prevent an accidental upgrade.

    3. In the spec section, update the API Connect version:
      Change the version setting to 10.0.5.3.
    4. In the spec.gateway section of the CR, delete any template or dataPowerOverride sections.

      You cannot perform an upgrade if the CR contains an override.

    5. Save and close the CR.
      The response looks like the following example:
      apiconnectcluster.apiconnect.ibm.com/prod configured
      Known issue: Webhook error for incorrect license.

      If you did not update the license ID in the CR, then when you save your changes, the following webhook error might display:

      admission webhook "vapiconnectcluster.kb.io" denied the request: 
      APIConnectCluster.apiconnect.ibm.com "<instance-name>" is invalid: spec.license.license: 
      Invalid value: "L-RJON-BYGHM4": License L-RJON-BYGHM4 is invalid for the chosen version version. 
      Please refer license document https://ibm.biz/apiclicenses

      To resolve the error, see API Connect licenses for the list of available license IDs, and select the license ID that is appropriate for your deployment. Update the CR with the new license value as in the following example, and then save and apply your changes again.
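
      For example, using the license ID from the sample CR earlier in this step (replace it with the ID that matches your own version and license use):
      spec:
        license:
          accept: true
          use: production
          license: L-GVEN-GFUPVE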

  9. If using individual subsystem CRs: Start with the Management subsystem. Update the Management CR as follows:
    1. Edit the ManagementCluster CR:
      oc edit ManagementCluster -n <mgmt_namespace>
    2. If upgrading from v10.0.4-ifix3, or upgrading from v10.0.1.7-eus (or higher): In the spec section, add a new allowUpgrade attribute and set it to true.
      spec:
        allowUpgrade: true

      The allowUpgrade attribute enables the upgrade to 10.0.5.x. Because the upgrade to 10.0.5.x deletes your analytics data, the attribute is required to prevent an accidental upgrade.

    3. In the spec section, update the API Connect version:
      Change the version setting to 10.0.5.3.
    4. If you are upgrading to a version of API Connect that requires a new license, update the license value now.

      For the list of licenses, see API Connect licenses.

    5. Save and close the CR to apply your changes.
      The response looks like the following example:
      managementcluster.management.apiconnect.ibm.com/management edited
    6. Confirm that the Management subsystem upgrade is complete before proceeding to the next subsystem.

      Check the status of the upgrade with: oc get ManagementCluster -n <mgmt_namespace>, and wait until all pods are running at the new version. For example:

      oc get ManagementCluster -n <mgmt_namespace>
      NAME         READY   STATUS    VERSION    RECONCILED VERSION   AGE
      management   18/18   Running   10.0.5.3   10.0.5.3-1281        97m
    7. Repeat the process for the remaining subsystem CRs: GatewayCluster, PortalCluster, and then AnalyticsCluster.
      Important:
      • In the GatewayCluster CR, delete any template or dataPowerOverride sections. You cannot perform an upgrade if the CR contains an override.
      • If upgrading from v10.0.4-ifix3, or upgrading from v10.0.1.7-eus (or higher): The allowUpgrade attribute set in the Management CR must also be set in the AnalyticsCluster CR. It is not required for Gateway or Portal CRs.
  10. Validate that the upgrade was successfully deployed by running the following command:
    oc get apic -n <APIC_namespace>

    The response looks like the following example:

    NAME                                                     READY   STATUS    VERSION    RECONCILED VERSION   AGE
    analyticscluster.analytics.apiconnect.ibm.com/prod-a7s   5/5     Running   10.0.5.3   10.0.5.3             21h
    
    NAME                                        READY   STATUS   VERSION    RECONCILED VERSION   AGE
    apiconnectcluster.apiconnect.ibm.com/prod   4/4     Ready    10.0.5.3   10.0.5.3             22h
    
    NAME                                         PHASE     READY   SUMMARY                           VERSION    AGE
    datapowerservice.datapower.ibm.com/prod-gw   Running   True    StatefulSet replicas ready: 3/3   10.0.5.3   21h
    
    NAME                                         PHASE     LAST EVENT   WORK PENDING   WORK IN-PROGRESS   AGE
    datapowermonitor.datapower.ibm.com/prod-gw   Running                false          false              21h
    
    NAME                                                READY   STATUS    VERSION    RECONCILED VERSION   AGE
    gatewaycluster.gateway.apiconnect.ibm.com/prod-gw   2/2     Running   10.0.5.3   10.0.5.3             21h
    
    NAME                                                         READY   STATUS    VERSION    RECONCILED VERSION   AGE
    managementcluster.management.apiconnect.ibm.com/prod-mgmt   16/16   Running   10.0.5.3   10.0.5.3             22h
    
    NAME                                                                STATUS   ID                                  CLUSTER     TYPE   CR TYPE   AGE
    managementbackup.management.apiconnect.ibm.com/prod-mgmt-0f583bd9   Ready    20210505-141020F_20210506-011830I   prod-mgmt   incr   record    11h
    managementbackup.management.apiconnect.ibm.com/prod-mgmt-10af02ee   Ready    20210505-141020F                    prod-mgmt   full   record    21h
    managementbackup.management.apiconnect.ibm.com/prod-mgmt-148f0cfa   Ready    20210505-141020F_20210506-012856I   prod-mgmt   incr   record    11h
    managementbackup.management.apiconnect.ibm.com/prod-mgmt-20bd6dae   Ready    20210505-141020F_20210506-090753I   prod-mgmt   incr   record    3h28m
    managementbackup.management.apiconnect.ibm.com/prod-mgmt-40efdb38   Ready    20210505-141020F_20210505-195838I   prod-mgmt   incr   record    16h
    managementbackup.management.apiconnect.ibm.com/prod-mgmt-681aa239   Ready    20210505-141020F_20210505-220302I   prod-mgmt   incr   record    14h
    managementbackup.management.apiconnect.ibm.com/prod-mgmt-7f7150dd   Ready    20210505-141020F_20210505-160732I   prod-mgmt   incr   record    20h
    managementbackup.management.apiconnect.ibm.com/prod-mgmt-806f8de6   Ready    20210505-141020F_20210505-214657I   prod-mgmt   incr   record    14h
    managementbackup.management.apiconnect.ibm.com/prod-mgmt-868a066a   Ready    20210505-141020F_20210506-090140I   prod-mgmt   incr   record    3h34m
    managementbackup.management.apiconnect.ibm.com/prod-mgmt-cf9a85dc   Ready    20210505-141020F_20210505-210119I   prod-mgmt   incr   record    15h
    managementbackup.management.apiconnect.ibm.com/prod-mgmt-ef63b789   Ready    20210506-103241F                    prod-mgmt   full   record    83m
    
    NAME                                                                   STATUS     MESSAGE                                                     AGE
    managementdbupgrade.management.apiconnect.ibm.com/prod-mgmt-up-649mc   Complete   Upgrade is Complete (DB Schema/data are up-to-date)         142m
    managementdbupgrade.management.apiconnect.ibm.com/prod-mgmt-up-9mjhk   Complete   Fresh install is Complete (DB Schema/data are up-to-date)   22h
    
    NAME                                               READY   STATUS    VERSION    RECONCILED VERSION   AGE
    portalcluster.portal.apiconnect.ibm.com/prod-ptl   3/3     Running   10.0.5.3   10.0.5.3             21h
  11. If upgrading from v10.0.4-ifix3, or upgrading from v10.0.1.7-eus (or higher): Enable analytics as explained in Enabling Analytics after upgrading.
  12. Upgrade to Red Hat OpenShift Container Platform 4.12 if you have not already done so.
    Red Hat OpenShift requires you to upgrade in stages, so that you install every version between your starting point and your ending point. For example, to upgrade from 4.10 to 4.12, you must complete two upgrades:
    1. Upgrade to 4.11
    2. Upgrade to 4.12
    For upgrade instructions, see the Red Hat OpenShift documentation.
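    The staged upgrade can also be driven from the CLI; a sketch, assuming your cluster follows the stable channels:
      oc adm upgrade channel stable-4.11
      oc adm upgrade --to-latest=true
      # wait for the 4.11 upgrade to complete, then repeat for 4.12
      oc adm upgrade channel stable-4.12
      oc adm upgrade --to-latest=true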
  13. If upgrading to 10.0.5.3 (or later) from an earlier 10.0.5.x release: Review and configure the new inter-subsystem communication features: Optional post-upgrade steps for upgrade to 10.0.5.3 from earlier 10.0.5 release.
  14. Restart all nats-server pods by running the following command:
    oc -n <namespace> delete po -l app.kubernetes.io/name=natscluster
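
    To confirm that the pods were recreated, watch them return to the Running state; for example:
    oc -n <namespace> get po -l app.kubernetes.io/name=natscluster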

What to do next

Update your toolkit CLI by downloading it from IBM Fix Central or from the Cloud Manager UI; see Installing the toolkit.

If upgrading from v10.0.5.1 or earlier to v10.0.5.2: The changes to deployment profile CPU and memory limits that were introduced in 10.0.5.2 (see New deployment profiles and CPU licensing) can change the performance of your Management component. If you notice an obvious reduction in performance of the Management UI or toolkit CLI when multiple users work concurrently, open a support case.