Upgrading on OpenShift using portable storage

You can use a portable storage device to perform an air-gapped upgrade of IBM® API Connect on OpenShift Container Platform (OCP) when your cluster has no internet connectivity.

Before you begin

If you are upgrading an installation that is connected to the internet, see Upgrading on OpenShift and Cloud Pak for Integration in an online environment.

Attention: If you plan to upgrade to API Connect 10.0.5, first upgrade API Connect to 10.0.4-ifix3, and then upgrade OpenShift to 4.10 as explained in this procedure.
  • This upgrade procedure uses a single, top-level API Connect Cluster CR to deploy a newer version of API Connect. Only use these instructions if your current deployment was also installed with the top-level CR.
  • The Gateway subsystem remains available during the upgrade of the Management, Portal, and Analytics subsystems.
  • The upgrade procedure requires you to use Red Hat Skopeo for moving container images. Skopeo is not available for Microsoft Windows, so you cannot perform this task using a Windows host.

About this task

You can store the product code and images on a portable storage device and transfer them to a local, air-gapped network. This approach lets you install API Connect in your air-gapped environment without using a bastion host.

Notes:
  • This task requires you to use Red Hat Skopeo for moving container images. Skopeo is not available for Microsoft Windows, so you cannot perform this task on Windows.
  • Do not use the tilde character (~) within double quotation marks in any command, because the tilde does not expand and your commands might fail.

Procedure

  1. Ensure that you have completed all of the steps in Preparing to upgrade on OpenShift and Cloud Pak for Integration, including reviewing the Upgrade considerations on OpenShift and Cloud Pak for Integration.

    Do not attempt an upgrade until you have reviewed the considerations and prepared your deployment.

  2. Prepare a host that can be connected to the internet.
    Note: If you are using the same host that you used for installing API Connect, skip this step.

    The host must satisfy the following requirements:

    • The host must be on a Linux x86_64 platform, or any operating system that the IBM Cloud Pak CLI, the OpenShift CLI, and Red Hat Skopeo support.
    • The host locale must be set to English.
    • The host must have sufficient storage to hold all of the software that is to be transferred to the local Docker registry.

    Complete the following steps to set up your external host:

    1. Install OpenSSL version 1.1.1 or higher.
    2. Install Docker or Podman:
      • To install Docker (for example, on Red Hat® Enterprise Linux®), run the following commands:
        yum check-update
        yum install docker
        
      • To install Podman, see Podman Installation Instructions.
    3. Install httpd-tools by running the following command:
      yum install httpd-tools
      
    4. Install the IBM Cloud Pak CLI by completing the following steps:
      Install the latest version of the binary file for your platform. For more information, see cloud-pak-cli.
      1. Download the binary file by running the following command:
        wget https://github.com/IBM/cloud-pak-cli/releases/latest/download/<binary_file_name>
        
        For example:
        wget https://github.com/IBM/cloud-pak-cli/releases/latest/download/cloudctl-linux-amd64.tar.gz
      2. Extract the binary file by running the following command:
        tar -xf <binary_file_name>
      3. Run the following commands to modify and move the file:
        chmod 755 <file_name>
        mv <file_name> /usr/local/bin/cloudctl
      4. Confirm that cloudctl is installed by running the following command:
        cloudctl --help

        The cloudctl usage is displayed.

    5. Install the oc OpenShift Container Platform CLI tool.

      For more information, see Getting started with the CLI in the Red Hat OpenShift documentation.

    6. Install Red Hat Skopeo CLI version 1.0.0 or higher.

      For more information, see Installing Skopeo from packages.

    7. Run the following command to create a directory that serves as the offline store.

      The following example creates a directory called "upgrade_offline", which is used in the subsequent steps.

      mkdir $HOME/upgrade_offline
      Notes:
      • The $HOME/upgrade_offline store must be persistent so that you do not have to transfer data more than once. Persistence also lets you run the mirroring process multiple times or on a schedule.
      • The $HOME/upgrade_offline store must not use the same name that you used for the previous installation. If you reuse the original directory name, the api-connect-catalog tag is not updated correctly, the catalog source pods do not pick up the new image, and as a result the operator does not upgrade. Make sure that the updated directory is reflected in the environment variables in all of the subsequent steps.
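
    With the host set up, you can optionally confirm that the required tools are installed and on your PATH before you continue. This is a minimal check only; the exact output varies by platform and version:

      cloudctl version
      oc version --client
      skopeo --version
      openssl version
      docker --version    # or: podman --version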
  3. On the portable host, create environment variables for the installer and image inventory.
    Note: If you are using the same portable host that you used for installing API Connect, you can reuse the environment variables from the installation, but you must update the following variables for the new release:
    • CASE_VERSION - update to the newest CASE.
    • OFFLINEDIR - update to reflect new folder created for upgrade; for example, $HOME/upgrade_offline.

    Create the following environment variables with the installer image name and the image inventory. Set CASE_VERSION to the value for the new API Connect release. The CASE version shown in the example might not be correct for your deployment; see Operator, operand, and CASE versions for the correct CASE version.

    For example:

    export CASE_NAME=ibm-apiconnect
    export CASE_VERSION=3.0.7
    export CASE_ARCHIVE=$CASE_NAME-$CASE_VERSION.tgz
    export CASE_INVENTORY_SETUP=apiconnectOperatorSetup
    export OFFLINEDIR=$HOME/upgrade_offline
    export OFFLINEDIR_ARCHIVE=offline.tgz
    export CASE_REMOTE_PATH=https://github.com/IBM/cloud-pak/raw/master/repo/case/$CASE_NAME/$CASE_VERSION/$CASE_ARCHIVE
    export CASE_LOCAL_PATH=$OFFLINEDIR/$CASE_ARCHIVE
    
    export PORTABLE_DOCKER_REGISTRY_HOST=localhost
    export PORTABLE_DOCKER_REGISTRY_PORT=443
    export PORTABLE_DOCKER_REGISTRY=$PORTABLE_DOCKER_REGISTRY_HOST:$PORTABLE_DOCKER_REGISTRY_PORT
    export PORTABLE_DOCKER_REGISTRY_USER=username
    export PORTABLE_DOCKER_REGISTRY_PASSWORD=password
    export PORTABLE_DOCKER_REGISTRY_PATH=$OFFLINEDIR/imageregistry
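
    Note: A later command in this procedure (configure-creds-airgap for the ICR registry) references a $NAMESPACE variable. If it is not already set from your original installation, set it now to the namespace where API Connect is installed; replace the placeholder with your own namespace name:

    export NAMESPACE=<APIC_namespace>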
    
  4. Connect the portable host to the internet.

    Connect the portable host to the internet and disconnect it from the local, air-gapped network.

  5. Download the API Connect installer and image inventory by running the following command:
    cloudctl case save \
      --case $CASE_REMOTE_PATH \
      --outputdir $OFFLINEDIR
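
    You can optionally confirm that the download completed by listing the offline store; you should see the CASE archive (for example, ibm-apiconnect-3.0.7.tgz) among the saved files:

      ls $OFFLINEDIR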
    
  6. Mirror the images from the ICR (source) registry to the portable device's (destination) registry.
    1. Store the credentials for the ICR (source) registry.

      The following command stores and caches the IBM Entitled Registry credentials in a file on your file system in the $HOME/.airgap/secrets location:

      cloudctl case launch \
         --case $OFFLINEDIR/$CASE_ARCHIVE \
         --inventory $CASE_INVENTORY_SETUP \
         --action configure-creds-airgap \
         --namespace $NAMESPACE \
         --args "--registry cp.icr.io --user cp --pass <entitlement-key> --inputDir $OFFLINEDIR"
      
    2. Store the credentials for the portable device's (destination) registry.

      The following command stores and caches the Docker registry credentials in a file on your file system in the $HOME/.airgap/secrets location:

      cloudctl case launch \
        --case $CASE_LOCAL_PATH \
        --inventory $CASE_INVENTORY_SETUP \
        --action configure-creds-airgap \
        --args "--registry $PORTABLE_DOCKER_REGISTRY --user $PORTABLE_DOCKER_REGISTRY_USER --pass $PORTABLE_DOCKER_REGISTRY_PASSWORD"
      
    3. If needed, start the Docker registry service on the portable device.

      If you are using the same portable device that you used for installing API Connect, the Docker registry service might already be running.

      1. Initialize the Docker registry by running the following command:
        cloudctl case launch \
          --case $CASE_LOCAL_PATH \
          --inventory $CASE_INVENTORY_SETUP \
          --action init-registry \
          --args "--registry $PORTABLE_DOCKER_REGISTRY_HOST --user $PORTABLE_DOCKER_REGISTRY_USER --pass $PORTABLE_DOCKER_REGISTRY_PASSWORD --dir $PORTABLE_DOCKER_REGISTRY_PATH"
        
      2. Start the Docker registry by running the following command:
        cloudctl case launch \
          --case $CASE_LOCAL_PATH \
          --inventory $CASE_INVENTORY_SETUP \
          --action start-registry \
          --args "--registry $PORTABLE_DOCKER_REGISTRY_HOST --port $PORTABLE_DOCKER_REGISTRY_PORT --user $PORTABLE_DOCKER_REGISTRY_USER --pass $PORTABLE_DOCKER_REGISTRY_PASSWORD --dir $PORTABLE_DOCKER_REGISTRY_PATH"
        
    4. Mirror the images to the registry on the portable device.
      cloudctl case launch \
        --case $CASE_LOCAL_PATH \
        --inventory $CASE_INVENTORY_SETUP \
        --action mirror-images \
        --args "--registry $PORTABLE_DOCKER_REGISTRY --inputDir $OFFLINEDIR"
      
  7. Optional: Save the Docker registry image that you stored on the portable device.

    If your air-gapped network does not have a Docker registry image, you can save the image on the portable device and copy it later to the host in your air-gapped environment.

    docker save docker.io/library/registry:2.6 -o $PORTABLE_DOCKER_REGISTRY_PATH/registry-image.tar
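
    Later, on the air-gapped host where you plan to run the registry, you can load the saved image with docker load. This is a sketch; the path placeholder is illustrative, so adjust it to wherever you copied the archive:

    docker load -i <path_to_copied_archive>/registry-image.tar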
    
  8. Configure access to the local registry for installation.
    1. Create environment variables with the local Docker registry connection information.
      Note: If you are using the same portable host that you used for installing API Connect, you can reuse the environment variables from the installation, but you must update the CASE_VERSION variable for the new release.

      Create the following environment variables with the installer image name and the image inventory. Set CASE_VERSION to the same value for the new API Connect release that you used in step 3.

      For example:

      export CASE_NAME=ibm-apiconnect
      export CASE_VERSION=3.0.7
      export CASE_ARCHIVE=$CASE_NAME-$CASE_VERSION.tgz
      export CASE_INVENTORY_SETUP=apiconnectOperatorSetup
      export OFFLINEDIR=$HOME/upgrade_offline
      export OFFLINEDIR_ARCHIVE=offline.tgz
      export CASE_REMOTE_PATH=https://github.com/IBM/cloud-pak/raw/master/repo/case/$CASE_NAME/$CASE_VERSION/$CASE_ARCHIVE
      export CASE_LOCAL_PATH=$OFFLINEDIR/$CASE_ARCHIVE
      
      export LOCAL_DOCKER_REGISTRY_HOST=<IP_or_FQDN_of_local_docker_registry>
      export LOCAL_DOCKER_REGISTRY_PORT=443
      export LOCAL_DOCKER_REGISTRY=$LOCAL_DOCKER_REGISTRY_HOST:$LOCAL_DOCKER_REGISTRY_PORT
      export LOCAL_DOCKER_REGISTRY_USER=username
      export LOCAL_DOCKER_REGISTRY_PASSWORD=password
      
    2. Set up local registry credentials for mirroring.

      Store the credentials of the registry that is running on the internal host.

      cloudctl case launch \
        --case $CASE_LOCAL_PATH \
        --inventory $CASE_INVENTORY_SETUP \
        --action configure-creds-airgap \
        --args "--registry $LOCAL_DOCKER_REGISTRY --user $LOCAL_DOCKER_REGISTRY_USER --pass $LOCAL_DOCKER_REGISTRY_PASSWORD"
      
    3. If you use an insecure registry, add the local registry to the cluster's insecureRegistries list.
      oc patch image.config.openshift.io/cluster --type=merge -p '{"spec":{"registrySources":{"insecureRegistries":["'$LOCAL_DOCKER_REGISTRY'"]}}}'
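
      You can verify that the patch took effect by reading the setting back:

      oc get image.config.openshift.io/cluster -o jsonpath='{.spec.registrySources.insecureRegistries}'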
      
  9. If you are upgrading from any version prior to 10.0.4.0-ifix2, patch the CatalogSource named ibm-apiconnect-catalog to update the registry namespace.

    In version 10.0.4.0-ifix2, the registry namespace in the image name used for the CatalogSource ibm-apiconnect-catalog changed from ibmcom to cpopen. Patch the CatalogSource by running the following command:

    oc patch catalogsource ibm-apiconnect-catalog -n openshift-marketplace --type=merge -p='{"spec":{"image":"'${LOCAL_DOCKER_REGISTRY}'/cpopen/ibm-apiconnect-catalog:latest-amd64"}}'
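
    To confirm that the CatalogSource now references the image in your local registry, you can read the image field back:

    oc get catalogsource ibm-apiconnect-catalog -n openshift-marketplace -o jsonpath='{.spec.image}'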
  10. Update the operator channel:
    1. Confirm that the pod ibm-apiconnect-catalog-xxxx in the openshift-marketplace namespace was updated.
    2. Open the OpenShift web console and click Operators > Installed Operators > IBM API Connect > Subscriptions.
    3. Change the channel to the new version (v2.5), which triggers an upgrade of the API Connect operator.
    Known issues:
    • If you are upgrading to API Connect version 10.0.4-ifix2 or 10.0.4-ifix3, you might encounter the following error while updating the operator:
      Message: unpack job not completed: Unpack pod(openshift-marketplace/e9f169cee8bffacf9ab35d276a48b7207d9606e2b7a0a8087bc58b4ff7tx22l) container(pull) is pending. Reason: ImagePullBackOff, Message: Back-off pulling image "ibmcom/ibm-apiconnect-operator-bundle@sha256:ef0ce455270189c37a5dc0500219061959c041f88110f601f6e7bf8072df4943" Reason: JobIncomplete
      Resolve the error by completing the following steps to update the ImageContentSourcePolicy for your deployment:
      1. Log in to the OpenShift cluster UI as an administrator of your cluster.
      2. Click Search > Resources and search for ICSP.
      3. In the list of ICSPs, click ibm-apiconnect to edit it.
      4. In the ibm-apiconnect ICSP, click the YAML tab.
      5. In the spec.repositoryDigestMirrors section, locate the - mirrors: subsection that contains source: docker.io/ibmcom.
      6. Add a new mirror ending with /ibmcom to the section as in the following example:
        - mirrors:
            - <AIRGAP_REGISTRY_ADDRESS>/ibmcom
            - <AIRGAP_REGISTRY_ADDRESS>/cpopen
          source: docker.io/ibmcom
      7. If the job does not automatically continue, uninstall and reinstall the API Connect operator.
    • The certificate manager was upgraded in Version 10.0.4.0, and you might encounter an upgrade error if the CRD for the new certificate manager is not found. For information on the error messages that indicate this problem, and the steps to resolve it, see Upgrade error when the CRD for the new certificate manager is not found in the Troubleshooting installation and upgrade on OpenShift topic.
    • A null value for the backrestStorageType property in the pgcluster CR causes an error during the operator upgrade from versions earlier than 10.0.4.0. For information on the error messages that indicate this problem, and the steps to resolve it, see Operator upgrade fails with error from API Connect operator and Postgres operator in the Troubleshooting installation and upgrade on OpenShift topic.

    When the IBM API Connect operator is updated, the new pod starts automatically.
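
    If you prefer to change the channel from the command line instead of the web console, a patch like the following also works. This is a sketch; the Subscription name and its namespace are assumptions, so list the subscriptions first to confirm them:

      oc get subscription -n <APIC_namespace>
      # Replace <apiconnect_subscription_name> with the name returned above (hypothetical placeholder)
      oc patch subscription <apiconnect_subscription_name> -n <APIC_namespace> --type=merge -p '{"spec":{"channel":"v2.5"}}'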

  11. Verify that the API Connect operator was updated by completing the following steps:
    1. Get the name of the pod that hosts the operator by running the following command:
      oc get po -n <APIC_namespace> | grep apiconnect
      The response looks like the following example:
      ibm-apiconnect-7bdb795465-8f7rm                                   1/1     Running     0          4m23s
    2. Get the API Connect version deployed on that pod by running the following command:
      oc describe po <ibm-apiconnect-operator-podname> -n <APIC_namespace> | grep -i productversion
      The response looks like the following example:
      productVersion: 10.0.4.0-ifix3
  12. Check for certificate errors, and then recreate issuers and certificates if needed.

    In Version 10.0.4.0, API Connect upgraded its certificate manager, which might cause some certificate errors during the upgrade. Complete the following steps to check for certificate errors and correct them.

    1. Check the new API Connect operator's log for an error similar to the following example:
      {"level":"error","ts":1634966113.8442025,"logger":"controllers.AnalyticsCluster","msg":"Failed to set owner reference on certificate request","analyticscluster":"apic/minimum-a7s","certificate":"minimum-a7s-ca","error":"Object apic/minimum-a7s-ca is already owned by another Certificate controller minimum-a7s-ca",
      

      To correct this problem, delete all issuers and certificates generated with certmanager.k8s.io/v1alpha1. For certificates used by route objects, you must also delete the route and secret objects.

    2. Run the following commands to delete the issuers and certificates that were generated with certmanager.k8s.io/v1alpha1:
      oc delete issuers.certmanager.k8s.io minimum-self-signed minimum-ingress-issuer  minimum-mgmt-ca minimum-a7s-ca minimum-ptl-ca
      oc delete certs.certmanager.k8s.io minimum-ingress-ca minimum-mgmt-ca minimum-ptl-ca minimum-a7s-ca

      In the examples, minimum is the instance name of the top-level apiconnectcluster.

      When you delete the issuers and certificates, the new certificate manager generates replacements; this might take a few minutes.

    3. Verify that the new CA certs are refreshed and ready.

      Run the following command to verify the certificates:

      oc get certs minimum-ingress-ca minimum-mgmt-ca minimum-ptl-ca minimum-a7s-ca
      

      The CA certificates are ready when the AGE column shows that they were recently created and the READY column shows True.

    4. Delete the remaining old certificates, routes, and secret objects.

      Run the following commands:

      oc get certs.certmanager.k8s.io | awk '/minimum/{print $1}'  | xargs oc delete certs.certmanager.k8s.io
      oc delete certs.certmanager.k8s.io postgres-operator
      oc get routes --no-headers -o custom-columns=":metadata.name" | grep ^minimum- | xargs oc delete secrets
      oc get routes --no-headers -o custom-columns=":metadata.name" | grep ^minimum- | xargs oc delete routes
    5. Verify that no old issuers or certificates from your top-level instance remain.

      Run the following commands:

      oc get issuers.certmanager.k8s.io | grep minimum
      oc get certs.certmanager.k8s.io | grep minimum
      

      Both commands should report that no resources were found.

  13. Use apicops to validate the certificates.
    1. Run the following command:
      apicops upgrade:stale-certs -n <APIC_namespace>
    2. Delete any stale certificates that are managed by cert-manager.
      If a certificate failed the validation and it is managed by cert-manager, you can delete the stale certificate secret, and let cert-manager regenerate it. Run the following command:
      kubectl delete secret <stale-secret> -n <APIC_namespace>
    3. Restart the corresponding pod so that it can pick up the new secret.
      To determine which pod to restart, see the certificate reference information in the API Connect documentation.

    For information on the apicops tool, see The API Connect operations tool: apicops.

  14. Run the following command to delete the Postgres pods, which refreshes the new certificate:
    oc get pod -n <namespace> --no-headers=true | grep postgres | grep -v backup | awk '{print $1}' | xargs oc delete pod -n <namespace>
  15. If needed, delete the portal-www, portal-db, and portal-nginx pods to ensure that they use the new secrets.

    If you have the Developer Portal deployed, the portal-www, portal-db, and portal-nginx pods might need to be deleted so that they pick up the newly generated secrets when they restart. If the pods are not showing as "ready" in a timely manner, delete all of the pods at the same time (this causes downtime).

    Run the following commands to get the name of the portal CR and delete the pods:

    oc project <APIC_namespace>
    oc get ptl
    oc delete po -l app.kubernetes.io/instance=<name_of_portal_CR>
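
    After deleting the pods, you can watch them return to the Ready state by using the same label selector:

    oc get po -l app.kubernetes.io/instance=<name_of_portal_CR>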
    
  16. If needed, renew the internal certificates for the analytics subsystem.

    If you see analytics-storage-* or analytics-mq-* pods in the CrashLoopBackOff state, then renew the internal certificates for the analytics subsystem and force a restart of the pods.

    1. Switch to the project/namespace where analytics is deployed and run the following command to get the name of the analytics CR (AnalyticsCluster):
      oc project <APIC_namespace>
      oc get a7s

      You need the CR name for the remaining steps.

    2. Renew the internal certificates (CA, client, and server) by running the following commands:
      oc get certificate <name_of_analytics_CR>-ca -o=jsonpath='{.spec.secretName}' | xargs oc delete secret
      oc get certificate <name_of_analytics_CR>-client -o=jsonpath='{.spec.secretName}' | xargs oc delete secret
      oc get certificate <name_of_analytics_CR>-server -o=jsonpath='{.spec.secretName}' | xargs oc delete secret
      
    3. Force a restart of all analytics pods by running the following command:
      oc delete po -l app.kubernetes.io/instance=<name_of_analytics_CR>
      
  17. Update the top-level API Connect Cluster CR:
    1. Update the version field.

      For example:

      apiVersion: apiconnect.ibm.com/v1beta1
      kind: APIConnectCluster
      metadata:
        labels:
          app.kubernetes.io/instance: apiconnect
          app.kubernetes.io/managed-by: ibm-apiconnect
          app.kubernetes.io/name: apiconnect-production
        name: prod
        namespace: APIC_namespace
      spec:
        license:
          accept: true
          use: production
        profile: n12xc4.m12
        version: 10.0.4.0-ifix3
        storageClassName: rook-ceph-block

      Specify the currently deployed profile and use values, which might not match the example. If you want to change to a different profile, you can do it after completing the upgrade (for instructions, see Changing deployment profiles on OpenShift).

    2. In the spec.gateway section, delete the template override section, if it exists. You cannot perform an upgrade if the CR contains an override.
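
      If you no longer have the CR file that was used for installation, one way to obtain the currently deployed CR for editing is to export it, as in the following sketch. The file name matches the example in the next step, and you might want to remove server-managed fields such as status before you edit and apply it:

      oc get apiconnectcluster <apic-cr-name> -n <APIC_namespace> -o yaml > CR_file_name.yaml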
  18. Apply the updated top-level CR to upgrade the API Connect operand by running the following command:
    oc apply -f CR_file_name.yaml
    The response looks like the following example:
    apiconnectcluster.apiconnect.ibm.com/prod configured
  19. If needed, resolve gateway peering issues by completing the following steps.

    Due to the ingress issuer changes, the gateway pods must be scaled down and then back up. This process causes 5 to 10 minutes of downtime.

    1. Run the following command to verify that the management, portal, and analytics subsystems are Running:
      oc get apic --all-namespaces

      The response looks like the following example, with the gateway pods showing as Pending.

      NAME                                                        READY   STATUS    VERSION      RECONCILED VERSION   AGE
      analyticscluster.analytics.apiconnect.ibm.com/minimum-a7s   8/8     Running   10.0.4.0     10.0.4.0-2221        7d1h
      
      NAME                                           READY      STATUS    VERSION          RECONCILED VERSION   AGE
      apiconnectcluster.apiconnect.ibm.com/minimum   6/7        Pending   10.0.4.0         10.0.3.0-ifix1-351   47h
      
      NAME                                            PHASE     READY   SUMMARY                           VERSION      AGE
      datapowerservice.datapower.ibm.com/minimum-gw   Pending   True    StatefulSet replicas ready: 1/1   10.0.4.0     46h
      
      NAME                                            PHASE     LAST EVENT   WORK PENDING   WORK IN-PROGRESS   AGE
      datapowermonitor.datapower.ibm.com/minimum-gw   Pending                false          false              46h
      
      NAME                                                   READY   STATUS    VERSION          RECONCILED VERSION   AGE
      gatewaycluster.gateway.apiconnect.ibm.com/minimum-gw   0/2     Pending   10.0.4.0         10.0.4.0-2221        46h
      
      NAME                                                           READY   STATUS    VERSION          RECONCILED VERSION   AGE
      managementcluster.management.apiconnect.ibm.com/minimum-mgmt   16/16   Running   10.0.4.0         10.0.4.0-2221        47h
      
      NAME                                                                      STATUS     MESSAGE                                                     AGE
      managementdbupgrade.management.apiconnect.ibm.com/minimum-mgmt-up-pxl77   Complete   Fresh install is Complete (DB Schema/data are up-to-date)   46h
      managementdbupgrade.management.apiconnect.ibm.com/management-up-87fcz     Complete   Upgrade is Complete (DB Schema/data are up-to-date)         8h
      
      NAME                                                  READY   STATUS    VERSION        RECONCILED VERSION   AGE
      portalcluster.portal.apiconnect.ibm.com/minimum-ptl   3/3     Running   10.0.4.0       10.0.4.0-2221        46h
    2. Scale down the gateway firmware containers by editing the top-level APIConnectCluster CR and setting the replica count to 0.

      OpenShift:

      1. Run the following command to edit the CR:
        oc edit apiconnectcluster <apic-cr-name>
      2. In the spec.gateway section, set the replicaCount setting to 0:
        ...
        spec:
          gateway:
            replicaCount: 0
        ...

        If the setting is not already included in the CR, add it now as shown in the example.

      3. Save and exit the CR.

      Cloud Pak for Integration:

      1. In the Automation Platform UI, edit the API Connect instance.
      2. Click Advanced.
      3. In the Gateway subsystem section, set the Advance Replica count field to 0.
    3. Wait for the gateway firmware pods to scale down and terminate.

      Do not proceed to the next step until the pods are terminated.
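
      One way to watch the gateway pods terminate is to filter the pod list on the gateway CR name; the grep pattern assumes that the gateway pod names contain that CR name (for example, minimum-gw), as in the earlier output:

      oc get po -n <APIC_namespace> | grep <gateway_CR_name>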

    4. Reset the replica count to its original value.

      If the replica count setting was not used previously, then:

      • OpenShift: Delete the setting from the CR.
      • Cloud Pak for Integration: Clear the Advance Replica count field.
  20. Validate that the upgrade was successfully deployed by running the following command:
    oc get apic -n APIC_namespace

    The response looks like the following example:

    NAME                                                     READY   STATUS    VERSION              RECONCILED VERSION       AGE
    analyticscluster.analytics.apiconnect.ibm.com/prod-a7s   8/8     Running   10.0.4.0-ifix3   10.0.4.0-ifix3   21h
    
    NAME                                        READY   STATUS   VERSION              RECONCILED VERSION       AGE
    apiconnectcluster.apiconnect.ibm.com/prod   4/4     Ready    10.0.4.0-ifix3   10.0.4.0-ifix3   22h
    
    NAME                                         PHASE     READY   SUMMARY                           VERSION    AGE
    datapowerservice.datapower.ibm.com/prod-gw   Running   True    StatefulSet replicas ready: 3/3   10.0.4.0-ifix3   21h
    
    NAME                                         PHASE     LAST EVENT   WORK PENDING   WORK IN-PROGRESS   AGE
    datapowermonitor.datapower.ibm.com/prod-gw   Running                false          false              21h
    
    NAME                                                READY   STATUS    VERSION              RECONCILED VERSION       AGE
    gatewaycluster.gateway.apiconnect.ibm.com/prod-gw   2/2     Running   10.0.4.0-ifix3   10.0.4.0-ifix3   21h
    
    NAME                                                        READY   STATUS    VERSION              RECONCILED VERSION       AGE
    managementcluster.management.apiconnect.ibm.com/prod-mgmt   16/16   Running   10.0.4.0-ifix3   10.0.4.0-ifix3   22h
    
    NAME                                                                STATUS   ID                                  CLUSTER     TYPE   CR TYPE   AGE
    managementbackup.management.apiconnect.ibm.com/prod-mgmt-0f583bd9   Ready    20210505-141020F_20210506-011830I   prod-mgmt   incr   record    11h
    managementbackup.management.apiconnect.ibm.com/prod-mgmt-10af02ee   Ready    20210505-141020F                    prod-mgmt   full   record    21h
    managementbackup.management.apiconnect.ibm.com/prod-mgmt-148f0cfa   Ready    20210505-141020F_20210506-012856I   prod-mgmt   incr   record    11h
    managementbackup.management.apiconnect.ibm.com/prod-mgmt-20bd6dae   Ready    20210505-141020F_20210506-090753I   prod-mgmt   incr   record    3h28m
    managementbackup.management.apiconnect.ibm.com/prod-mgmt-40efdb38   Ready    20210505-141020F_20210505-195838I   prod-mgmt   incr   record    16h
    managementbackup.management.apiconnect.ibm.com/prod-mgmt-681aa239   Ready    20210505-141020F_20210505-220302I   prod-mgmt   incr   record    14h
    managementbackup.management.apiconnect.ibm.com/prod-mgmt-7f7150dd   Ready    20210505-141020F_20210505-160732I   prod-mgmt   incr   record    20h
    managementbackup.management.apiconnect.ibm.com/prod-mgmt-806f8de6   Ready    20210505-141020F_20210505-214657I   prod-mgmt   incr   record    14h
    managementbackup.management.apiconnect.ibm.com/prod-mgmt-868a066a   Ready    20210505-141020F_20210506-090140I   prod-mgmt   incr   record    3h34m
    managementbackup.management.apiconnect.ibm.com/prod-mgmt-cf9a85dc   Ready    20210505-141020F_20210505-210119I   prod-mgmt   incr   record    15h
    managementbackup.management.apiconnect.ibm.com/prod-mgmt-ef63b789   Ready    20210506-103241F                    prod-mgmt   full   record    83m
    
    NAME                                                                   STATUS     MESSAGE                                                     AGE
    managementdbupgrade.management.apiconnect.ibm.com/prod-mgmt-up-649mc   Complete   Upgrade is Complete (DB Schema/data are up-to-date)         142m
    managementdbupgrade.management.apiconnect.ibm.com/prod-mgmt-up-9mjhk   Complete   Fresh install is Complete (DB Schema/data are up-to-date)   22h
    
    NAME                                               READY   STATUS    VERSION              RECONCILED VERSION       AGE
    portalcluster.portal.apiconnect.ibm.com/prod-ptl   3/3     Running   10.0.4.0-ifix3   10.0.4.0-ifix3   21h
  21. Upgrade the OpenShift cluster to OpenShift 4.10.

    API Connect 10.0.5 requires that your cluster be on OpenShift 4.10 before you begin that upgrade.

    Upgrading OpenShift requires that you move to interim minor releases instead of upgrading directly from 4.6 to 4.10. For more information, see the Red Hat OpenShift documentation. In the "Documentation" banner, select the version of OpenShift that you want to upgrade to, and then expand the "Updating clusters" section in the navigation list.
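
    Before you start the OpenShift upgrade, you can check the cluster's current version and the update paths that are available to it by using standard OpenShift commands:

    oc get clusterversion
    oc adm upgrade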