Upgrading with a portable computer or storage device

Use a portable computer or portable storage device to perform an air-gapped upgrade of IBM® API Connect on Red Hat OpenShift Container Platform (OCP) using either the top-level APIConnectCluster CR or individual subsystem CRs. First the operator is updated, and then the operands (IBM API Connect itself).

Before you begin

  • If you are upgrading an installation online (connected to the internet), see Upgrading on OpenShift in an online environment.
  • The upgrade procedure requires you to use Red Hat Skopeo for moving container images. Skopeo is not available for Microsoft Windows, so you cannot perform this task using a Windows host.
  • If you are upgrading to a version of API Connect that supports a newer version of Red Hat OpenShift, complete the API Connect upgrade before upgrading Red Hat OpenShift.
  • Upgrading from 10.0.5.2 or earlier: If you did not verify that your Portal customizations are compatible with Drupal 10, do that now.

    In API Connect 10.0.5.3, the Developer Portal moved from Drupal 9 to Drupal 10 (which also requires PHP 8.1). The upgrade tooling updates your Developer Portal sites; however, if you have any custom modules or themes, it is your responsibility to ensure that they are compatible with Drupal 10 and PHP 8.1 before you start the upgrade. For guidance, review the Guidelines on upgrading your Developer Portal from Drupal 9 to Drupal 10.

About this task

  • The Gateway subsystem remains available during the upgrade of the Management, Portal, and Analytics subsystems.
  • Do not use the tilde (~) inside double quotation marks in any command, because the shell does not expand a quoted tilde and your commands might fail.
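  The difference is visible directly in bash; the path used here is only an example:

```shell
# Tilde expansion happens only when the tilde is unquoted:
unquoted=~/.ibm-pak
quoted="~/.ibm-pak"

echo "$unquoted"   # expands to $HOME/.ibm-pak
echo "$quoted"     # stays literally ~/.ibm-pak, which is not a valid path
```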

Procedure

  1. Ensure that you have completed all of the steps in Preparing to upgrade on OpenShift, including reviewing the Upgrade considerations on OpenShift.

    Do not attempt an upgrade until you have reviewed the considerations and prepared your deployment.

  2. Ensure your API Connect deployment is ready to upgrade:
    • Your API Connect release (operand) supports a direct upgrade to this release.

      For information on the operator and operand version that is used with each API Connect release, see Operator, operand, and CASE versions.

    • The DataPower operator version is correct for the currently deployed version of API Connect.

      For information on upgrade paths and supported versions of DataPower Gateway, see Upgrade considerations on OpenShift.

    • Your deployment is running on a version of Red Hat OpenShift that is supported by both the current version of API Connect and the target version of API Connect.

      For information, see Supported versions of OpenShift.

  3. Set up the mirroring environment.
    1. Prepare the target cluster:
    2. Prepare the portable device:

      You must be able to connect your portable device to the internet and to the restricted network environment (with access to the Red Hat OpenShift Container Platform (OCP) cluster and the local registry). The portable device must run Linux x86_64 or macOS with an operating system version that the Red Hat OpenShift Client supports (on Windows, execute the actions in a Linux x86_64 VM or from a Windows Subsystem for Linux terminal).

      1. Ensure that the portable device has sufficient storage to hold all of the software that is to be transferred to the local registry.
      2. On the portable device, install either Docker or Podman (not both).

        Docker and Podman are used for managing containers; you only need to install one of these applications.

        • To install Docker (for example, on Red Hat Enterprise Linux), run the following commands:
          yum check-update
          yum install docker
        • To install Podman, see the Podman installation instructions.
          For example, on Red Hat Enterprise Linux 9, install Podman with the following command:
          yum install podman
      3. Install the Red Hat OpenShift Client tool (oc) as explained in Getting started with the OpenShift CLI.

        The oc tool is used for managing Red Hat OpenShift resources in the cluster.

      4. Download the IBM Catalog Management Plug-in for IBM Cloud Paks version 1.1.0 or later from GitHub.
        The ibm-pak plug-in enables you to access hosted product images, and to run oc ibm-pak commands against the cluster. To confirm that ibm-pak is installed, run the following command and verify that the response lists the command usage:
        oc ibm-pak --help
    3. Set up a local image registry and credentials.

      The local Docker registry stores the mirrored images in your network-restricted environment.

      1. Install a registry, or get access to an existing registry.

        You might already have access to one or more centralized, corporate registry servers to store the API Connect images. If not, then you must install and configure a production-grade registry before proceeding.

        The registry product that you use must meet the following requirements:
        • Supports multi-architecture images through Docker Manifest V2, Schema 2

          For details, see Docker Manifest V2, Schema 2.

        • Is accessible from the Red Hat OpenShift Container Platform cluster nodes
        • Allows path separators in the image name
        Note: Do not use the Red Hat OpenShift image registry as your local registry because it does not support multi-architecture images or path separators in the image name.
      2. Configure the registry to meet the following requirements:
        • Supports auto-repository creation
        • Has sufficient storage to hold all of the software that is to be transferred
        • Has the credentials of a user who can create and write to repositories (the mirroring process uses these credentials)
        • Has the credentials of a user who can read all repositories (the Red Hat OpenShift Container Platform cluster uses these credentials)

      To access your registries during an air-gapped installation, use an account that can write to the target local registry. To access your registries during runtime, use an account that can read from the target local registry.

  4. Set environment variables and download CASE files.

    Create environment variables to use while mirroring images, connect to the internet, and download the API Connect CASE files.

    1. Create the following environment variables with the installer image name and the image inventory on your portable device:

      Because you will use values from two different CASE files, you must create environment variables for both; notice that the variables for the foundational services (common services) CASE file are prefixed with "CS_" to differentiate them.

      export CASE_NAME=ibm-apiconnect
      export CASE_VERSION=4.0.8
      export ARCH=amd64

      For information on API Connect CASE versions and their corresponding operators and operands, see Operator, operand, and CASE versions.

      export CS_CASE_NAME=ibm-cp-common-services
      export CS_CASE_VERSION=1.15.10
      export CS_ARCH=amd64

      For example, for IBM Cloud Pak foundational services 3.19.X (Long Term Service Release), use version 1.15.10; for foundational services 3.23.X (Continuous Delivery), use version 1.19.2.

      For information on IBM Cloud Pak foundational services (common services) CASE versions, see "Table 1. Image versions for offline installation" in Installing IBM Cloud Pak foundational services in an air-gapped environment in the IBM Cloud Pak foundational services documentation.
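      As a sketch, the version mapping described above can be captured in a small helper function. The function name is illustrative, and the mappings only reflect the examples on this page; verify the current values against "Table 1" in the foundational services documentation before use:

```shell
# Illustrative helper: map a foundational services release to its CASE version.
# These mappings are examples only; confirm them in the IBM documentation.
cs_case_version() {
  case "$1" in
    3.19*) echo "1.15.10" ;;  # Long Term Service Release
    3.23*) echo "1.19.2"  ;;  # Continuous Delivery
    *) echo "unknown foundational services release: $1" >&2; return 1 ;;
  esac
}

export CS_CASE_VERSION="$(cs_case_version 3.19.4)"
echo "$CS_CASE_VERSION"   # 1.15.10
```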

    2. Connect your portable device to the internet (it does not need to be connected to the network-restricted environment at this time).
    3. Download the CASE file to your portable device:

      Be sure to download both CASE files as shown in the example:

      oc ibm-pak get $CASE_NAME --version $CASE_VERSION
      oc ibm-pak get $CS_CASE_NAME --version $CS_CASE_VERSION

      If you omit the --version parameter, the command downloads the latest version of the file.

  5. Mirror the images.

    The process of mirroring images pulls the images from the internet and pushes them to your local registry. After mirroring your images, you can configure your cluster and pull the images to it before installing API Connect.

    1. Generate mirror manifests.
      1. Define the environment variable $TARGET_REGISTRY by running the following command:
        export TARGET_REGISTRY=<target-registry>
        Replace <target-registry> with the IP address (or host name) and port of the local registry; for example: 172.16.0.10:5000. If you want the images to use a specific namespace within the target registry, you can specify it here; for example: 172.16.0.10:5000/registry_ns.
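        A quick sanity check that the value follows the host:port[/namespace] shape can help catch typos early; the registry address below is only an example:

```shell
# Example value; replace with your own registry's address and optional namespace.
export TARGET_REGISTRY=172.16.0.10:5000/registry_ns

# Warn if the value does not look like host:port[/namespace]:
case "$TARGET_REGISTRY" in
  *:[0-9]*) echo "TARGET_REGISTRY looks ok: $TARGET_REGISTRY" ;;
  *)        echo "warning: no port found in TARGET_REGISTRY" >&2 ;;
esac
```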
      2. Generate mirror manifests by running the following commands:
        oc ibm-pak generate mirror-manifests $CASE_NAME $TARGET_REGISTRY --version $CASE_VERSION
        oc ibm-pak generate mirror-manifests $CS_CASE_NAME $TARGET_REGISTRY --version $CS_CASE_VERSION

        If you need to filter for a specific image group, add the parameter --filter <image_group> to this command.

      The generate command creates the following files at ~/.ibm-pak/data/mirror/$CASE_NAME/$CASE_VERSION and ~/.ibm-pak/data/mirror/$CS_CASE_NAME/$CS_CASE_VERSION:
      • catalog-sources.yaml
      • catalog-sources-linux-<arch>.yaml (if there are architecture-specific catalog sources)
      • image-content-source-policy.yaml
      • images-mapping-to-filesystem.txt
      • images-mapping-from-filesystem.txt

      The files are used when mirroring the images to the TARGET_REGISTRY.

    2. Obtain an entitlement key for the entitled registry where the images are hosted:
      1. Log in to the IBM Container Library.
      2. In the Container software library, select Get entitlement key.
      3. In the "Access your container software" section, click Copy key.
      4. Copy the key to a safe location; you will use it to log in to cp.icr.io in the next step.
    3. Authenticate with the entitled registry where the images are hosted.

      The image pull secret allows you to authenticate with the entitled registry and access product images.

      1. Run the following command to export the path to the file that will store the authentication credentials that are generated on a Podman or Docker login:
        export REGISTRY_AUTH_FILE=$HOME/.docker/config.json

        The authentication file is typically located at $HOME/.docker/config.json on Linux or %USERPROFILE%/.docker/config.json on Windows.

      2. Log in to the cp.icr.io registry with Podman or Docker; for example:
        podman login cp.icr.io

        Use cp as the username and your entitlement key as the password.

    4. Update the API Connect CASE manifest to correctly reference the DataPower Operator image.

      Files for the DataPower Operator are now hosted on icr.io; however, the CASE manifest still refers to docker.io as the image host. To work around this issue, visit Airgap install failure due to 'unable to retrieve source image docker.io' in the DataPower documentation and update the manifest as instructed. The manifest for API Connect (which uses the DataPower Operator) is stored in ~/.ibm-pak/data/mirror/$CASE_NAME/$CASE_VERSION.

      After the manifest is updated, continue to the next step in this procedure.

    5. Mirror the images from the internet to the portable device.
      1. Define the environment variable $IMAGE_PATH by running the following command:
        export IMAGE_PATH=<image-path>
        where <image-path> indicates where the files will be stored on the portable device's file system.
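        Because the mirrored images are large, it is worth confirming free space at the chosen path before mirroring. A minimal sketch (the path is illustrative, and you must size the required space for your own transfer):

```shell
# Example location on the portable device; substitute your own path.
export IMAGE_PATH="$HOME/apic-mirror"
mkdir -p "$IMAGE_PATH"

# POSIX df: on the second output line, column 4 is the available space in KB.
avail_kb=$(df -Pk "$IMAGE_PATH" | awk 'NR==2 {print $4}')
echo "available at $IMAGE_PATH: ${avail_kb} KB"
```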
      2. Mirror the images to the portable device:
        oc image mirror \
          -f ~/.ibm-pak/data/mirror/$CASE_NAME/$CASE_VERSION/images-mapping-to-filesystem.txt \
          --filter-by-os '.*' \
          -a $REGISTRY_AUTH_FILE \
          --skip-multiple-scopes \
          --max-per-registry=1 \
          --dir "$IMAGE_PATH"
        oc image mirror \
          -f ~/.ibm-pak/data/mirror/$CS_CASE_NAME/$CS_CASE_VERSION/images-mapping-to-filesystem.txt \
          --filter-by-os '.*' \
          -a $REGISTRY_AUTH_FILE \
          --skip-multiple-scopes \
          --max-per-registry=1 \
          --dir "$IMAGE_PATH"

        There might be a slight delay before you see a response to the command.

    6. Move the portable device to the restricted-network environment.
      The procedure depends on the type of device that you are using:

      If you are using a portable computer, disconnect the device from the internet and connect it to the restricted-network environment. The same environment variables can be used.

      If you are using portable storage, complete the following steps:
      1. Transfer the following files to a device in the restricted-network environment:
        • The ~/.ibm-pak directory.
        • The contents of the <image-path> that you specified in the previous step.
      2. Create the same environment variables as on the original device; for example:
        export CASE_NAME=ibm-apiconnect
        export CASE_VERSION=4.0.8
        export ARCH=amd64
        export CS_CASE_NAME=ibm-cp-common-services
        export CS_CASE_VERSION=1.15.10
        export CS_ARCH=amd64
        export REGISTRY_AUTH_FILE=$HOME/.docker/config.json
        export IMAGE_PATH=<image-path>
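      As a sketch of the file transfer in step 1 above, both artifacts can be bundled into a single archive. The paths are illustrative, and any copy method that preserves the directory layout works equally well:

```shell
# Illustrative packaging of the two artifacts for transfer to the restricted
# network; IMAGE_PATH here is an example location, not a required one.
export IMAGE_PATH="${IMAGE_PATH:-$HOME/apic-mirror}"
mkdir -p "$HOME/.ibm-pak" "$IMAGE_PATH"

# Bundle ~/.ibm-pak and the mirrored-image directory into one archive.
tar -czf "$HOME/apic-transfer.tar.gz" \
  -C "$HOME" .ibm-pak \
  -C "$(dirname "$IMAGE_PATH")" "$(basename "$IMAGE_PATH")"

# On the target device, unpack into the same relative locations, for example:
#   tar -xzf apic-transfer.tar.gz -C "$HOME"
```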
    7. Authenticate with the local registry.

      Log in to the local registry using an account that can write images to that registry; for example:

      podman login $TARGET_REGISTRY

      If the registry is insecure, add the following flag to the command: --tls-verify=false.

    8. Mirror the product images to the target registry.
      1. If you are using a portable computer, connect it to the restricted-network environment that contains the local registry.

        If you are using portable storage, you already transferred files to a device within the restricted-network environment.

      2. Run the following commands to copy the images to the local registry:
        oc image mirror \
          -f ~/.ibm-pak/data/mirror/$CASE_NAME/$CASE_VERSION/images-mapping-from-filesystem.txt \
          -a $REGISTRY_AUTH_FILE \
          --filter-by-os '.*' \
          --skip-multiple-scopes \
          --max-per-registry=1 \
          --dir "$IMAGE_PATH"
        oc image mirror \
          -f ~/.ibm-pak/data/mirror/$CS_CASE_NAME/$CS_CASE_VERSION/images-mapping-from-filesystem.txt \
          -a $REGISTRY_AUTH_FILE \
          --filter-by-os '.*' \
          --skip-multiple-scopes \
          --max-per-registry=1 \
          --dir "$IMAGE_PATH"
        Note: If the local registry is not secured by TLS, or the certificate presented by the local registry is not trusted by your device, add the --insecure option to the command.

        There might be a slight delay before you see a response to the command.

    9. Configure the target cluster.

      Now that images have been mirrored to the local registry, the target cluster must be configured to pull the images from it. Complete the following steps to configure the cluster's global pull secret with the local registry's credentials and then instruct the cluster to pull the images from the local registry.

      1. Log in to your Red Hat OpenShift Container Platform cluster:
        oc login <openshift_url> -u <username> -p <password> -n <namespace>
      2. Update the global image pull secret for the cluster as explained in the Red Hat OpenShift Container Platform documentation.

        Updating the image pull secret provides the cluster with the credentials needed for pulling images from your local registry.

        Note: If you have an insecure registry, add the registry to the cluster's insecureRegistries list by running the following command:
        oc edit image.config.openshift.io/cluster -o yaml
        and add the TARGET_REGISTRY to spec.registrySources.insecureRegistries as shown in the following example:
        spec:
          registrySources:
            insecureRegistries:
            - insecure0.svc:5001
            - <TARGET_REGISTRY>
        If the insecureRegistries field does not exist, you can add it.
      3. Create the ImageContentSourcePolicy, which instructs the cluster to pull the images from your local registry (run both commands):
        oc apply -f ~/.ibm-pak/data/mirror/$CASE_NAME/$CASE_VERSION/image-content-source-policy.yaml
        oc apply -f ~/.ibm-pak/data/mirror/$CS_CASE_NAME/$CS_CASE_VERSION/image-content-source-policy.yaml
      4. Verify that each ImageContentSourcePolicy resource was created:
        oc get imageContentSourcePolicy
      5. Verify your cluster node status:
        oc get MachineConfigPool -w

        Wait for all nodes to be updated before proceeding to the next step.
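        To script the wait, you can poll until every pool reports UPDATED as True. The helper below (name and sample output are illustrative) checks a captured MachineConfigPool listing, where UPDATED is the third column:

```shell
# Succeeds (exit 0) only when every pool's UPDATED column reads True.
mcp_all_updated() {
  awk 'NR > 1 && $3 != "True" { bad = 1 } END { exit bad }'
}

# Sample text in the shape that 'oc get MachineConfigPool' produces:
sample='NAME     CONFIG                UPDATED   UPDATING   DEGRADED
master   rendered-master-abc   True      False      False
worker   rendered-worker-def   False     True       False'

if printf '%s\n' "$sample" | mcp_all_updated; then
  echo "all pools updated"
else
  echo "still updating"   # this sample prints: still updating
fi
```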

  6. Apply the catalog sources.

    Now that you have mirrored images to the target cluster, apply the catalog sources.

    In the following steps, set the architecture variables to amd64, s390x, or ppc64le as appropriate for your environment.

    1. Export the variables for the command line to use:
      export CASE_NAME=ibm-apiconnect
      export CASE_VERSION=4.0.8
      export ARCH=amd64
      export CS_CASE_NAME=ibm-cp-common-services
      export CS_CASE_VERSION=1.15.10
      export CS_ARCH=amd64
    2. Generate the catalog source and save it in another directory in case you need to replicate this installation in the future.
      1. Get the catalog source:
        cat ~/.ibm-pak/data/mirror/${CASE_NAME}/${CASE_VERSION}/catalog-sources.yaml
        cat ~/.ibm-pak/data/mirror/${CS_CASE_NAME}/${CS_CASE_VERSION}/catalog-sources.yaml
      2. (10.0.5.6 or earlier) Get any architecture-specific catalog sources that you need to back up as well:
        cat ~/.ibm-pak/data/mirror/${CASE_NAME}/${CASE_VERSION}/catalog-sources-linux-${ARCH}.yaml

        Starting with 10.0.5.7 this step is not needed.

      You can also navigate to the directory in your file browser to copy these artifacts into files that you can keep for re-use or for pipelines.

    3. Apply the catalog sources to the cluster.
      1. Apply the universal catalog sources:
        oc apply -f ~/.ibm-pak/data/mirror/${CASE_NAME}/${CASE_VERSION}/catalog-sources.yaml
        oc apply -f ~/.ibm-pak/data/mirror/${CS_CASE_NAME}/${CS_CASE_VERSION}/catalog-sources.yaml
      2. (10.0.5.6 or earlier) Apply any architecture-specific catalog source:
        oc apply -f ~/.ibm-pak/data/mirror/${CASE_NAME}/${CASE_VERSION}/catalog-sources-linux-${ARCH}.yaml

        Starting with 10.0.5.7 this step is not needed.

      3. Confirm that the catalog source was created in the openshift-marketplace namespace:
        oc get catalogsource -n openshift-marketplace
  7. Update the operator channel.
    1. Open the Red Hat OpenShift web console and click Operators > Installed Operators > IBM API Connect > Subscriptions.
    2. Change the channel to the new version (v3.7), which triggers an upgrade of the API Connect operator.
      If you are upgrading from 10.0.4-ifix3 and the API Connect operator does not begin its upgrade within a few minutes, perform the following workaround to delete the ibm-ai-wmltraining subscription and associated csv:
      1. Run the following command to get the name of the subscription:
        oc get subscription -n <APIC_namespace> --no-headers=true | grep ibm-ai-wmltraining | awk '{print $1}'
      2. Run the following command to delete the subscription:
        oc delete subscription <subscription-name> -n <APIC_namespace>
      3. Run the following command to get the name of the csv:
        oc get csv --no-headers=true -n <APIC_namespace> | grep ibm-ai-wmltraining | awk '{print $1}'
      4. Run the following command to delete the csv:
        oc delete csv <csv-name> -n <APIC_namespace>

      Deleting the subscription and csv triggers the API Connect operator upgrade.
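      The grep/awk filter in these commands simply extracts the first column of the matching row. Here it is applied to sample output in the shape that oc get subscription --no-headers=true produces (the names are illustrative):

```shell
# Two sample subscription rows; only the second matches the filter.
sample='ibm-apiconnect               ibm-apiconnect      ibm-operator-catalog  v3.7
ibm-ai-wmltraining-operator  ibm-ai-wmltraining  ibm-operator-catalog  v1.1'

sub_name=$(printf '%s\n' "$sample" | grep ibm-ai-wmltraining | awk '{print $1}')
echo "$sub_name"   # ibm-ai-wmltraining-operator
```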

    When the API Connect operator is updated, the new operator pod starts automatically.

  8. Verify that the API Connect operator was updated by completing the following steps:
    1. Get the name of the pod that hosts the operator by running the following command:
      oc get po -n <APIC_namespace> | grep apiconnect
      The response looks like the following example:
      ibm-apiconnect-7bdb795465-8f7rm                                   1/1     Running     0          4m23s
    2. Get the API Connect version deployed on that pod by running the following command:
      oc describe po <ibm-apiconnect-operator-podname> -n <APIC_namespace> | grep -i productversion
      The response looks like the following example:
      productVersion: 10.0.5.7
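    The grep -i in step 2 matches productVersion case-insensitively; to capture just the version number for scripting, extend the pipe with awk. A sketch against sample text in the shape that oc describe po produces:

```shell
# Sample fragment of 'oc describe po' output (contents illustrative).
sample='    Name:  ibm-apiconnect-7bdb795465-8f7rm
      productVersion: 10.0.5.7'

version=$(printf '%s\n' "$sample" | grep -i productversion | awk '{print $NF}')
echo "$version"   # 10.0.5.7
```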
  9. If you are using a top-level CR: Update the top-level APIConnectCluster CR:

    The spec section of the apiconnectcluster looks like the following example:

    apiVersion: apiconnect.ibm.com/v1beta1
    kind: APIConnectCluster
    metadata:
      labels:
        app.kubernetes.io/instance: apiconnect
        app.kubernetes.io/managed-by: ibm-apiconnect
        app.kubernetes.io/name: apiconnect-production
      name: prod
      namespace: <APIC_namespace>
    spec:
      allowUpgrade: true
      license:
        accept: true
        use: production
        license: L-GVEN-GFUPVE
      profile: n12xc4.m12
      version: 10.0.5.7
      storageClassName: rook-ceph-block
    1. Edit the apiconnectcluster CR by running the following command:
      oc -n <APIC_namespace> edit apiconnectcluster
    2. If upgrading from v10.0.4-ifix3, or upgrading from v10.0.1.7-eus (or higher): In the spec section, add a new allowUpgrade attribute and set it to true:
      spec:
        allowUpgrade: true

      The allowUpgrade attribute enables the upgrade to 10.0.5.x. Because the upgrade deletes your analytics data, the attribute is required to prevent an accidental upgrade.

    3. In the spec section, update the API Connect version:
      Change the version setting to 10.0.5.7.
    4. In the spec.gateway section of the CR, delete any template or dataPowerOverride sections.

      You cannot perform an upgrade if the CR contains an override.

    5. Save and close the CR.
      The response looks like the following example:
      apiconnectcluster.apiconnect.ibm.com/prod configured
      Note: If you see an error message when you attempt to save the CR, check if it is one of the following known issues:
      • Webhook error for incorrect license.
        If you did not update the license ID in the CR, then when you save your changes, the following webhook error might display:
        admission webhook "vapiconnectcluster.kb.io" denied the request: 
        APIConnectCluster.apiconnect.ibm.com "<instance-name>" is invalid: spec.license.license: 
        Invalid value: "L-RJON-BYGHM4": License L-RJON-BYGHM4 is invalid for the chosen version version. 
        Please refer license document https://ibm.biz/apiclicenses
        To resolve the error, see API Connect licenses for the list of available license IDs and select the appropriate license ID for your deployment. Update the CR with the new license value, and then save and apply your changes again.
      • Webhook error: Original PostgreSQL primary not found. Take the following actions to complete the upgrade and fix the cause of the error message:
        1. Edit your apiconnectcluster CR and add the following annotation:
          ...
          metadata:
            annotations:
              apiconnect-operator/db-primary-not-found-allow-upgrade: "true"
              ...
        2. Continue with the upgrade. When the upgrade is complete, the management CR reports the warning:
          Original PostgreSQL primary not found. Run apicops upgrade:pg-health-check to check the health of the database and to ensure pg_wal symlinks exist. If database health check passes please perform a management database backup and restore to restore the original PostgreSQL primary pod
        3. Take a new management database backup.
        4. Immediately restore from the new backup taken in the previous step. Taking and restoring a management backup establishes a new Postgres primary, which eliminates the CR warning message. Be careful to restore from the backup that was taken after the upgrade, not from a backup taken before the upgrade.
      • Webhook error: Original postgres primary is running as replica. Complete a Postgres failover, see Postgres failover steps. After you apply the Postgres failover steps, the upgrade resumes automatically.
    6. If needed, delete old Postgres client certificates.
      If you are upgrading from 10.0.1.x or 10.0.4.0-ifix1, or if you previously installed any of those versions before upgrading to 10.0.5.x, there might be old Postgres client certificates. To verify, run the following command:
      oc -n <namespace> get certs | grep db-client

      For example, if you see that both -db-client-apicuser and apicuser exist, apicuser is no longer in use. Remove the old certificates by running one of the following commands, depending on which old certificates remain in your system:

      oc -n <namespace> delete certs  apicuser pgbouncer primaryuser postgres replicator

      or:

      oc -n <namespace> delete certs  apicuser pgbouncer postgres replicator
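      Which of the two delete commands applies depends on whether the old primaryuser certificate is still present. A small illustrative filter (the function name and sample listing are assumptions, not part of the product) that reads the oc get certs output and prints the matching delete arguments:

```shell
# Illustrative: read 'oc -n <namespace> get certs' output on stdin and print
# the delete arguments that match the certs actually present.
old_cert_delete_args() {
  if grep -qw primaryuser; then
    echo "apicuser pgbouncer primaryuser postgres replicator"
  else
    echo "apicuser pgbouncer postgres replicator"
  fi
}

# Sample listing without a primaryuser cert:
sample='apicuser     True   apicuser     10h
pgbouncer    True   pgbouncer    10h
postgres     True   postgres     10h
replicator   True   replicator   10h'

printf '%s\n' "$sample" | old_cert_delete_args   # apicuser pgbouncer postgres replicator
```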
  10. If you are using individual subsystem CRs: Start with the Management subsystem and update by completing the following steps:
    1. Edit the ManagementCluster CR:
      oc edit ManagementCluster -n <mgmt_namespace>
    2. If upgrading from v10.0.4-ifix3, or upgrading from v10.0.1.7-eus (or higher): In the spec section, add a new allowUpgrade attribute and set it to true.
      spec:
        allowUpgrade: true

      The allowUpgrade attribute enables the upgrade to 10.0.5.x. Because the upgrade to 10.0.5.x deletes your analytics data, the attribute is required to prevent an accidental upgrade.

    3. In the spec section, update the API Connect version:
      Change the version setting to 10.0.5.7.
    4. If you are upgrading to a version of API Connect that requires a new license, update the license value now.

      For the list of licenses, see API Connect licenses.

    5. Save and close the CR to apply your changes.
      The response looks like the following example:
      managementcluster.management.apiconnect.ibm.com/management edited
      Note: If you see an error message when you attempt to save the CR, check if it is one of the following known issues:
      • Webhook error for incorrect license.
        If you did not update the license ID in the CR, then when you save your changes, the following webhook error might display:
        admission webhook "vapiconnectcluster.kb.io" denied the request: 
        APIConnectCluster.apiconnect.ibm.com "<instance-name>" is invalid: spec.license.license: 
        Invalid value: "L-RJON-BYGHM4": License L-RJON-BYGHM4 is invalid for the chosen version version. 
        Please refer license document https://ibm.biz/apiclicenses
        To resolve the error, see API Connect licenses for the list of available license IDs and select the appropriate license ID for your deployment. Update the CR with the new license value, and then save and apply your changes again.
      • Webhook error: Original PostgreSQL primary not found. Take the following actions to complete the upgrade and fix the cause of the error message:
        1. Edit your ManagementCluster CR and add the following annotation:
          ...
          metadata:
            annotations:
              apiconnect-operator/db-primary-not-found-allow-upgrade: "true"
              ...
        2. Continue with the upgrade. When the upgrade is complete, the management CR reports the warning:
          Original PostgreSQL primary not found. Run apicops upgrade:pg-health-check to check the health of the database and to ensure pg_wal symlinks exist. If database health check passes please perform a management database backup and restore to restore the original PostgreSQL primary pod
        3. Take a new management database backup.
        4. Immediately restore from the new backup taken in the previous step. Taking and restoring a management backup establishes a new Postgres primary, which eliminates the CR warning message. Be careful to restore from the backup that was taken after the upgrade, not from a backup taken before the upgrade.
      • Webhook error: Original postgres primary is running as replica. Complete a Postgres failover, see Postgres failover steps. After you apply the Postgres failover steps, the upgrade resumes automatically.
    6. Confirm that the Management subsystem upgrade is complete.

      Check the status of the upgrade with: oc get ManagementCluster -n <mgmt_namespace>, and wait until all pods are running at the new version. For example:

      oc get ManagementCluster -n <mgmt_namespace>
      NAME         READY   STATUS    VERSION    RECONCILED VERSION   AGE
      management   18/18   Running   10.0.5.7   10.0.5.7-1281        97m
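      The READY column reports ready pods over total pods; the subsystem is fully ready when the two numbers match. A sketch against a sample row of this output:

```shell
# Sample row in the shape 'oc get ManagementCluster' produces.
row='management   18/18   Running   10.0.5.7   10.0.5.7-1281        97m'

ready=$(echo "$row" | awk '{print $2}')     # e.g. 18/18
if [ "${ready%/*}" = "${ready#*/}" ]; then
  echo "management subsystem fully ready ($ready)"
fi
```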
    7. Management subsystem only: If needed, delete old Postgres client certificates.

      Skip this step for the Portal, Analytics, and Gateway subsystems.

      If you are upgrading from 10.0.1.x or 10.0.4.0-ifix1, or if you previously installed any of those versions before upgrading to 10.0.5.x, there might be old Postgres client certificates. To verify, run the following command:
      oc -n <namespace> get certs | grep db-client

      For example, if you see that both -db-client-apicuser and apicuser exist, apicuser is no longer in use. Remove the old certificates by running one of the following commands, depending on which old certificates remain in your system:

      oc -n <namespace> delete certs  apicuser pgbouncer primaryuser postgres replicator

      or:

      oc -n <namespace> delete certs  apicuser pgbouncer postgres replicator
    8. Repeat the process for the remaining subsystem CRs: GatewayCluster, PortalCluster, and then AnalyticsCluster.
      Important:
      • In the GatewayCluster CR, delete any template or dataPowerOverride sections. You cannot perform an upgrade if the CR contains an override.
      • If upgrading from v10.0.4-ifix3, or upgrading from v10.0.1.7-eus (or higher): The allowUpgrade attribute set in the Management CR must also be set in the AnalyticsCluster CR. It is not required for Gateway or Portal CRs.
  11. Validate that the upgrade was successfully deployed by running the following command:
    oc get apic -n <APIC_namespace>

    The response looks like the following example:

     NAME                                                     READY   STATUS    VERSION    RECONCILED VERSION   AGE
     analyticscluster.analytics.apiconnect.ibm.com/prod-a7s   5/5     Running   10.0.5.7   10.0.5.7             21h
     
     NAME                                        READY   STATUS   VERSION    RECONCILED VERSION   AGE
     apiconnectcluster.apiconnect.ibm.com/prod   4/4     Ready    10.0.5.7   10.0.5.7             22h
     
     NAME                                         PHASE     READY   SUMMARY                           VERSION    AGE
     datapowerservice.datapower.ibm.com/prod-gw   Running   True    StatefulSet replicas ready: 3/3   10.0.5.7   21h
     
     NAME                                         PHASE     LAST EVENT   WORK PENDING   WORK IN-PROGRESS   AGE
     datapowermonitor.datapower.ibm.com/prod-gw   Running                false          false              21h
     
     NAME                                                READY   STATUS    VERSION    RECONCILED VERSION   AGE
     gatewaycluster.gateway.apiconnect.ibm.com/prod-gw   2/2     Running   10.0.5.7   10.0.5.7             21h
     
     NAME                                                        READY   STATUS    VERSION    RECONCILED VERSION   AGE
     managementcluster.management.apiconnect.ibm.com/prod-mgmt   16/16   Running   10.0.5.7   10.0.5.7             22h
    
    NAME                                                                STATUS   ID                                  CLUSTER     TYPE   CR TYPE   AGE
    managementbackup.management.apiconnect.ibm.com/prod-mgmt-0f583bd9   Ready    20210505-141020F_20210506-011830I   prod-mgmt   incr   record    11h
    managementbackup.management.apiconnect.ibm.com/prod-mgmt-10af02ee   Ready    20210505-141020F                    prod-mgmt   full   record    21h
    managementbackup.management.apiconnect.ibm.com/prod-mgmt-148f0cfa   Ready    20210505-141020F_20210506-012856I   prod-mgmt   incr   record    11h
    managementbackup.management.apiconnect.ibm.com/prod-mgmt-20bd6dae   Ready    20210505-141020F_20210506-090753I   prod-mgmt   incr   record    3h28m
    managementbackup.management.apiconnect.ibm.com/prod-mgmt-40efdb38   Ready    20210505-141020F_20210505-195838I   prod-mgmt   incr   record    16h
    managementbackup.management.apiconnect.ibm.com/prod-mgmt-681aa239   Ready    20210505-141020F_20210505-220302I   prod-mgmt   incr   record    14h
    managementbackup.management.apiconnect.ibm.com/prod-mgmt-7f7150dd   Ready    20210505-141020F_20210505-160732I   prod-mgmt   incr   record    20h
    managementbackup.management.apiconnect.ibm.com/prod-mgmt-806f8de6   Ready    20210505-141020F_20210505-214657I   prod-mgmt   incr   record    14h
    managementbackup.management.apiconnect.ibm.com/prod-mgmt-868a066a   Ready    20210505-141020F_20210506-090140I   prod-mgmt   incr   record    3h34m
    managementbackup.management.apiconnect.ibm.com/prod-mgmt-cf9a85dc   Ready    20210505-141020F_20210505-210119I   prod-mgmt   incr   record    15h
    managementbackup.management.apiconnect.ibm.com/prod-mgmt-ef63b789   Ready    20210506-103241F                    prod-mgmt   full   record    83m
    
    NAME                                                                   STATUS     MESSAGE                                                     AGE
    managementdbupgrade.management.apiconnect.ibm.com/prod-mgmt-up-649mc   Complete   Upgrade is Complete (DB Schema/data are up-to-date)         142m
    managementdbupgrade.management.apiconnect.ibm.com/prod-mgmt-up-9mjhk   Complete   Fresh install is Complete (DB Schema/data are up-to-date)   22h
    
    NAME                                               READY   STATUS    VERSION    RECONCILED VERSION   AGE
    portalcluster.portal.apiconnect.ibm.com/prod-ptl   3/3     Running   10.0.5.7   10.0.5.7             21h

    Important: If you need to restart the deployment, wait until all Portal sites complete the upgrade. Run the following commands to check the status of the sites:
    1. Log in as an admin user:
      apic login -s <server_name> --realm admin/default-idp-1 --username admin --password <password>
    2. Get the portal service ID and endpoint:
      apic portal-services:get -o admin -s <management_server_endpoint> \
                   --availability-zone availability-zone-default <portal-service-name> \
                   --output - --format json
    3. List the sites:
      apic --mode portaladmin sites:list -s <management_server_endpoint> \
                   --portal_service_name <portal-service-name> \
                   --format json

      Any sites currently upgrading display the UPGRADING status; any site that completed its upgrade displays the INSTALLED status and the new platform version. Verify that all sites display the INSTALLED status before proceeding.

      For more information on the sites command, see apic sites:list and Using the sites commands.

    4. After all sites are in the INSTALLED state and list the new platform, run the following command:
      apic --mode portaladmin platforms:list -s <server_name> --portal_service_name <portal_service_name>

      Verify that the new version of the platform is the only platform listed.

      For more information on the platforms command, see apic platforms:list and Using the platforms commands.
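The version checks in this step can be scripted. The following sketch defines a small helper that reads `oc get ... --no-headers` table rows and flags any cluster CR whose VERSION and RECONCILED VERSION columns differ. The field positions (4 and 5) match the cluster CRs in the example output above; other resource types (for example, managementbackup) use different columns, so pass only the cluster CRs to the helper. The live `oc` invocation in the comment requires cluster access, and `<APIC_namespace>` is a placeholder.

```shell
# check_versions: read `oc get ... --no-headers` rows on stdin and flag any
# row whose VERSION (field 4) and RECONCILED VERSION (field 5) differ.
check_versions() {
  awk 'NF >= 5 && $4 != $5 { print "MISMATCH: " $1; bad = 1 } END { exit bad }'
}

# Live usage (requires cluster access); <APIC_namespace> is a placeholder:
#   oc get managementcluster,gatewaycluster,portalcluster,analyticscluster \
#     -n <APIC_namespace> --no-headers | check_versions
```

The helper exits non-zero when any mismatch is found, so it can gate a larger upgrade-validation script.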

  12. Upgrading to 10.0.5.5: Verify that the GatewayCluster upgraded correctly.

    When upgrading to 10.0.5.5 on OpenShift, the rolling update might fail to start on gateway operand pods due to a gateway peering issue, even though the reconciled version on the gateway CR (incorrectly) displays as 10.0.5.5. Complete the following steps to check for this issue and correct it if needed.

    1. Check the productVersion of each gateway pod to verify that it is 10.0.5.7 (the version of DataPower Gateway that was released with API Connect 10.0.5.5) by running one of the following commands:
      oc get po -n <apic_namespace> <gateway_pods> -o yaml | yq .metadata.annotations.productVersion
      or
      oc get po -n <apic_namespace> <gateway_pods> -o custom-columns="productVersion:.metadata.annotations.productVersion"

      where:
      • <apic_namespace> is the namespace where API Connect is installed
      • <gateway_pods> is a space-delimited list of the names of your gateway peering pods
    2. If any pod returns an incorrect value for the version, resolve the issue as explained in Incorrect productVersion of gateway pods after upgrade.
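A loop like the following can automate this check. It is a sketch only: the expected version and the pod and namespace names are placeholders, and the `custom-columns` invocation in the comment mirrors the command shown in the step above.

```shell
# Expected DataPower version for this upgrade (illustrative value).
expected="10.0.5.7"

# check_pods: read lines of "<pod-name> <productVersion>" on stdin and
# warn about any pod that does not report the expected version.
check_pods() {
  while read -r pod version; do
    [ "$version" = "$expected" ] || echo "WARNING: $pod reports $version"
  done
}

# Live usage (requires cluster access); placeholders as in the step above:
#   oc get po -n <apic_namespace> <gateway_pods> --no-headers \
#     -o custom-columns='NAME:.metadata.name,VERSION:.metadata.annotations.productVersion' \
#     | check_pods
```

Any pod flagged with a WARNING line should be corrected as described in Incorrect productVersion of gateway pods after upgrade.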
  13. Optional: If you upgraded from 10.0.5.4 or earlier, delete the DataPowerService CRs so that they are regenerated with random gateway-peering passwords.

    Starting with API Connect 10.0.5.5, GatewayCluster pods are configured by default to secure the gateway-peering sessions with a unique, randomly generated password. However, GatewayCluster pods created prior to API Connect 10.0.5.5 are configured to use a single, hard-coded password and upgrading to 10.0.5.5 or later does not replace the hard-coded password.

    After upgrading to API Connect 10.0.5.5 or later, you can choose to secure the gateway-peering sessions by running the following command to delete the DataPowerService CR that was created by the GatewayCluster:

    oc delete dp <gateway_cluster_name>

    This action prompts the API Connect Operator to recreate the DataPowerService CR with the unique, randomly generated password. This is a one-time change and does not need to be repeated for subsequent upgrades.

  14. If upgrading from v10.0.4-ifix3, or upgrading from v10.0.1.7-eus (or higher): Enable analytics as explained in Enabling Analytics after upgrading.
  15. If you are upgrading to 10.0.5.3 (or later) from an earlier 10.0.5.x release: Review and configure the new inter-subsystem communication features: Optional post-upgrade steps for upgrade to 10.0.5.3 from earlier 10.0.5 release.
  16. Restart all nats-server pods by running the following command:
    oc -n <namespace> delete po -l app.kubernetes.io/name=natscluster
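After deleting the pods, you can block until the replacement nats-server pods are Ready before continuing. This sketch uses the standard `oc wait` command; the 5-minute timeout is an arbitrary choice, and `<namespace>` is the same placeholder as in the delete command above.

```shell
# Delete the nats-server pods, then wait for the replacements to be Ready.
oc -n <namespace> delete po -l app.kubernetes.io/name=natscluster
oc -n <namespace> wait --for=condition=Ready pod \
  -l app.kubernetes.io/name=natscluster --timeout=5m
```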

What to do next

Update your toolkit CLI by downloading it from IBM Fix Central or from the Cloud Manager UI; see Installing the toolkit.

If you are upgrading from v10.0.5.1 or earlier to v10.0.5.2: The change in deployment profile CPU and memory limits that are introduced in 10.0.5.2 (see New deployment profiles and CPU licensing) can result in a change in the performance of your Management component. If you notice any obvious reduction in performance of the Management UI or toolkit CLI where you have multiple concurrent users, open a support case.