Planning your API Connect upgrade on OpenShift

Review the supported upgrade paths and other upgrade considerations on OpenShift or Cloud Pak for Integration.

Supported upgrade paths for API Connect

Table 1 lists the supported upgrade paths for API Connect and indicates whether you can upgrade directly or whether an interim upgrade is required.

Table 1. Supported upgrade paths to 10.0.8.0 on OpenShift

Upgrade from 10.0.7.0, 10.0.5.7, 10.0.5.6, 10.0.5.5, or 10.0.5.4:
  Complete the procedure in Upgrading on OpenShift and Cloud Pak for Integration.

Upgrade from 10.0.6.0:
  Complete two upgrades, making sure to perform the additional steps to preserve and re-create your SFTP backup configuration:
  1. Upgrade to version 10.0.7.0 as explained in Upgrading on OpenShift and Cloud Pak for Integration in the API Connect 10.0.7.0 documentation.
    If you used SFTP backups for the Management subsystem in 10.0.6.0, you must temporarily remove the SFTP backup configuration before running the upgrade to 10.0.7.0:
    1. Edit the Management subsystem CR (or the spec.management section of your top-level apiconnectcluster CR), copy the databaseBackup section so that you can restore it later, delete the section, and save the CR.
    2. Add the following annotation to that CR (see the example after this table):
      apiconnect-operator/backups-not-configured: "true"
  2. Upgrade to the current release by completing the procedures in Upgrading on OpenShift and Cloud Pak for Integration.
  3. Restore your SFTP backup configuration in the new version of API Connect:
    1. Edit the Management subsystem CR (or the spec.management section of your top-level apiconnectcluster CR), and add the preserved databaseBackup section.
    2. Delete the apiconnect-operator/backups-not-configured: "true" annotation.

Upgrade from 10.0.5.3 or earlier:
  Complete two upgrades:
  1. Upgrade to version 10.0.5.4 or later as explained in Upgrading on OpenShift and Cloud Pak for Integration in the API Connect 10.0.5.x documentation.
  2. Upgrade to the current release by completing the procedures in Upgrading on OpenShift and Cloud Pak for Integration.
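
The following sketch illustrates the temporary change described in step 1 for upgrades from 10.0.6.0: the databaseBackup section is removed and the backups-not-configured annotation is added. It is illustrative only; the CR name, namespace, and API version are assumptions for a standalone Management subsystem CR, and your spec contains your own values.

  # Illustrative ManagementCluster CR edit before upgrading from 10.0.6.0 to 10.0.7.0.
  # Assumptions: a standalone CR named "management" in namespace "apic".
  apiVersion: management.apiconnect.ibm.com/v1beta1
  kind: ManagementCluster
  metadata:
    name: management
    namespace: apic
    annotations:
      # Tells the operator that backups are intentionally not configured during this upgrade.
      apiconnect-operator/backups-not-configured: "true"
  spec:
    # ... your existing spec fields remain unchanged ...
    # The databaseBackup section (SFTP host, credentials, path, schedule) is deleted here.
    # Keep a copy of it so that you can restore it after the upgrade completes.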

Supported DataPower Gateway versions

You can use any combination of API Connect 10.0.8.x with DataPower 10.6.0.y.

Before you upgrade, best practice is to review the latest compatibility support for your version of API Connect. To view compatibility support, follow the instructions in IBM API Connect Version 10 software product compatibility requirements to access API Connect information on the Software Product Compatibility Reports website. When you find the information for your version of API Connect, select Supported Software > Integration Middleware, and view the list of compatible DataPower Gateway versions.

Version and platform compatibility

Be sure to review Operator, CASE, and platform requirements for upgrading for the latest information on OpenShift and API Connect requirements.

Important upgrade considerations

Carefully review the following items to plan and prepare for your upgrade.

Analytics backup changes
In V10.0.8, the analytics database backup functions moved from the API Connect operator to the analytics subsystem pods. This change has two implications:
  • It is not possible to restore pre-V10.0.8 analytics database backups to a V10.0.8 or later analytics subsystem.
  • During the upgrade procedure, after the API Connect operator is upgraded but before the analytics subsystems are upgraded, the analytics subsystem CR status reports the following message:
    The backup and restore functionality for the API Analytics subsystem has been moved from the IBM API Connect operator into the API Analytics microservices. 
    This means that the v5.2-sc2 operator cannot support backup and restore on versions before 10.0.8. 
    Update to 10.0.8 for scheduled backups to continue.

If you are using the top-level CR, then the APIConnectCluster CR reports a status of Pending until the analytics subsystems are upgraded to V10.0.8.
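
While the operator is already upgraded but the analytics subsystem is still at the earlier version, you can watch the reported status from the command line. The following is a minimal sketch that assumes the APIConnectCluster and AnalyticsCluster resource names used by the API Connect operator; replace <namespace> with your own namespace.

  # Top-level CR: the status shows Pending until analytics is upgraded to V10.0.8.
  oc get apiconnectcluster -n <namespace>
  # Analytics subsystem CR: the status conditions include the backup message quoted above.
  oc get analyticscluster -n <namespace> -o yaml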

Upgrading multiple instances of API Connect
  • If a single operator manages multiple instances, all of those instances must be upgraded, one at a time, as soon as possible.

    The operator should not be managing an operand based on an older version any longer than necessary.

  • Ensure that each instance is fully upgraded before you start the upgrade on the next instance.
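
If you are not sure which instances a given operator manages, a quick check (a sketch, assuming cluster-wide read permissions) is to list the top-level CRs in all namespaces that the operator watches:

  # List all API Connect instances across namespaces:
  oc get apiconnectcluster --all-namespaces
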
Top-level CR deployment profiles n3xc16.m48 and n3xc12.m40 deprecated

The top-level CR deployment profiles n3xc16.m48 and n3xc12.m40 are deprecated, and are replaced with the profiles n3xc16.m64 and n3xc12.m56. It is recommended that you switch to one of these new profiles after the upgrade to V10.0.8 is completed. For information about switching profiles, see Changing deployment profiles on OpenShift top-level CR.
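
As a minimal sketch, switching the top-level profile after the upgrade is a single-field change; the instance name and namespace are placeholders, and spec.profile is the field that holds the top-level CR deployment profile:

  # Example: switch the top-level APIConnectCluster CR to one of the replacement profiles.
  oc patch apiconnectcluster <instance-name> -n <namespace> --type merge -p '{"spec":{"profile":"n3xc16.m64"}}'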

Analytics n3xc4.m16 component profile is deprecated
The analytics n3xc4.m16 profile is deprecated in 10.0.8.0. If you use this profile, it is recommended that you switch to the n3xc4.m32 profile. You can switch profiles either before or after you upgrade to V10.0.8. For information about switching profiles, see Changing deployment profiles on Kubernetes and OpenShift.
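
For a standalone analytics CR, the change is a single field, as in the following sketch; with a top-level CR deployment, follow the linked profile-changing procedure instead, and treat the field placement shown here as an assumption:

  # AnalyticsCluster CR: replace the deprecated profile with the replacement profile.
  spec:
    profile: n3xc4.m32   # previously n3xc4.m16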
Extra persistent storage required for analytics local backups

If analytics database backups are configured, then beginning with 10.0.8.0, an extra PV is required to store local backups and prepare them for transmission to your remote SFTP server or object-store.

Before you upgrade, verify that you have an extra PV available for your analytics subsystem local backups, and decide how much space to allocate to it. To estimate storage requirements for your local backups, see Estimating storage requirements.

The default size of the new PVC is 150Gi. You can override the size to increase it during the upgrade procedure.
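
Before the upgrade, you can confirm what storage is available with a quick check such as the following sketch; adjust it for your storage provisioner:

  # List existing persistent volumes, their capacity, and status:
  oc get pv
  # List storage classes, to confirm that dynamic provisioning is available for the new 150Gi PVC:
  oc get storageclass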

API transaction failures during the upgrade of the gateway

The gateway subsystem remains available and continues processing API calls during the upgrade of the management, portal, and analytics subsystems. However, a few API transactions might fail during the upgrade of the gateway.

When you upgrade a cluster of gateway pods, a few API transactions might fail. During the upgrade, Kubernetes removes the pod from the load balancer configuration, deletes the pod, and then starts a new pod. The steps are repeated for each pod. Socket hang-ups occur on transactions that are in process at the time the pod is terminated.

The number of transactions that fail depends on the rate of incoming transactions, and the length of time that is needed to complete each transaction. Typically the number of failures is a small percentage. This behavior is expected during an upgrade. If the failure level is not acceptable, schedule the upgrade during an off-hours maintenance window.

DataPower Gateway supports long-lived connections such as GraphQL subscriptions or other WebSocket connections. These long-lived connections might not be preserved when you upgrade. Workloads with long-lived connections are more vulnerable to failed API transactions during an upgrade.

You can limit the number of failed API transactions during the upgrade by using the DataPower Operator's lifecycle property to configure the preStop container lifecycle hook on the gateway pods. This approach mitigates the risk of API failures during the rolling update of the gateway StatefulSet by keeping the pod alive for a configured period, allowing in-flight transactions to complete before the SIGTERM is delivered to the container. While this feature does not guarantee that no in-flight API calls fail, it does provide some mitigation for in-flight transactions that can complete successfully within the configured time window. For more information, see Delaying SIGTERM with preStop in the DataPower documentation.
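
For illustration, the hook itself is a standard Kubernetes preStop sleep; the exact placement of the lifecycle property in your gateway CR is described in the DataPower documentation linked above, and the sleep duration here is only an example:

  # Sketch: delay SIGTERM so that in-flight API transactions can drain before the pod stops.
  lifecycle:
    preStop:
      exec:
        command: ["sleep", "120"]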

OAuth provider resources with api.securityDefintions
If you have native OAuth providers that are configured with the api.securityDefintions field assigned, the upgrade fails. Before you upgrade, remove all api.securityDefintions sections from the native OAuth providers that you configured in the Cloud Manager and API Manager UIs: switch to the source view and delete any securityDefintions sections that are present.
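
For illustration only, a securityDefintions section in the source view might look similar to the following hypothetical example; the exact contents vary by provider, and the whole section must be deleted whatever it contains:

  # Hypothetical example of a section to delete from a native OAuth provider's source view:
  securityDefintions:
    clientID:
      type: apiKey
      in: header
      name: X-IBM-Client-Id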
Extra backup folder appended to management database backup path

When you upgrade to V10.0.7.0 or later with S3 backups configured, an extra folder, /edb, is appended to your backup path. The /edb path distinguishes new EDB backups from your previous management backups.

Extra persistent storage requirement for management subsystem
Upgrading to V10.0.7.0 or later requires extra persistent storage space for the management subsystem. Verify that your environment has at least as much extra capacity available as is assigned to your existing Postgres PVC. For example:
kubectl get pvc -n <namespace>

NAME                                      STATUS   VOLUME              CAPACITY   ACCESS MODES   STORAGECLASS    AGE
def-management-site1-postgres             Bound    local-pv-7b528204   244Gi      RWO            local-storage   125d
def-management-site1-postgres-pgbr-repo   Bound    local-pv-8d425510   244Gi      RWO            local-storage   125d
def-management-site1-postgres-wal         Bound    local-pv-ee8feb38   244Gi      RWO            local-storage   125d
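
In this example output, the existing Postgres PVC (def-management-site1-postgres) has a capacity of 244Gi, so at least a further 244Gi of capacity must be available before you upgrade.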

Analytics persistent queue enablement requires an extra PV (upgrades from V10.0.5)

The analytics persistent queue feature is enabled during the upgrade (if it was not already enabled). This feature requires an extra persistent volume (PV). Before you upgrade from V10.0.5.x, ensure that your environment has an available PV.