Planning your API Connect upgrade on Kubernetes
Review the supported versions, requirements, and limitations for upgrading API Connect on Kubernetes.
Supported upgrade paths
Table 1 lists the supported upgrade paths for API Connect and explains whether a direct upgrade is supported or an interim upgrade is required.
| Upgrade from | How to upgrade to 10.0.9.0 |
|---|---|
| 10.0.8.x | Complete the procedure in Upgrading API Connect on Kubernetes. |
| 10.0.7.0 | Upgrade to 10.0.8.0 as explained in Upgrading on Kubernetes in the API Connect 10.0.8.0 documentation, and then upgrade to 10.0.9.0. |
| 10.0.6.0 | Complete two upgrades, making sure to perform the additional steps to preserve and re-create your SFTP backup configuration. |
| 10.0.5.3 or earlier | Complete two upgrades. |
Supported DataPower Gateway versions
You can use API Connect 10.0.9.0 with DataPower 10.6.2.0.
Before you upgrade, the best practice is to review the latest compatibility support for your version of API Connect. To view compatibility support, follow the instructions in IBM API Connect Version 10 software product compatibility requirements to access API Connect information on the Software Product Compatibility Reports website. When you find the information for your version of API Connect, open the report and view the list of compatible DataPower Gateway versions.
Supported versions of Kubernetes
Table 2 provides a quick reference of the supported Kubernetes versions for each API Connect release:
| API Connect version | K8S 1.24 | K8S 1.25 | K8S 1.26 | K8S 1.27 | K8S 1.28 | K8S 1.29 | K8S 1.30 | K8S 1.31 |
|---|---|---|---|---|---|---|---|---|
| 10.0.5.4 | | | | | | | | |
| 10.0.5.5 | | | | | | | | |
| 10.0.5.6 | | | | | | | | |
| 10.0.5.7 | | | | | | | | |
| 10.0.5.8 | | | | | | | | |
| 10.0.6.0 | | | | | | | | |
| 10.0.7.0 | | | | | | | | |
| 10.0.8.0 | | | | | | | | |
| 10.0.8.1 | | | | | | | | |
| 10.0.9.0 | | | | | | | | |
When you upgrade API Connect, both your current deployment and your target release must support the same version of Kubernetes. After the API Connect upgrade, you can update Kubernetes to the highest version supported by the new release of API Connect.
You might have to upgrade both Kubernetes and API Connect more than once to complete the process. For example, suppose that your current deployment is running API Connect 10.0.5.4 on Kubernetes v1.24 and you want to upgrade to API Connect 10.0.8.0. The highest version of Kubernetes that API Connect 10.0.5.4 supports is v1.27, and v1.28 is the lowest version of Kubernetes that API Connect 10.0.8.0 supports. In this scenario:
- On your API Connect 10.0.5.4 deployment, upgrade Kubernetes from v1.24 to v1.25, then to v1.26, and then to v1.27 (all interim upgrades are required).
- Upgrade API Connect to 10.0.7.0 on Kubernetes v1.27.
- Upgrade Kubernetes from v1.27 to v1.28.
- Upgrade API Connect to 10.0.8.0 on Kubernetes v1.28.
- Optionally, upgrade Kubernetes from v1.28 to v1.29 and then to v1.30 to reach the highest version of Kubernetes that API Connect 10.0.8.0 supports.
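For example, before you plan the sequence, you can confirm which Kubernetes version your cluster and nodes currently run with standard kubectl commands:

```
# Show the client and server (cluster) Kubernetes versions
kubectl version

# Show the kubelet version that each node runs
kubectl get nodes -o wide
```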
Important upgrade considerations
Carefully review the following items to plan and prepare for your upgrade.
- Analytics backup changes
In V10.0.8, the analytics database backup functions moved from the API Connect operator to the analytics subsystem pods. This change has two implications:
- It is not possible to restore pre-V10.0.8 analytics database backups to a V10.0.8 or later analytics subsystem.
- During the upgrade procedure, after the API Connect operator is upgraded but before the analytics subsystems are upgraded, the analytics subsystem CR status reports:
  "The backup and restore functionality for the API Analytics subsystem has been moved from the IBM API Connect operator into the API Analytics microservices. This means that the v5.2-sc2 operator cannot support backup and restore on versions before 10.0.8. Update to 10.0.8 for scheduled backups to continue."
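If you want to see this status for yourself between the two upgrade steps, you can inspect the analytics CR. A minimal sketch, assuming the a7s short name is registered for the AnalyticsCluster resource in your cluster (verify the short name and CR name in your own environment):

```
# List analytics subsystem CRs and view their reported status conditions
kubectl get a7s -n <namespace>
kubectl describe a7s <analytics-cr-name> -n <namespace>
```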
- Extra persistent storage required for analytics local backups
If analytics database backups are configured, then beginning with 10.0.8.0 an extra PV is required to store local backups and prepare them for transmission to your remote SFTP server or object store.
Before you upgrade, verify that you have an extra PV available for your analytics subsystem local backups, and decide how much space to allocate to it. To estimate storage requirements for your local backups, see Estimating storage requirements.
The default size of the new PVC is 150Gi. You can override the size to increase it during the upgrade procedure, as sketched below.
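As an illustration only, this kind of size override is applied in the analytics CR; the localBackupStorageSize property name below is an assumption for this sketch, not the documented field, so take the real property name from the upgrade procedure:

```yaml
# Sketch only: the storage.localBackupStorageSize field name is assumed for
# illustration; use the property named in the upgrade procedure.
apiVersion: analytics.apiconnect.ibm.com/v1beta1
kind: AnalyticsCluster
metadata:
  name: analytics
  namespace: <namespace>
spec:
  storage:
    localBackupStorageSize: 300Gi   # increase from the 150Gi default if needed
```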
- Analytics `n3xc4.m16` component profile is deprecated
The analytics `n3xc4.m16` profile is deprecated in 10.0.8.0. If you use this profile, it is recommended that you switch to the `n3xc4.m32` profile. You can switch profiles either before or after you upgrade to V10.0.8. For information about switching profiles, see Changing deployment profiles on Kubernetes and OpenShift. A minimal sketch follows.
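Assuming the analytics CR is named analytics and uses the standard spec.profile field (both assumptions for this example; see the linked topic for the authoritative procedure):

```
# Sketch: switch the analytics deployment profile (CR name and namespace are illustrative)
kubectl patch a7s analytics -n <namespace> \
  --type merge -p '{"spec":{"profile":"n3xc4.m32"}}'
```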
- API transaction failure during the upgrade of the gateway
The gateway subsystem remains available and continues processing API calls during the upgrade of the management, portal, and analytics subsystems. However, a few API transactions might fail during the upgrade of the gateway itself: Kubernetes removes each gateway pod from the load balancer configuration, deletes it, and then starts a new pod, repeating these steps for every pod in the cluster. Socket hang-ups occur on transactions that are in process when a pod is terminated.
The number of transactions that fail depends on the rate of incoming transactions and the length of time that is needed to complete each transaction. Typically, the number of failures is a small percentage; this behavior is expected during an upgrade. If that failure level is not acceptable, schedule the upgrade during an off-hours maintenance window.
DataPower Gateway supports long-lived connections, such as GraphQL subscriptions and other WebSocket connections. These connections might not be preserved when you upgrade, so workloads that rely on long-lived connections are more vulnerable to failed API transactions during an upgrade.
You can limit the number of failed API transactions during the upgrade by using the DataPower Operator's `lifecycle` property to configure the `preStop` container lifecycle hook on the gateway pods. This approach mitigates the risk of API failures during the rolling update of the gateway StatefulSet by pausing pod termination for a span of time, allowing in-flight transactions to complete before the SIGTERM signal is delivered to the container. This feature does not guarantee that no in-flight API calls fail, but it does provide some mitigation for transactions that can complete within the configured time window. For more information, see Delaying SIGTERM with preStop in the DataPower documentation.
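As a sketch of the idea, the preStop hook follows the standard Kubernetes container lifecycle syntax; the placement of the lifecycle property under spec is an assumption here, and the values are illustrative, so follow Delaying SIGTERM with preStop for the exact syntax:

```yaml
# Illustrative preStop sleep: the placement of lifecycle under spec is an
# assumption; the DataPower documentation gives the authoritative location.
apiVersion: datapower.ibm.com/v1beta3
kind: DataPowerService
metadata:
  name: gateway
spec:
  lifecycle:
    preStop:
      exec:
        command: ["sh", "-c", "sleep 120"]   # give in-flight calls up to 120s to drain
```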
- FIPS support
- API Connect is not supported in a FIPS-enabled environment on Kubernetes.
- OAuth provider resources with `api.securityDefinitions`
If you have native OAuth providers that are configured with the `api.securityDefinitions` field assigned, the upgrade fails. Before you upgrade, remove all `api.securityDefinitions` sections from the native OAuth providers that you configured in the Cloud Manager and API Manager UIs:
- Configuring a native OAuth provider in the Cloud Manager UI
- Configuring a native OAuth provider in the API Manager UI
Remove any `securityDefinitions` sections that are present. An illustrative fragment follows.
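For reference, the block to remove is the standard OpenAPI 2.0 securityDefinitions section under the provider's api property; the definition name and values in this fragment are hypothetical:

```yaml
# Hypothetical fragment of a native OAuth provider configuration: delete the
# entire securityDefinitions block under api before you upgrade.
api:
  securityDefinitions:
    clientID:
      type: apiKey
      name: X-IBM-Client-Id
      in: header
```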
- Extra backup folder appended to management database backup path
When you upgrade to V10.0.7.0 or later with S3 backups configured, an extra folder named /edb is appended to your backup path. The /edb path distinguishes new EDB backups from your previous management backups.
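For example, with a hypothetical bucket and folder name:

```
# Before upgrade:  my-bucket/apic-backups
# After upgrade:   my-bucket/apic-backups/edb   (EDB backups are written here)
```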
- Extra persistent storage requirement for management subsystem
Upgrading to V10.0.7.0 or later requires extra persistent storage space for the management subsystem. Verify that your environment has at least as much extra capacity as the capacity assigned to your existing Postgres PVC:
```
kubectl get pvc -n <namespace>

NAME                                      STATUS   VOLUME              CAPACITY   ACCESS MODES   STORAGECLASS    AGE
def-management-site1-postgres             Bound    local-pv-7b528204   244Gi      RWO            local-storage   125d
def-management-site1-postgres-pgbr-repo   Bound    local-pv-8d425510   244Gi      RWO            local-storage   125d
def-management-site1-postgres-wal         Bound    local-pv-ee8feb38   244Gi      RWO            local-storage   125d
```
- Analytics persistent queue enablement requires an extra PV (upgrades from V10.0.8.x)
The analytics persistent queue feature is enabled during the upgrade if it was not already enabled. This feature requires an extra persistent volume (PV). Before you upgrade from V10.0.8.x, ensure that your environment has an available PV; one way to check is shown below.
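A quick way to confirm that an unbound PV is available, using standard kubectl (PVs with STATUS "Available" are not yet bound to a claim):

```
# List PVs and look for entries with STATUS "Available" and sufficient capacity
kubectl get pv
```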