Upgrading API Connect on OpenShift
Upgrade your API Connect installation to a newer version on OpenShift in an online (connected to the internet) environment.
Before you begin
Review and complete all steps that are documented in Planning your API Connect upgrade on OpenShift and Pre-upgrade preparation and checks on OpenShift.
- If any subsystem database backups are running or are scheduled to run within a few hours, do not start the upgrade process.
- Do not perform maintenance tasks such as updating certificates, restoring subsystem databases from backup, or triggering subsystem database backups at any time while you are upgrading API Connect.
Procedure
- If you did not run the Pre-upgrade health-checks recently, run them now.
- If you have a two data center disaster recovery deployment, then upgrade the warm-standby data center first. See the special steps for 2DCDR upgrades: Upgrading a 2DCDR deployment on Kubernetes and OpenShift from V10.0.9.
- Update the operators for the API Connect deployment.
API Connect uses three operators, which you update by selecting a newer version for the operator channel. Update the channels in the following sequence:
- DataPower: set the channel to v1.15
- API Connect: set the channel to v6.1
- Foundational services: set the channel to v4.6
Be sure to update the Foundational services operator channel in all namespaces.
Complete the following steps to update a channel:
- Click Operators > Installed Operators.
- Select the operator to be upgraded.
- Select the Subscription tab.
- In the Update channel list, select the new channel version.
If you previously chose automatic subscriptions, the operator version upgrades automatically when you update the operator channel. If you previously chose manual subscriptions, OpenShift OLM notifies you that an upgrade is available and then you must manually approve the upgrade before proceeding.
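If you prefer the command line to the OpenShift web console, the same channel change can be made by patching each operator's Subscription resource. This is a minimal sketch only; <APIC_namespace> and <datapower_subscription_name> are placeholders, and the console steps above remain the documented approach:
  # List the operator subscriptions to find the exact names (names vary by installation)
  oc get subscription -n <APIC_namespace>

  # Example for the DataPower operator; repeat with the appropriate channel for the
  # API Connect and foundational services subscriptions (in every relevant namespace)
  oc patch subscription <datapower_subscription_name> -n <APIC_namespace> \
    --type merge -p '{"spec":{"channel":"v1.15"}}'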
Wait for the operators to update, for the pods to restart, and for the instances to display the Ready status. Both the API Connect and DataPower channels must be changed before either operator upgrades; the upgrade of both operators begins when the channel is changed for both operators.
Troubleshooting: See Troubleshooting upgrade on OpenShift.
Note: If analytics backups are configured, then after the API Connect operator is upgraded, the analytics CR reports a status of Blocked until the analytics subsystem is upgraded to V10.0.8. With top-level CR deployments, the apiconnectcluster CR reports Pending until the analytics CR is no longer Blocked. For more information, see Analytics backup changes.
- Ensure that the operators and operands are working correctly before proceeding.
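One way to confirm that the operators and operands are healthy before you continue is to check the ClusterServiceVersions and the pods in each API Connect namespace. A sketch, with a placeholder namespace; repeat for every namespace that contains API Connect operators:
  # Operator CSVs should report PHASE=Succeeded at the new operator versions
  oc get csv -n <APIC_namespace>

  # All API Connect pods should be Running or Completed
  oc get pods -n <APIC_namespace>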
- If you are using the top-level CR (includes Cloud Pak for Integration), then update the top-level apiconnectcluster CR. The spec section of the apiconnectcluster CR looks like the following example:
  apiVersion: apiconnect.ibm.com/v1beta1
  kind: APIConnectCluster
  metadata:
    labels:
      app.kubernetes.io/instance: apiconnect
      app.kubernetes.io/managed-by: ibm-apiconnect
      app.kubernetes.io/name: apiconnect-production
    name: prod
    namespace: <APIC_namespace>
  spec:
    license:
      accept: true
      use: production
      license: L-RAWZ-DSFFFV
    profile: n12xc4.m12
    version: 10.0.10.0
    storageClassName: rook-ceph-block
- Edit the apiconnectcluster CR by running the following command:
  oc -n <APIC_namespace> edit apiconnectcluster
- Update the spec.version property to the version you are upgrading to.
- If you are upgrading to a version of API Connect that requires a new license, update the spec.license property accordingly. For the list of licenses, see API Connect licenses.
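As an alternative to an interactive oc edit session, the version and license updates can be applied with a single patch. A sketch only, assuming the CR is named prod as in the earlier example; substitute your own CR name, namespace, version, and license ID:
  # Merge-patch spec.version and spec.license.license in one step
  oc patch apiconnectcluster prod -n <APIC_namespace> --type merge \
    -p '{"spec":{"version":"<new_version>","license":{"license":"<new_license_id>"}}}'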
- Attention: This step applies only when upgrading to API Connect version 10.0.10 or later.
$PLATFORM_CA_SECRET: set to ingress-ca. The $PLATFORM_CA_SECRET contains the CA certificate for the platform ingress endpoint of the management subsystem. Analytics makes calls to the platform REST API, and this property must be set so that analytics can verify the server certificate.
  mgmtPlatformEndpointCASecret:
    secretName: $PLATFORM_CA_SECRET
Note: The value of secretName must match the name of the Kubernetes secret that contains the CA certificate for the platform ingress endpoint.
Optional: In-cluster subsystem communication (mgmtPlatformEndpointSvcCASecret)
If you plan to enable in-cluster intersubsystem communication (internal Kubernetes service-based communication instead of ingress), you can optionally configure the mgmtPlatformEndpointSvcCASecret field:
  mgmtPlatformEndpointSvcCASecret:
    secretName: management-ca  # Usually 'management-ca'
The management-ca secret contains the certificate authority (CA) certificate that the internal Kubernetes service uses to sign its TLS certificates. This secret is typically generated automatically during the management subsystem installation. Copy the management-ca secret to the Analytics namespace.
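A hedged example of copying the secret between namespaces with the oc CLI and jq (assumes the jq CLI is available; any equivalent method that strips the namespace-specific metadata also works). <mgmt_namespace> and <analytics_namespace> are placeholders:
  # Export the management-ca secret, drop namespace-specific metadata, and
  # re-create it in the analytics namespace
  oc get secret management-ca -n <mgmt_namespace> -o json \
    | jq 'del(.metadata.namespace, .metadata.resourceVersion, .metadata.uid, .metadata.creationTimestamp, .metadata.ownerReferences)' \
    | oc apply -n <analytics_namespace> -f -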
- Optional: Set the local analytics backups PVC size. By default, the new PVC that is created for the analytics subsystem's local database backups is set to 150Gi. If you want to specify a larger size, then add the following to the CR spec.analytics section:
  storage:
    backup:
      volumeClaimTemplate:
        storageClass: <storage class>
        volumeSize: <size>
where:
- <storage class> is the same as used by your other analytics PVCs.
- <size> is the size. For example: 500Gi. See Estimating storage requirements.
- Optional: If you had preserved the Cloud Pak CPD endpoints during a previous upgrade from 10.0.5.x to 10.0.8.0, and now want to add a custom certificate for the Cloud Pak route, complete the following steps:
- Create a Cloud Pak certificate that is signed by a Cloud Pak CA, as in the following example:
  ---
  apiVersion: cert-manager.io/v1
  kind: Issuer
  metadata:
    name: custom-cpd-ca
    namespace: apic
  spec:
    selfSigned: {}
  ---
  apiVersion: cert-manager.io/v1
  kind: Certificate
  metadata:
    name: custom-cpd
    namespace: apic
  spec:
    commonName: small-mgmt-cpd
    dnsNames:
    - cpd-apic.apps.admin-apickeycloak.cp.fyre.ibm.com
    duration: 17520h0m0s
    issuerRef:
      kind: Issuer
      name: custom-cpd-ca
    privateKey:
      algorithm: RSA
      rotationPolicy: Always
      size: 2048
    renewBefore: 720h0m0s
    secretName: custom-cpd
    usages:
    - key encipherment
    - digital signature
    - server auth
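To create these resources, save the example to a file and apply it, then confirm that cert-manager has issued the certificate. A sketch, using the apic namespace from the example and a hypothetical file name:
  # Apply the Issuer and Certificate manifests
  oc apply -f custom-cpd-certificate.yaml

  # Once issued, the Certificate should report READY=True and the custom-cpd secret should exist
  oc get certificate custom-cpd -n apic
  oc get secret custom-cpd -n apic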
- In the CR, provide the secret name within the cloudPakEndpoint property of the new deprecatedCloudPakRoute object; for example:
  spec:
    deprecatedCloudPakRoute:
      enabled: true
      cloudPakEndpoint:
        hosts:
        - name: cpd-apic.apps.admin-<domain>.com
          secretName: custom-cpd
- In the spec.gateway section, delete any template or dataPowerOverride override sections. If the CR contains an override, then you cannot complete the upgrade.
- If you have a two data center disaster recovery installation, and are upgrading the warm-standby, then add the annotation:
  apiconnect-operator/dr-warm-standby-upgrade-data-deletion-confirmation: "true"
For more information about two data center disaster recovery upgrade, see Upgrading a 2DCDR deployment on Kubernetes and OpenShift from V10.0.9.
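The annotation can be added while editing the CR, or non-interactively with oc annotate. A sketch, assuming the top-level CR is named prod as in the earlier example:
  # Add the data-deletion confirmation annotation to the warm-standby top-level CR
  oc annotate apiconnectcluster prod -n <APIC_namespace> \
    apiconnect-operator/dr-warm-standby-upgrade-data-deletion-confirmation="true"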
- Save and close the CR to apply your changes.
The response looks like the following example:
apiconnectcluster.apiconnect.ibm.com/prod configured
Troubleshooting: If you see an error message when you attempt to save the CR, see Troubleshooting upgrade on OpenShift.
- Run the following command to verify that the upgrade is completed and the status of the top-level CR is READY:
  oc get apiconnectcluster -n <APIC_namespace>
Important: If you need to restart the deployment, wait until all portal sites complete the upgrade. After the portal subsystem upgrade is complete, each portal site is upgraded. You can monitor the site upgrade progress from the MESSAGE column in the oc get ptl output. You can still use the portal while sites are upgrading, although a maintenance page is shown for any sites that are being upgraded. When the site upgrades are complete, the oc get ptl output shows how many sites the portal is serving:
  NAME     READY   STATUS    VERSION     RECONCILED VERSION   MESSAGE           AGE
  portal   3/3     Running   <version>   <version>            Serving 2 sites   22h
On two data center disaster recovery deployments, the sites are not upgraded until both data centers are upgraded.
Troubleshooting: If the upgrade appears to be stuck, showing the status of Pending for a long time, then check the management CR status for errors:
  oc -n <namespace> get mgmt -o yaml
Refer to Troubleshooting upgrade on OpenShift for known issues.
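To follow progress without rerunning the commands manually, you can watch the top-level CR and the portal CR; a sketch, with a placeholder namespace:
  # Watch the top-level CR until STATUS reports Ready
  oc get apiconnectcluster -n <APIC_namespace> -w

  # Watch portal site upgrade progress via the MESSAGE column
  oc get ptl -n <APIC_namespace> -w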
- If you are using individual subsystem CRs: Start with the management subsystem, and update the management CR as follows:
- Edit the ManagementCluster CR:
  oc edit ManagementCluster -n <mgmt_namespace>
- Update the spec.version property to the version you are upgrading to.
- If you are upgrading to a version of API Connect that requires a new license, update the spec.license property accordingly. For the list of licenses, see API Connect licenses.
- If you have a two data center disaster recovery installation, and are upgrading the warm-standby, then add the annotation:
  apiconnect-operator/dr-warm-standby-upgrade-data-deletion-confirmation: "true"
For more information about two data center disaster recovery upgrade, see Upgrading a 2DCDR deployment on Kubernetes and OpenShift from V10.0.9.
- Save and close the CR to apply your changes.
The response looks like the following example:
managementcluster.management.apiconnect.ibm.com/management edited
Troubleshooting: If you see an error message when you attempt to save the CR, see Troubleshooting upgrade on OpenShift.
- Wait until the management subsystem upgrade is complete before you proceed to the next subsystem. Check the status of the upgrade with oc get ManagementCluster -n <mgmt_namespace>, and wait until all pods are running at the new version. For example:
  oc -n <mgmt_namespace> get ManagementCluster
  NAME         READY   STATUS    VERSION     RECONCILED VERSION   AGE
  management   18/18   Running   10.0.10.0   10.0.10.0-1281       97m
Troubleshooting: If the upgrade appears to be stuck, showing the status of Pending for a long time, then check the management CR status for errors:
  oc -n <namespace> get mgmt -o yaml
Refer to Troubleshooting upgrade on OpenShift for known issues.
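One way to spot-check that every management pod is running images at the new version is to list the pods and their container images; a sketch only, with a placeholder namespace:
  # Print each pod name and the image of its first container
  oc get pods -n <mgmt_namespace> \
    -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[0].image}{"\n"}{end}'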
- Repeat the process for the remaining subsystem CRs in your preferred order: GatewayCluster, PortalCluster, AnalyticsCluster.
Important: In the GatewayCluster CR, delete any template or dataPowerOverride override sections. If the CR contains an override, then you cannot complete the upgrade.
Note: Upgrades to V10.0.8.0: By default, the new PVC that is created for the analytics subsystem's local database backups is set to 150Gi. If you want to specify a larger size, then add the following to the CR spec section:
  storage:
    backup:
      volumeClaimTemplate:
        storageClassName: <storage class>
        volumeSize: <size>
where:
- <storage class> is the same as used by your other analytics PVCs.
- <size> is the size. For example: 500Gi. See Estimating storage requirements.
Important: If you need to restart the deployment, wait until all portal sites complete the upgrade. After the portal subsystem upgrade is complete, each portal site is upgraded. You can monitor the site upgrade progress from the MESSAGE column in the oc get ptl output. You can still use the portal while sites are upgrading, although a maintenance page is shown for any sites that are being upgraded. When the site upgrades are complete, the oc get ptl output shows how many sites the portal is serving:
  NAME     READY   STATUS    VERSION     RECONCILED VERSION   MESSAGE           AGE
  portal   3/3     Running   <version>   <version>            Serving 2 sites   22h
On two data center disaster recovery deployments, the sites are not upgraded until both data centers are upgraded.
Troubleshooting: If the upgrade appears to be stuck, showing the status of Pending for a long time, then check the subsystem CR status for errors:
  oc -n <namespace> get <subsystem cr> -o yaml
Refer to Troubleshooting upgrade on OpenShift for known issues.
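To check all of the remaining subsystem CRs at once, you can pass several resource types to a single oc get; a sketch, with a placeholder namespace:
  # Each CR should reach STATUS=Running with VERSION and RECONCILED VERSION at the new release
  oc get GatewayCluster,PortalCluster,AnalyticsCluster -n <namespace>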
- Upgrade your deployment to the latest version of OpenShift that your current release of API Connect supports. For more information, see Supported versions of OpenShift.
- Switch from deprecated deployment profiles.
If you have a top-level CR installation, and were using the n3xc16.m48 or n3xc12.m40 deployment profiles, then switch to the new replacement profiles n3xc16.m64 or n3xc12.m56. For information about switching profiles, see Changing deployment profiles on OpenShift top-level CR.
If you are using individual subsystem CRs, and your analytics subsystem uses the n3xc4.m16 profile, then update the profile in your analytics CR to n3xc4.m32. For information about switching profiles, see Changing deployment profiles on Kubernetes and OpenShift.
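For the individual-CR case, the profile value itself is a single field in the analytics CR spec (as with the profile field shown in the earlier top-level CR example); the linked topic covers the surrounding considerations. A sketch only, assuming the AnalyticsCluster CR is named analytics and sets the profile in spec.profile:
  # Merge-patch the analytics deployment profile to the replacement profile
  oc patch analyticscluster analytics -n <namespace> --type merge \
    -p '{"spec":{"profile":"n3xc4.m32"}}'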
What to do next
Update your toolkit CLI by downloading it from Fix Central or from the Cloud Manager UI. For more information, see Installing the toolkit.