Upgrading with a bastion host
Use a bastion host to perform an air-gapped upgrade of IBM® API Connect on Red Hat OpenShift Container Platform (OCP) using either the top-level APIConnectCluster CR or individual subsystem CRs. First the operator is updated, and then the operands (IBM API Connect itself).
Before you begin
- If you are upgrading an installation online (connected to the internet), see Upgrading on OpenShift in an online environment.
- The upgrade procedure requires you to use Red Hat Skopeo for moving container images. Skopeo is not available for Microsoft Windows, so you cannot perform this task using a Windows host.
- If you are upgrading to a version of API Connect that supports a newer version of Red Hat OpenShift, complete the API Connect upgrade before upgrading Red Hat OpenShift.
- Upgrading from 10.0.5.2 or earlier: If you did not verify that your Portal customizations are compatible with Drupal 10, do that now.
In API Connect 10.0.5.3, the Developer Portal moved from Drupal 9 to Drupal 10 (this upgrade also requires PHP 8.1). The upgrade tooling will update your Developer Portal sites; however, if you have any custom modules or themes, it is your responsibility to ensure their compatibility with Drupal 10 and PHP 8.1 before starting the upgrade. Review the Guidelines on upgrading your Developer Portal from Drupal 9 to Drupal 10 to ensure that any customizations to the Developer Portal are compatible with Drupal 10 and PHP 8.1.
About this task
- The Gateway subsystem remains available during the upgrade of the Management, Portal, and Analytics subsystems.
- Don't use the tilde ~ within double quotation marks in any command because the tilde doesn’t expand and your commands might fail.
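For example, this quoting difference matters (the file path here is hypothetical):

# Fails: the tilde is not expanded inside double quotation marks
oc apply -f "~/apic/mgmt-cr.yaml"
# Works: leave the tilde unquoted, or use $HOME inside the quotation marks
oc apply -f ~/apic/mgmt-cr.yaml
oc apply -f "$HOME/apic/mgmt-cr.yaml"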
Procedure
- Ensure that you have completed all of the steps in Preparing to upgrade on OpenShift, including reviewing the Upgrade considerations on OpenShift.
Do not attempt an upgrade until you have reviewed the considerations and prepared your deployment.
- Ensure your API Connect deployment is ready to upgrade:
- Your API Connect release (operand) supports a direct upgrade to this release. For information on the operator and operand version that is used with each API Connect release, see Operator, operand, and CASE versions.
- The DataPower operator version is correct for the currently deployed version of API Connect. For information on upgrade paths and supported versions of DataPower Gateway, see Upgrade considerations on OpenShift.
- Your deployment is running on a version of Red Hat OpenShift that is supported by both the current version of API Connect and the target version of API Connect. For information, see Supported versions of OpenShift.
- Set up the mirroring environment.
- Prepare the target cluster:
- Deploy a supported version of Red Hat OpenShift Container Platform (OCP) as a cluster. For information, see Table 2 "API Connect and OpenShift Container Platform (OCP) compatibility matrix" in IBM API Connect Version 10 software product compatibility requirements.
- Configure storage on the cluster and make sure that it is available.
- Prepare the bastion host:
You must be able to connect your bastion host to the internet and to the restricted network environment (with access to the Red Hat OpenShift Container Platform (OCP) cluster and the local registry) at the same time. The host must be a Linux x86_64 or Mac platform running any operating system that the Red Hat OpenShift Client supports (on Windows, execute the actions in a Linux x86_64 VM or from a Windows Subsystem for Linux terminal).
- Ensure that the sites and ports listed in Table 1 can be reached from the bastion host:

Table 1. Sites that must be reached from the bastion host

| Site | Description |
| --- | --- |
| icr.io:443 | IBM entitled registry |
| quay.io:443 | Local API Connect image repository |
| github.com | CASE files and tools |
| redhat.com | Red Hat OpenShift Container Platform (OCP) upgrades |

- On the bastion host, install either Docker or Podman (not both).
Docker and Podman are used for managing containers; you only need to install one of these applications.
- To install Docker (for example, on Red Hat Enterprise Linux), run the following commands:
yum check-update
yum install docker
- To install Podman, see the Podman installation instructions. For example, on Red Hat Enterprise Linux 9, install Podman with the following command:
yum install podman
- Install the Red Hat OpenShift Client tool (oc) as explained in Getting started with the OpenShift CLI. The oc tool is used for managing Red Hat OpenShift resources in the cluster.
- Download the IBM Catalog Management Plug-in for IBM Cloud Paks version 1.1.0 or later from GitHub. The ibm-pak plug-in enables you to access hosted product images, and to run oc ibm-pak commands against the cluster. To confirm that ibm-pak is installed, run the following command and verify that the response lists the command usage:
oc ibm-pak --help
- Set up a local image registry and credentials.
The local Docker registry stores the mirrored images in your network-restricted environment.
- Install a registry, or get access to an existing registry.
You might already have access to one or more centralized, corporate registry servers to store the API Connect images. If not, then you must install and configure a production-grade registry before proceeding.
The registry product that you use must meet the following requirements:
- Supports multi-architecture images through Docker Manifest V2, Schema 2. For details, see Docker Manifest V2, Schema 2.
- Is accessible from the Red Hat OpenShift Container Platform cluster nodes.
- Allows path separators in the image name.
Note: Do not use the Red Hat OpenShift image registry as your local registry because it does not support multi-architecture images or path separators in the image name.
- Configure the registry to meet the following requirements:
- Supports auto-repository creation
- Has sufficient storage to hold all of the software that is to be transferred
- Has the credentials of a user who can create and write to repositories (the mirroring process uses these credentials)
- Has the credentials of a user who can read all repositories (the Red Hat OpenShift Container Platform cluster uses these credentials)
To access your registries during an air-gapped installation, use an account that can write to the target local registry. To access your registries during runtime, use an account that can read from the target local registry.
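Before you begin mirroring, you can confirm that the write account works; a minimal sketch using Skopeo (the test image and the mirror-test repository name are illustrative, and $TARGET_REGISTRY is the variable you define later in this procedure):

# Log in with the account that can write to the local registry
podman login $TARGET_REGISTRY
# Copy a small public image into a scratch repository to confirm write access
# and auto-repository creation (the destination repository name is illustrative)
skopeo copy docker://registry.access.redhat.com/ubi8/ubi-minimal:latest \
  docker://$TARGET_REGISTRY/mirror-test/ubi-minimal:latest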
- Set environment variables and download CASE files.
Create environment variables to use while mirroring images, connect to the internet, and download the API Connect CASE files.
- Create the following environment variables with the installer image name and the image inventory on your host:
Because you will use values from two different CASE files, you must create environment variables for both; notice that the variables for the foundational services (common services) CASE file are prefixed with "CS_" to differentiate them.
export CASE_NAME=ibm-apiconnect
export CASE_VERSION=4.0.9
export ARCH=amd64
For information on API Connect CASE versions and their corresponding operators and operands, see Operator, operand, and CASE versions.
export CS_CASE_NAME=ibm-cp-common-services
export CS_CASE_VERSION=1.15.10
export CS_ARCH=amd64
For example, for IBM Cloud Pak foundational services 3.19.X (Long Term Service Release), use version 1.15.10; for foundational services 3.23.X (Continuous Delivery), use version 1.19.2.
For information on IBM Cloud Pak foundational services (common services) CASE versions, see "Table 1. Image versions for offline installation" in Installing IBM Cloud Pak foundational services in an air-gapped environment in the IBM Cloud Pak foundational services documentation.
- Connect your host to the internet (it does not need to be connected to the network-restricted environment at this time).
- Download the CASE files to your host. Be sure to download both CASE files as shown in the example:
oc ibm-pak get $CASE_NAME --version $CASE_VERSION
oc ibm-pak get $CS_CASE_NAME --version $CS_CASE_VERSION
If you omit the --version parameter, the command downloads the latest version of the file.
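To confirm that both CASE archives downloaded successfully, you can list them on disk; a quick check, assuming the plug-in's default download location under ~/.ibm-pak:

ls ~/.ibm-pak/data/cases/$CASE_NAME/$CASE_VERSION
ls ~/.ibm-pak/data/cases/$CS_CASE_NAME/$CS_CASE_VERSION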
- Mirror the images.
The process of mirroring images pulls the images from the internet and pushes them to your local registry. After mirroring your images, you can configure your cluster and pull the images to it before installing API Connect.
- Generate mirror manifests.
- Define the environment variable $TARGET_REGISTRY by running the following command:
export TARGET_REGISTRY=<target-registry>
Replace <target-registry> with the IP address (or host name) and port of the local registry; for example: 172.16.0.10:5000. If you want the images to use a specific namespace within the target registry, you can specify it here; for example: 172.16.0.10:5000/registry_ns.
- Generate mirror manifests by running the following commands:
oc ibm-pak generate mirror-manifests $CASE_NAME $TARGET_REGISTRY --version $CASE_VERSION
oc ibm-pak generate mirror-manifests $CS_CASE_NAME $TARGET_REGISTRY --version $CS_CASE_VERSION
If you need to filter for a specific image group, add the parameter --filter <image_group> to this command.
The generate command creates the following files at ~/.ibm-pak/data/mirror/$CASE_NAME/$CASE_VERSION and ~/.ibm-pak/data/mirror/$CS_CASE_NAME/$CS_CASE_VERSION:
- catalog-sources.yaml
- catalog-sources-linux-<arch>.yaml (if there are architecture-specific catalog sources)
- image-content-source-policy.yaml
- images-mapping.txt
The files are used when mirroring the images to the TARGET_REGISTRY.
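Each line of images-mapping.txt is a source=destination pair that oc image mirror consumes; the entries look similar to the following sketch (the repository name and digest are illustrative):

cp.icr.io/cp/apic/ibm-apiconnect-operator@sha256:<digest>=<target-registry>/cp/apic/ibm-apiconnect-operator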
- Obtain an entitlement key for the entitled registry where the images are hosted:
- Log in to the IBM Container Library.
- In the Container software library, select Get entitlement key.
- In the "Access your container software" section, click Copy key.
- Copy the key to a safe location; you will use it to log in to cp.icr.io in the next step.
- Authenticate with the entitled registry where the images are hosted.
The image pull secret allows you to authenticate with the entitled registry and access product images.
- Run the following command to export the path to the file that will store the authentication credentials that are generated on a Podman or Docker login:
export REGISTRY_AUTH_FILE=$HOME/.docker/config.json
The authentication file is typically located at $HOME/.docker/config.json on Linux or %USERPROFILE%/.docker/config.json on Windows.
- Log in to the cp.icr.io registry with Podman or Docker; for example:
podman login cp.icr.io
Use cp as the username and your entitlement key as the password.
- Authenticate with the local registry.
Log in to the local registry using an account that can write images to that registry; for example:
podman login $TARGET_REGISTRY
If the registry is insecure, add the --tls-verify=false flag to the command.
- Update the CASE manifest to correctly reference the DataPower Operator image.
Files for the DataPower Operator are now hosted on icr.io; however, the CASE manifest still refers to docker.io as the image host. To work around this issue, visit Airgap install failure due to 'unable to retrieve source image docker.io' in the DataPower documentation and update the manifest as instructed. After the manifest is updated, continue to the next step in this procedure.
- Mirror the product images.
- Connect the bastion host to both the internet and the restricted-network environment that contains the local registry.
- Run the following commands to copy the images to the local registry:
oc image mirror \
  -f ~/.ibm-pak/data/mirror/$CASE_NAME/$CASE_VERSION/images-mapping.txt \
  --filter-by-os '.*' \
  -a $REGISTRY_AUTH_FILE \
  --skip-multiple-scopes \
  --max-per-registry=1

oc image mirror \
  -f ~/.ibm-pak/data/mirror/$CS_CASE_NAME/$CS_CASE_VERSION/images-mapping.txt \
  -a $REGISTRY_AUTH_FILE \
  --filter-by-os '.*' \
  --skip-multiple-scopes \
  --max-per-registry=1

Note: If the local registry is not secured by TLS, or the certificate presented by the local registry is not trusted by your device, add the --insecure option to the command. There might be a slight delay before you see a response to the command.
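When the mirroring finishes, you can spot-check the copied repositories by querying the local registry's catalog; an illustrative check that assumes the registry exposes the standard Docker Registry v2 API:

# Substitute the credentials of an account that can read from the registry;
# add -k if the registry uses a self-signed certificate
curl -u <registry_user>:<registry_password> https://$TARGET_REGISTRY/v2/_catalog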
- Configure the target cluster.
Now that images have been mirrored to the local registry, the target cluster must be configured to pull the images from it. Complete the following steps to configure the cluster's global pull secret with the local registry's credentials and then instruct the cluster to pull the images from the local registry.
- Log in to your Red Hat OpenShift Container Platform cluster:
oc login <openshift_url> -u <username> -p <password> -n <namespace>
- Update the global image pull secret for the cluster as explained in the Red Hat OpenShift Container Platform documentation.
Updating the image pull secret provides the cluster with the credentials needed for pulling images from your local registry.
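The flow from the Red Hat documentation is roughly the following sketch (the pull-secret.json working file name is illustrative; merge your local registry credentials into it before applying):

# Extract the current global pull secret
oc get secret/pull-secret -n openshift-config \
  --template='{{index .data ".dockerconfigjson" | base64decode}}' > pull-secret.json
# Edit pull-secret.json to add an auth entry for your local registry, then apply it
oc set data secret/pull-secret -n openshift-config \
  --from-file=.dockerconfigjson=pull-secret.json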
Note: If you have an insecure registry, add the registry to the cluster's insecureRegistries list by running the following command:
oc edit image.config.openshift.io/cluster -o yaml
and add the TARGET_REGISTRY to spec.registrySources.insecureRegistries as shown in the following example:
spec:
  registrySources:
    insecureRegistries:
    - insecure0.svc:5001
    - <TARGET_REGISTRY>
If the insecureRegistries field does not exist, you can add it.
- Create the ImageContentSourcePolicy, which instructs the cluster to pull the images from your local registry (run both commands):
oc apply -f ~/.ibm-pak/data/mirror/$CASE_NAME/$CASE_VERSION/image-content-source-policy.yaml
oc apply -f ~/.ibm-pak/data/mirror/$CS_CASE_NAME/$CS_CASE_VERSION/image-content-source-policy.yaml
- Verify that the ImageContentSourcePolicy resource was created:
oc get imageContentSourcePolicy
- Verify your cluster node status:
oc get MachineConfigPool -w
Wait for all nodes to be updated before proceeding to the next step.
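All pools should eventually report UPDATED as True and UPDATING as False; the output looks similar to the following illustrative example (some columns omitted):

NAME     CONFIG                   UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT
master   rendered-master-<hash>   True      False      False      3              3
worker   rendered-worker-<hash>   True      False      False      3              3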
- Apply the catalog sources.
Now that you have mirrored images to the target cluster, apply the catalog sources.
In the following steps, replace <Architecture> with either amd64, s390x, or ppc64le as appropriate for your environment.
- Export the variables for the command line to use:
export CASE_NAME=ibm-apiconnect
export CASE_VERSION=4.0.9
export ARCH=amd64

export CS_CASE_NAME=ibm-cp-common-services
export CS_CASE_VERSION=1.15.10
export CS_ARCH=amd64
- Generate the catalog sources and save them in another directory in case you need to replicate this installation in the future.
- Get the catalog sources:
cat ~/.ibm-pak/data/mirror/${CASE_NAME}/${CASE_VERSION}/catalog-sources.yaml
cat ~/.ibm-pak/data/mirror/${CS_CASE_NAME}/${CS_CASE_VERSION}/catalog-sources.yaml
- (10.0.5.6 or earlier) Get any architecture-specific catalog sources that you need to back up as well:
cat ~/.ibm-pak/data/mirror/${CASE_NAME}/${CASE_VERSION}/catalog-sources-linux-${ARCH}.yaml
Starting with 10.0.5.7, this step is not needed.
You can also navigate to the directory in your file browser to copy these artifacts into files that you can keep for re-use or for pipelines.
- Apply the catalog sources to the cluster.
- Apply the universal catalog sources:
oc apply -f ~/.ibm-pak/data/mirror/${CASE_NAME}/${CASE_VERSION}/catalog-sources.yaml
oc apply -f ~/.ibm-pak/data/mirror/${CS_CASE_NAME}/${CS_CASE_VERSION}/catalog-sources.yaml
- (10.0.5.6 or earlier) Apply any architecture-specific catalog sources:
oc apply -f ~/.ibm-pak/data/mirror/${CASE_NAME}/${CASE_VERSION}/catalog-sources-linux-${ARCH}.yaml
Starting with 10.0.5.7, this step is not needed.
- Confirm that the catalog sources have been created in the openshift-marketplace namespace:
oc get catalogsource -n openshift-marketplace
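You can also confirm that the catalog pods backing these sources are healthy; an optional, illustrative check:

oc get pods -n openshift-marketplace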
- Update the operator channel.
- Open the Red Hat OpenShift web console and click Operators > Installed Operators > IBM API Connect > Subscriptions.
- Change the channel to the new version (v3.8), which triggers an upgrade of the API Connect operator.
If you are upgrading from 10.0.4-ifix3 and the API Connect operator does not begin its upgrade within a few minutes, perform the following workaround to delete the ibm-ai-wmltraining subscription and associated csv:
- Run the following command to get the name of the subscription:
oc get subscription -n <APIC_namespace> --no-headers=true | grep ibm-ai-wmltraining | awk '{print $1}'
- Run the following command to delete the subscription:
oc delete subscription <subscription-name> -n <APIC_namespace>
- Run the following command to get the name of the csv:
oc get csv --no-headers=true -n <APIC_namespace> | grep ibm-ai-wmltraining | awk '{print $1}'
- Run the following command to delete the csv:
oc delete csv <csv-name> -n <APIC_namespace>
Deleting the subscription and csv triggers the API Connect operator upgrade.
When the API Connect operator is updated, the new operator pod starts automatically.
- Verify that the API Connect operator was updated by completing the following steps:
- Get the name of the pod that hosts the operator by running the following command:
oc get po -n <APIC_namespace> | grep apiconnect
The response looks like the following example:
ibm-apiconnect-7bdb795465-8f7rm 1/1 Running 0 4m23s
- Get the API Connect version deployed on that pod by running the following command:
oc describe po <ibm-apiconnect-operator-podname> -n <APIC_namespace> | grep -i productversion
The response looks like the following example:
productVersion: 10.0.5.8
- If you are using a top-level CR: Update the top-level APIConnectCluster CR.
The spec section of the apiconnectcluster CR looks like the following example:
apiVersion: apiconnect.ibm.com/v1beta1
kind: APIConnectCluster
metadata:
  labels:
    app.kubernetes.io/instance: apiconnect
    app.kubernetes.io/managed-by: ibm-apiconnect
    app.kubernetes.io/name: apiconnect-production
  name: prod
  namespace: <APIC_namespace>
spec:
  allowUpgrade: true
  license:
    accept: true
    use: production
    license: L-GVEN-GFUPVE
  profile: n12xc4.m12
  version: 10.0.5.8
  storageClassName: rook-ceph-block
- Edit the apiconnectcluster CR by running the following command:
oc -n <APIC_namespace> edit apiconnectcluster
- If upgrading from v10.0.4-ifix3, or upgrading from v10.0.1.7-eus (or higher): In the spec section, add a new allowUpgrade attribute and set it to true:
spec:
  allowUpgrade: true
The allowUpgrade attribute enables the upgrade to 10.0.5.x. Because the upgrade deletes your analytics data, the attribute is required to prevent an accidental upgrade.
- In the spec section, update the API Connect version: change the version setting to 10.0.5.8.
- In the spec.gateway section of the CR, delete any template or dataPowerOverride sections. You cannot perform an upgrade if the CR contains an override.
- Save and close the CR. The response looks like the following example:
apiconnectcluster.apiconnect.ibm.com/prod configured
Note: If you see an error message when you attempt to save the CR, check if it is one of the following known issues:
- Webhook error for incorrect license. If you did not update the license ID in the CR, then when you save your changes, the following webhook error might display:
admission webhook "vapiconnectcluster.kb.io" denied the request: APIConnectCluster.apiconnect.ibm.com "<instance-name>" is invalid: spec.license.license: Invalid value: "L-RJON-BYGHM4": License L-RJON-BYGHM4 is invalid for the chosen version version. Please refer license document https://ibm.biz/apiclicenses
To resolve the error, see API Connect licenses for the list of the available license IDs and select the appropriate license IDs for your deployment. Update the CR with the new license value as in the following example, and then save and apply your changes again.
- Webhook error: Original PostgreSQL primary not found. Take the following actions to complete the upgrade and fix the cause of the error message:
- Edit your apiconnectcluster CR and add the following annotation:
...
metadata:
  annotations:
    apiconnect-operator/db-primary-not-found-allow-upgrade: "true"
...
- Continue with the upgrade. When the upgrade is complete, the management CR reports the warning:
Original PostgreSQL primary not found. Run apicops upgrade:pg-health-check to check the health of the database and to ensure pg_wal symlinks exist. If database health check passes please perform a management database backup and restore to restore the original PostgreSQL primary pod
- Take a new management database backup.
- Immediately restore from the new backup taken in the previous step. The action of taking and restoring a management backup results in the establishment of a new Postgres primary, eliminating the CR warning message. Be careful to restore from the backup that is taken after the upgrade, and not from a backup taken before upgrade.
- Webhook error: Original postgres primary is running as replica. Complete a Postgres failover; see Postgres failover steps. After you apply the Postgres failover steps, the upgrade resumes automatically.
- If needed, delete old Postgres client certificates. If you are upgrading from 10.0.1.x or 10.0.4.0-ifix1, or if you previously installed any of those versions before upgrading to 10.0.5.x, there might be old Postgres client certificates. To verify, run the following command:
oc -n <namespace> get certs | grep db-client
For example, if you see that both -db-client-apicuser and apicuser exist, apicuser is no longer in use. Remove the old certificates by running one of the following commands, depending on how many old certificates are left in your system:
oc -n <namespace> delete certs apicuser pgbouncer primaryuser postgres replicator
or:
oc -n <namespace> delete certs apicuser pgbouncer postgres replicator
- If you are using individual subsystem CRs: Start with the Management subsystem and update it by completing the following steps:
- Edit the ManagementCluster CR:
oc edit ManagementCluster -n <mgmt_namespace>
- If upgrading from v10.0.4-ifix3, or upgrading from v10.0.1.7-eus (or higher): In the spec section, add a new allowUpgrade attribute and set it to true:
spec:
  allowUpgrade: true
The allowUpgrade attribute enables the upgrade to 10.0.5.x. Because the upgrade to 10.0.5.x deletes your analytics data, the attribute is required to prevent an accidental upgrade.
- In the spec section, update the API Connect version: change the version setting to 10.0.5.8.
- If you are upgrading to a version of API Connect that requires a new license, update the license value now. For the list of licenses, see API Connect licenses.
- Save and close the CR to apply your changes.
The response looks like the following example:
managementcluster.management.apiconnect.ibm.com/management edited
Note: If you see an error message when you attempt to save the CR, check if it is one of the following known issues:
- Webhook error for incorrect license. If you did not update the license ID in the CR, then when you save your changes, the following webhook error might display:
admission webhook "vapiconnectcluster.kb.io" denied the request: APIConnectCluster.apiconnect.ibm.com "<instance-name>" is invalid: spec.license.license: Invalid value: "L-RJON-BYGHM4": License L-RJON-BYGHM4 is invalid for the chosen version version. Please refer license document https://ibm.biz/apiclicenses
To resolve the error, see API Connect licenses for the list of the available license IDs and select the appropriate license IDs for your deployment. Update the CR with the new license value as in the following example, and then save and apply your changes again.
- Webhook error: Original PostgreSQL primary not found. Take the following actions to complete the upgrade and fix the cause of the error message:
- Edit your ManagementCluster CR and add the following annotation:
...
metadata:
  annotations:
    apiconnect-operator/db-primary-not-found-allow-upgrade: "true"
...
- Continue with the upgrade. When the upgrade is complete, the management CR reports the warning:
Original PostgreSQL primary not found. Run apicops upgrade:pg-health-check to check the health of the database and to ensure pg_wal symlinks exist. If database health check passes please perform a management database backup and restore to restore the original PostgreSQL primary pod
- Take a new management database backup.
- Immediately restore from the new backup taken in the previous step. The action of taking and restoring a management backup results in the establishment of a new Postgres primary, eliminating the CR warning message. Be careful to restore from the backup that is taken after the upgrade, and not from a backup taken before upgrade.
- Webhook error: Original postgres primary is running as replica. Complete a Postgres failover; see Postgres failover steps. After you apply the Postgres failover steps, the upgrade resumes automatically.
- Confirm that the Management subsystem upgrade is complete. Check the status of the upgrade with:
oc get ManagementCluster -n <mgmt_namespace>
Wait until all pods are running at the new version. For example:
NAME         READY   STATUS    VERSION    RECONCILED VERSION   AGE
management   18/18   Running   10.0.5.8   10.0.5.8-1281        97m
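If you want a script to wait for the rollout, a simple polling sketch that greps the printed table (the column position is assumed from the example output above):

# Poll until the RECONCILED VERSION column (field 5) reports the target release
until oc get ManagementCluster -n <mgmt_namespace> --no-headers | \
  awk '{print $5}' | grep -q '^10\.0\.5\.8'; do
  sleep 30
done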
- Management subsystem only: If needed, delete old Postgres client certificates.
Skip this step for the Portal, Analytics, and Gateway subsystems.
If you are upgrading from 10.0.1.x or 10.0.4.0-ifix1, or if you previously installed any of those versions before upgrading to 10.0.5.x, there might be old Postgres client certificates. To verify, run the following command:
oc -n <namespace> get certs | grep db-client
For example, if you see that both -db-client-apicuser and apicuser exist, apicuser is no longer in use. Remove the old certificates by running one of the following commands, depending on how many old certificates are left in your system:
oc -n <namespace> delete certs apicuser pgbouncer primaryuser postgres replicator
or:
oc -n <namespace> delete certs apicuser pgbouncer postgres replicator
- Repeat the process for the remaining subsystem CRs: GatewayCluster, PortalCluster, and then AnalyticsCluster.
Important:
- In the GatewayCluster CR, delete any template or dataPowerOverride sections. You cannot perform an upgrade if the CR contains an override.
- If upgrading from v10.0.4-ifix3, or upgrading from v10.0.1.7-eus (or higher): The allowUpgrade attribute set in the Management CR must also be set in the AnalyticsCluster CR. It is not required for Gateway or Portal CRs.
- Validate that the upgrade was successfully deployed by running the following command:
oc get apic -n <APIC_namespace>
The response looks like the following example:

NAME                                                     READY   STATUS    VERSION    RECONCILED VERSION   AGE
analyticscluster.analytics.apiconnect.ibm.com/prod-a7s   5/5     Running   10.0.5.8   10.0.5.8             21h

NAME                                        READY   STATUS   VERSION    RECONCILED VERSION   AGE
apiconnectcluster.apiconnect.ibm.com/prod   4/4     Ready    10.0.5.8   10.0.5.8             22h

NAME                                         PHASE     READY   SUMMARY                           VERSION    AGE
datapowerservice.datapower.ibm.com/prod-gw   Running   True    StatefulSet replicas ready: 3/3   10.0.5.8   21h

NAME                                         PHASE     LAST EVENT   WORK PENDING   WORK IN-PROGRESS   AGE
datapowermonitor.datapower.ibm.com/prod-gw   Running                false          false              21h

NAME                                                READY   STATUS    VERSION    RECONCILED VERSION   AGE
gatewaycluster.gateway.apiconnect.ibm.com/prod-gw   2/2     Running   10.0.5.8   10.0.5.8             21h

NAME                                                         READY   STATUS    VERSION    RECONCILED VERSION   AGE
managementcluster.management.apiconnect.ibm.com/prod-mgmt    16/16   Running   10.0.5.8   10.0.5.8             22h

NAME                                                                STATUS   ID                                  CLUSTER     TYPE   CR TYPE   AGE
managementbackup.management.apiconnect.ibm.com/prod-mgmt-0f583bd9   Ready    20210505-141020F_20210506-011830I   prod-mgmt   incr   record    11h
managementbackup.management.apiconnect.ibm.com/prod-mgmt-10af02ee   Ready    20210505-141020F                    prod-mgmt   full   record    21h
managementbackup.management.apiconnect.ibm.com/prod-mgmt-148f0cfa   Ready    20210505-141020F_20210506-012856I   prod-mgmt   incr   record    11h
managementbackup.management.apiconnect.ibm.com/prod-mgmt-20bd6dae   Ready    20210505-141020F_20210506-090753I   prod-mgmt   incr   record    3h28m
managementbackup.management.apiconnect.ibm.com/prod-mgmt-40efdb38   Ready    20210505-141020F_20210505-195838I   prod-mgmt   incr   record    16h
managementbackup.management.apiconnect.ibm.com/prod-mgmt-681aa239   Ready    20210505-141020F_20210505-220302I   prod-mgmt   incr   record    14h
managementbackup.management.apiconnect.ibm.com/prod-mgmt-7f7150dd   Ready    20210505-141020F_20210505-160732I   prod-mgmt   incr   record    20h
managementbackup.management.apiconnect.ibm.com/prod-mgmt-806f8de6   Ready    20210505-141020F_20210505-214657I   prod-mgmt   incr   record    14h
managementbackup.management.apiconnect.ibm.com/prod-mgmt-868a066a   Ready    20210505-141020F_20210506-090140I   prod-mgmt   incr   record    3h34m
managementbackup.management.apiconnect.ibm.com/prod-mgmt-cf9a85dc   Ready    20210505-141020F_20210505-210119I   prod-mgmt   incr   record    15h
managementbackup.management.apiconnect.ibm.com/prod-mgmt-ef63b789   Ready    20210506-103241F                    prod-mgmt   full   record    83m

NAME                                                                   STATUS     MESSAGE                                                     AGE
managementdbupgrade.management.apiconnect.ibm.com/prod-mgmt-up-649mc   Complete   Upgrade is Complete (DB Schema/data are up-to-date)         142m
managementdbupgrade.management.apiconnect.ibm.com/prod-mgmt-up-9mjhk   Complete   Fresh install is Complete (DB Schema/data are up-to-date)   22h

NAME                                               READY   STATUS    VERSION    RECONCILED VERSION   AGE
portalcluster.portal.apiconnect.ibm.com/prod-ptl   3/3     Running   10.0.5.8   10.0.5.8             21h
Important: If you need to restart the deployment, wait until all Portal sites complete the upgrade. Run the following commands to check the status of the sites:
- Log in as an admin user:
apic login -s <server_name> --realm admin/default-idp-1 --username admin --password <password>
- Get the portal service ID and endpoint:
apic portal-services:get -o admin -s <management_server_endpoint> \
  --availability-zone availability-zone-default <portal-service-name> \
  --output - --format json
- List the sites:
apic --mode portaladmin sites:list -s <management_server_endpoint> \
  --portal_service_name <portal-service-name> \
  --format json
Any sites currently upgrading display the UPGRADING status; any site that completed its upgrade displays the INSTALLED status and the new platform version. Verify that all sites display the INSTALLED status before proceeding.
For more information on the sites command, see apic sites:list and Using the sites commands.
- After all sites are in INSTALLED state and have the new platform listed, run the following command:
apic --mode portaladmin platforms:list -s <server_name> --portal_service_name <portal_service_name>
Verify that the new version of the platform is the only platform listed.
For more information on the platforms command, see apic platforms:list and Using the platforms commands.
- Upgrading to 10.0.5.5: Verify that the GatewayCluster upgraded correctly.
When upgrading to 10.0.5.5 on OpenShift, the rolling update might fail to start on gateway operand pods due to a gateway peering issue, even though the reconciled version on the gateway CR (incorrectly) displays as 10.0.5.5. Complete the following steps to check for this issue and correct it if needed.
- Check the productVersion of each gateway pod to verify that it is 10.0.5.7 (the version of DataPower Gateway that was released with API Connect 10.0.5.5) by running one of the following commands:
oc get po -n apic_namespace <gateway_pods> -o yaml | yq .metadata.annotations.productVersion
or
oc get po -n apic_namespace <gateway_pods> -o custom-columns="productVersion:.metadata.annotations.productVersion"
where:
- apic_namespace is the namespace where API Connect is installed
- <gateway_pods> is a space-delimited list of the names of your gateway peering pods
- If any pod returns an incorrect value for the version, resolve the issue as explained in Incorrect productVersion of gateway pods after upgrade.
- (Optional) If you upgraded from 10.0.5.4 or earlier, delete the DataPowerService CRs so that they will be regenerated with random passwords for gateway-peering.
Starting with API Connect 10.0.5.5, GatewayCluster pods are configured by default to secure the gateway-peering sessions with a unique, randomly generated password. However, GatewayCluster pods created prior to API Connect 10.0.5.5 are configured to use a single, hard-coded password and upgrading to 10.0.5.5 or later does not replace the hard-coded password.
After upgrading to API Connect 10.0.5.5 or later, you can choose to secure the gateway-peering sessions by running the following command to delete the DataPowerService CR that was created by the GatewayCluster:
oc delete dp <gateway_cluster_name>
This action prompts the API Connect Operator to recreate the DataPowerService CR with the unique, randomly generated password. This is a one-time change and does not need to be repeated for subsequent upgrades.
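You can confirm that the operator recreated the CR; dp is the same short name used in the delete command above:

oc get dp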
- If upgrading from v10.0.4-ifix3, or upgrading from v10.0.1.7-eus (or higher): Enable analytics as explained in Enabling Analytics after upgrading.
- If you are upgrading to 10.0.5.3 (or later) from an earlier 10.0.5.x release: Review and configure the new inter-subsystem communication features: Optional post-upgrade steps for upgrade to 10.0.5.3 from earlier 10.0.5 release.
- Restart all nats-server pods by running the following command:
oc -n <namespace> delete po -l app.kubernetes.io/name=natscluster
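To confirm that the pods restarted cleanly, list them with the same label selector used by the delete command:

oc -n <namespace> get po -l app.kubernetes.io/name=natscluster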
What to do next
Update your toolkit CLI by downloading it from IBM Fix Central or from the Cloud Manager UI; see Installing the toolkit.
If you are upgrading from v10.0.5.1 or earlier to v10.0.5.2: The change in deployment profile CPU and memory limits that are introduced in 10.0.5.2 (see New deployment profiles and CPU licensing) can result in a change in the performance of your Management component. If you notice any obvious reduction in performance of the Management UI or toolkit CLI where you have multiple concurrent users, open a support case.