Upgrading to 10.0.1.8-eus using portable storage
You can use a portable storage device to perform an air-gapped upgrade of IBM® API Connect 10.0.1.8-eus on OpenShift Container Platform (OCP) when your cluster has no internet connectivity.
Before you begin
- If you plan to upgrade to the latest version of 10.0.1.x-eus, your API Connect deployment must be upgraded to 10.0.1.7-eus or 10.0.1.8-eus first. If your deployment is already at 10.0.1.7-eus, you can skip this task and proceed directly to Deprecated: Air-gapped upgrade using cloudctl.
Restriction: Cloud Pak for Integration 2020.4 is now End of Support, and the API Management component cannot be upgraded to a version later than API Connect 10.0.1.7-eus.
- The upgrade procedure requires you to use Red Hat Skopeo for moving container images. Skopeo is not available for Microsoft Windows, so you cannot perform this task using a Windows host.
- Don't use the tilde ~ within double quotation marks in any command because the tilde doesn’t expand and your commands might fail.
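For example, the tilde is not expanded inside double quotation marks, while $HOME is; this minimal sketch uses the offline-store directory from later steps purely as an illustration:
echo "~/upgrade_offline"      # prints the literal string ~/upgrade_offline
echo "$HOME/upgrade_offline"  # prints the expanded path, such as /home/user/upgrade_offline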
About this task
- You must upgrade the API Connect deployment before upgrading OpenShift from 4.6 to 4.10.
The step to upgrade OpenShift appears at the end of the upgrade procedure.
- You must upgrade operators in the specified sequence to ensure that dependencies are satisfied. In addition, the Cloud Pak common services operator and the API Connect operator must be upgraded in tandem (as close in time as possible) to ensure success.
- Upgrading the Cloud Pak common services operator can take as long as an hour, and the new certificate manager is not available until that upgrade is complete. After you upgrade operators, it's important to wait for the certificate manager update to complete before proceeding to the next step.
Procedure
- Ensure that you have completed all of the steps in Preparing to upgrade on OpenShift and Cloud Pak for Integration, including reviewing the Upgrade considerations on OpenShift and Cloud Pak for Integration. Do not attempt an upgrade until you have reviewed the considerations and prepared your deployment.
- Prepare a host that can be connected to the internet.
Note: If you are using the same host that you used for installing API Connect, skip this step.
The host must satisfy the following requirements:
- The host must be on a Linux x86_64 platform, or any operating system that the IBM Cloud Pak CLI, the OpenShift CLI, and Red Hat Skopeo support.
- The host locale must be set to English.
- The host must have sufficient storage to hold all of the software that is to be transferred to the local Docker registry.
Complete the following steps to set up your external host:
- Install OpenSSL version 1.1.1 or higher.
- Install Docker or Podman:
- To install Docker (for example, on Red Hat® Enterprise Linux®), run the following commands:
yum check-update
yum install docker
- To install Podman, see Podman Installation Instructions.
- Install httpd-tools by running the following command:
yum install httpd-tools
- Install the IBM Cloud Pak CLI by completing the following steps:
Install the latest version of the binary file for your platform. For more information, see cloud-pak-cli.
- Download the binary file by running the following command:
wget https://github.com/IBM/cloud-pak-cli/releases/latest/download/<binary_file_name>
For example:
wget https://github.com/IBM/cloud-pak-cli/releases/latest/download/cloudctl-linux-amd64.tar.gz
- Extract the binary file by running the following command:
tar -xf <binary_file_name>
- Run the following commands to make the file executable and move it:
chmod 755 <file_name>
mv <file_name> /usr/local/bin/cloudctl
- Confirm that cloudctl is installed by running the following command:
cloudctl --help
The cloudctl usage is displayed.
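Optionally, you can also display the installed CLI release level:
cloudctl version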
- Install the oc OpenShift Container Platform CLI tool. For more information, see Getting started with the CLI in the Red Hat OpenShift documentation.
- Install the Red Hat Skopeo CLI version 1.0.0 or higher. For more information, see Installing Skopeo from packages.
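To confirm that the installed version meets the minimum requirement, run:
skopeo --version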
- Run the following command to create a directory that serves as the offline store.
The following example creates a directory called "upgrade_offline", which is used in the subsequent steps.
mkdir $HOME/upgrade_offline
Notes:
- The $HOME/upgrade_offline store must be persistent to avoid transferring data more than once. The persistence also helps to run the mirroring process multiple times or on a schedule.
- The $HOME/upgrade_offline store must not use the same name that you used for the original installation. If you re-use the original directory name, the api-connect-catalog tag will not get updated correctly, the catalog source pods will not pick up the new image, and as a result, the operator will not upgrade. Remember to update the OFFLINEDIR environment variable accordingly in all of the steps.
- On the portable host, create environment variables for the installer and image inventory.
Note: If you are using the same host that you used for installing API Connect, you can re-use the environment variables from the installation, but you must update the following variables for the new release:
- CASE_VERSION: update to the newest CASE.
- OFFLINEDIR: update to reflect the new folder created for the upgrade; for example, $HOME/upgrade_offline.
Create the following environment variables with the installer image name and the image inventory. Set CASE_VERSION to the value for the new API Connect release. The CASE version shown in the example might not be correct for your deployment; refer to Operator, operand, and CASE version for the correct CASE version.
export CASE_NAME=ibm-apiconnect
export CASE_VERSION=2.1.14
export CASE_ARCHIVE=$CASE_NAME-$CASE_VERSION.tgz
export CASE_INVENTORY_SETUP=apiconnectOperatorSetup
export OFFLINEDIR=$HOME/upgrade_offline
export OFFLINEDIR_ARCHIVE=offline.tgz
export CASE_REMOTE_PATH=https://github.com/IBM/cloud-pak/raw/master/repo/case/$CASE_NAME/$CASE_VERSION/$CASE_ARCHIVE
export CASE_LOCAL_PATH=$OFFLINEDIR/$CASE_ARCHIVE
export PORTABLE_DOCKER_REGISTRY_HOST=localhost
export PORTABLE_DOCKER_REGISTRY_PORT=443
export PORTABLE_DOCKER_REGISTRY=$PORTABLE_DOCKER_REGISTRY_HOST:$PORTABLE_DOCKER_REGISTRY_PORT
export PORTABLE_DOCKER_REGISTRY_USER=username
export PORTABLE_DOCKER_REGISTRY_PASSWORD=password
export PORTABLE_DOCKER_REGISTRY_PATH=$OFFLINEDIR/imageregistry
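Before proceeding, you can verify that the variables resolve as expected; for example:
echo $CASE_REMOTE_PATH
echo $OFFLINEDIR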
- Connect the portable host to the internet, and disconnect it from the local, air-gapped network.
- Download the API Connect installer and image inventory by running the following command:
cloudctl case save \
  --case $CASE_REMOTE_PATH \
  --outputdir $OFFLINEDIR
- Mirror the images from the ICR (source) registry to the portable host's (destination) registry.
- Store the credentials for the ICR (source) registry.
The following command stores and caches the IBM Entitled Registry credentials in a file on your file system in the $HOME/.airgap/secrets location.
cloudctl case launch \
  --case $OFFLINEDIR/$CASE_ARCHIVE \
  --inventory $CASE_INVENTORY_SETUP \
  --action configure-creds-airgap \
  --namespace $NAMESPACE \
  --args "--registry cp.icr.io --user cp --pass <entitlement-key> --inputDir $OFFLINEDIR"
- Store the credentials for the portable host's (destination) registry.
The following command stores and caches the Docker registry credentials in a file on your file system in the $HOME/.airgap/secrets location:
cloudctl case launch \
  --case $CASE_LOCAL_PATH \
  --inventory $CASE_INVENTORY_SETUP \
  --action configure-creds-airgap \
  --args "--registry $PORTABLE_DOCKER_REGISTRY --user $PORTABLE_DOCKER_REGISTRY_USER --pass $PORTABLE_DOCKER_REGISTRY_PASSWORD"
- If needed, start the Docker registry service on the portable host.
If you are using the same portable host that you used for installing API Connect, the Docker registry service might already be running.
- Initialize the Docker registry by running the following command:
cloudctl case launch \
  --case $CASE_LOCAL_PATH \
  --inventory $CASE_INVENTORY_SETUP \
  --action init-registry \
  --args "--registry $PORTABLE_DOCKER_REGISTRY_HOST --user $PORTABLE_DOCKER_REGISTRY_USER --pass $PORTABLE_DOCKER_REGISTRY_PASSWORD --dir $PORTABLE_DOCKER_REGISTRY_PATH"
- Start the Docker registry by running the following command:
cloudctl case launch \
  --case $CASE_LOCAL_PATH \
  --inventory $CASE_INVENTORY_SETUP \
  --action start-registry \
  --args "--registry $PORTABLE_DOCKER_REGISTRY_HOST --port $PORTABLE_DOCKER_REGISTRY_PORT --user $PORTABLE_DOCKER_REGISTRY_USER --pass $PORTABLE_DOCKER_REGISTRY_PASSWORD --dir $PORTABLE_DOCKER_REGISTRY_PATH"
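To confirm that the registry is up and serving requests, you can query the Docker Registry v2 API catalog endpoint. This is a quick sketch; the -k flag assumes the registry uses the self-signed certificate created by the init-registry action:
curl -k -u $PORTABLE_DOCKER_REGISTRY_USER:$PORTABLE_DOCKER_REGISTRY_PASSWORD https://$PORTABLE_DOCKER_REGISTRY/v2/_catalog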
- Mirror the images to the registry on the portable host.
cloudctl case launch \
  --case $CASE_LOCAL_PATH \
  --inventory $CASE_INVENTORY_SETUP \
  --action mirror-images \
  --args "--registry $PORTABLE_DOCKER_REGISTRY --inputDir $OFFLINEDIR"
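After the mirroring completes, you can spot-check that an expected repository was copied to the destination registry; for example, using Skopeo (the catalog repository shown here is just one example, and --tls-verify=false assumes a self-signed registry certificate):
skopeo list-tags --tls-verify=false --creds $PORTABLE_DOCKER_REGISTRY_USER:$PORTABLE_DOCKER_REGISTRY_PASSWORD docker://$PORTABLE_DOCKER_REGISTRY/cpopen/ibm-apiconnect-catalog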
- Optional: Save the Docker registry image that you stored on the portable host.
If your air-gapped network doesn't have a Docker registry image, you can save the image on the portable host and copy it later to the host in your air-gapped environment.
docker save docker.io/library/registry:2.6 -o $PORTABLE_DOCKER_REGISTRY_PATH/registry-image.tar
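Later, on the host in your air-gapped environment, you can load the saved image into the local image store (assuming the tar file has been copied across):
docker load -i registry-image.tar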
- If you are upgrading from any version prior to 10.0.1.6, patch the CatalogSource named ibm-apiconnect-catalog to update the registry namespace.
In version 10.0.1.6, the registry namespace in the image name used for the ibm-apiconnect-catalog CatalogSource changed from ibmcom to cpopen. Patch the CatalogSource by running the following command:
oc patch catalogsource ibm-apiconnect-catalog -n openshift-marketplace --type=merge -p='{"spec":{"image":"'${LOCAL_DOCKER_REGISTRY}'/cpopen/ibm-apiconnect-catalog:latest-amd64"}}'
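You can confirm that the CatalogSource now references the cpopen registry namespace by checking its image field:
oc get catalogsource ibm-apiconnect-catalog -n openshift-marketplace -o jsonpath='{.spec.image}'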
- If needed, update the operator channel and the operator for Cloud Pak common services:
- Open the OpenShift web console and click Operators > Installed Operators > IBM Foundational Services > Subscriptions.
- Change the channel to v3, and update the operator to 3.19.
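You can also track the update from the command line; for example, assuming the Cloud Pak common services operators run in the ibm-common-services namespace:
oc get csv -n ibm-common-services
The update is finished when the relevant ClusterServiceVersions report Succeeded.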
- Immediately update the operator channel for API Connect:
- Confirm that the pod ibm-apiconnect-catalog-xxxx in the openshift-marketplace namespace was updated.
- Open the OpenShift web console and click Operators > Installed Operators > IBM API Connect > Subscriptions.
- Change the channel to the new version (v2.1.8-eus), which triggers an upgrade of the API Connect operator.
Known issues:
- If you are upgrading to API Connect version 10.0.1.6-eus, 10.0.1.6-ifix1-eus, 10.0.1.7-eus, or 10.0.1.8-eus, you might encounter the following error while updating the operator:
Message: unpack job not completed: Unpack pod(openshift-marketplace/e9f169cee8bffacf9ab35d276a48b7207d9606e2b7a0a8087bc58b4ff7tx22l) container(pull) is pending. Reason: ImagePullBackOff, Message: Back-off pulling image "ibmcom/ibm-apiconnect-operator-bundle@sha256:ef0ce455270189c37a5dc0500219061959c041f88110f601f6e7bf8072df4943" Reason: JobIncomplete
Resolve the error by completing the following steps to update the ImageContentSourcePolicy for your deployment:
- Log in to the OpenShift cluster UI as an administrator of your cluster.
- Click Search > Resources and search for ICSP.
- In the list of ICSPs, click ibm-apiconnect to edit it.
- In the ibm-apiconnect ICSP, click the YAML tab.
- In the spec.repositoryDigestMirrors section, locate the mirrors: subsection containing source: docker.io/ibmcom.
- Add a new mirror ending with /ibmcom to the section, as in the following example:
- mirrors:
  - <AIRGAP_REGISTRY_ADDRESS>/ibmcom
  - <AIRGAP_REGISTRY_ADDRESS>/cpopen
  source: docker.io/ibmcom
- If the job does not automatically continue, uninstall and reinstall the API Connect operator.
- The certificate manager was upgraded, so you might encounter an upgrade error if the CRD for the new certificate manager is not found. For information on the error messages that indicate this problem, and the steps to resolve it, see Upgrade error when the CRD for the new certificate manager is not found in the Troubleshooting installation and upgrade on OpenShift topic.
- A null value for the backrestStorageType property in the pgcluster CR causes an error during the operator upgrade from 10.0.1.6-ifix1-eus or earlier. For information on the error messages that indicate this problem, and the steps to resolve it, see Operator upgrade fails with error from API Connect operator and Postgres operator in the Troubleshooting installation and upgrade on OpenShift topic.
When the API Connect operator is updated, the new pod starts automatically.
- Verify that the API Connect operator was updated by completing the following steps:
- Get the name of the pod that hosts the operator by running the following command:
oc get po -n <APIC_namespace> | grep apiconnect
The response looks like the following example:
ibm-apiconnect-7bdb795465-8f7rm 1/1 Running 0 4m23s
- Get the API Connect version deployed on that pod by running the following command:
oc describe po <ibm-apiconnect-operator-podname> -n <APIC_namespace> | grep -i productversion
The response looks like the following example:
productVersion: 10.0.1.8-eus
- Verify that the cert-manager was upgraded.
Attention: The Cloud Pak common services upgrade can take as long as an hour, and the new version of cert-manager will not be available until the upgrade is complete. Do not proceed until the cert-manager upgrade is complete.
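One way to check from the command line is to confirm that the new certificate manager's CRDs are present; this sketch assumes the new cert-manager serves the cert-manager.io API group:
oc get crd certificates.cert-manager.io issuers.cert-manager.io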
- Check for certificate errors, and then recreate issuers and certificates if needed.
The cert-manager upgrade might cause some errors during the API Connect operator upgrade. Complete the following steps to check for certificate errors and correct them.
- Check the new API Connect operator's log for an error similar to the following example:
{"level":"error","ts":1634966113.8442025,"logger":"controllers.AnalyticsCluster","msg":"Failed to set owner reference on certificate request","analyticscluster":"apic/<instance-name>-a7s","certificate":"<instance-name>-a7s-ca","error":"Object apic/<instance-name>-a7s-ca is already owned by another Certificate controller <instance-name>-a7s-ca",
To correct this problem, delete all issuers and certificates generated with certmanager.k8s.io/v1alpha1. For certificates used by route objects, you must also delete the route and secret objects.
- Run the following commands to delete the issuers and certificates that were generated with certmanager.k8s.io/v1alpha1:
oc delete issuers.certmanager.k8s.io <instance-name>-self-signed <instance-name>-ingress-issuer <instance-name>-mgmt-ca <instance-name>-a7s-ca <instance-name>-ptl-ca
oc delete certs.certmanager.k8s.io <instance-name>-ingress-ca <instance-name>-mgmt-ca <instance-name>-ptl-ca <instance-name>-a7s-ca
In the examples, <instance-name> is the instance name of the top-level apiconnectcluster.
When you delete the issuers and certificates, the new certificate manager generates replacements; this might take a few minutes.
- Verify that the new CA certs are refreshed and ready.
Run the following command to verify the certificates:
oc get certs <instance-name>-ingress-ca <instance-name>-mgmt-ca <instance-name>-ptl-ca <instance-name>-a7s-ca
The CA certs are ready when the AGE is "new" and the READY column shows True.
- Delete the remaining old certificates, routes, and secret objects corresponding to those routes.
Run the following commands:
oc get certs.certmanager.k8s.io | awk '/<instance-name>/{print $1}' | xargs oc delete certs.certmanager.k8s.io
oc delete certs.certmanager.k8s.io postgres-operator
oc get routes --no-headers -o custom-columns=":metadata.name" | grep ^<instance-name>- | xargs oc delete routes
Note: The following command deletes the secrets for the routes. Do not delete any other secrets.
oc get routes --no-headers -o custom-columns=":metadata.name" | grep ^<instance-name>- | xargs oc delete secrets
- Verify that no old issuers or certificates from your top-level instance remain.
Run the following commands:
oc get issuers.certmanager.k8s.io | grep <instance-name>
oc get certs.certmanager.k8s.io | grep <instance-name>
Both commands should report that no resources were found.
- Use the latest version of apicops to validate the certificates.
- Run the following command:
apicops upgrade:stale-certs -n <APIC_namespace>
- Delete any stale certificates that are managed by cert-manager. If a certificate failed the validation and it is managed by cert-manager, you can delete the stale certificate secret and let cert-manager regenerate it. Run the following command:
oc delete secret <stale-secret> -n <APIC_namespace>
- Restart the corresponding pod so that it can pick up the new secret. To determine which pod to restart, see the following topics:
For information on the apicops tool, see The API Connect operations tool: apicops.
- Run the following command to delete the Postgres pods, which refreshes the new certificate:
oc get pod -n <namespace> --no-headers=true | grep postgres | grep -v backup | awk '{print $1}' | xargs oc delete pod -n <namespace>
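The deleted pods are recreated automatically; you can confirm that they return to the Running state with:
oc get pod -n <namespace> | grep postgres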
- Delete the portal-www, portal-db, and portal-nginx pods to ensure that they use the new secrets.
If you have the Developer Portal deployed, then the portal-www, portal-db, and portal-nginx pods might require deleting to ensure that they pick up the newly generated secrets when restarted. If the pods are not showing as "ready" in a timely manner, then delete all the pods at the same time (this will cause downtime).
Run the following commands to get the name of the portal CR and delete the pods:
oc project <APIC_namespace>
oc get ptl
oc delete po -l app.kubernetes.io/instance=<name_of_portal_CR>
- Renew the internal certificates for the analytics subsystem.
If you see analytics pods in the CrashLoopBackOff state, then renew the internal certificates for the analytics subsystem and force a restart of the pods.
- Switch to the project/namespace where analytics is deployed, and run the following command to get the name of the analytics CR (AnalyticsCluster):
oc project <APIC_namespace>
oc get a7s
You need the CR name for the remaining steps.
- Renew the internal certificates (CA, client, and server) by running the following commands:
oc get certificate <name_of_analytics_CR>-ca -o=jsonpath='{.spec.secretName}' | xargs oc delete secret
oc get certificate <name_of_analytics_CR>-client -o=jsonpath='{.spec.secretName}' | xargs oc delete secret
oc get certificate <name_of_analytics_CR>-server -o=jsonpath='{.spec.secretName}' | xargs oc delete secret
- Force a restart of all analytics pods by running the following command:
oc delete po -l app.kubernetes.io/instance=<name_of_analytics_CR>
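The pods are recreated automatically; you can watch them come back up by using the same label selector:
oc get po -l app.kubernetes.io/instance=<name_of_analytics_CR> -w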
- Ensure that the operators and operands are healthy before proceeding.
- Operators: The OpenShift web console indicates that all operators are in the Succeeded state without any warnings.
- Operands:
- To verify whether operands are healthy, run the following command:
oc get apic
Check the status of the apiconnectcluster custom resource. The CR will not report as ready until you complete some additional steps in this procedure.
- In Cloud Pak for Integration, wait until the API Connect capability shows READY (green check) in Platform Navigator.
Known issue: Status toggles between Ready and Warning.
There is a known issue where the API Connect operator toggles the overall status of the API Connect deployment in Platform Navigator between Ready and Warning. Look at the full list of conditions; when Ready is True, you can proceed to the next step even if Warning is also true.
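To inspect the full list of conditions from the command line, you can query the top-level CR directly; for example, assuming your instance is in <APIC_namespace>:
oc get apiconnectcluster -n <APIC_namespace> -o jsonpath='{.items[*].status.conditions}'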
- Upgrade the API Connect operand:
- OpenShift:
- Update the version field in the top-level CR; for example:
apiVersion: apiconnect.ibm.com/v1beta1
kind: APIConnectCluster
metadata:
  labels:
    app.kubernetes.io/instance: apiconnect
    app.kubernetes.io/managed-by: ibm-apiconnect
    app.kubernetes.io/name: apiconnect-production
  name: prod
  namespace: APIC_namespace
spec:
  license:
    accept: true
    use: production
  profile: n12xc4.m12
  version: 10.0.1.8-eus
  storageClassName: rook-ceph-block
Specify the currently deployed profile and use values, which might not match the example. If you want to change to a different profile, you can do it after completing the upgrade (for instructions, see Changing deployment profiles on OpenShift).
- In the spec.gateway section, delete the template override section, if it exists. You cannot run an upgrade if the CR contains an override.
- Apply the updated top-level CR to upgrade the API Connect operand by running the following command:
oc apply -f CR_file_name.yaml
The response looks like the following example:
apiconnectcluster.apiconnect.ibm.com/prod configured
- Cloud Pak for Integration:
- In Platform Navigator, click the Runtimes tab.
- Click the options menu at the end of the current row, and then click Change version.
- Click Select a new channel or version, and then select 10.0.1.8-eus in the Version field.
Selecting the new channel ensures that both DataPower Gateway and API Connect are upgraded.
Note: On the operand version change, you might see the following webhook error message:
admission webhook "vapiconnectcluster.kb.io" denied the request: APIConnectCluster.apiconnect.ibm.com "<instance-name>" is invalid: spec.license.license: Invalid value: "L-RJON-BYGHM4": License L-RJON-BYGHM4 is invalid for the chosen version 10.0.1.7-eus. Please refer license document https://ibm.biz/apiclicenses
To correct the webhook error, visit https://ibm.biz/apiclicenses and select the appropriate License ID for the new version, and then re-apply your CR with the newer license value. The updated CR spec should look like the following example:
spec:
  license:
    accept: true
    use: production
    license: license-value
- Click Save to save your selections and start the upgrade.
In the runtimes table, the Status column for the runtime displays the "Upgrading" message. The upgrade is complete when the Status is "Ready" and the Version displays the new version number.
- Verify that the upgraded subsystems report as Running.
Run the following command:
oc get apic --all-namespaces
The Management, Analytics, and Portal subsystems should report as Running. The Gateway subsystem will not be running until you complete the next step to correct peering issues.
Example response:
NAME                                                      READY   STATUS    VERSION        RECONCILED VERSION   AGE
analyticscluster.analytics.apiconnect.ibm.com/analytics   8/8     Running   10.0.1.8-eus   10.0.1.8-eus-1074    121m

NAME                                     PHASE     READY   SUMMARY                           VERSION        AGE
datapowerservice.datapower.ibm.com/gw1   Running   True    StatefulSet replicas ready: 1/1   10.0.1.8-eus   100m

NAME                                     PHASE     LAST EVENT   WORK PENDING   WORK IN-PROGRESS   AGE
datapowermonitor.datapower.ibm.com/gw1   Running                false          false              100m

NAME                                            READY   STATUS    VERSION        RECONCILED VERSION   AGE
gatewaycluster.gateway.apiconnect.ibm.com/gw1   2/2     Running   10.0.1.8-eus   10.0.1.8-eus-1074    100m

NAME                                                 READY   STATUS    VERSION        RECONCILED VERSION   AGE
managementcluster.management.apiconnect.ibm.com/m1   16/16   Running   10.0.1.8-eus   10.0.1.8-eus-1074    162m

NAME                                             READY   STATUS    VERSION        RECONCILED VERSION   AGE
portalcluster.portal.apiconnect.ibm.com/portal   3/3     Running   10.0.1.8-eus   10.0.1.8-eus-1074    139m
- After the operand upgrade, scale the Gateway pods down, and back up, to correct peering issues caused by the ingress issuer change.
- Scale down the Gateway firmware containers by editing the top-level API Connect CR and setting the replicaCount to 0.
- OpenShift:
- Run the following command to edit the CR:
oc -n <APIC_namespace> edit apiconnectcluster
- Set the replicaCount to 0 (you might have to add the setting):
...
spec:
  gateway:
    replicaCount: 0
...
- Cloud Pak for Integration:
- In Platform Navigator, edit the instance and enable the Advanced settings.
- In the Gateway subsystem section, set the Advance Replica count field to 0.
- Wait for the Gateway firmware pods to scale down and terminate. Do not proceed until the pods have terminated.
- Scale the Gateway firmware containers back up to the original value.
- OpenShift:
- Run the following command to edit the apiconnectcluster CR:
oc -n <APIC_namespace> edit apiconnectcluster
- Set the replicaCount to its original value, or remove the setting:
...
spec:
  gateway:
...
- Cloud Pak for Integration:
- In the Platform UI, edit the instance and enable the Advanced settings.
- In the Gateway subsystem section, set the Advance Replica count field to its original value, or clear the field.
- Validate that the upgrade was successfully deployed by running the following command:
oc get apic -n <APIC_namespace>
The response looks like the following example and should show the new product version:
NAME                                                      READY   STATUS    VERSION        RECONCILED VERSION   AGE
analyticscluster.analytics.apiconnect.ibm.com/prod-a7s    8/8     Running   10.0.1.8-eus   10.0.1.8-eus         21h

NAME                                        READY   STATUS   VERSION        RECONCILED VERSION   AGE
apiconnectcluster.apiconnect.ibm.com/prod   4/4     Ready    10.0.1.8-eus   10.0.1.8-eus         22h

NAME                                         PHASE     READY   SUMMARY                           VERSION        AGE
datapowerservice.datapower.ibm.com/prod-gw   Running   True    StatefulSet replicas ready: 3/3   10.0.1.8-eus   21h

NAME                                         PHASE     LAST EVENT   WORK PENDING   WORK IN-PROGRESS   AGE
datapowermonitor.datapower.ibm.com/prod-gw   Running                false          false              21h

NAME                                                READY   STATUS    VERSION        RECONCILED VERSION   AGE
gatewaycluster.gateway.apiconnect.ibm.com/prod-gw   2/2     Running   10.0.1.8-eus   10.0.1.8-eus         21h

NAME                                                         READY   STATUS    VERSION        RECONCILED VERSION   AGE
managementcluster.management.apiconnect.ibm.com/prod-mgmt    16/16   Running   10.0.1.8-eus   10.0.1.8-eus         22h

NAME                                                                 STATUS   ID                                  CLUSTER     TYPE   CR TYPE   AGE
managementbackup.management.apiconnect.ibm.com/prod-mgmt-0f583bd9    Ready    20210505-141020F_20210506-011830I   prod-mgmt   incr   record    11h
managementbackup.management.apiconnect.ibm.com/prod-mgmt-10af02ee    Ready    20210505-141020F                    prod-mgmt   full   record    21h
managementbackup.management.apiconnect.ibm.com/prod-mgmt-148f0cfa    Ready    20210505-141020F_20210506-012856I   prod-mgmt   incr   record    11h
managementbackup.management.apiconnect.ibm.com/prod-mgmt-20bd6dae    Ready    20210505-141020F_20210506-090753I   prod-mgmt   incr   record    3h28m
managementbackup.management.apiconnect.ibm.com/prod-mgmt-40efdb38    Ready    20210505-141020F_20210505-195838I   prod-mgmt   incr   record    16h
managementbackup.management.apiconnect.ibm.com/prod-mgmt-681aa239    Ready    20210505-141020F_20210505-220302I   prod-mgmt   incr   record    14h
managementbackup.management.apiconnect.ibm.com/prod-mgmt-7f7150dd    Ready    20210505-141020F_20210505-160732I   prod-mgmt   incr   record    20h
managementbackup.management.apiconnect.ibm.com/prod-mgmt-806f8de6    Ready    20210505-141020F_20210505-214657I   prod-mgmt   incr   record    14h
managementbackup.management.apiconnect.ibm.com/prod-mgmt-868a066a    Ready    20210505-141020F_20210506-090140I   prod-mgmt   incr   record    3h34m
managementbackup.management.apiconnect.ibm.com/prod-mgmt-cf9a85dc    Ready    20210505-141020F_20210505-210119I   prod-mgmt   incr   record    15h
managementbackup.management.apiconnect.ibm.com/prod-mgmt-ef63b789    Ready    20210506-103241F                    prod-mgmt   full   record    83m

NAME                                                                   STATUS     MESSAGE                                                     AGE
managementdbupgrade.management.apiconnect.ibm.com/prod-mgmt-up-649mc   Complete   Upgrade is Complete (DB Schema/data are up-to-date)         142m
managementdbupgrade.management.apiconnect.ibm.com/prod-mgmt-up-9mjhk   Complete   Fresh install is Complete (DB Schema/data are up-to-date)   22h

NAME                                               READY   STATUS    VERSION        RECONCILED VERSION   AGE
portalcluster.portal.apiconnect.ibm.com/prod-ptl   3/3     Running   10.0.1.8-eus   10.0.1.8-eus         21h
- Upgrade the OpenShift cluster to OpenShift 4.10.
Upgrading OpenShift requires that you proceed through each minor release instead of upgrading directly from 4.6 to 4.10. For more information, see the Red Hat OpenShift documentation. In the "Documentation" banner, select the version of OpenShift that you want to upgrade to, and then expand the "Updating clusters" section in the navigation list.
What to do next
After the upgrade to 10.0.1.8-eus is complete, upgrade to the latest version.