Upgrading to the latest release on VMware
You can upgrade API Connect on VMware.
Before you begin
- Before starting an upgrade, review the supported upgrade paths from prior versions: Upgrade considerations on VMware.
- Enable keep-alive in your SSH configuration to avoid problems during the upgrade. For information, consult the documentation for your version of SSH. A minimal example follows this list.
- Upgrading to v10.0.5.x results in the deletion of existing analytics data. If you want to retain your analytics data, you must export it before upgrading. For instructions on exporting the data, see Additional considerations for upgrading analytics from a version prior to v10.0.5.0. When you upgrade subsystems in this scenario, you must include the --accept-data-loss parameter on all apicup install commands that result in an upgrade from the older releases. Because you upgrade the Management subsystem first, including the parameter at that point commits you to the loss of data, and the operation cannot be reversed.
- Delete any template section from your analytics CR before upgrading. For instructions, see Update the analytics extra-values file.
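For example, a minimal OpenSSH client keep-alive configuration might look like the following sketch (values are illustrative; adjust them for your environment):

# ~/.ssh/config - send a keep-alive probe every 60 seconds and tolerate up to 10 missed replies
Host *
    ServerAliveInterval 60
    ServerAliveCountMax 10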
About this task
You can upgrade the Management subsystem, Developer Portal subsystem, and Analytics subsystem. The Gateway subsystem remains available during the upgrade of the other subsystems.
You do not upgrade the optional components API Connect Toolkit and API Connect Local Test Environment. Instead, install the new version of each component after you upgrade the subsystems.
Important notes:
- When installing the new files, be sure to use the new apicup installer.
- The upgrade order for subsystems is important. Upgrade the subsystems in the following order: (1) Management, (2) Portal, (3) Analytics, (4) Gateway. Management must be upgraded first. Gateway must be upgraded after Management because upgrading the Management service before the Gateway ensures that any new policies and capabilities will be available to a previously registered Gateway service.
- When you run the install, the program sends the compressed tar file, which contains the upgrade images, to all cluster members. The compressed tar file is about 2 GB, and transfer can take some time. When the install command exits, the compressed tar file has arrived at each member. The upgrade process is now underway, and might continue for some time depending on factors such as the size of your deployment, your network speed, and so on.
- The apicup subsys install command automatically runs apicup health-check before attempting the upgrade. An error is displayed if a problem is found that would prevent a successful upgrade. When troubleshooting a problem with an upgrade, you can optionally suppress the health check. See Skipping health check when re-running upgrade.
- The certificate manager was upgraded in earlier versions of API Connect. If you are upgrading from an earlier release, then after you trigger the upgrade, the apim, lur, and taskmanager pods will be in a CrashLoopBackoff state because Postgres requires a certificate refresh. Correct the problem by completing the following steps:
  - SSH into the server:
    - Run the following command to connect as the API Connect administrator, replacing ip_address with the appropriate IP address:
      ssh ip_address -l apicadm
    - When prompted, select Yes to continue connecting.
    - When you are connected, run the following command to receive the necessary permissions for working directly on the appliance:
      sudo -i
  - Run the following command to delete the Postgres pods, which refreshes the new certificate:
    kubectl get pod --no-headers=true | grep postgres | grep -v backup | awk '{print $1}' | xargs kubectl delete pod
Procedure
- Complete the prerequisites:
- Ensure that your deployment meets the upgrade requirements. See Upgrade considerations on VMware.
- Complete the steps in Preparing to upgrade on VMware to
ensure that your environment is ready for the upgrade.
- Verify the API Connect deployment is healthy and fully operational. See Checking cluster health on VMware. A minimal health-check loop example follows these prerequisites.
- Remove any stale upgrade files:
apic clean-upgrade-files
- Verify sufficient free disk space.
For each appliance node that you are planning to upgrade:
  - SSH into the appliance, and switch to the root user.
  - Check disk space in /data/secure:
    df -h /data/secure
    Make sure the disk usage shown is below 70%. If it is not, add disk capacity. See Adding disk space to a VMware appliance.
  - Check free disk space in /:
    df -h /
    Make sure usage is below 70%. If it is not, consider deleting or offloading older /var/log/syslog* files.
    A small scripted check of these thresholds is sketched after this step.
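As a rough sketch (not part of the appliance tooling), the same 70% check can be scripted so that it fails loudly when a filesystem is too full:

# Warn if a filesystem is more than 70% full (run as root on the appliance)
for fs in /data/secure /; do
  usage=$(df --output=pcent "$fs" | tail -1 | tr -dc '0-9')
  if [ "$usage" -gt 70 ]; then
    echo "WARNING: $fs is ${usage}% full - free up space or add capacity before upgrading"
  fi
done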
- Complete a manual backup of the API Connect subsystems as explained in Backing up and restoring on VMware. Notes:
- Do not start an upgrade if a backup is scheduled to run within a few hours.
- Do not perform maintenance tasks such as rotating key-certificates, restoring from a backup, or starting a new backup, at any time while the upgrade process is running.
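For instance, a minimal pre-flight check from your project directory can run the health check against each subsystem defined in that project (the subsystem names below are placeholders for your own values):

# Run the apicup health check for each subsystem before starting the upgrade
for sub in mgmt ptl analyt; do
  apicup subsys health-check "$sub"
done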
- Back up the Postgres images on each of the Management Server VMs.
Follow the steps for the version of API Connect that you are upgrading from:
- 10.0.1.9 or later, and 10.0.5.1 or later:
  SSH to one of the Management Server VMs:
  ssh <ip_address of management vm> -l apicadm
  Sudo to root:
  sudo -i
  Run the following commands:
  postgres_operator=$(kubectl get pod -l app.kubernetes.io/name=postgres-operator -o name)
  ctr --namespace k8s.io images pull --plain-http=true `kubectl get $postgres_operator -o jsonpath='{.spec.containers[].env[?(@.name=="RELATED_IMAGE_PGO_RMDATA")].value}'`
  ctr --namespace k8s.io images export rmdata.tar `kubectl get $postgres_operator -o jsonpath='{.spec.containers[].env[?(@.name=="RELATED_IMAGE_PGO_RMDATA")].value}'`
  ctr --namespace k8s.io images pull --plain-http=true `kubectl get $postgres_operator -o jsonpath='{.spec.containers[].env[?(@.name=="RELATED_IMAGE_PGO_BACKREST_REPO")].value}'`
  ctr --namespace k8s.io images export backrestrepo.tar `kubectl get $postgres_operator -o jsonpath='{.spec.containers[].env[?(@.name=="RELATED_IMAGE_PGO_BACKREST_REPO")].value}'`
  ctr --namespace k8s.io images pull --plain-http=true `kubectl get $postgres_operator -o jsonpath='{.spec.containers[].env[?(@.name=="RELATED_IMAGE_CRUNCHY_PGBOUNCER")].value}'`
  ctr --namespace k8s.io images export pgbouncer.tar `kubectl get $postgres_operator -o jsonpath='{.spec.containers[].env[?(@.name=="RELATED_IMAGE_CRUNCHY_PGBOUNCER")].value}'`
  ctr --namespace k8s.io images pull --plain-http=true `kubectl get $postgres_operator -o jsonpath='{.spec.containers[].env[?(@.name=="RELATED_IMAGE_CRUNCHY_POSTGRES_HA")].value}'`
  ctr --namespace k8s.io images export postgres-ha.tar `kubectl get $postgres_operator -o jsonpath='{.spec.containers[].env[?(@.name=="RELATED_IMAGE_CRUNCHY_POSTGRES_HA")].value}'`
  postgres_pod=$(kubectl get pod -l role=master -o name)
  ctr --namespace k8s.io images pull --plain-http=true `kubectl get $postgres_pod -o jsonpath='{.spec.initContainers[].image}'`
  ctr --namespace k8s.io images export k8s-init.tar `kubectl get $postgres_pod -o jsonpath='{.spec.initContainers[].image}'`
- 10.0.1.8 or earlier, and 10.0.4.0-ifix3 or earlier:
  SSH to one of the Management Server VMs:
  ssh <ip_address of management vm> -l apicadm
  Sudo to root:
  sudo -i
  Run the following commands:
  postgres_operator=$(kubectl get pod -l app.kubernetes.io/name=postgres-operator -o name)
  docker pull `kubectl get $postgres_operator -o jsonpath='{.spec.containers[].env[?(@.name=="RELATED_IMAGE_PGO_RMDATA")].value}'`
  docker pull `kubectl get $postgres_operator -o jsonpath='{.spec.containers[].env[?(@.name=="RELATED_IMAGE_PGO_BACKREST_REPO")].value}'`
  docker pull `kubectl get $postgres_operator -o jsonpath='{.spec.containers[].env[?(@.name=="RELATED_IMAGE_CRUNCHY_PGBOUNCER")].value}'`
  docker pull `kubectl get $postgres_operator -o jsonpath='{.spec.containers[].env[?(@.name=="RELATED_IMAGE_CRUNCHY_POSTGRES_HA")].value}'`
  docker save `kubectl get $postgres_operator -o jsonpath='{.spec.containers[].env[?(@.name=="RELATED_IMAGE_PGO_RMDATA")].value}'` -o rmdata.tar
  docker save `kubectl get $postgres_operator -o jsonpath='{.spec.containers[].env[?(@.name=="RELATED_IMAGE_PGO_BACKREST_REPO")].value}'` -o backrestrepo.tar
  docker save `kubectl get $postgres_operator -o jsonpath='{.spec.containers[].env[?(@.name=="RELATED_IMAGE_CRUNCHY_PGBOUNCER")].value}'` -o pgbouncer.tar
  docker save `kubectl get $postgres_operator -o jsonpath='{.spec.containers[].env[?(@.name=="RELATED_IMAGE_CRUNCHY_POSTGRES_HA")].value}'` -o postgres-ha.tar
  postgres_pod=$(kubectl get pod -l role=master -o name)
  docker save `kubectl get $postgres_pod -o jsonpath='{.spec.initContainers[].image}'` -o k8s-init.tar
Remember to complete this step on every Management Server VM.
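As a quick optional sanity check before moving on, you can confirm that the exported archives exist and are non-empty; the file names match those used in the commands above:

# Verify that all exported image archives were written
ls -lh rmdata.tar backrestrepo.tar pgbouncer.tar postgres-ha.tar k8s-init.tar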
- Run the pre-upgrade health check. Attention: This is a required step. Failure to run this check before upgrading could result in problems during the upgrade.
- Run the health check on the Management subsystem: Note: If you have a 2DCDR deployment, run this check only on the active management subsystem.
You can run the check on the warm-standby after it is made stand-alone. See Upgrading a two data center deployment.
  - SSH into the server:
    - Run the following command to connect as the API Connect administrator, replacing ip_address with the appropriate IP address:
      ssh ip_address -l apicadm
    - When prompted, select Yes to continue connecting.
    - When you are connected, run the following command to receive the necessary permissions for working directly on the appliance:
      sudo -i
  - Verify that the apicops utility is installed by running the following command to check the current version of the utility:
    apicops --version
    If the response indicates that apicops is not available, install it now. See The API Connect operations tool: apicops in the API Connect documentation.
  - Run the version-check script and verify that there are no errors:
    apicops version:pre-upgrade
- Run the health check on the portal subsystem:
  - SSH into the server.
  - Verify that the apicops utility is installed by running the following command to check the current version of the utility:
    apicops --version
    If the response indicates that apicops is not available, install it now. See The API Connect operations tool: apicops in the API Connect documentation.
  - Run the following command to check the system status, and verify that there are no errors:
    apicops upgrade:check-subsystem-status
- Run the health check on the Analytics subsystem:
  - SSH into the server.
  - Verify that the apicops utility is installed by running the following command to check the current version of the utility:
    apicops --version
    If the response indicates that apicops is not available, install it now. See The API Connect operations tool: apicops in the API Connect documentation.
  - Run the following command to check the system status, and verify that there are no errors:
    apicops upgrade:check-subsystem-status
- Optional: Take a Virtual Machine (VM) snapshot of all your VMs; see Using VM snapshots for infrastructure backup and disaster recovery for details. This action requires a brief outage while all of the VMs in the subsystem cluster are shut down; do not take snapshots of running VMs, as they might not restore successfully. VM snapshots can offer a faster recovery when compared to redeploying OVAs and restoring from normal backups. Important: VM snapshots are not an alternative to the standard API Connect backups that are described in the previous steps. You must complete the API Connect backups in order to use the API Connect restore feature.
- Obtain the API Connect files:
You can access the latest files from IBM Fix Central by searching for the API Connect product and your installed version.
The following files are used during upgrade on VMware:
- IBM API Connect Security Signatures Bundle File
- The signatures_<version>.zip contains the set of .asc files needed for signature verification so you can validate the downloaded product files.
- IBM API Connect <version> Management Upgrade File for VMware
- Management subsystem files for upgrade
- IBM API Connect <version> Developer Portal Upgrade File for VMware
- Developer Portal subsystem files for upgrade
- IBM API Connect <version> Analytics Upgrade File for VMware
- Analytics subsystem files for upgrade
- IBM API Connect <version> Install Assist for <operating_system_type>
- The apicup installation utility. Required for all installations on VMware.
The following files are not used during the upgrade, but can be installed as new installations to replace the prior versions:
- IBM API Connect <version> Toolkit for <operating_system_type>
- Toolkit command line utility. Packaged standalone, or with API Designer or Loopback:
- IBM API Connect <version> Toolkit for <operating_system_type>
- IBM API Connect <version> Toolkit with Loopback for <operating_system_type>
- IBM API Connect <version> Toolkit Designer with Loopback for <operating_system_type>
The toolkit is not required for API Connect installation. After installation, you can download the toolkit from the Cloud Manager UI and API Manager UI. For more information, see Installing the toolkit.
- IBM API Connect <version> Local Test Environment
- Optional test environment. The LTE is not required for API Connect installation and can be optionally downloaded and installed at any time. For more information, see Testing an API with the Local Test Environment
- If necessary, download (from the same Fix Pack page) any Control Plane
files that are needed.
Control Plane files provide support for specific Kubernetes versions. The IBM API Connect <version> Management Upgrade File for VMware file contains the latest Control Plane file. An upgrade from the most recent API Connect version to the current version does not need a separate Control Plane file. However, when upgrading from older versions of API Connect, you must install one or more control plane files to ensure that all current Kubernetes versions are supported.
Consult the following table to see if your deployment needs one or more separate Control Plane files.
On the Fix Central page, type "control-plane" in the Filter fix details field to show only the list of control plane files in the results table. Then, click the Description header to sort the results so you can easily locate the control plane files that you need.
Table 1. Control Plane files needed for upgrading to 10.0.5.8, by the API Connect version that you are upgrading from:
- 10.0.5.7: download appliance-control-plane-1.29.x.tgz
- 10.0.5.6 or 10.0.5.5: download appliance-control-plane-1.29.x.tgz and appliance-control-plane-1.28.x.tgz
- 10.0.5.4, 10.0.5.3, 10.0.1.15-eus, or 10.0.1.12-eus: download appliance-control-plane-1.29.x.tgz, appliance-control-plane-1.28.x.tgz, and appliance-control-plane-1.27.x.tgz
- 10.0.5.2, 10.0.1.11-eus, or 10.0.1.9-eus: download appliance-control-plane-1.29.x.tgz, appliance-control-plane-1.28.x.tgz, appliance-control-plane-1.27.x.tgz, appliance-control-plane-1.26.x.tgz, and appliance-control-plane-1.25.x.tgz
- 10.0.5.1: download appliance-control-plane-1.29.x.tgz, appliance-control-plane-1.28.x.tgz, appliance-control-plane-1.27.x.tgz, appliance-control-plane-1.26.x.tgz, appliance-control-plane-1.25.x.tgz, and appliance-control-plane-1.24.x.tgz
- 10.0.4.0 and iFixes: download appliance-control-plane-1.29.x.tgz, appliance-control-plane-1.28.x.tgz, appliance-control-plane-1.27.x.tgz, appliance-control-plane-1.26.x.tgz, appliance-control-plane-1.25.x.tgz, appliance-control-plane-1.24.x.tgz, and appliance-control-plane-1.23.x.tgz
- 10.0.3.0 and iFixes, 10.0.1.6-eus and iFixes, or 10.0.1.4-eus and iFixes: download appliance-control-plane-1.29.x.tgz, appliance-control-plane-1.28.x.tgz, appliance-control-plane-1.27.x.tgz, appliance-control-plane-1.26.x.tgz, appliance-control-plane-1.25.x.tgz, appliance-control-plane-1.24.x.tgz, appliance-control-plane-1.23.x.tgz, and appliance-control-plane-1.22.x.tgz
For information on the Control Plane files used with older releases, see Control Plane files for earlier releases.
- Complete the steps in Verifying the integrity of IBM product files to verify that the downloaded product files are not corrupted.
- Install the installation utility.
  - Locate the apicup installation utility file for your operating system, and place it in your project directory.
  - Rename the file for your OS type to apicup. Note that the instructions in this documentation refer to apicup.
  - OSX and Linux® only: Make the apicup file executable by entering chmod +x apicup.
  - Set your path to the location of your apicup file.
    OSX and Linux:
    export PATH=$PATH:/Users/your_path/
    Windows:
    set PATH=c:\your_path;%PATH%
- From within your project directory, specify the API Connect license:
apicup licenses accept <License_ID>
The <License_ID> is specific to the API Connect Program Name that you purchased. To view the supported license IDs, see API Connect licenses.
- If upgrading from v10.0.4-ifix3, or upgrading from v10.0.1.7-eus (or higher), disassociate and delete your Analytics services. Important: Upgrading to v10.0.5.x from v10.0.4-ifix3, or from v10.0.1.6-eus (or higher), results in the deletion of existing analytics data. If you want to retain your analytics data, you must export it before upgrading. For instructions on exporting, see Additional considerations for upgrading analytics from a version prior to v10.0.5.0.
- In Cloud Manager UI, click Topology.
- In the section for the Availability Zone that contains the Analytics service, locate the Gateway service that the Analytics service is associated with.
- Click the actions menu, and select Unassociate analytics
service. Remember to disassociate each Analytics service from all Gateways.
- In the section for the Availability Zone that contains the Analytics services, locate each Analytics service and click Delete.
- Upgrade the Management subsystem:
- Run the upgrade on the Management subsystem:
- Install the Management subsystem
files.
apicup subsys install <subsystem_name> <subsystem_upgrade_tar_archive>
Notes:
  - If you are adding one or more control plane files, specify each path and file name on the command line:
    apicup subsys install <subsystem_name> <subsystem_upgrade_tar_archive> <path_to_control_plane_file_1> <path_to_control_plane_file_n>
    An illustrative example of this invocation follows these notes.
  - If you are upgrading to v10.0.5.x from v10.0.4-ifix3, or upgrading from v10.0.1.7-eus (or higher), add the argument --accept-data-loss to the command to indicate that you accept the loss of the Analytics data:
    apicup subsys install <subsystem_name> <subsystem_upgrade_tar_archive> --accept-data-loss
    Remember, when you include this parameter for the Management subsystem upgrade, you commit to the loss of data and the operation cannot be undone.
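For example, an invocation that upgrades a Management subsystem and supplies two control plane files might look like the following sketch; the subsystem name and the upgrade archive name are placeholders for your own values:

# Hypothetical example: subsystem "mgmt" with two control plane files in the project directory
apicup subsys install mgmt upgrade_management_vmware.tgz appliance-control-plane-1.27.x.tgz appliance-control-plane-1.28.x.tgz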
- Verify that the upgrade was successful by running a health check with the following command:
apicup subsys health-check <subsystem_name>
Note: Upgrade to V10.0.5.6: Verify that the health check did not return a false-positive response (due to a known issue that is corrected from V10.0.5.7). Run the following commands to get the current version of API Connect on the upgraded subsystem:
  - Log in to the VM:
    ssh apicadm@<hostname>
  - Switch to the root user:
    sudo -i
  - Get the API Connect version:
    kubectl get apic
    Example output:
    NAME                                                              READY   STATUS    VERSION    RECONCILED VERSION   MESSAGE               AGE
    managementcluster.management.apiconnect.ibm.com/def-management   17/17   Running   10.0.5.6   10.0.5.6-7267        Management is ready   60d
If the health check indicates a successful upgrade but the subsystem is still running the older version of API Connect, skip the remaining steps in this procedure and see False positive result from health check after upgrading to 10.0.5.6 for troubleshooting instructions.
- If the health check indicates that a reboot is needed, run the following command:
systemctl reboot
- Use apicops to validate the certificates. For information on the apicops tool, see The API Connect operations tool: apicops.
  - Log in to one of your management subsystem VMs:
    ssh apicadm@<management VM>
  - Switch to root user:
    sudo -i
  - Copy the apicops executable file to your management VM, and give it execute permissions:
    chmod a+x apicops
  - Run the following command:
    apicops upgrade:stale-certs
  - Delete any secrets that are identified as stale by apicops. Run the following command:
    kubectl delete secret <stale-secret>
  - Restart the pods that use the secrets that you deleted, by deleting each pod:
    kubectl delete pod <pod name>
    To determine which pods to restart, see the related topics in the API Connect documentation. A rough sketch for locating the pods that mount a particular secret follows this step.
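The following is one possible way to find the pods whose volumes reference a given secret; it is a sketch only, assumes jq is available on the VM, and uses a placeholder secret name:

# List pods whose volumes reference the secret you deleted (placeholder: <stale-secret>)
kubectl get pods -o json | jq -r --arg s "<stale-secret>" \
  '.items[] | select(.spec.volumes[]?.secret?.secretName == $s) | .metadata.name'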
- If needed, delete old Postgres client certificates. If you are upgrading from 10.0.1.x or 10.0.4.0-ifix1, or if you previously installed any of those versions before upgrading to 10.0.5.x, there might be old Postgres client certificates. To verify, run the following command:
  kubectl -n <namespace> get certs | grep db-client
  For example, if you see that both -db-client-apicuser and apicuser exist, apicuser is no longer in use. Remove the old certificates by running one of the following commands, depending on how many old certificates are left in your system:
  kubectl -n <namespace> delete certs apicuser pgbouncer primaryuser postgres replicator
  or:
  kubectl -n <namespace> delete certs apicuser pgbouncer postgres replicator
- If you see that the Postgres pods are in the imagePullBackOff state, import the images that you saved in step 2 to every Management Server VM. Note: Depending on your environment, not all image tar files need to be imported; however, there is no harm in importing all the image tar files that were saved.
  - SSH into the VM as root.
  - Import each saved image into the Containerd registry. For example:
    ctr --namespace k8s.io image import crunchy-pgbouncer.tar --digests=true
  - Re-tag the import-<datestamp>@sha256:nnn images to their appropriate tags and digests. For example:
    ctr --namespace k8s.io image tag <source> <target>
    A more detailed sketch of the re-tagging flow follows this step.
Remember to repeat this step for every Management Server VM.
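As a rough sketch (the image references shown are placeholders, and the exact names depend on your environment), you can find the imported reference and the reference that a failing pod expects, and then tag one to the other:

# List the freshly imported images; their references begin with "import-<datestamp>"
ctr --namespace k8s.io images ls | grep import-

# Find the image reference that a failing Postgres pod expects (placeholder pod name)
kubectl get pod <postgres-pod-name> -o jsonpath='{.spec.containers[].image}'

# Tag the imported image to the reference that the pod expects
ctr --namespace k8s.io image tag import-<datestamp>@sha256:<digest> <expected-image-reference>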
- If there are multiple Management subsystems in the same project, set the portal subsystem's platform-api and consumer-api certificates to match those used by the appropriate Management subsystem to ensure that the portal subsystem is correctly associated with that Management subsystem. This step only applies if you installed more than one Management subsystem into a single project.
  A portal subsystem can be associated with only one Management subsystem. To associate the upgraded portal subsystem with the appropriate Management subsystem, manually set the mgmt-platform-api and mgmt-consumer-api certificates to match the ones used by the Management subsystem.
  - Run the following commands to get the certificates from the Management subsystem:
    apicup certs get mgmt-subsystem-name platform-api -t cert > platform-api.crt
    apicup certs get mgmt-subsystem-name platform-api -t key > platform-api.key
    apicup certs get mgmt-subsystem-name platform-api -t ca > platform-api-ca.crt
    apicup certs get mgmt-subsystem-name consumer-api -t cert > consumer-api.crt
    apicup certs get mgmt-subsystem-name consumer-api -t key > consumer-api.key
    apicup certs get mgmt-subsystem-name consumer-api -t ca > consumer-api-ca.crt
    where mgmt-subsystem-name is the name of the specific Management subsystem that you want to associate the new portal subsystem with.
  - Run the following commands to set the portal's certificates to match those used by the Management subsystem:
    apicup certs set ptl-subsystem-name platform-api Cert_file_path Key_file_path CA_file_path
    apicup certs set ptl-subsystem-name consumer-api Cert_file_path Key_file_path CA_file_path
- Upgrade the portal subsystem. Attention: If you are upgrading a two data center disaster recovery deployment, upgrade the warm-standby data center first.
For more information about two data center disaster recovery upgrade, see Upgrading a two data center deployment.
- If you did not verify that your portal customizations are compatible with Drupal 10, do that
now.
In API Connect 10.0.5.3, the Developer Portal moved from Drupal 9 to Drupal 10 (this upgrade also requires PHP 8.1). The upgrade tooling will update your Developer Portal sites; however, if you have any custom modules or themes, it is your responsibility to ensure their compatibility with Drupal 10 and PHP 8.1 before starting the upgrade. Review the Guidelines on upgrading your Developer Portal from Drupal 9 to Drupal 10 to ensure that any customizations to the Developer Portal are compatible with Drupal 10 and PHP 8.1.
- Install the portal subsystem
files:
apicup subsys install <subsystem_name> <subsystem_upgrade_tar_archive>
Notes:
  - If you have a two data center disaster recovery deployment, and you are upgrading the active data center, then add the flag --skip-health-check:
    apicup subsys install <subsystem_name> <subsystem_upgrade_tar_archive> --skip-health-check
  - If you are adding one or more control plane files, specify each path and file name on the command line:
    apicup subsys install <subsystem_name> <subsystem_upgrade_tar_archive> <path_to_control_plane_file_1> <path_to_control_plane_file_n>
  - If you are upgrading to v10.0.5.x from v10.0.4-ifix3, or upgrading from v10.0.1.7-eus (or higher), add the argument --accept-data-loss to the command to indicate that you accept the loss of the Analytics data:
    apicup subsys install <subsystem_name> <subsystem_upgrade_tar_archive> --accept-data-loss
- Verify that the upgrade was successful by running a health check with the following
command:
apicup subsys health-check <subsystem_name>
Important: If the health check indicates that a reboot is required, do not reboot at this stage. Wait until all the portal sites are upgraded; see step 13.
Note: Upgrade to V10.0.5.6: Verify that the health check did not return a false-positive response (due to a known issue that is corrected from V10.0.5.7). Run the following commands to get the current version of API Connect on the upgraded subsystem:
  - Log in to the VM:
    ssh apicadm@<hostname>
  - Switch to the root user:
    sudo -i
  - Get the API Connect version:
    kubectl get apic
    Example output:
    NAME                                                              READY   STATUS    VERSION    RECONCILED VERSION   MESSAGE               AGE
    managementcluster.management.apiconnect.ibm.com/def-management   17/17   Running   10.0.5.6   10.0.5.6-7267        Management is ready   60d
If the health check indicates a successful upgrade but the subsystem is still running the older version of API Connect, skip the remaining steps in this procedure and see False positive result from health check after upgrading to 10.0.5.6 for troubleshooting instructions.
- If you have a two data center disaster recovery deployment and upgraded the portal subsystem in the warm-standby data center, then start the upgrade on the active data center. Note: In a two data center disaster recovery deployment, after the warm-standby is upgraded, the active data center might enter a non-READY state. If this happens, do not be concerned; proceed to upgrade the portal subsystem in the active data center.
- Verify portal sites upgrade. Run the following toolkit commands to ensure that the portal site upgrades are complete:
- Log in as an admin
user:
apic login -s <server_name> --realm admin/default-idp-1 --username admin --password <password>
- Get the portal service ID and
endpoint:
apic portal-services:get -o admin -s <management_server_endpoint> \ --availability-zone availability-zone-default <portal-service-name> \ --output - --format json
- List the
sites:
apic --mode portaladmin sites:list -s <management_server_endpoint> \ --portal_service_name <portal-service-name> \ --format json
    Any sites currently upgrading display the UPGRADING status; any site that completed its upgrade displays the INSTALLED status and the new platform version. Verify that all sites display the INSTALLED status before proceeding.
    For more information on the sites command, see apic sites:list and Using the sites commands.
  - After all sites are in the INSTALLED state and have the new platform listed, run:
    apic --mode portaladmin platforms:list -s <server_name> --portal_service_name <portal_service_name>
    Verify that the new version of the platform is the only platform listed.
    For more information on the platforms command, see apic platforms:list and Using the platforms commands.
commands. - If all portal sites are upgraded and the health check indicates that a reboot
is needed, then run the following command:
systemctl reboot
- Upgrade the Analytics subsystem. Important: Upgrading to v10.0.5.x from v10.0.4-ifix3, or from v10.0.1.6-eus (or higher), results in the deletion of existing analytics data. If you want to retain your analytics data, you must export it before upgrading. For instructions on exporting, see Additional considerations for upgrading analytics from a version prior to v10.0.5.0.
- Install the Analytics subsystem
files:
apicup subsys install <subsystem_name> <subsystem_upgrade_tar_archive>
Notes:
  - If you are adding one or more control plane files, specify each path and file name on the command line:
    apicup subsys install <subsystem_name> <subsystem_upgrade_tar_archive> <path_to_control_plane_file_1> <path_to_control_plane_file_n>
  - If you are upgrading to v10.0.5.x from v10.0.4-ifix3, or upgrading from v10.0.1.7-eus (or higher), add the argument --accept-data-loss to the command to indicate that you accept the loss of the Analytics data:
    apicup subsys install <subsystem_name> <subsystem_upgrade_tar_archive> --accept-data-loss
- Verify that the upgrade was successful by running a health check with the following command:
apicup subsys health-check <subsystem_name>
Note: Upgrade to V10.0.5.6: Verify that the health check did not return a false-positive response (due to a known issue that is corrected from V10.0.5.7). Run the following commands to get the current version of API Connect on the upgraded subsystem:
  - Log in to the VM:
    ssh apicadm@<hostname>
  - Switch to the root user:
    sudo -i
  - Get the API Connect version:
    kubectl get apic
    Example output:
    NAME                                                              READY   STATUS    VERSION    RECONCILED VERSION   MESSAGE               AGE
    managementcluster.management.apiconnect.ibm.com/def-management   17/17   Running   10.0.5.6   10.0.5.6-7267        Management is ready   60d
If the health check indicates a successful upgrade but the subsystem is still running the older version of API Connect, skip the remaining steps in this procedure and see False positive result from health check after upgrading to 10.0.5.6 for troubleshooting instructions.
- If the health check indicates that a reboot is needed, run the following
command:
systemctl reboot
- If you are upgrading from v10.0.4-ifix3, or upgrading from v10.0.1.6-eus (or higher): Enable the analytics service as explained in Enabling Analytics after upgrading.
- If you are upgrading from a release earlier than v10.0.5.3: If you want to use JWT security instead of mTLS, enable this feature as explained in Use JWT security instead of mTLS between subsystems.
- Complete a manual backup of the upgraded API Connect subsystems; see Backing up and restoring on VMware.
- Optional: Take a Virtual Machine (VM) snapshot of all your VMs; see Using VM snapshots for infrastructure backup and disaster recovery for details. This action requires a brief outage while all of the VMs in the subsystem cluster are shut down - do not take snapshots of running VMs, as they might not restore successfully.
- Complete the disaster preparation steps to ensure recovery of API Connect from a disaster event. See Preparing for a disaster.