Upgrading management, portal, and analytics on VMware
Upgrade your management, portal, and analytics subsystems on VMware.
Before you begin
- Review Planning your API Connect upgrade on VMware.
- Complete the Pre-upgrade preparation.
- Enable keep-alive in your SSH client configuration to avoid disconnects during the transfer of the upgrade images to your VMs; see the example after this list. For more information, consult the documentation for the version of SSH that is installed on your client system (the system where you run the apicup commands).
- If you have a two data center disaster recovery deployment, read and understand the extra requirements for this deployment type: Upgrading a two data center deployment.
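For example, with OpenSSH you can enable keep-alive messages in the client configuration file. A minimal sketch, assuming OpenSSH on the client system; the host pattern and interval values are illustrative only:
# ~/.ssh/config on the system where you run apicup
# Send a keep-alive probe every 60 seconds and tolerate up to
# 10 unanswered probes before the client disconnects.
Host *
    ServerAliveInterval 60
    ServerAliveCountMax 10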
About this task
- The upgrade order for subsystems is important. Upgrade the subsystems in the following order: (1) management, (2) portal, (3) analytics, (4) gateway. Management must be upgraded first to ensure that any new policies and capabilities are available to previously registered gateway services.
- When you start the upgrade process on a subsystem, the apicup install command sends the compressed upgrade files to all subsystem VMs. The compressed files are about 2 GB, and the transfer can take some time. The apicup install command exits when the transfer of the upgrade files to each VM is complete, at which point the upgrade starts. You monitor the progress of the upgrade with other commands, as shown in the sketch after this list.
- The apicup subsys install command runs apicup health-check before the upgrade process starts. If the health check fails, the upgrade is aborted and details of the failure are displayed. If you are troubleshooting an upgrade problem and want the upgrade to proceed despite a health-check failure, you can suppress the health check. See Skipping health check when re-running upgrade.
- You do not need to upgrade the API Connect Toolkit and API Connect Local Test Environment components in the same upgrade window. These components are single executable files; replace them with the new versions downloaded from Fix Central.
- If any subsystem database backups are running or are scheduled to run within a few hours, do not start the upgrade process.
- Do not perform maintenance tasks such as updating certificates, restoring subsystem databases from backup, or triggering subsystem database backups at any time while you are upgrading API Connect.
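For example, you can poll a subsystem during the upgrade from the project directory. A minimal sketch, relying on the documented behavior that no output means the health check passed; the subsystem name mgmt is a placeholder:
# Poll until the health check produces no output (success).
while [ -n "$(apicup subsys health-check mgmt 2>&1)" ]; do
  echo "Upgrade still in progress; checking again in 60 seconds..."
  sleep 60
done
echo "Health check passed."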
Procedure
- If some time has passed since you completed the Pre-upgrade preparation, repeat these pre-upgrade steps
to be sure that your subsystems are ready for upgrade.
- Upgrade Ubuntu Linux on all VMs from 20.04 to 22.04. Note:
- Skip this step if you are on API Connect 10.0.8.1 or later, or on 10.0.5.9 or later. If you are on an older version, you must first upgrade your API Connect deployment to 10.0.5.8 or 10.0.8.0 before attempting this upgrade.
- It is strongly recommended to upgrade your API Connect product version immediately after upgrading Ubuntu. IBM generally supports one operating system (OS) version per product version. Prolonged use of an unsupported OS/product version pair may result in unexpected behavior, security vulnerabilities, or lack of support.
- On all API Connect subsystems, run
the following command:
apicup subsys prepare-ubuntu-upgrade <subsystem name> ubuntu-jammy-upgrade.tgz
This command copies the Ubuntu Linux upgrade package to all subsystem VMs.
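For example, for a hypothetical management subsystem named mgmt:
apicup subsys prepare-ubuntu-upgrade mgmt ubuntu-jammy-upgrade.tgz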
Note: If you face any problems during the upgrade, see Ubuntu preupgrade failures during OVA upgrade from 10.0.8.0 to 10.0.8.1.
- Log in to each VM:
ssh apicadm@<subsystem VM>
- Run the following commands: Note: Run the commands in steps 2c(i) and 2c(ii) only if you are upgrading to 10.0.8.1; skip them if you are upgrading to a later version.
- sudo sed -i '/sudo \/usr\/local\/lib\/record\/proxy/icd \/usr\/local\/lib\/record' /usr/local/bin/record-proxy.sh
This command inserts a cd /usr/local/lib/record line before the line that invokes the proxy in record-proxy.sh.
- sudo systemctl restart record-proxy
- sudo apt update
- sudo apt upgrade
- sudo apt dist-upgrade
- sudo do-release-upgrade
- If you see the message:
You have not rebooted after updating a package which requires a reboot. Please reboot before upgrading.
then complete the following steps:
- Run the following command:
sudo apic lock
- Reboot the VM:
sudo reboot
- From the project directory, verify that the VM is up:
apicup subsys health-check <subsystem name>
No output means the health check passed.
- Log in to the VM and re-run the following command to complete the upgrade:
sudo do-release-upgrade
- Verify that the Ubuntu upgrade is complete:
sudo lsb_release -a
The output should be:
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 22.04.4 LTS
Release:        22.04
Codename:       jammy
- After Ubuntu is upgraded on all the API Connect VMs, check the
subsystem health:
apicup subsys health-check <subsystem name>
No output means the health check passed.
If the health check asks you to reboot, then:
- Run the following command:
sudo apic lock
- Reboot the VM:
sudo reboot
- From the project directory, clean up the Ubuntu upgrade packages:
apicup subsys cleanup-ubuntu-upgrade <subsystem name>
- Block all external traffic to the management server.
Upgrading API Connect requires downtime while files are transferred and updated. All external traffic to the management server from all sources (Cloud Manager UI, API Manager UI, CLI, and REST API) must be blocked. As a result, the Cloud Manager and API Manager UIs cannot be used during the upgrade. In addition, the developer portal cannot be used to create, update, or delete any applications, subscriptions, memberships, or consumer organizations during the upgrade.
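How you block traffic depends on your network topology (load balancer, external firewall, or security groups). As one illustrative sketch only, assuming an intermediary Linux firewall host and a placeholder management address:
# Drop forwarded HTTPS traffic to the management server for the
# duration of the upgrade. Adapt to your load balancer or
# security-group equivalent.
sudo iptables -I FORWARD -d <management_vm_ip> -p tcp --dport 443 -j DROP
# After the upgrade completes, remove the rule:
sudo iptables -D FORWARD -d <management_vm_ip> -p tcp --dport 443 -j DROP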
- Verify that the new apicup command is in your operating system's PATH variable:
apicup version
The version that is returned should be the version that you are upgrading to. If it is not, refer to step 12 in Pre-upgrade preparation and checks VMware.
- Upgrade the management
subsystem:
- Upgrading 10.0.7.0 only: If you are
upgrading from 10.0.7.0 to 10.0.8.0, enable maintenance mode for the Management
subsystem:
- SSH into a Management node and then log in as root.
- Run the following command:
kubectl-cnp maintenance set --reusePVC
When the upgrade is complete, maintenance mode is automatically disabled.
Attention: Failing to enable maintenance mode results in the following error:
# ./apicup subsys install kqn3-management ../repo-management.tgz --debug
DEBU[0000] Found PDB kqn3-management-site1-db
DEBU[0000] Found PDB kqn3-management-site1-db-primary
Error: Unable to upgrade: EDB node maintenance mode has not been enabled.
- Optional: If upgrading from 10.0.5.x or
10.0.7.0: Provide a custom endpoint for the Consumer Catalog.
Skip this step if you are upgrading from 10.0.8.0 or later.
Beginning with version 10.0.8.0, API Connect includes the Consumer Catalog feature. During the upgrade from 10.0.5.x or 10.0.7.0, a new endpoint is added to the management subsystem for the Consumer Catalog. The new endpoint uses a default host name and certificate based on the values used for other management endpoints. If you want to change the values for the Consumer Catalog, complete the following steps before running the management upgrade.
- Change the host name for the Consumer Catalog endpoint by running the following
command:
apicup subsys set <management_subsystem_name> consumer-catalog-ui=${consumer_catalog_endpoint}
Attention: If you changed the host name, then you must also complete the next step to change the certificate.
- Regenerate the consumer-catalog-ui certificate by running the following commands:
apicup certs set --clear <management_subsystem_name> consumer-catalog-ui
apicup certs generate <management_subsystem_name>
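For example, with a hypothetical management subsystem named mgmt and host name consumer-catalog.example.com:
apicup subsys set mgmt consumer-catalog-ui=consumer-catalog.example.com
apicup certs set --clear mgmt consumer-catalog-ui
apicup certs generate mgmt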
- Start the upgrade of the management subsystem:
Attention: If you are upgrading a two data center disaster recovery deployment, upgrade the warm-standby data center first.
For more information about two data center disaster recovery upgrade, see Upgrading a two data center deployment.
- Log in to each VM of the management subsystem as root and run the following command:
for pkg in libpq5 shim shim-signed; do rm -f "/etc/apt/preferences.d/${pkg}-preferences"; done
Note: This step ensures a clean upgrade environment by removing any outdated package preferences from a previous upgrade (for example, v10.0.8.1).
- Install the management subsystem upgrade file. The arguments to include in the install command depend on your upgrade path and configuration:
- Single data center deployment and upgrading from
V10.0.5.x:
apicup subsys install <subsystem name> <subsystem_upgrade_tar_archive> --force-operand-update
- Single data center deployment and upgrading from V10.0.7.0 or
later:
apicup subsys install <subsystem name> <subsystem_upgrade_tar_archive>
- Two data center disaster recovery deployment
and upgrading warm-standby data
center from
V10.0.5.x:
apicup subsys install <subsystem name> <subsystem_upgrade_tar_archive> --accept-warm-standby-upgrade-data-deletion --skip-health-check --force-operand-update
- Two data center disaster recovery deployment
and upgrading active data center from
V10.0.5.x:
apicup subsys install <subsystem name> <subsystem_upgrade_tar_archive> --skip-health-check --force-operand-update
- Two data center disaster recovery deployment
and upgrading warm-standby data
center from V10.0.7 or
later:
apicup subsys install <subsystem name> <subsystem_upgrade_tar_archive>
- Two data center disaster recovery deployment
and upgrading active data center from V10.0.7.0 or
later:
apicup subsys install <subsystem name> <subsystem_upgrade_tar_archive>
Note: If you are including one or more control plane files, specify the file path to each control plane file (separate multiple file paths with spaces); for example:
apicup subsys install <subsystem_name> <subsystem_upgrade_tar_archive> <path_to_control_plane_file_1> <path_to_control_plane_file_n> <other arguments>
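For example, a single data center upgrade from V10.0.5.x with one control plane file; the subsystem name and file names are illustrative only:
apicup subsys install mgmt upgrade_management.tgz appliance-control-plane.tgz --force-operand-update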
- Verify that the upgrade was successful by running the
apicup health check:
apicup subsys health-check <subsystem_name>
Troubleshooting: If the health check reports any problems, see Troubleshooting upgrades on VMware.
Note: Two data center disaster recovery deployments: After you upgrade the warm-standby management subsystem, the health check reports a status of Blocked until the active management subsystem is upgraded; see step 5.d.
- If the health check indicates that a reboot is needed, complete the following steps on all
management VMs:
- Log in to the VM:
ssh apicadm@<hostname>
- Switch to root user:
sudo -i
- Reboot the VM:
systemctl reboot
- Two data center disaster recovery deployments: If you
just upgraded the warm-standby
data center, then complete the following checks before you start the upgrade on the active data
center:
- Check the warm-standby management subsystem status with the following command:
apicup subsys health-check <management warm-standby>
If the output of the command is:
ManagementCluster (specified multi site ha mode: passive, current ha mode: WarmStandbyError, ha status: Error, ha message: Remote HAMode is Empty (Not received from peer). Expected it to be in either active or setup complete phase) is not Ready or Complete | State: 2/18 | Phase: Blocked | Message: HA status Error - see HAStatus in CR for details
Spec version <target version> and reconciled version <source version> do not match
then you can proceed to upgrade the management subsystem in the active data center.
- If you have multiple management subsystems in the same project, set the portal subsystem's platform-api and consumer-api certificates to match the certificates used by the corresponding management subsystem.
Note: Complete this step only if you installed more than one management subsystem into a single project.
A portal subsystem can be associated with only one management subsystem. To associate the upgraded portal subsystem with the appropriate management subsystem, manually set the mgmt-platform-api and mgmt-consumer-api certificates to match the ones used by the management subsystem.
certificates to match the ones used by the management subsystem.- Run the following commands to get the certificates from the management
subsystem:
apicup certs get mgmt-subsystem-name platform-api -t cert > platform-api.crt
apicup certs get mgmt-subsystem-name platform-api -t key > platform-api.key
apicup certs get mgmt-subsystem-name platform-api -t ca > platform-api-ca.crt
apicup certs get mgmt-subsystem-name consumer-api -t cert > consumer-api.crt
apicup certs get mgmt-subsystem-name consumer-api -t key > consumer-api.key
apicup certs get mgmt-subsystem-name consumer-api -t ca > consumer-api-ca.crt
where mgmt-subsystem-name is the name of the specific management subsystem that you want to associate the new portal subsystem with.
- Run the following commands to set the portal's certificates to match the certificates
used by the management subsystem:
apicup certs set ptl-subsystem-name platform-api Cert_file_path Key_file_path CA_file_path
apicup certs set ptl-subsystem-name consumer-api Cert_file_path Key_file_path CA_file_path
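For example, using the certificate files written by the previous step and a hypothetical portal subsystem named ptl:
apicup certs set ptl platform-api platform-api.crt platform-api.key platform-api-ca.crt
apicup certs set ptl consumer-api consumer-api.crt consumer-api.key consumer-api-ca.crt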
- Upgrade the portal subsystem. Attention: If you are upgrading a two data center disaster recovery deployment, upgrade the warm-standby data center first.
For more information about two data center disaster recovery upgrade, see Upgrading a two data center deployment.
- Log in to each VM of the portal subsystem as root and run the following command:
for pkg in libpq5 shim shim-signed; do rm -f "/etc/apt/preferences.d/${pkg}-preferences"; done
Note: This step ensures a clean upgrade environment by removing any outdated package preferences from a previous upgrade (for example, v10.0.8.1).
- Install the portal subsystem upgrade file. The arguments to include in the install command depend on your upgrade path and configuration:
- Single data center
deployment:
apicup subsys install <subsystem_name> <subsystem_upgrade_tar_archive>
- Two data center disaster recovery deployment.
The portal subsystem in the warm-standby data
center:
apicup subsys install <subsystem_name> <subsystem_upgrade_tar_archive>
- Two data center disaster recovery deployment.
The portal subsystem in the active
data center:
apicup subsys install <subsystem_name> <subsystem_upgrade_tar_archive> --skip-health-check
Note: The portal subsystem in the active data center might appear in a non-READY state after the warm-standby is upgraded. This is expected, and the --skip-health-check argument tells the upgrade to proceed despite the non-READY state.
Note: If you are adding one or more control plane files, specify the file path of each control plane file:
apicup subsys install <subsystem_name> <subsystem_upgrade_tar_archive> <path_to_control_plane_file_1> <path_to_control_plane_file_n> <other arguments>
- Verify that the upgrade was successful by running a health check with the following
command:
apicup subsys health-check <subsystem_name>
Important: If the health check indicates that a reboot is required, first complete the upgrade for all portal sites and verify that all sites were upgraded. Then run the reboot command at the end of this step.
- If the upgrade fails and the subsystem is in the
SUBSYSTEM_ROLLBACK
state, do not proceed with the upgrade.
If the subsystem is in the SUBSYSTEM_ROLLBACK state, the upgrade was automatically rolled back. See the troubleshooting topic Upgrade from V10.0.7 fails with an automatic rollback for instructions on completing the rollback, and contact IBM Support for help resolving the underlying problem before you attempt the upgrade again.
- If you have a two data center disaster recovery
deployment and upgraded the portal subsystem in the warm-standby data center, then
start the upgrade on the active data center.
Note: In a two data center disaster recovery deployment, after the warm-standby is upgraded, the active data center might enter a non-READY state. If this happens, do not be concerned; proceed to upgrade the portal subsystem in the active data center.
- Verify portal site upgrades are complete.
- Log in as an admin
user:
apic login -s <server_name> --realm admin/default-idp-1 --username admin --password <password>
- Get the portal service ID and
endpoint:
apic portal-services:get -o admin -s <management_server_endpoint> \
  --availability-zone availability-zone-default <portal-service-name> \
  --output - --format json
- List the
sites:
apic --mode portaladmin sites:list -s <management_server_endpoint> \
  --portal_service_name <portal-service-name> \
  --format json
Any sites currently upgrading display the
UPGRADING
status; any site that completed its upgrade displays theINSTALLED
status and the new platform version. Verify that all sites display theINSTALLED
status before proceeding.For more information about the
sites
command, see the toolkit CLI reference documentation. - After all sites are in
INSTALLED
state and show the new platform listed, run:
apic --mode portaladmin platforms:list -s <server_name> --portal_service_name <portal_service_name>
Verify that the new version of the platform is the only platform listed.
For more information about the
platforms
command, see the toolkit CLI reference documentation. - If all portal sites are upgraded and the health check indicates that a reboot
is needed, then run the following command on all portal
VMs:
systemctl reboot
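If you have several portal VMs, you can script the reboot from your workstation. A minimal sketch; the host names are placeholders:
# Reboot each portal VM in turn. The apicadm user has sudo rights,
# as used elsewhere in this procedure.
for host in ptl-vm1.example.com ptl-vm2.example.com ptl-vm3.example.com; do
  ssh apicadm@"$host" 'sudo systemctl reboot'
done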
- Upgrade the analytics subsystem.
- Log in to each VM of the analytics subsystem as root and run the following command:
for pkg in libpq5 shim shim-signed; do rm -f "/etc/apt/preferences.d/${pkg}-preferences"; done
Note: This step ensures a clean upgrade environment by removing any outdated package preferences from a previous upgrade (for example, v10.0.8.1).
- Install the analytics subsystem upgrade file:
apicup subsys install <subsystem_name> <subsystem_upgrade_tar_archive>
Note: If you are adding one or more control plane files, specify the file path of each control plane file:
apicup subsys install <subsystem_name> <subsystem_upgrade_tar_archive> <path_to_control_plane_file_1> <path_to_control_plane_file_n>
- Verify that the upgrade was successful by running a health check with the following command:
apicup subsys health-check <subsystem_name>
- If the upgrade fails and the subsystem is in the
SUBSYSTEM_ROLLBACK
state, do not proceed with the upgrade.
If the subsystem is in the SUBSYSTEM_ROLLBACK state, the upgrade was automatically rolled back. See the troubleshooting topic Upgrade from V10.0.7 fails with an automatic rollback for instructions on completing the rollback, and contact IBM Support for help resolving the underlying problem before you attempt the upgrade again.
- If the health check indicates that a reboot is needed, run the following command on all
analytics VMs:
systemctl reboot
- Take a backup of your project directory, and database backups of each upgraded API Connect subsystem; see Backing up and restoring on VMware.
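For example, a minimal sketch: archive the project directory, then trigger an on-demand database backup for each upgraded subsystem. The apicup subsys exec ... backup syntax is an assumption here; confirm the exact commands in Backing up and restoring on VMware:
# Archive the apicup project directory (path is a placeholder).
tar -czf apicup-project-backup.tgz /path/to/project-directory
# Trigger database backups (assumed syntax; verify for your version).
apicup subsys exec mgmt backup
apicup subsys exec ptl backup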
- Optional: Take a virtual machine (VM) snapshot of all your VMs; see Using VM snapshots for infrastructure backup and disaster recovery for details. This action requires a brief outage while the VMs in the subsystem cluster are shut down. Do not take snapshots of running VMs, as they might not restore successfully.