You can upgrade API Connect on VMware.
Before you begin
- Before starting an upgrade, review the supported upgrade paths from prior versions: Upgrade considerations on VMware.
- Enable keep-alive in your SSH configuration to avoid problems during the upgrade. For
information, consult the documentation for your version of SSH.
Attention:
- Upgrading directly to the latest release from releases earlier than 10.0.1.6-ifix1-eus might encounter problems. For best results, first upgrade to 10.0.1.6-ifix1-eus and take a full set of backups before proceeding to upgrade to the latest release.
- Upgrading to the latest release from v10.0.4-ifix3, or from 10.0.1.6-ifix1-eus (or higher), results in the deletion of existing analytics data. If you want to retain your analytics data, you must export it before you upgrade. For instructions on exporting, see Additional considerations for upgrading analytics from a version prior to v10.0.5.0.
When you upgrade subsystems in this scenario, you must include the --accept-data-loss parameter on all apicup install commands that upgrade from the older releases. Because you upgrade the Management subsystem first, including the parameter at that point commits you to the loss of data, and the operation cannot be undone.
About this task
You can upgrade the Management subsystem, Developer Portal subsystem, and Analytics subsystem.
The Gateway subsystem remains available during the upgrade of the other subsystems.
You do not need to upgrade the optional components API Connect Toolkit and API Connect Local Test Environment. Instead, install the new version of each component after you upgrade the subsystems.
Procedure
- Complete the prerequisites:
- Ensure that your deployment meets the upgrade requirements. See Upgrade considerations on VMware.
- Complete the steps in Preparing to upgrade on VMware to
ensure that your environment is ready for the upgrade.
- Verify that the API Connect deployment is healthy and fully operational. See Checking cluster health on VMware.
- Remove any stale upgrade files:
- Verify sufficient free disk space.
For each appliance node that you are planning to upgrade:
- SSH into the appliance, and switch to user root.
- Check disk space in /data/secure:
df -h /data/secure
Make sure the disk usage shown is below 70%. If it is not, add disk capacity. See Adding disk space to a VMware appliance.
- Check free disk space in /:
df -h /
Make sure usage is below 70%. If it is not, consider deleting or offloading older /var/log/syslog* files.
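The two disk checks above can be combined into one short script. This is a sketch, not part of the product tooling: the 70% threshold and mount points come from the steps above, and the awk parsing assumes standard df output with the usage percentage in the fifth column.

```shell
#!/bin/sh
# Sketch: warn when either appliance filesystem exceeds the 70% threshold.
threshold=70

usage_pct() {
  # Print the usage percentage (digits only) for a mount point.
  df -hP "$1" 2>/dev/null | awk 'NR==2 { gsub("%", "", $5); print $5 }'
}

for mount in /data/secure /; do
  pct=$(usage_pct "$mount")
  if [ -z "$pct" ]; then
    echo "skip: $mount not found"
  elif [ "$pct" -ge "$threshold" ]; then
    echo "WARNING: $mount is at ${pct}% - add capacity or free space first"
  else
    echo "OK: $mount is at ${pct}%"
  fi
done
```

The -P flag keeps each filesystem on a single line so the column positions stay predictable.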
- Complete a manual backup of the API Connect subsystems as explained in Backing up and restoring on VMware.
Notes:
- Do not start an upgrade if a backup is scheduled to run within a few hours.
- Do not perform maintenance tasks such as rotating key-certificates, restoring from a backup, or
starting a new backup, at any time while the upgrade process is running.
- Back up the Postgres images on one of the Management Server VMs.
Follow the steps for the version of API Connect that you are upgrading from:
- 10.0.1.9 or later, and 10.0.5.1 or
later:
SSH to one of the Management Server VMs:
ssh <ip_address of management vm> -l apicadm
Sudo to root:
sudo -i
Run the following commands:
postgres_operator=$(kubectl get pod -l app.kubernetes.io/name=postgres-operator -o name)
ctr --namespace k8s.io images pull --plain-http=true `kubectl get $postgres_operator -o jsonpath='{.spec.containers[].env[?(@.name=="RELATED_IMAGE_PGO_RMDATA")].value}'`
ctr --namespace k8s.io images export rmdata.tar `kubectl get $postgres_operator -o jsonpath='{.spec.containers[].env[?(@.name=="RELATED_IMAGE_PGO_RMDATA")].value}'`
ctr --namespace k8s.io images export backrestrepo.tar `kubectl get $postgres_operator -o jsonpath='{.spec.containers[].env[?(@.name=="RELATED_IMAGE_PGO_BACKREST_REPO")].value}'`
ctr --namespace k8s.io images export pgbouncer.tar `kubectl get $postgres_operator -o jsonpath='{.spec.containers[].env[?(@.name=="RELATED_IMAGE_CRUNCHY_PGBOUNCER")].value}'`
ctr --namespace k8s.io images export postgres-ha.tar `kubectl get $postgres_operator -o jsonpath='{.spec.containers[].env[?(@.name=="RELATED_IMAGE_CRUNCHY_POSTGRES_HA")].value}'`
postgres_pod=$(kubectl get pod -l role=master -o name)
ctr --namespace k8s.io images export k8s-init.tar `kubectl get $postgres_pod -o jsonpath='{.spec.initContainers[].image}'`
- 10.0.1.8 or earlier, and 10.0.4.0-ifix3 or
earlier:
SSH to one of the Management Server VMs:
ssh <ip_address of management vm> -l apicadm
Sudo to root:
sudo -i
Run the following commands:
postgres_operator=$(kubectl get pod -l app.kubernetes.io/name=postgres-operator -o name)
docker pull `kubectl get $postgres_operator -o jsonpath='{.spec.containers[].env[?(@.name=="RELATED_IMAGE_PGO_RMDATA")].value}'`
docker pull `kubectl get $postgres_operator -o jsonpath='{.spec.containers[].env[?(@.name=="RELATED_IMAGE_PGO_BACKREST_REPO")].value}'`
docker pull `kubectl get $postgres_operator -o jsonpath='{.spec.containers[].env[?(@.name=="RELATED_IMAGE_CRUNCHY_PGBOUNCER")].value}'`
docker pull `kubectl get $postgres_operator -o jsonpath='{.spec.containers[].env[?(@.name=="RELATED_IMAGE_CRUNCHY_POSTGRES_HA")].value}'`
docker save `kubectl get $postgres_operator -o jsonpath='{.spec.containers[].env[?(@.name=="RELATED_IMAGE_PGO_RMDATA")].value}'` -o rmdata.tar
docker save `kubectl get $postgres_operator -o jsonpath='{.spec.containers[].env[?(@.name=="RELATED_IMAGE_PGO_BACKREST_REPO")].value}'` -o backrestrepo.tar
docker save `kubectl get $postgres_operator -o jsonpath='{.spec.containers[].env[?(@.name=="RELATED_IMAGE_CRUNCHY_PGBOUNCER")].value}'` -o pgbouncer.tar
docker save `kubectl get $postgres_operator -o jsonpath='{.spec.containers[].env[?(@.name=="RELATED_IMAGE_CRUNCHY_POSTGRES_HA")].value}'` -o postgres-ha.tar
postgres_pod=$(kubectl get pod -l role=master -o name)
docker save `kubectl get $postgres_pod -o jsonpath='{.spec.initContainers[].image}'` -o k8s-init.tar
Note: After performing the upgrade, if you see that the Postgres pods are in the imagePullBackOff state, correct the issue by following steps 2 and 3 in Management subsystem upgrade imagePullBackOff issue to import each of the images you saved in this step.
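If you do hit imagePullBackOff after upgrading, the archives saved above can be re-imported with ctr. This is a hedged sketch for the 10.0.1.9-or-later path only (the path that uses ctr rather than docker); the archive names match the export commands above, and the referenced troubleshooting topic remains the authoritative procedure.

```shell
# Sketch: re-import the Postgres image archives saved before the upgrade.
# Run as root on the Management VM, from the directory holding the tar files.
import_saved_images() {
  for archive in rmdata.tar backrestrepo.tar pgbouncer.tar postgres-ha.tar k8s-init.tar; do
    if [ -f "$archive" ]; then
      ctr --namespace k8s.io images import "$archive"
    else
      echo "skip: $archive not found"
    fi
  done
}

import_saved_images
```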
- Run the pre-upgrade health check:
- Run the health check on the Management subsystem:
- SSH into the server:
- Run the following command to connect as the API Connect administrator, replacing ip_address with the appropriate IP address:
ssh ip_address -l apicadm
- When prompted, select Yes to continue connecting.
- When you are connected, run the following command to receive the necessary permissions for
working directly on the appliance:
sudo -i
- Verify that the apicops utility is installed by running the following command to check the current version of the utility:
apicops --version
If the response indicates that apicops is not available, install it now. See The API Connect operations tool: apicops in the API Connect documentation.
- Run the version-check script and verify that there are no
errors:
apicops version:pre-upgrade
- Run the appliance health check and verify that there are no
errors:
apicops appliance-checks:appliance-pre-upgrade
- Run the health check on the Portal subsystem:
- SSH into the server.
- Verify that the apicops utility is installed by running the following command to check the current version of the utility:
apicops --version
If the response indicates that apicops is not available, install it now. See The API Connect operations tool: apicops in the API Connect documentation.
- Run the following command to check the system status, and verify that there are no
errors:
apicops upgrade:check-subsystem-status
- Run the appliance health check and verify that there are no
errors:
apicops appliance-checks:appliance-pre-upgrade
- Run the health check on the Analytics subsystem:
- SSH into the server.
- Verify that the apicops utility is installed by running the following command to check the current version of the utility:
apicops --version
If the response indicates that apicops is not available, install it now. See The API Connect operations tool: apicops in the API Connect documentation.
- Run the following command to check the system status, and verify that there are no
errors:
apicops upgrade:check-subsystem-status
- Run the appliance health check and verify that there are no
errors:
apicops appliance-checks:appliance-pre-upgrade
- Optional: Take a Virtual Machine (VM) snapshot of all your VMs; see Using VM snapshots for infrastructure backup and disaster recovery for details. This action requires a brief outage while all of the VMs in the subsystem cluster are shut down. Do not take snapshots of running VMs, as they might not restore successfully. VM snapshots can offer a faster recovery when compared to redeploying OVAs and restoring from normal backups.
Important: VM snapshots are not an alternative to the standard API Connect backups that
are described in the previous steps. You must complete the API Connect backups in order to use the
API Connect restore feature.
- Obtain the API Connect files:
You can access the latest files from IBM Fix Central by searching for the API Connect product and
your installed version.
The following files are used during upgrade on VMware:
- IBM API Connect <version> Management Upgrade File for VMware
- Management subsystem files for upgrade
- IBM API Connect <version> Developer Portal Upgrade File for VMware
- Developer Portal subsystem files for upgrade
- IBM API Connect <version> Analytics Upgrade File for VMware
- Analytics subsystem files for upgrade
- IBM API Connect <version> Install Assist for
<operating_system_type>
- The apicup installation utility. Required for all installations on
VMware.
The following files are not used during upgrade, but can be installed as new installations to replace prior versions:
- IBM API Connect <version> Toolkit for
<operating_system_type>
- Toolkit command line utility. Packaged standalone, or with API Designer or Loopback:
- IBM API Connect <version> Toolkit for
<operating_system_type>
- IBM API Connect <version> Toolkit with Loopback for
<operating_system_type>
- IBM API Connect <version> Toolkit Designer with Loopback for
<operating_system_type>
Not required during initial installation. After installation, you can download directly from
the Cloud Manager UI and API Manager UI. See Installing the
toolkit.
- IBM API Connect <version> Local Test Environment
- Optional test environment. See Testing an API with the
Local Test Environment
- IBM API Connect <version> Security Signature Bundle File
- Checksum files that you can use to verify the integrity of your downloads.
- If necessary, download from the same Fix Pack page any Control Plane
files that are needed.
Control Plane files provide support for specific Kubernetes versions. The IBM API
Connect <version> Management Upgrade File for VMware file contains
the latest Control Plane file. An upgrade from the most recent API Connect version to the current
version does not need a separate Control Plane file. However, when upgrading from older versions of
API Connect, you must install one or more control plane files to ensure that all current Kubernetes
versions are supported.
Consult the following table to see if your deployment needs one or more separate Control Plane
files.
Table 1. Control Plane files needed for upgrade

Version to upgrade from | Instructions for upgrading to 10.0.5.3
| Download: appliance-control-plane-1.25.x.tgz
For information on the Control Plane files used with older releases, see Control Plane files for earlier releases.
- Verify that the subsystem upgrade files are not corrupted.
- Run the following command separately for the Management, the Developer Portal, and the
Analytics upgrade files:
- Mac or Linux:
sha256sum <upgrade-file-name>
Example:
sha256sum upgrade_management_v10.0.5.3
- Windows:
C:\> certUtil -hashfile C:\<upgrade-file-name> SHA256
Example:
C:\> certUtil -hashfile C:\upgrade_management_v10.0.5.3 SHA256
- Compare the result with the checksum values to verify that the files are not
corrupted.
If the checksum does not match the value for the corresponding version and subsystem, then the
upgrade file is corrupted. Delete it and download a new copy of the upgrade file. The following list
shows the checksum values for the current release.
Checksum values for
10.0.5.3:
10.0.5.3 analytics : 94d73a54254580182cee045f551c72a019c4bfb225710fd22606a74fb890b9f9
10.0.5.3 management : 24cd8ea00b06621fcc9f26b381fdc807feafad42a4e4971ed9ac7bee6fc3e371
10.0.5.3 portal : d44180c4c417f3b39c7f5cbc695a2998ee42b987deda59019d7b0e1eb13a25e3
For information on the checksum files used with older releases, see Checksum values for earlier releases.
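As a convenience, the comparison in this step can be scripted on Mac or Linux. This sketch is not part of apicup; verify_sha256 is a hypothetical helper name, and the expected digest must come from the checksum list above.

```shell
# Sketch: compare a downloaded upgrade file against its published checksum.
verify_sha256() {
  # $1 = file path, $2 = expected digest; succeeds only on an exact match
  actual=$(sha256sum "$1" | awk '{ print $1 }')
  [ "$actual" = "$2" ]
}

# Example usage (digest from the 10.0.5.3 management entry above):
# verify_sha256 upgrade_management_v10.0.5.3 \
#   24cd8ea00b06621fcc9f26b381fdc807feafad42a4e4971ed9ac7bee6fc3e371 \
#   && echo "Checksum OK" || echo "Corrupted - delete and download again"
```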
- Install the installation utility.
- Locate the apicup installation utility file for your operating system, and place it in your project directory.
- Rename the file for your OS type to apicup. Note that the instructions in this documentation refer to apicup.
- OSX and Linux® only: Make the apicup file an executable file by entering chmod +x apicup.
- Set your path to the location of your apicup file.
OSX and Linux | export PATH=$PATH:/Users/your_path/
Windows | set PATH=c:\your_path;%PATH%
- From within your project directory, specify the API Connect license:
apicup licenses accept <License_ID>
The <License_ID> is specific to the API Connect Program Name that you purchased. To view the supported license IDs, see API Connect licenses.
- If upgrading from v10.0.4-ifix3, or
upgrading from v10.0.1.7-eus (or higher):
Disassociate and delete your Analytics services.
- In Cloud Manager UI, click Topology.
- In the section for the Availability Zone that contains the Analytics service, locate
the Gateway service that the Analytics service is associated with.
- Click the actions menu, and select Unassociate analytics
service.
Remember to disassociate each Analytics service from all
Gateways.
- In the section for the Availability Zone that contains the Analytics services, locate
each Analytics service and click Delete.
- Upgrade the Management subsystem:
- Run the upgrade on the Management subsystem:
- Install the Management subsystem files:
apicup subsys install <subsystem_name> <subsystem_upgrade_tar_archive>
Notes:
- If you are adding one or more control plane files, specify each path and file name on the
command
line:
apicup subsys install <subsystem_name> <subsystem_upgrade_tar_archive> <path_to_control_plane_file_1> <path_to_control_plane_file_n>
- If you are upgrading to v10.0.5.x from v10.0.4-ifix3, or upgrading from v10.0.1.7-eus (or higher), add the --accept-data-loss argument to the command to indicate that you accept the loss of the Analytics data:
apicup subsys install <subsystem_name> <subsystem_upgrade_tar_archive> --accept-data-loss
Remember, when you include this parameter for the Management subsystem upgrade, you commit to the loss of data and the operation cannot be undone.
- Verify that the upgrade was successful by running a health check with the following command:
apicup subsys health-check <subsystem_name>
- If the health check indicates that a reboot is needed, run the following commands:
- apic lock
- systemctl reboot
- Use apicops to validate the certificates.
For information on the apicops tool, see The API Connect operations tool: apicops.
- Run the following command:
apicops upgrade:stale-certs -n <namespace>
- Delete any stale certificates that are managed by cert-manager.
If a certificate failed the validation and it is managed by cert-manager, you can delete the stale certificate secret and let cert-manager regenerate it. Run the following command:
kubectl delete secret <stale-secret> -n <namespace>
- Restart the corresponding pod so that it can pick up the new secret.
To determine which pod to
restart, see the following topics:
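To find the pods that mount a regenerated secret, you can list each pod alongside the secret names of its volumes. This is a hypothetical helper, not part of apicops; it assumes the secret is mounted as a volume (the common case) and that kubectl is available on the appliance.

```shell
# Sketch: list pods in a namespace together with the secrets their volumes use,
# then filter to the pods that mount a specific secret.
list_pod_secret_pairs() {
  # $1 = namespace; prints "<pod-name><TAB><secret names...>" per pod
  kubectl get pods -n "$1" -o \
    jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.volumes[*].secret.secretName}{"\n"}{end}'
}

pods_using_secret() {
  # stdin: output of list_pod_secret_pairs; $1 = the secret to look for
  awk -v s="$1" -F '\t' 'index($2, s) { print $1 }'
}

# Usage sketch:
# list_pod_secret_pairs <namespace> | pods_using_secret <stale-secret>
```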
- Restart all nats-server pods.
- Run the following command to connect as the API Connect administrator, replacing <ip_address> with the appropriate IP address:
ssh <ip_address> -l apicadm
- When prompted, select Yes to continue connecting.
- When you are connected, run the following command to receive the necessary permissions for
working directly on the appliance:
sudo -i
- Restart all nats-server pods by running the following
command:
kubectl -n <namespace> delete po -l app.kubernetes.io/name=natscluster
- If there are multiple Management subsystems in the same project, set the Portal subsystem's platform-api and consumer-api certificates to match those used by the appropriate Management subsystem to ensure that the Portal subsystem is correctly associated with that Management subsystem.
This step only applies if you installed more than one Management subsystem into a single project.
A Portal subsystem can be associated with only one Management subsystem. To associate the upgraded Portal subsystem with the appropriate Management subsystem, manually set the mgmt-platform-api and mgmt-consumer-api certificates to match the ones used by the Management subsystem.
- Run the following commands to get the certificates from the Management
subsystem:
apicup certs get mgmt-subsystem-name platform-api -t cert > platform-api.crt
apicup certs get mgmt-subsystem-name platform-api -t key > platform-api.key
apicup certs get mgmt-subsystem-name platform-api -t ca > platform-api-ca.crt
apicup certs get mgmt-subsystem-name consumer-api -t cert > consumer-api.crt
apicup certs get mgmt-subsystem-name consumer-api -t key > consumer-api.key
apicup certs get mgmt-subsystem-name consumer-api -t ca > consumer-api-ca.crt
where mgmt-subsystem-name is the name of the specific Management subsystem that you want to associate the new Portal subsystem with.
- Run the following commands to set the Portal's certificates to match those used by the
Management subsystem:
apicup certs set ptl-subsystem-name platform-api Cert_file_path Key_file_path CA_file_path
apicup certs set ptl-subsystem-name consumer-api Cert_file_path Key_file_path CA_file_path
For more information on apicup certificate commands, see Command reference.
- Upgrade the Portal subsystem.
- If you did not verify that your Portal customizations are compatible with Drupal 10, do that
now.
In API Connect 10.0.5.3, the Developer Portal moved from Drupal 9 to Drupal 10 (this upgrade
also requires PHP 8.1). The upgrade tooling will update your Developer Portal sites; however, if you
have any custom modules or themes, it is your responsibility to ensure their compatibility with
Drupal 10 and PHP 8.1 before starting the upgrade. Review the Guidelines on upgrading your Developer Portal from Drupal 9 to Drupal 10 to ensure that any
customizations to the Developer Portal are compatible with Drupal 10 and PHP 8.1.
- Install the Portal subsystem
files:
apicup subsys install <subsystem_name> <subsystem_upgrade_tar_archive>
Notes:
- If you are adding one or more control plane files, specify each path and file name on the
command
line:
apicup subsys install <subsystem_name> <subsystem_upgrade_tar_archive> <path_to_control_plane_file_1> <path_to_control_plane_file_n>
- If you are upgrading to v10.0.5.x from v10.0.4-ifix3, or upgrading from v10.0.1.7-eus (or higher), add the --accept-data-loss argument to the command to indicate that you accept the loss of the Analytics data:
apicup subsys install <subsystem_name> <subsystem_upgrade_tar_archive> --accept-data-loss
- Verify that the upgrade was successful by running a health check with the following
command:
apicup subsys health-check <subsystem_name>
Important: If the health check indicates that a reboot is required, first complete the upgrade for all Portal sites and verify that all sites were upgraded before running the reboot command as instructed at the end of this step.
- Ensure that the Portal sites were upgraded:
- Use the toolkit apic to obtain the portal service id and endpoint:
apic portal-services:get -o admin -s <management_server_endpoint> \
--availability-zone availability-zone-default <portal-service-name> \
--output - --format json
- List the sites:
apic --mode portaladmin sites:list -s <management_server_endpoint> \
--portal_service_name <portal-service-name> \
--format json
Any sites currently upgrading are listed as UPGRADING. When all sites have finished upgrading, they should have the INSTALLED status and the new platform version listed.
See also: apic sites:list and Using the sites commands.
- After all sites are in INSTALLED state and have the new platform listed, run:
apic --mode portaladmin platforms:list -s <management_server_endpoint> \
--portal_service_id <portal_service_id_from_above_command> \
--portal_service_endpoint <portal_service_endpoint_from_above_command> \
--format json
The new version of the platform should be the only platform listed.
See also: apic platforms:list and Using the platforms commands.
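Rather than re-running sites:list by hand, you can poll until no site reports UPGRADING. This is a sketch under a stated assumption: it only greps the raw JSON for the UPGRADING status string, so adjust the pattern if your toolkit version formats the status differently.

```shell
# Sketch: return success only when no site in the JSON output is UPGRADING.
all_sites_installed() {
  # $1 = JSON from 'apic --mode portaladmin sites:list ... --format json'
  ! printf '%s' "$1" | grep -q 'UPGRADING'
}

# Polling sketch (placeholders as in the commands above):
# until all_sites_installed "$(apic --mode portaladmin sites:list \
#       -s <management_server_endpoint> \
#       --portal_service_name <portal-service-name> --format json)"; do
#   echo "sites still upgrading; checking again in 30s"
#   sleep 30
# done
```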
- If all Portal sites are upgraded and the health check indicates that a reboot
is needed, then run the following commands:
- apic lock
- systemctl reboot
- Upgrade the Analytics subsystem:
- Install the Analytics subsystem
files:
apicup subsys install <subsystem_name> <subsystem_upgrade_tar_archive>
Notes:
- If you are adding one or more control plane files, specify each path and file name on the
command
line:
apicup subsys install <subsystem_name> <subsystem_upgrade_tar_archive> <path_to_control_plane_file_1> <path_to_control_plane_file_n>
- If you are upgrading to v10.0.5.x from v10.0.4-ifix3, or upgrading from v10.0.1.7-eus (or higher), add the --accept-data-loss argument to the command to indicate that you accept the loss of the Analytics data:
apicup subsys install <subsystem_name> <subsystem_upgrade_tar_archive> --accept-data-loss
- Verify that the upgrade was successful by running a health check with the following command:
apicup subsys health-check <subsystem_name>
- If the health check indicates that a reboot is needed, run the following commands:
- apic lock
- systemctl reboot
- If upgrading from v10.0.4-ifix3, or upgrading from v10.0.1.6-eus (or higher):
Enable the analytics service as explained in Enabling Analytics after upgrading.
- If upgrading from a release previous to v10.0.5.3: If you want to use JWT security
instead of mTLS, enable this feature as explained in Use JWT security instead of mTLS between subsystems.
- Complete a manual backup of the upgraded API Connect subsystems; see Backing up and restoring on VMware.
- Optional: Take a Virtual Machine (VM) snapshot of all your VMs; see Using VM snapshots for infrastructure backup and disaster recovery for details. This action requires a brief outage while all of the VMs in the subsystem cluster are shut down. Do not take snapshots of running VMs, as they might not restore successfully.
- Complete the disaster preparation steps to ensure recovery of
API Connect from a disaster event. See Preparing for a disaster.
What to do next
When you have upgraded all of the API Connect subsystems, upgrade
your DataPower Gateway Service. See Upgrading DataPower Gateway Service.