Upgrading to the latest release on VMware

You can upgrade API Connect on VMware.

Before you begin

  • Before starting an upgrade, review the supported upgrade paths from prior versions: Upgrade considerations on VMware.
  • Enable keep-alive in your SSH configuration to avoid problems during the upgrade. For information, consult the documentation for your version of SSH.
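    For example, with OpenSSH you can enable client keep-alive messages by adding entries such as the following to your ~/.ssh/config (a minimal sketch; the interval and count values are illustrative):

      Host *
        ServerAliveInterval 60
        ServerAliveCountMax 10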
Attention:
  • Upgrading directly to the latest release from releases earlier than 10.0.1.6-ifix1-eus might encounter problems. For best results, first upgrade to 10.0.1.6-ifix1-eus and take a full set of backups before proceeding to upgrade to the latest release.
  • Upgrading to the latest release from v10.0.4-ifix3, or from 10.0.1.6-ifix1-eus (or higher), will result in the deletion of existing analytics data. If you want to retain your analytics data then you must export it before upgrade. For instructions on exporting, see Additional considerations for upgrading analytics from a version prior to v10.0.5.0.

    When you upgrade subsystems in this scenario, you must include the --accept-data-loss parameter on all apicup subsys install commands that upgrade from the older releases. Because you upgrade the Management subsystem first, including the parameter at that point commits you to the loss of data, and the operation cannot be undone.

About this task

You can upgrade the Management subsystem, Developer Portal subsystem, and Analytics subsystem. The Gateway subsystem remains available during the upgrade of the other subsystems.

You do not need to upgrade the optional components API Connect Toolkit and API Connect Local Test Environment. Instead, install the new version of each component after you upgrade the subsystems.

Important notes:

  • When installing the new files, be sure to use the new apicup installer.
  • The upgrade order for subsystems is important. Upgrade the subsystems in the following order: (1) Management, (2) Portal, (3) Analytics, (4) Gateway. Management must be upgraded first, and Gateway must be upgraded after Management, because upgrading the Management service first ensures that any new policies and capabilities are available to a previously registered Gateway service.
  • When you run the install, the program sends the compressed tar file, which contains the upgrade images, to all cluster members. The compressed tar file is about 2 GB, and the transfer can take some time. When the install command exits, the compressed tar file has arrived at each member. The upgrade process is then underway, and might continue for some time depending on factors such as the size of your deployment and your network speed.
  • The apicup subsys install command automatically runs apicup health-check prior to attempting the upgrade. An error is displayed if a problem is found that will prevent successful upgrade.

    When troubleshooting a problem with an upgrade, you can optionally suppress the health check. See Skipping health check when re-running upgrade.

  • The certificate manager was upgraded in API Connect version 10.0.5.1. If you are upgrading from an earlier release, then after you trigger the upgrade, the apim, lur, and taskmanager pods will be in a CrashLoopBackOff state because Postgres requires a certificate refresh. Correct the problem by completing the following steps:
    1. SSH into the server:
      1. Run the following command to connect as the API Connect administrator, replacing ip_address with the appropriate IP address:
        ssh ip_address -l apicadm
      2. When prompted, select Yes to continue connecting.
      3. When you are connected, run the following command to receive the necessary permissions for working directly on the appliance:
        sudo -i
    2. Run the following command to delete the Postgres pods so that they are recreated with the refreshed certificate:
      kubectl get pod --no-headers=true | grep postgres | grep -v backup | awk '{print $1}' | xargs kubectl delete pod
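      After the pods are deleted, you can optionally confirm that they are recreated and return to the Running state, for example:

        kubectl get pod | grep postgres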

Procedure

Attention: If you encounter an issue while performing the upgrade, see Troubleshooting upgrades on VMware.

  1. Complete the prerequisites:
    1. Ensure that your deployment meets the upgrade requirements. See Upgrade considerations on VMware.

    2. Complete the steps in Preparing to upgrade on VMware to ensure that your environment is ready for the upgrade.

    3. Verify that the API Connect deployment is healthy and fully operational. See Checking cluster health on VMware.
    4. Remove any stale upgrade files:
      apic clean-upgrade-files
    5. Verify sufficient free disk space.

      For each appliance node that you are planning to upgrade:

      1. SSH into the appliance, and switch to user root.
      2. Check disk space in /data/secure:
        df -h /data/secure

        Make sure the disk usage shown is below 70%. If it is not, add disk capacity. See Adding disk space to a VMware appliance.
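        If you want to extract just the usage percentage, for example for a quick scripted check, something like the following works with GNU df (a sketch, not required by the procedure):

          df -Ph /data/secure | awk 'NR==2 {print $5}'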

      3. Check free disk space in /:
        df -h /

        Make sure usage is below 70%. If it is not, consider deleting or offloading older /var/log/syslog* files.
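        To see how much space the rotated syslog files are using before you delete or offload them, you can run, for example:

          du -sh /var/log/syslog*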

    6. Complete a manual backup of the API Connect subsystems as explained in Backing up and restoring on VMware.
      Notes:
      • Do not start an upgrade if a backup is scheduled to run within a few hours.
      • Do not perform maintenance tasks such as rotating key-certificates, restoring from a backup, or starting a new backup, at any time while the upgrade process is running.
  2. Back up the Postgres images on one of the Management Server VMs.
    Follow the steps for the version of API Connect that you are upgrading from:
    • 10.0.1.9 or later, and 10.0.5.1 or later:
      SSH to one of the Management Server VMs:
      ssh <ip_address of management vm> -l apicadm
      Sudo to root:
      sudo -i
      
      Run the following commands:
      
      postgres_operator=$(kubectl get pod -l app.kubernetes.io/name=postgres-operator -o name)
      
      ctr --namespace k8s.io images pull --plain-http=true `kubectl get  $postgres_operator  -o jsonpath='{.spec.containers[].env[?(@.name=="RELATED_IMAGE_PGO_RMDATA")].value}'`
      ctr --namespace k8s.io images export rmdata.tar `kubectl get  $postgres_operator  -o jsonpath='{.spec.containers[].env[?(@.name=="RELATED_IMAGE_PGO_RMDATA")].value}'`
      ctr --namespace k8s.io images export backrestrepo.tar `kubectl get  $postgres_operator  -o jsonpath='{.spec.containers[].env[?(@.name=="RELATED_IMAGE_PGO_BACKREST_REPO")].value}'`
      ctr --namespace k8s.io images export pgbouncer.tar `kubectl get  $postgres_operator  -o jsonpath='{.spec.containers[].env[?(@.name=="RELATED_IMAGE_CRUNCHY_PGBOUNCER")].value}'`
      ctr --namespace k8s.io images export postgres-ha.tar `kubectl get  $postgres_operator  -o jsonpath='{.spec.containers[].env[?(@.name=="RELATED_IMAGE_CRUNCHY_POSTGRES_HA")].value}'`
      
      postgres_pod=$(kubectl get pod -l role=master -o name)
      ctr --namespace k8s.io images export k8s-init.tar `kubectl get $postgres_pod  -o jsonpath='{.spec.initContainers[].image}'`
    • 10.0.1.8 or earlier, and 10.0.4.0-ifix3 or earlier:
      SSH to one of the Management Server VMs:
      ssh <ip_address of management vm> -l apicadm
      Sudo to root:
      sudo -i
      
      Run the following commands:
      
      postgres_operator=$(kubectl get pod -l app.kubernetes.io/name=postgres-operator -o name)
      
      docker pull `kubectl get $postgres_operator -o jsonpath='{.spec.containers[].env[?(@.name=="RELATED_IMAGE_PGO_RMDATA")].value}'`
      docker pull `kubectl get $postgres_operator -o jsonpath='{.spec.containers[].env[?(@.name=="RELATED_IMAGE_PGO_BACKREST_REPO")].value}'`
      docker pull `kubectl get $postgres_operator -o jsonpath='{.spec.containers[].env[?(@.name=="RELATED_IMAGE_CRUNCHY_PGBOUNCER")].value}'`
      docker pull `kubectl get $postgres_operator -o jsonpath='{.spec.containers[].env[?(@.name=="RELATED_IMAGE_CRUNCHY_POSTGRES_HA")].value}'`
      docker save `kubectl get $postgres_operator -o jsonpath='{.spec.containers[].env[?(@.name=="RELATED_IMAGE_PGO_RMDATA")].value}'` -o rmdata.tar
      docker save `kubectl get $postgres_operator -o jsonpath='{.spec.containers[].env[?(@.name=="RELATED_IMAGE_PGO_BACKREST_REPO")].value}'` -o backrestrepo.tar
      docker save `kubectl get $postgres_operator -o jsonpath='{.spec.containers[].env[?(@.name=="RELATED_IMAGE_CRUNCHY_PGBOUNCER")].value}'` -o pgbouncer.tar
      docker save `kubectl get $postgres_operator -o jsonpath='{.spec.containers[].env[?(@.name=="RELATED_IMAGE_CRUNCHY_POSTGRES_HA")].value}'` -o postgres-ha.tar
      
      postgres_pod=$(kubectl get pod -l role=master -o name)
      docker save `kubectl get $postgres_pod  -o jsonpath='{.spec.initContainers[].image}'` -o k8s-init.tar
      
    Note: After performing the upgrade, if you see that the Postgres pods are in the imagePullBackOff state, correct the issue by following steps 2 and 3 in Management subsystem upgrade imagePullBackOff issue to import each of the images you saved in this step.
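    Regardless of which set of commands you used, you can optionally confirm that the exported image archives were created in the current directory before you continue, for example:

      ls -lh rmdata.tar backrestrepo.tar pgbouncer.tar postgres-ha.tar k8s-init.tar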
  3. Run the pre-upgrade health check:
    1. Run the health check on the Management subsystem:
      1. SSH into the server:
        1. Run the following command to connect as the API Connect administrator, replacing ip_address with the appropriate IP address:
          ssh ip_address -l apicadm
        2. When prompted, select Yes to continue connecting.
        3. When you are connected, run the following command to receive the necessary permissions for working directly on the appliance:
          sudo -i
      2. Verify that the apicops utility is installed by running the following command to check the current version of the utility:
        apicops --version

        If the response indicates that apicops is not available, install it now. See The API Connect operations tool: apicops in the API Connect documentation.

      3. Run the version-check script and verify that there are no errors:
        apicops version:pre-upgrade
      4. Run the appliance health check and verify that there are no errors:
        apicops appliance-checks:appliance-pre-upgrade
    2. Run the health check on the Portal subsystem:
      1. SSH into the server.
      2. Verify that the apicops utility is installed by running the following command to check the current version of the utility:
        apicops --version

        If the response indicates that apicops is not available, install it now. See The API Connect operations tool: apicops in the API Connect documentation.

      3. Run the following command to check the system status, and verify that there are no errors:
        apicops upgrade:check-subsystem-status
      4. Run the appliance health check and verify that there are no errors:
        apicops appliance-checks:appliance-pre-upgrade
    3. Run the health check on the Analytics subsystem:
      1. SSH into the server.
      2. Verify that the apicops utility is installed by running the following command to check the current version of the utility:
        apicops --version

        If the response indicates that apicops is not available, install it now. See The API Connect operations tool: apicops in the API Connect documentation.

      3. Run the following command to check the system status, and verify that there are no errors:
        apicops upgrade:check-subsystem-status
      4. Run the appliance health check and verify that there are no errors:
        apicops appliance-checks:appliance-pre-upgrade
  4. Optional: Take a Virtual Machine (VM) snapshot of all your VMs; see Using VM snapshots for infrastructure backup and disaster recovery for details. This action requires a brief outage while all of the VMs in the subsystem cluster are shut down; do not take snapshots of running VMs, as they might not restore successfully. VM snapshots can offer a faster recovery when compared to redeploying OVAs and restoring from normal backups.
    Important: VM snapshots are not an alternative to the standard API Connect backups that are described in the previous steps. You must complete the API Connect backups in order to use the API Connect restore feature.
  5. Obtain the API Connect files:

    You can access the latest files from IBM Fix Central by searching for the API Connect product and your installed version.

    The following files are used during upgrade on VMware:

    IBM API Connect <version> Management Upgrade File for VMware
    Management subsystem files for upgrade
    IBM API Connect <version> Developer Portal Upgrade File for VMware
    Developer Portal subsystem files for upgrade
    IBM API Connect <version> Analytics Upgrade File for VMware
    Analytics subsystem files for upgrade
    IBM API Connect <version> Install Assist for <operating_system_type>
    The apicup installation utility. Required for all installations on VMware.

    The following files are not used during upgrade, but can be installed as new installations to replace the prior versions:

    IBM API Connect <version> Toolkit for <operating_system_type>
    Toolkit command line utility. Packaged standalone, or with API Designer or Loopback:
    • IBM API Connect <version> Toolkit for <operating_system_type>
    • IBM API Connect <version> Toolkit with Loopback for <operating_system_type>
    • IBM API Connect <version> Toolkit Designer with Loopback for <operating_system_type>

    Not required during initial installation. After installation, you can download directly from the Cloud Manager UI and API Manager UI. See Installing the toolkit.

    IBM API Connect <version> Local Test Environment
    Optional test environment. See Testing an API with the Local Test Environment.
    IBM API Connect <version> Security Signature Bundle File
    Checksum files that you can use to verify the integrity of your downloads.
  6. If necessary, download any required Control Plane files from the same Fix Pack page.

    Control Plane files provide support for specific Kubernetes versions. The IBM API Connect <version> Management Upgrade File for VMware file contains the latest Control Plane file. An upgrade from the most recent API Connect version to the current version does not need a separate Control Plane file. However, when upgrading from older versions of API Connect, you must install one or more control plane files to ensure that all current Kubernetes versions are supported.

    Consult the following table to see if your deployment needs one or more separate Control Plane files.

    Table 1. Control Plane files needed for upgrade

    Version to upgrade from          Instructions for upgrading to 10.0.5.3
    10.0.5.2 or 10.0.1.11-eus        Download: appliance-control-plane-1.25.x.tgz

    For information on the Control Plane files used with older releases, see Control Plane files for earlier releases.

  7. Verify that the subsystem upgrade files are not corrupted.
    1. Run the following command separately for the Management, the Developer Portal, and the Analytics upgrade files:
      • Mac or Linux:
        sha256sum <upgrade-file-name>

        Example:

        sha256sum upgrade_management_v10.0.5.3
      • Windows:
        C:\> certUtil -hashfile C:\<upgrade-file-name> SHA256

        Example:

        C:\> certUtil -hashfile C:\upgrade_management_v10.0.5.3 SHA256
    2. Compare the result with the checksum values to verify that the files are not corrupted.

      If the checksum does not match the value for the corresponding version and subsystem, then the upgrade file is corrupted. Delete it and download a new copy of the upgrade file. The following list shows the checksum values for the current release.

      Checksum values for 10.0.5.3:

      10.0.5.3 analytics : 94d73a54254580182cee045f551c72a019c4bfb225710fd22606a74fb890b9f9
      10.0.5.3 management : 24cd8ea00b06621fcc9f26b381fdc807feafad42a4e4971ed9ac7bee6fc3e371
      10.0.5.3 portal : d44180c4c417f3b39c7f5cbc695a2998ee42b987deda59019d7b0e1eb13a25e3

      For information on the checksum files used with older releases, see Checksum values for earlier releases.
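      If you prefer, on Mac or Linux you can have sha256sum perform the comparison for you by pairing the published checksum with the downloaded file name (illustrative, using the management upgrade file as an example):

        echo "24cd8ea00b06621fcc9f26b381fdc807feafad42a4e4971ed9ac7bee6fc3e371  upgrade_management_v10.0.5.3" | sha256sum -c -

      The command reports OK if the file is intact.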

  8. Install the installation utility.
    1. Locate the apicup installation utility file for your operating system, and place it in your project directory.
    2. Rename the file for your OS type to apicup. Note that the instructions in this documentation refer to apicup.
    3. OSX and Linux® only: Make the apicup file executable by entering chmod +x apicup.
    4. Set your path to the location of your apicup file.
      OSX and Linux
      export PATH=$PATH:/Users/your_path/
      Windows
      set PATH=c:\your_path;%PATH%
    5. From within your project directory, specify the API Connect license:
      apicup licenses accept <License_ID>

      The <License_ID> is specific to the API Connect Program Name you purchased. To view the supported license IDs, see API Connect licenses.

  9. If upgrading from v10.0.4-ifix3, or upgrading from v10.0.1.7-eus (or higher): Disassociate and delete your Analytics services.
    Important: Upgrading to v10.0.5.x from v10.0.4-ifix3, or from v10.0.1.6-eus (or higher), will result in the deletion of existing analytics data. If you want to retain your analytics data then you must export it before upgrade. For instructions on exporting see Additional considerations for upgrading analytics from a version prior to v10.0.5.0.
    1. In Cloud Manager UI, click Topology.
    2. In the section for the Availability Zone that contains the Analytics service, locate the Gateway service that the Analytics service is associated with.
    3. Click the actions menu, and select Unassociate analytics service.
      Remember to disassociate each Analytics service from all Gateways.
    4. In the section for the Availability Zone that contains the Analytics services, locate each Analytics service and click Delete.
  10. Upgrade the Management subsystem:
    1. Run the upgrade on the Management subsystem:
      1. Install the Management subsystem files.
        apicup subsys install <subsystem_name> <subsystem_upgrade_tar_archive>
        Notes:
        • If you are adding one or more control plane files, specify each path and file name on the command line:
          apicup subsys install <subsystem_name> <subsystem_upgrade_tar_archive> <path_to_control_plane_file_1> <path_to_control_plane_file_n>
        • If you are upgrading to v10.0.5.x from v10.0.4-ifix3, or upgrading from v10.0.1.7-eus (or higher), add the argument --accept-data-loss to the command to indicate that you accept the loss of the Analytics data:
          apicup subsys install <subsystem_name> <subsystem_upgrade_tar_archive> --accept-data-loss

          Remember, when you include this parameter for the Management subsystem upgrade, you commit to the loss of data and the operation cannot be undone.

      2. Verify that the upgrade was successful by running a health check with the following command:
        apicup subsys health-check <subsystem_name>
      3. If the health check indicates that a reboot is needed, run the following commands:
        1. apic lock
        2. systemctl reboot
    2. Use apicops to validate the certificates.

      For information on the apicops tool, see The API Connect operations tool: apicops.

      1. Run the following command:
        apicops upgrade:stale-certs -n <namespace>
      2. Delete any stale certificates that are managed by cert-manager.
        If a certificate failed the validation and it is managed by cert-manager, you can delete the stale certificate secret, and let cert-manager regenerate it. Run the following command:
        kubectl delete secret <stale-secret> -n <namespace>
      3. Restart the corresponding pod so that it can pick up the new secret.
        To determine which pod to restart, see the certificate reference topics in the API Connect documentation, or identify the pods that mount the secret as sketched below.
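        One way to find the pods that mount a particular secret, using only kubectl (a sketch; the namespace and secret names are placeholders):

          kubectl -n <namespace> get pods -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.volumes[*].secret.secretName}{"\n"}{end}' | grep <stale-secret>

        Then delete the matching pod so that it is recreated with the regenerated secret:

          kubectl -n <namespace> delete pod <pod-name>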
    3. Restart all nats-server pods.
      1. Run the following command to connect as the API Connect administrator, replacing <ip_address> with the appropriate IP address:
        ssh <ip_address> -l apicadm
      2. When prompted, select Yes to continue connecting.
      3. When you are connected, run the following command to receive the necessary permissions for working directly on the appliance:
        sudo -i
      4. Restart all nats-server pods by running the following command:
        kubectl -n <namespace> delete po -l app.kubernetes.io/name=natscluster
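        Optionally, confirm that the nats-server pods are recreated and return to the Running state, for example:

          kubectl -n <namespace> get po -l app.kubernetes.io/name=natscluster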
  11. If there are multiple Management subsystems in the same project, set the Portal subsystem's platform-api and consumer-api certificates to match those used by the appropriate Management subsystem, to ensure that the Portal subsystem is associated with the correct Management subsystem.

    This step only applies if you installed more than one Management subsystem into a single project.

    A Portal subsystem can be associated with only one Management subsystem. To associate the upgraded Portal subsystem with the appropriate Management subsystem, manually set the mgmt-platform-api and mgmt-consumer-api certificates to match the ones used by the Management subsystem.

    1. Run the following commands to get the certificates from the Management subsystem:
      apicup certs get mgmt-subsystem-name platform-api -t cert > platform-api.crt
      apicup certs get mgmt-subsystem-name platform-api -t key > platform-api.key
      apicup certs get mgmt-subsystem-name platform-api -t ca > platform-api-ca.crt
      
      apicup certs get mgmt-subsystem-name consumer-api -t cert > consumer-api.crt
      apicup certs get mgmt-subsystem-name consumer-api -t key > consumer-api.key
      apicup certs get mgmt-subsystem-name consumer-api -t ca > consumer-api-ca.crt

      where mgmt-subsystem-name is the name of the specific Management subsystem that you want to associate the new Portal subsystem with.

    2. Run the following commands to set the Portal's certificates to match those used by the Management subsystem:
      apicup certs set ptl-subsystem-name platform-api Cert_file_path Key_file_path CA_file_path
      
      apicup certs set ptl-subsystem-name consumer-api Cert_file_path Key_file_path CA_file_path
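
      For example, using the certificate files that you exported in the previous step (an illustrative invocation; substitute your own Portal subsystem name):

      apicup certs set ptl-subsystem-name platform-api platform-api.crt platform-api.key platform-api-ca.crt
      apicup certs set ptl-subsystem-name consumer-api consumer-api.crt consumer-api.key consumer-api-ca.crt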

      For more information on apicup certificate commands, see Command reference.

  12. Upgrade the Portal subsystem.
    1. If you did not verify that your Portal customizations are compatible with Drupal 10, do that now.

      In API Connect 10.0.5.3, the Developer Portal moved from Drupal 9 to Drupal 10 (this upgrade also requires PHP 8.1). The upgrade tooling will update your Developer Portal sites; however, if you have any custom modules or themes, it is your responsibility to ensure their compatibility with Drupal 10 and PHP 8.1 before starting the upgrade. Review the Guidelines on upgrading your Developer Portal from Drupal 9 to Drupal 10 to ensure that any customizations to the Developer Portal are compatible with Drupal 10 and PHP 8.1.

    2. Install the Portal subsystem files:
      apicup subsys install <subsystem_name> <subsystem_upgrade_tar_archive>
      Notes:
      • If you are adding one or more control plane files, specify each path and file name on the command line:
        apicup subsys install <subsystem_name> <subsystem_upgrade_tar_archive> <path_to_control_plane_file_1> <path_to_control_plane_file_n>
      • If you are upgrading to v10.0.5.x from v10.0.4-ifix3, or upgrading from v10.0.1.7-eus (or higher), add the argument --accept-data-loss to the command to indicate that you accept the loss of the Analytics data:
        apicup subsys install <subsystem_name> <subsystem_upgrade_tar_archive> --accept-data-loss
    3. Verify that the upgrade was successful by running a health check with the following command:
      apicup subsys health-check <subsystem_name>
      Important: If the health check indicates that a reboot is required, first complete the upgrade for all Portal sites and verify that all sites were upgraded before running the reboot command as instructed at the end of this step.
    4. Ensure that the Portal sites were upgraded:
      1. Use the toolkit apic command to obtain the portal service ID and endpoint:
        apic portal-services:get -o admin -s <management_server_endpoint> \
                     --availability-zone availability-zone-default <portal-service-name> \
                     --output - --format json
      2. List the sites:
        apic --mode portaladmin sites:list -s <management_server_endpoint> \ 
                     --portal_service_name <portal-service-name> \ 
                     --format json

        Any sites that are currently upgrading are listed as UPGRADING. When all sites have finished upgrading, they should have the INSTALLED status and list the new platform version.

        See also: apic sites:list and Using the sites commands.

      3. After all sites are in INSTALLED state and have the new platform listed, run:
        apic --mode portaladmin platforms:list -s <management_server_endpoint> \
                      --portal_service_id <portal_service_id_from_above_command> \
                      --portal_service_endpoint <portal_service_endpoint_from_above_command> \ 
                      --format json

        The new version of the platform should be the only platform listed.

        See also: apic platforms:list and Using the platforms commands.

    5. If all Portal sites are upgraded and the health check indicates that a reboot is needed, then run the following commands:
      1. apic lock
      2. systemctl reboot
  13. Upgrade the Analytics subsystem:
    Important: Upgrading to v10.0.5.x from v10.0.4-ifix3, or from v10.0.1.6-eus (or higher), will result in the deletion of existing analytics data. If you want to retain your analytics data then you must export it before upgrade. For instructions on exporting see Additional considerations for upgrading analytics from a version prior to v10.0.5.0.
    1. Install the Analytics subsystem files:
      apicup subsys install <subsystem_name> <subsystem_upgrade_tar_archive>
      Notes:
      • If you are adding one or more control plane files, specify each path and file name on the command line:
        apicup subsys install <subsystem_name> <subsystem_upgrade_tar_archive> <path_to_control_plane_file_1> <path_to_control_plane_file_n>
      • If you are upgrading to v10.0.5.x from v10.0.4-ifix3, or upgrading from v10.0.1.7-eus (or higher), add the argument --accept-data-loss to the command to indicate that you accept the loss of the Analytics data:
        apicup subsys install <subsystem_name> <subsystem_upgrade_tar_archive> --accept-data-loss
    2. Verify that the upgrade was successful by running a health check with the following command:
      apicup subsys health-check <subsystem_name>
    3. If the health check indicates that a reboot is needed, run the following commands:
      1. apic lock
      2. systemctl reboot
  14. If upgrading from v10.0.4-ifix3, or upgrading from v10.0.1.6-eus (or higher): Enable the analytics service as explained in Enabling Analytics after upgrading.
  15. If upgrading from a release earlier than v10.0.5.3: If you want to use JWT security instead of mTLS, enable this feature as explained in Use JWT security instead of mTLS between subsystems.
  16. Complete a manual backup of the upgraded API Connect subsystems; see Backing up and restoring on VMware.
  17. Optional: Take a Virtual Machine (VM) snapshot of all your VMs; see Using VM snapshots for infrastructure backup and disaster recovery for details. This action requires a brief outage while all of the VMs in the subsystem cluster are shut down - do not take snapshots of running VMs, as they might not restore successfully.
  18. Complete the disaster preparation steps to ensure recovery of API Connect from a disaster event. See Preparing for a disaster.

What to do next

When you have upgraded all of the API Connect subsystems, upgrade your DataPower Gateway Service. See Upgrading DataPower Gateway Service.