Upgrading to the latest release on VMware

Follow this procedure to upgrade your API Connect deployment on VMware to the latest release.

Before you begin

  • Before starting an upgrade, review the supported upgrade paths from prior versions: Upgrade considerations on VMware.
  • Enable keep-alive in your SSH configuration to avoid problems during the upgrade. For information, consult the documentation for your version of SSH.
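    For example, with OpenSSH you can enable client-side keep-alive by adding the following settings to ~/.ssh/config on the machine from which you run the upgrade. This is a sketch only; option names and file locations can differ for other SSH implementations.

      Host *
        ServerAliveInterval 60
        ServerAliveCountMax 10

    With these settings, the client sends a keep-alive probe every 60 seconds and tolerates up to 10 unanswered probes before it drops the connection.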
Attention: Upgrading to v10.0.5.x from v10.0.4-ifix3, or from 10.0.1.6-ifix1-eus (or higher):
  • Upgrading to v10.0.5.x results in the deletion of existing analytics data. If you want to retain your analytics data, you must export it before upgrading. For instructions on exporting the data, see Additional considerations for upgrading analytics from a version prior to v10.0.5.0.

    When you upgrade subsystems in this scenario, you must include the --accept-data-loss parameter on all apicup install commands that result in an upgrade from the older releases. Since you upgrade the Management subsystem first, including the parameter at that point commits you to the loss of data, and the operation cannot be reversed.

  • Delete any template section from your analytics CR before upgrading. For instructions, see Update the analytics extra-values file.

About this task

You can upgrade the Management subsystem, Developer Portal subsystem, and Analytics subsystem. The Gateway subsystem remains available during the upgrade of the other subsystems.

The optional components API Connect Toolkit and API Connect Local Test Environment do not need to be upgraded. Instead, install the new version of each component after you upgrade the subsystems.

Important notes:

  • When installing the new files, be sure to use the new apicup installer.
  • The upgrade order for subsystems is important. Upgrade the subsystems in the following order: (1) Management, (2) Portal, (3) Analytics, (4) Gateway. Management must be upgraded first. Gateway must be upgraded after Management because upgrading the Management service before the Gateway ensures that any new policies and capabilities will be available to a previously registered Gateway service.
  • When you run the install, the program sends the compressed tar file, which contains the upgrade images, to all cluster members. The compressed tar file is about 2 GB, and transfer can take some time. When the install command exits, the compressed tar file has arrived at each member. The upgrade process is now underway, and might continue for some time depending on factors such as the size of your deployment, your network speed, and so on.
  • The apicup subsys install command automatically runs apicup health-check prior to attempting the upgrade. An error is displayed if a problem is found that will prevent successful upgrade.

    When troubleshooting a problem with an upgrade, you can optionally suppress the health check. See Skipping health check when re-running upgrade.

  • The certificate manager was upgraded in earlier versions of API Connect. If you are upgrading from an earlier release, then after you trigger the upgrade, the apim, lur, and taskmanager pods will be in a CrashLoopBackOff state because Postgres requires a certificate refresh. Correct the problem by completing the following steps:
    1. SSH into the server:
      1. Run the following command to connect as the API Connect administrator, replacing ip_address with the appropriate IP address:
        ssh ip_address -l apicadm
      2. When prompted, select Yes to continue connecting.
      3. When you are connected, run the following command to receive the necessary permissions for working directly on the appliance:
        sudo -i
    2. Run the following command to delete the Postgres pods, which refreshes the new certificate:
      kubectl get pod --no-headers=true | grep postgres | grep -v backup | awk '{print $1}' | xargs kubectl delete pod

Procedure

Attention: If you encounter an issue while performing the upgrade, see Troubleshooting upgrades on VMware.

  1. Complete the prerequisites:
    1. Ensure that your deployment meets the upgrade requirements. See Upgrade considerations on VMware.

    2. Complete the steps in Preparing to upgrade on VMware to ensure that your environment is ready for the upgrade.

    3. Verify that the API Connect deployment is healthy and fully operational. See Checking cluster health on VMware.
    4. Remove any stale upgrade files:
      apic clean-upgrade-files
    5. Verify sufficient free disk space.

      For each appliance node that you are planning to upgrade:

      1. SSH into the appliance, and switch to user root.
      2. Check disk space in /data/secure:
        df -h /data/secure

        Make sure the disk usage shown is below 70%. If it is not, add disk capacity. See Adding disk space to a VMware appliance.

      3. Check free disk space in /:
        df -h /

        Make sure usage is below 70%. If it is not, consider deleting or offloading older /var/log/syslog* files.
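      If you prefer to check both file systems in one pass, the following sketch (not part of the product tooling; it assumes GNU coreutils on the appliance) prints the usage percentage for each and flags anything at or above the 70% threshold:

        for fs in /data/secure /; do
          pct=$(df --output=pcent "$fs" | tail -1 | tr -dc '0-9')
          echo "$fs is ${pct}% used"
          if [ "$pct" -ge 70 ]; then echo "WARNING: $fs is above the 70% threshold"; fi
        done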

    6. Complete a manual backup of the API Connect subsystems as explained in Backing up and restoring on VMware.
      Notes:
      • Do not start an upgrade if a backup is scheduled to run within a few hours.
      • Do not perform maintenance tasks such as rotating key-certificates, restoring from a backup, or starting a new backup, at any time while the upgrade process is running.
  2. Back up the Postgres images on each of the Management Server VMs.
    Follow the steps for the version of API Connect that you are upgrading from:
    • 10.0.1.9 or later, and 10.0.5.1 or later:
      SSH to one of the Management Server VMs:
      ssh <ip_address of management vm> -l apicadm
      Sudo to root:
      sudo -i
      
      Run the following commands:
      
      postgres_operator=$(kubectl get pod -l app.kubernetes.io/name=postgres-operator -o name)
      
      ctr --namespace k8s.io images pull --plain-http=true `kubectl get  $postgres_operator  -o jsonpath='{.spec.containers[].env[?(@.name=="RELATED_IMAGE_PGO_RMDATA")].value}'`
      ctr --namespace k8s.io images export rmdata.tar `kubectl get  $postgres_operator  -o jsonpath='{.spec.containers[].env[?(@.name=="RELATED_IMAGE_PGO_RMDATA")].value}'`
      ctr --namespace k8s.io images export backrestrepo.tar `kubectl get  $postgres_operator  -o jsonpath='{.spec.containers[].env[?(@.name=="RELATED_IMAGE_PGO_BACKREST_REPO")].value}'`
      ctr --namespace k8s.io images export pgbouncer.tar `kubectl get  $postgres_operator  -o jsonpath='{.spec.containers[].env[?(@.name=="RELATED_IMAGE_CRUNCHY_PGBOUNCER")].value}'`
      ctr --namespace k8s.io images export postgres-ha.tar `kubectl get  $postgres_operator  -o jsonpath='{.spec.containers[].env[?(@.name=="RELATED_IMAGE_CRUNCHY_POSTGRES_HA")].value}'`
      
      postgres_pod=$(kubectl get pod -l role=master -o name)
      ctr --namespace k8s.io images export k8s-init.tar `kubectl get $postgres_pod  -o jsonpath='{.spec.initContainers[].image}'`
    • 10.0.1.8 or earlier, and 10.0.4.0-ifix3 or earlier:
      SSH to one of the Management Server VMs:
      ssh <ip_address of management vm> -l apicadm
      Sudo to root:
      sudo -i
      
      Run the following commands:
      
      postgres_operator=$(kubectl get pod -l app.kubernetes.io/name=postgres-operator -o name)
      
      docker pull `kubectl get $postgres_operator -o jsonpath='{.spec.containers[].env[?(@.name=="RELATED_IMAGE_PGO_RMDATA")].value}'`
      docker pull `kubectl get $postgres_operator -o jsonpath='{.spec.containers[].env[?(@.name=="RELATED_IMAGE_PGO_BACKREST_REPO")].value}'`
      docker pull `kubectl get $postgres_operator -o jsonpath='{.spec.containers[].env[?(@.name=="RELATED_IMAGE_CRUNCHY_PGBOUNCER")].value}'`
      docker pull `kubectl get $postgres_operator -o jsonpath='{.spec.containers[].env[?(@.name=="RELATED_IMAGE_CRUNCHY_POSTGRES_HA")].value}'`
      docker save `kubectl get $postgres_operator -o jsonpath='{.spec.containers[].env[?(@.name=="RELATED_IMAGE_PGO_RMDATA")].value}'` -o rmdata.tar
      docker save `kubectl get $postgres_operator -o jsonpath='{.spec.containers[].env[?(@.name=="RELATED_IMAGE_PGO_BACKREST_REPO")].value}'` -o backrestrepo.tar
      docker save `kubectl get $postgres_operator -o jsonpath='{.spec.containers[].env[?(@.name=="RELATED_IMAGE_CRUNCHY_PGBOUNCER")].value}'` -o pgbouncer.tar
      docker save `kubectl get $postgres_operator -o jsonpath='{.spec.containers[].env[?(@.name=="RELATED_IMAGE_CRUNCHY_POSTGRES_HA")].value}'` -o postgres-ha..tar
      
      postgres_pod=$(kubectl get pod -l role=master -o name)
      docker save `kubectl get $postgres_pod  -o jsonpath='{.spec.initContainers[].image}'` -o k8s-init.tar
      

    Remember to complete this step on every Management Server VM.

  3. Run the pre-upgrade health check:
    Attention: This is a required step. Failure to run this check before upgrading could result in problems during the upgrade.
    1. Run the health check on the Management subsystem:
      Note: If you have a 2DCDR deployment, run this check only on the active management subsystem.

      You can run the check on the warm-standby after it is made stand-alone. See Upgrading a two data center deployment.

      1. SSH into the server:
        1. Run the following command to connect as the API Connect administrator, replacing ip_address with the appropriate IP address:
          ssh ip_address -l apicadm
        2. When prompted, select Yes to continue connecting.
        3. When you are connected, run the following command to receive the necessary permissions for working directly on the appliance:
          sudo -i
      2. Verify that the apicops utility is installed by running the following command to check the current version of the utility:
        apicops --version

        If the response indicates that apicops is not available, install it now. See The API Connect operations tool: apicops in the API Connect documentation.

      3. Run the version-check script and verify that there are no errors:
        apicops version:pre-upgrade
    2. Run the health check on the Portal subsystem:
      1. SSH into the server.
      2. Verify that the apicops utility is installed by running the following command to check the current version of the utility:
        apicops --version

        If the response indicates that apicops is not available, install it now. See The API Connect operations tool: apicops in the API Connect documentation.

      3. Run the following command to check the system status, and verify that there are no errors:
        apicops upgrade:check-subsystem-status
    3. Run the health check on the Analytics subsystem:
      1. SSH into the server.
      2. Verify that the apicops utility is installed by running the following command to check the current version of the utility:
        apicops --version

        If the response indicates that apicops is not available, install it now. See The API Connect operations tool: apicops in the API Connect documentation.

      3. Run the following command to check the system status, and verify that there are no errors:
        apicops upgrade:check-subsystem-status
  4. Optional: Take a Virtual Machine (VM) snapshot of all your VMs; see Using VM snapshots for infrastructure backup and disaster recovery for details. This action does require a brief outage while all of the VMs in the subsystem cluster are shut down - do not take snapshots of running VMs, as they might not restore successfully. VM snapshots can offer a faster recovery when compared to redeploying OVAs and restoring from normal backups.
    Important: VM snapshots are not an alternative to the standard API Connect backups that are described in the previous steps. You must complete the API Connect backups in order to use the API Connect restore feature.
  5. Obtain the API Connect files:

    You can access the latest files from IBM Fix Central by searching for the API Connect product and your installed version.

    The following files are used during upgrade on VMware:

    IBM API Connect Security Signatures Bundle File
    The signatures_<version>.zip file contains the set of .asc files that are needed for signature verification, so that you can validate the downloaded product files.
    IBM API Connect <version> Management Upgrade File for VMware
    Management subsystem files for upgrade
    IBM API Connect <version> Developer Portal Upgrade File for VMware
    Developer Portal subsystem files for upgrade
    IBM API Connect <version> Analytics Upgrade File for VMware
    Analytics subsystem files for upgrade
    IBM API Connect <version> Install Assist for <operating_system_type>
    The apicup installation utility. Required for all installations on VMware.

    The following files are not used during upgrade, but can be installed as new installations to replace prior versions:

    IBM API Connect <version> Toolkit for <operating_system_type>
    Toolkit command line utility. Packaged standalone, or with API Designer or Loopback:
    • IBM API Connect <version> Toolkit for <operating_system_type>
    • IBM API Connect <version> Toolkit with Loopback for <operating_system_type>
    • IBM API Connect <version> Toolkit Designer with Loopback for <operating_system_type>

    The toolkit is not required for API Connect installation. After installation, you can download the toolkit from the Cloud Manager UI and API Manager UI. For more information, see Installing the toolkit.

    IBM API Connect <version> Local Test Environment
    Optional test environment. The LTE is not required for API Connect installation and can be optionally downloaded and installed at any time. For more information, see Testing an API with the Local Test Environment.
  6. If necessary, download (from the same Fix Pack page) any Control Plane files that are needed.

    Control Plane files provide support for specific Kubernetes versions. The IBM API Connect <version> Management Upgrade File for VMware file contains the latest Control Plane file. An upgrade from the most recent API Connect version to the current version does not need a separate Control Plane file. However, when upgrading from older versions of API Connect, you must install one or more control plane files to ensure that all current Kubernetes versions are supported.

    Consult the following table to see if your deployment needs one or more separate Control Plane files.

    Table 1. Control Plane files needed for upgrade
    Version to upgrade from:
    • 10.0.5.6
    • 10.0.5.5
    • 10.0.5.4
    • 10.0.5.3
    • 10.0.1.15-eus
    • 10.0.1.12-eus
    Instructions for upgrading to 10.0.5.7: No control-plane file needed.

    On the Fix Central page, type "control-plane" in the Filter fix details field to only show the list of control plane files in the results table. Then, click the Description header to sort the results so you can easily locate the control plane files that you need.

    For information on the Control Plane files used with older releases, see Control Plane files for earlier releases.

  7. Verify that the subsystem upgrade files are not corrupted.
    1. Run the following command separately for the Management, the Developer Portal, and the Analytics upgrade files:
      • Mac or Linux:
        sha256sum <upgrade-file-name>

        Example:

        sha256sum upgrade_management_v10.0.5.7
      • Windows:
        C:\> certUtil -hashfile C:\<upgrade-file-name> SHA256

        Example:

        C:\> certUtil -hashfile C:\upgrade_management_v10.0.5.7 SHA256
    2. Compare the result with the checksum values to verify that the files are not corrupted.

      If the checksum does not match the value for the corresponding version and subsystem, then the upgrade file is corrupted. Delete it and download a new copy of the upgrade file. The following list shows the checksum values for the current release.

      Checksum values:

      10.0.5.7 analytics : 8f5139c5ec9cba77649449f62900ea44f95af4229c80cec45da9d33e4dac1a62
      10.0.5.7 management : 89adc4bf59aef4d0a43f3b738cfe9bbd3cd6035d3a832e6b1e4ca5ba21f2f1be
      10.0.5.7 portal : eb0f1d50050b8ae0bf92fa6a26147d1a109f0d44c740751c4ff56590ccfe5606

      For information on the checksum files used with older releases, see Checksum values for earlier releases.
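      On Mac or Linux, you can also let sha256sum perform the comparison for you. For example, using the 10.0.5.7 Management upgrade file and its checksum from the list above (a convenience sketch, not a required step):

      echo "89adc4bf59aef4d0a43f3b738cfe9bbd3cd6035d3a832e6b1e4ca5ba21f2f1be  upgrade_management_v10.0.5.7" | sha256sum -c -

      The command reports OK when the file matches the expected checksum, and FAILED otherwise.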

  8. Install the installation utility.
    1. Locate the apicup installation utility file for your operating system, and place it in your project directory.
    2. Rename the file for your OS type to apicup. Note that the instructions in this documentation refer to apicup.
    3. OSX and Linux® only: Make the apicup file an executable file by entering chmod +x apicup.
    4. Set your path to the location of your apicup file.
      OSX and Linux
      export PATH=$PATH:/Users/your_path/
      Windows
      set PATH=c:\your_path;%PATH%
    5. From within your project directory, specify the API Connect license:
      apicup licenses accept <License_ID>

      The <License_ID> is specific to the API Connect Program Name you purchased. To view the supported license IDs, see API Connect licenses.

  9. If upgrading from v10.0.4-ifix3, or upgrading from v10.0.1.7-eus (or higher): Disassociate and delete your Analytics services.
    Important: Upgrading to v10.0.5.x from v10.0.4-ifix3, or from v10.0.1.6-eus (or higher), results in the deletion of existing analytics data. If you want to retain your analytics data, you must export it before upgrading. For instructions on exporting the data, see Additional considerations for upgrading analytics from a version prior to v10.0.5.0.
    1. In Cloud Manager UI, click Topology.
    2. In the section for the Availability Zone that contains the Analytics service, locate the Gateway service that the Analytics service is associated with.
    3. Click the actions menu, and select Unassociate analytics service.
      Remember to disassociate each Analytics service from all Gateways.
    4. In the section for the Availability Zone that contains the Analytics services, locate each Analytics service and click Delete.
  10. Upgrade the Management subsystem:
    1. Run the upgrade on the Management subsystem:
      1. Install the Management subsystem files.
        apicup subsys install <subsystem_name> <subsystem_upgrade_tar_archive>
        Notes:
        • If you are adding one or more control plane files, specify each path and file name on the command line:
          apicup subsys install <subsystem_name> <subsystem_upgrade_tar_archive> <path_to_control_plane_file_1> <path_to_control_plane_file_n>
        • If you are upgrading to v10.0.5.x from v10.0.4-ifix3, or upgrading from v10.0.1.7-eus (or higher), add the argument --accept-data-loss to the command to indicate that you accept the loss of the Analytics data:
          apicup subsys install <subsystem_name> <subsystem_upgrade_tar_archive> --accept-data-loss

          Remember, when you include this parameter for the Management subsystem upgrade, you commit to the loss of data and the operation cannot be undone.

      2. Verify that the upgrade was successful by running a health check with the following command:
        apicup subsys health-check <subsystem_name>
      3. If upgrading to 10.0.5.6, verify that the health check did not return a false-positive response.
        Note: This step applies only if you upgraded to API Connect 10.0.5.6; it addresses a known issue that will be corrected in a later release.

        Run the following command to get the current version of API Connect on the upgraded subsystem:

        kubectl get apic

        If the health check indicates a successful upgrade but the subsystem is still running the older version of API Connect, skip the remaining steps in this procedure and see False positive result from health check after upgrading to 10.0.5.6 for troubleshooting instructions.

        If the subsystem is running the newer version of API Connect then the upgrade was successful; continue with the remaining steps in this procedure.

      4. If the health check indicates that a reboot is needed, run the following command:
        systemctl reboot
    2. Use apicops to validate the certificates.

      For information on the apicops tool, see The API Connect operations tool: apicops.

      1. Run the following command:
        apicops upgrade:stale-certs -n <namespace>
      2. Delete any stale certificates that are managed by cert-manager.
        If a certificate failed the validation and it is managed by cert-manager, you can delete the stale certificate secret, and let cert-manager regenerate it. Run the following command:
        kubectl delete secret <stale-secret> -n <namespace>
      3. Restart the corresponding pod so that it can pick up the new secret.
        To determine which pod to restart, see the following topics:
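        As a generic illustration only (not a substitute for the referenced topics), restarting a pod after its secret is regenerated typically means deleting the pod so that Kubernetes re-creates it with the new secret mounted; the pod name here is a placeholder:

          kubectl -n <namespace> delete pod <pod-that-uses-the-regenerated-secret>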
    3. Restart all nats-server pods.
      1. Run the following command to connect as the API Connect administrator, replacing <ip_address> with the appropriate IP address:
        ssh <ip_address> -l apicadm
      2. When prompted, select Yes to continue connecting.
      3. When you are connected, run the following command to receive the necessary permissions for working directly on the appliance:
        sudo -i
      4. Restart all nats-server pods by running the following command:
        kubectl -n <namespace> delete po -l app.kubernetes.io/name=natscluster
    4. If needed, delete old Postgres client certificates.
      If you are upgrading from 10.0.1.x or 10.0.4.0-ifix1, or if you previously installed any of those versions before upgrading to 10.0.5.x, there might be old Postgres client certificates. To verify, run the following command:
      kubectl -n <namespace> get certs | grep db-client

      For example, if you see that both -db-client-apicuser and apicuser exist, apicuser is no longer in use. Remove the old certificates by running one of the following commands, depending on how many of the old certificates remain in your system:

      kubectl -n <namespace> delete certs  apicuser pgbouncer primaryuser postgres replicator

      or:

      kubectl -n <namespace> delete certs  apicuser pgbouncer postgres replicator
    5. If you see that the Postgres pods are in the imagePullBackOff state, import the images that you saved in step 2 to every Management Server VM:
      Note: Depending on your environment, not all of the image tar files need to be imported; however, there is no harm in importing all of the saved image tar files.
      1. SSH into the VM as root.
      2. Import each saved image into the Containerd registry. For example:
        ctr --namespace k8s.io image import crunchy-pgbouncer.tar --digests=true
      3. Re-tag the import-<datestamp>@sha256:nnn images to their appropriate tags and digests. For example:
        ctr --namespace k8s.io image tag <source> <target>

      Remember to repeat this step for every Management Server VM.
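      For example, to see which digest-only images were created by the import and to re-tag one of them, you might run commands like the following (the image name, tag, and digest shown are placeholders; use the values that match the images you exported in step 2):

        ctr --namespace k8s.io images ls | grep import-
        ctr --namespace k8s.io images tag import-<datestamp>@sha256:<digest> <registry>/crunchy-pgbouncer:<expected-tag>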

  11. If there are multiple Management subsystems in the same project, set the Portal subsystem's platform-api and consumer-api certificates to match those used by the appropriate Management subsystem to ensure that the Portal subsystem is correctly associated with that Management subsystem.

    This step only applies if you installed more than one Management subsystem into a single project.

    A Portal subsystem can be associated with only one Management subsystem. To associate the upgraded Portal subsystem with the appropriate Management subsystem, manually set the mgmt-platform-api and mgmt-consumer-api certificates to match the ones used by the Management subsystem.

    1. Run the following commands to get the certificates from the Management subsystem:
      apicup certs get mgmt-subsystem-name platform-api -t cert > platform-api.crt
      apicup certs get mgmt-subsystem-name platform-api -t key > platform-api.key
      apicup certs get mgmt-subsystem-name platform-api -t ca > platform-api-ca.crt
      
      apicup certs get mgmt-subsystem-name consumer-api -t cert > consumer-api.crt
      apicup certs get mgmt-subsystem-name consumer-api -t key > consumer-api.key
      apicup certs get mgmt-subsystem-name consumer-api -t ca > consumer-api-ca.crt

      where mgmt-subsystem-name is the name of the specific Management subsystem that you want to associate the new Portal subsystem with.

    2. Run the following commands to set the Portal's certificates to match those used by the Management subsystem:
      apicup certs set ptl-subsystem-name platform-api Cert_file_path Key_file_path CA_file_path
      
      apicup certs set ptl-subsystem-name consumer-api Cert_file_path Key_file_path CA_file_path
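      For example, using the files that were generated by the commands in the previous substep:

      apicup certs set ptl-subsystem-name platform-api platform-api.crt platform-api.key platform-api-ca.crt
      apicup certs set ptl-subsystem-name consumer-api consumer-api.crt consumer-api.key consumer-api-ca.crt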
  12. Upgrade the Portal subsystem.
    1. If you did not verify that your Portal customizations are compatible with Drupal 10, do that now.

      In API Connect 10.0.5.3, the Developer Portal moved from Drupal 9 to Drupal 10 (this upgrade also requires PHP 8.1). The upgrade tooling will update your Developer Portal sites; however, if you have any custom modules or themes, it is your responsibility to ensure their compatibility with Drupal 10 and PHP 8.1 before starting the upgrade. Review the Guidelines on upgrading your Developer Portal from Drupal 9 to Drupal 10 to ensure that any customizations to the Developer Portal are compatible with Drupal 10 and PHP 8.1.

    2. Install the Portal subsystem files:
      apicup subsys install <subsystem_name> <subsystem_upgrade_tar_archive>
      Notes:
      • If you are adding one or more control plane files, specify each path and file name on the command line:
        apicup subsys install <subsystem_name> <subsystem_upgrade_tar_archive> <path_to_control_plane_file_1> <path_to_control_plane_file_n>
      • If you are upgrading to v10.0.5.x from v10.0.4-ifix3, or upgrading from v10.0.1.7-eus (or higher), add the argument --accept-data-loss to the command to indicate that you accept the loss of the Analytics data:
        apicup subsys install <subsystem_name> <subsystem_upgrade_tar_archive> --accept-data-loss
    3. Verify that the upgrade was successful by running a health check with the following command:
      apicup subsys health-check <subsystem_name>
      Important: If the health check indicates that a reboot is required, first complete the upgrade for all Portal sites and verify that all sites were upgraded before running the reboot command at the end of this step.
    4. If upgrading to 10.0.5.6, verify that the health check did not return a false-positive response.
      Note: This step applies only if you upgraded to API Connect 10.0.5.6; it addresses a known issue that will be corrected in a later release.

      Run the following command to get the current version of API Connect on the upgraded subsystem:

      kubectl get apic

      If the health check indicates a successful upgrade but the subsystem is still running the older version of API Connect, skip the remaining steps in this procedure and see False positive result from health check after upgrading to 10.0.5.6 for troubleshooting instructions.

      If the subsystem is running the newer version of API Connect then the upgrade was successful; continue with the remaining steps in this procedure.

    5. Run the following toolkit commands to ensure that the Portal site upgrades are complete:
      1. Log in as an admin user:
        apic login -s <server_name> --realm admin/default-idp-1 --username admin --password <password>
      2. Get the portal service ID and endpoint:
        apic portal-services:get -o admin -s <management_server_endpoint> \
                     --availability-zone availability-zone-default <portal-service-name> \
                     --output - --format json
      3. List the sites:
        apic --mode portaladmin sites:list -s <management_server_endpoint> \
                     --portal_service_name <portal-service-name> \
                     --format json

        Any sites currently upgrading display the UPGRADING status; any site that completed its upgrade displays the INSTALLED status and the new platform version. Verify that all sites display the INSTALLED status before proceeding.

        For more information on the sites command, see apic sites:list and Using the sites commands.

      4. After all sites are in INSTALLED state and have the new platform listed, run:
        apic --mode portaladmin platforms:list -s <server_name> --portal_service_name <portal_service_name>
        

        Verify that the new version of the platform is the only platform listed.

        For more information on the platforms command, see apic platforms:list and Using the platforms commands.

    6. If all Portal sites are upgraded and the health check indicates that a reboot is needed, then run the following command:
      systemctl reboot
  13. Upgrade the Analytics subsystem
    Important: Upgrading to v10.0.5.x from v10.0.4-ifix3, or from v10.0.1.6-eus (or higher), results in the deletion of existing analytics data. If you want to retain your analytics data, you must export it before upgrading. For instructions on exporting the data, see Additional considerations for upgrading analytics from a version prior to v10.0.5.0.
    1. Install the Analytics subsystem files:
      apicup subsys install <subsystem_name> <subsystem_upgrade_tar_archive>
      Notes:
      • If you are adding one or more control plane files, specify each path and file name on the command line:
        apicup subsys install <subsystem_name> <subsystem_upgrade_tar_archive> <path_to_control_plane_file_1> <path_to_control_plane_file_n>
      • If you are upgrading to v10.0.5.x from v10.0.4-ifix3, or upgrading from v10.0.1.7-eus (or higher), add the argument --accept-data-loss to the command to indicate that you accept the loss of the Analytics data:
        apicup subsys install <subsystem_name> <subsystem_upgrade_tar_archive> --accept-data-loss
    2. Verify that the upgrade was successful by running a health check with the following command:
      apicup subsys health-check <subsystem_name>
    3. If upgrading to 10.0.5.6, verify that the health check did not return a false-positive response.
      Note: This step applies only if you upgraded to API Connect 10.0.5.6; it addresses a known issue that will be corrected in a later release.

      Run the following command to get the current version of API Connect on the upgraded subsystem:

      kubectl get apic

      If the health check indicates a successful upgrade but the subsystem is still running the older version of API Connect, skip the remaining steps in this procedure and see False positive result from health check after upgrading to 10.0.5.6 for troubleshooting instructions.

      If the subsystem is running the newer version of API Connect then the upgrade was successful; continue with the remaining steps in this procedure.

    4. If the health check indicates that a reboot is needed, run the following command:
      systemctl reboot
  14. If you are upgrading from v10.0.4-ifix3, or upgrading from v10.0.1.6-eus (or higher): Enable the analytics service as explained in Enabling Analytics after upgrading.
  15. If you are upgrading from a release earlier than v10.0.5.3: If you want to use JWT security instead of mTLS, enable this feature as explained in Use JWT security instead of mTLS between subsystems.
  16. Complete a manual backup of the upgraded API Connect subsystems; see Backing up and restoring on VMware.
  17. Optional: Take a Virtual Machine (VM) snapshot of all your VMs; see Using VM snapshots for infrastructure backup and disaster recovery for details. This action requires a brief outage while all of the VMs in the subsystem cluster are shut down - do not take snapshots of running VMs, as they might not restore successfully.
  18. Complete the disaster preparation steps to ensure recovery of API Connect from a disaster event. See Preparing for a disaster.

What to do next

When you have upgraded all of the API Connect subsystems, upgrade your DataPower Gateway Service. See Upgrading DataPower Gateway Service.