Upgrading the Management subsystem from v2018 to v10

Upgrade the management subsystem to v10 by installing the latest v10 management subsystem and loading the management database information from 2018.

Before you begin

You should have previously completed the following steps in Upgrading v2018 subsystems to v10:
  • Download IBM® API Connect <version> scripts and assets required for v2018 > <Latest_v10_version> side-by-side upgrades and IBM® API Connect <version> Management for VMware from Fix Central.
  • Install apicup installation utility.

About this task

Install the latest v10 management subsystem, load the 2018 management database information, and run a cleanup script after the load completes.


  1. Use the v10 version of apicup to map your 2018 configuration settings from apiconnect-up.yml to the v10 configuration file apiconnect-up-v10.yaml:
    apicup subsys get <subsystem_name>
    Note: If your v2018 subsystem name was longer than 63 characters, and you did not manually shorten it before upgrading, the upgrade shortened it automatically. To view the new subsystem name:
    apicup subsys list

    For more information, see Automatic shortening of subsystem names with greater than 63 characters.

    1. Review the output from the command. Some settings are converted automatically from their v2018 values, but review all values to ensure that they are still valid.
    2. Update the license value by running the following command:
      apicup licenses accept <License_ID>

      The <License_ID> is specific to the API Connect program name that you purchased. To view the supported license IDs, see API Connect licenses.

    3. Use apicup to provide a completely new backup location for the management subsystem in v10. See Reconfiguring backup settings for the management subsystem.
      • For the management subsystem, you cannot use the same backup locations as were used for 2018.
        • For s3 backups you must specify a different s3 bucket for the version 10 deployment, to avoid having version 2018 backups mixed with version 10 backups for the management server.
        • For SFTP backups, create a different folder for version 10 backups, or move the 2018 backups to another location.
      • For v10, there is a new (additional) setting, database-backup-s3provider, which is used with s3 backups. If you use s3 backups, this value must be set before installation; the management subsystem might not come online without it.
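The backup reconfiguration above can be sketched with apicup as follows. This is illustration only: every value shown (provider, host, bucket, credentials) is a placeholder for your own environment, and the database-backup-* setting names should be confirmed against Reconfiguring backup settings for the management subsystem.

```shell
# Sketch only: all values are placeholders; confirm the setting names against
# "Reconfiguring backup settings for the management subsystem".
apicup subsys set <management_subsystem_name> \
  database-backup-protocol=objstore \
  database-backup-s3provider=aws \
  database-backup-host=s3.<region>.amazonaws.com/<region> \
  database-backup-path=<new_v10_bucket>/backups \
  database-backup-credentials=<s3_credentials_secret>
```

Note that the bucket (database-backup-path) must be different from any bucket used for v2018 backups, as described above.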
  2. Set a temporary configuration to support loading the v2018 data:
    1. Place the following text into a new YAML file with a name of your choosing:
        - name: apim
          enabled: false
        - name: apim-schema
          enabled: false
        - name: apim-data
          enabled: false
        - name: taskmanager
          enabled: false
        - name: lur
          enabled: false
        - name: lur-schema
          enabled: false
        - name: lur-data
          enabled: false
        - name: analytics-proxy
          enabled: false
        - name: portal-proxy
          enabled: false
        - name: billing
          enabled: false
        - name: juhu
          enabled: false
        - name: websocket-proxy
          enabled: false
        - name: ui
          enabled: false
    2. Set the new YAML file as an extra-values configuration file:
      apicup subsys set <management_subsystem_name> extra-values-file=<path-to-extra-values-file>/<your_extra_values_file>.yaml
  3. Create your ISO file in the v2018 project directory that you copied from the old deployment:
    apicup subsys install mgmt --out mgmtplan-out

    The --out parameter and value are required.

    In this example, the ISO file is created in the myProject/mgmtplan-out directory.

  4. Deploy the ISO and IBM API Connect <version> Management for VMware ova file. Go to Deploying the Management subsystem OVA file.
    Note: Wait for the cluster to form - this can take some time.
  5. Verify the installation:
    apicup subsys health-check <management_subsystem_name>
    Note: This is not a standard install. Some services are disabled during this process, so there will be some pods missing. As a result, you cannot use the Cloud Manager UI to verify the management subsystem until the data from v2018 is loaded.
  6. Copy (scp) the entire dr-upgrade.tgz to the v10 management system VM. Extract the file (tar -xf) and verify that the directory contains the following files:
    • dr-upgrade-load.py

      A script to load the v2018 data.

    • dr-upgrade-cleanup.py

      A script to clean up after the v2018 data is loaded.

    • 2018-extracted-data.zip

      The data from your v2018 deployment.

    The directory contains other template files that are used by the scripts.
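The copy-and-extract step above can be sketched as follows. The VM address is a placeholder, and the extraction directory name assumes that the archive unpacks into a dr-upgrade/ directory:

```shell
# Sketch: <v10_vm_ip> is a placeholder for your v10 management VM address.
scp dr-upgrade.tgz apicadm@<v10_vm_ip>:/home/apicadm/

# Then, on the VM (as root):
tar -xf dr-upgrade.tgz
ls dr-upgrade/
# Expect: dr-upgrade-load.py  dr-upgrade-cleanup.py  2018-extracted-data.zip
```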

  7. Load the v2018 data into the v10 database.
    1. ssh into the appliance as root.
    2. Change directory to the dr-upgrade directory.

      The dr-upgrade-load.py script requires 2018-extracted-data.zip to be in the same directory.

    3. Run the python script dr-upgrade-load.py:
      # sudo -i
      # python3 dr-upgrade-load.py
      Tip: You can optionally run the script in the background:
      nohup python3 dr-upgrade-load.py &> load.log &
      tail -f load.log
      • The length of time needed for the load script to complete scales with the size of the data in your system. The upgrade-load job creates a Kubernetes pod named apiconnect-v10-upgrade-load-<xxxxxx>. You can follow the Kubernetes logs if you want to track the progress of the script.
      • The python script outputs the location (volume_path) of the data/logs directories to backup.
      • Once dr-upgrade-load.py returns successfully, the load is complete.
        Note: If dr-upgrade-load.py encounters an error:
        1. ssh into the appliance as root, and gather the upgrade logs. Check pod logs and PV logs ({volume_path}).

          To create a zip file with the contents of directories with the logs:

          zip -r upgrade-pvc.zip {volume_path}
        2. Contact IBM Support for guidance on how to resolve the errors.
        3. After the errors are resolved, run the clean up script:
          python3 dr-upgrade-cleanup.py --load-error

          Be sure to include the required parameter --load-error. Note that the health-check will fail until the load script is run again in the next step.

        4. Re-run dr-upgrade-load.py
          python3 dr-upgrade-load.py --rerun

          Be sure to include the required parameter --rerun.

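To track the progress of the load job described in step 7, you can follow the logs of its pod. The pod name has a generated suffix, so look it up first; the pod name shown is a placeholder:

```shell
# The load pod name includes a generated suffix; find it, then follow its logs.
kubectl get pods | grep apiconnect-v10-upgrade-load
kubectl logs -f <apiconnect-v10-upgrade-load-pod_name>
```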
  8. Wait for the job to complete. Then return to your local machine (the one used for configuration) and use apicup to unset the extra-values file:

    apicup subsys set <management_subsystem_name> extra-values-file=
  9. Set an extra-values file if needed for your customizations:
    apicup subsys set <management_subsystem_name> extra-values-file=<your_extra-values-file>

    For example, see Setting rate limits for public APIs on the management service.

  10. Install the subsystem:
    apicup subsys install <management_subsystem_name>
  11. Verify that all management resources have come up:
    apicup subsys health-check <management_subsystem_name>

    If the subsystem is healthy, the health-check command returns silently with a 0 return code.

    If an error is encountered, see Upgrade Limitations.

    Your load of v2018 data is now complete.
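Because health-check returns silently with a 0 return code when the subsystem is healthy, the result can be checked in a script; a minimal sketch:

```shell
# Exit code 0 means healthy; any other value indicates a problem.
if apicup subsys health-check <management_subsystem_name>; then
  echo "management subsystem is healthy"
else
  echo "health-check failed; see Upgrade Limitations"
fi
```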

  12. When the dr-upgrade-load job has succeeded, back up the PVC data and logs from the dr-upgrade-load job:
    1. ssh into the appliance as root.
    2. Find the path to the volume:
      kubectl get pv | grep 'pv-claim-apiconnect-v10-upgrade' | awk '{print $1}' | xargs kubectl get pv -ojsonpath='{.spec.local.path}'
    3. Zip up the directory:
      zip -r upgrade-pvc.zip {volume_path}
    4. Run the cleanup script:
      python3 dr-upgrade-cleanup.py

      This script cleans up the following:

      • Load job apiconnect-v10-upgrade-load
      • Load PVC pv-claim-apiconnect-v10-upgrade
      • 2018-extracted-data.zip
      • The unpacked contents of 2018-extracted-data.zip on the PV
  13. Restart all nats-server pods.
    1. SSH into the server:
      1. Run the following command to connect as the API Connect administrator, replacing ip_address with the appropriate IP address:
         ssh ip_address -l apicadm
      2. When prompted, select Yes to continue connecting.
      3. When you are connected, run the following command to receive the necessary permissions for working directly on the appliance:
        sudo -i
    2. Restart all nats-server pods by running the following command:
      kubectl -n <namespace> delete po -l app.kubernetes.io/name=natscluster
  14. Next, upgrade the other subsystems. Return to Upgrading v2018 subsystems to v10.