Upgrading the Management subsystem from v2018 to v10
Upgrade the management subsystem to v10 by installing the latest v10 management subsystem and loading the management database information from 2018.
Before you begin
- Download IBM® API Connect <version> scripts and assets required for v2018 > <Latest_v10_version> side-by-side upgrades and IBM® API Connect <version> Management for VMware from Fix Central.
- Install the apicup installation utility.
About this task
Install the latest v10 management subsystem, load the 2018 management database information, and run a cleanup script after the load completes.
Procedure
- Use the v10 version of apicup to map your 2018 configuration settings from apiconnect-up.yml to the v10 configuration file apiconnect-up-v10.yaml:
apicup subsys get <subsystem_name>
Note: If your subsystem name on 2018 was greater than 63 characters, and you did not manually shorten it prior to upgrading, the upgrade shortened it automatically. To view the new subsystem name:
apicup subsys list
For more information, see Automatic shortening of subsystem names with greater than 63 characters.
- Review the output from the command. Note the following:
- Some settings will be automatically converted from their 2018 values, but you should review all values to ensure they are still valid.
- Update the license value by running the following command:
apicup licenses accept <License_ID>
The <License_ID> is specific to the API Connect program name that you purchased. To view the supported license IDs, see API Connect licenses.
- Use apicup to provide a completely new backup location for the management subsystem in v10. See Reconfiguring backup settings for the management subsystem; an illustrative sketch of the commands follows this list.
Important:
- For the management subsystem, you cannot use the same backup locations as were used for 2018.
- For S3 backups, you must specify a different S3 bucket for the version 10 deployment, to avoid having version 2018 backups mixed with version 10 backups for the management server.
- For SFTP backups, create a different folder for version 10 backups, or move the 2018 backups to another location.
- For v10, there is a new (additional) setting, database-backup-s3provider, which is used with S3 backups. This value must be set before install if S3 backups are being used. The management subsystem might not come online without this parameter being set.
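For example, the following is a minimal sketch of the kind of apicup commands involved for S3 backups. Only database-backup-s3provider is named in this topic; treat the other setting names and all values as assumptions to be verified against Reconfiguring backup settings for the management subsystem:
# Hypothetical subsystem name, provider, endpoint, and bucket; substitute your own values.
# Setting names other than database-backup-s3provider are assumptions from the
# backup reconfiguration topic; verify them before use.
apicup subsys set mgmt database-backup-protocol=objstore
apicup subsys set mgmt database-backup-s3provider=aws
apicup subsys set mgmt database-backup-host=s3.us-east-1.amazonaws.com/us-east-1
apicup subsys set mgmt database-backup-path=my-new-v10-bucket/mgmt-backups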
- Set a temporary configuration to support loading the v2018 data:
- Place the following text into a new YAML file, with a name of your choosing:
spec:
  template:
  - name: apim
    enabled: false
  - name: apim-schema
    enabled: false
  - name: apim-data
    enabled: false
  - name: taskmanager
    enabled: false
  - name: lur
    enabled: false
  - name: lur-schema
    enabled: false
  - name: lur-data
    enabled: false
  - name: analytics-proxy
    enabled: false
  - name: portal-proxy
    enabled: false
  - name: billing
    enabled: false
  - name: juhu
    enabled: false
  - name: websocket-proxy
    enabled: false
  - name: ui
    enabled: false
- Set the new YAML file as an extra-values configuration file:
apicup subsys set <management_subsystem_name> extra-values-file=<path-to-extra-values-file>/<your_extra_values_file>.yaml
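Optionally, confirm that the setting was stored; a minimal check that assumes apicup subsys get lists the extra-values-file value:
apicup subsys get <management_subsystem_name> | grep extra-values-file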
- Create your ISO file in the v2018 project directory that you copied from the old deployment:
apicup subsys install mgmt --out mgmtplan-out
The --out parameter and value are required. In this example, the ISO file is created in the myProject/mgmtplan-out directory.
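Optionally, confirm that the plan output directory was created; a quick check, assuming the myProject example above:
ls myProject/mgmtplan-out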
- Deploy the ISO and IBM API Connect <version> Management for VMware OVA file. Go to Deploying the Management subsystem OVA file.
Note: Wait for the cluster to form - this can take some time.
- Verify the installation:
apicup subsys health-check <management_subsystem_name>
Note: This is not a standard install. Some services are disabled during this process, so there will be some pods missing. As a result, you cannot use the Cloud Manager UI to verify the management subsystem until the data from v2018 is loaded.
- Copy (scp) the entire dr-upgrade.tgz to the v10 management system VM. Extract the file (tar -xf) and verify that the directory contains the following files:
dr-upgrade-load.py
A script to load the v2018 data.
dr-upgrade-cleanup.py
A script to clean up after the load of the v2018 data.
2018-extracted-data.zip
The data from your v2018 deployment.
The directory contains other template files that are used by the scripts.
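For example, a minimal sketch of the copy and extraction, assuming the apicadm user shown later in this procedure, a hypothetical appliance address and home directory, and that the archive unpacks into the dr-upgrade directory referenced below:
# Hypothetical address and target path; substitute your own values.
scp dr-upgrade.tgz apicadm@<appliance_ip>:/home/apicadm/
# Then, on the appliance, extract and inspect the contents.
tar -xf /home/apicadm/dr-upgrade.tgz
ls dr-upgrade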
- Now load the 2018 data into the v10 database.
- ssh into the appliance as root.
- Change directory to the dr-upgrade directory. The dr-upgrade-load.py script requires 2018-extracted-data.zip to be in the same directory.
- Run the python script dr-upgrade-load.py:
sudo -i
python3 dr-upgrade-load.py
Tip: You can optionally run the script in the background:
nohup python3 dr-upgrade-load.py &> load.log &
tail -f load.log
- The length of time needed for the load script to complete scales with the size of the data in your system. The upgrade-load job creates a Kubernetes pod named apiconnect-v10-upgrade-load-<xxxxxx>. You can follow the Kubernetes logs if you want to track the progress of the script (see the sketch after this list).
- The python script outputs the location (volume_path) of the data/logs directories to back up.
- Once dr-upgrade-load.py returns successfully, the load is complete.
Note: If dr-upgrade-load.py encounters an error:
- ssh into the appliance as root, and gather the upgrade logs. Check the pod logs and the PV logs ({volume_path}). To create a zip file with the contents of the log directories:
zip -r upgrade-pvc.zip {{volume_path}}
- Contact IBM Support for guidance on how to resolve the errors.
- After the errors are resolved, run the cleanup script:
python3 dr-upgrade-cleanup.py --load-error
Be sure to include the required parameter --load-error. Note that the health-check will fail until the load script is run again in the next step.
- Re-run dr-upgrade-load.py:
python3 dr-upgrade-load.py --rerun
Be sure to include the required parameter --rerun.
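For example, a minimal sketch for following the load job's progress from the appliance; the <xxxxxx> suffix is generated at run time, and <namespace> is the namespace of your management subsystem:
# Find the generated pod name, then stream its logs.
kubectl -n <namespace> get pods | grep apiconnect-v10-upgrade-load
kubectl -n <namespace> logs -f apiconnect-v10-upgrade-load-<xxxxxx>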
- Wait for the completion of the job, then return to your local machine (used for configuration) and use apicup to unset the extra-values file. To unset:
apicup subsys set <management_subsystem_name> extra-values-file=
- Set an extra-values file if needed for your customizations:
apicup subsys set <management_subsystem_name> extra-values-file=<your_extra-values-file>
For example, see Setting rate limits for public APIs on the management service.
- Install the subsystem:
apicup subsys install <management_subsystem_name>
- Verify that all management resources have come up:
apicup subsys health-check <management_subsystem_name>
If the subsystem is healthy, the health-check command returns silently with a 0 return code. If an error is encountered, see Upgrade Limitations.
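For example, a quick way to surface the silent result in a POSIX shell:
apicup subsys health-check <management_subsystem_name>
echo $?    # 0 indicates a healthy subsystem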
Your load of v2018 data is now complete.
- When the dr-upgrade-load job has succeeded, back up the PVC data and logs from the dr-upgrade-load job:
- ssh into the appliance as root.
- Find the path to the volume:
kubectl get pv | grep 'pv-claim-apiconnect-v10-upgrade' | awk '{print $1}' | xargs kubectl get pv -ojsonpath='{.spec.local.path}'
- Zip up the directory:
zip -r upgrade-pvc.zip {{volume_path}}
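For example, a minimal sketch that chains the two sub-steps in one shell session; the volume_path variable is introduced here only for illustration:
# Capture the PV path used by the upgrade-load job, then archive its contents.
volume_path=$(kubectl get pv | grep 'pv-claim-apiconnect-v10-upgrade' | awk '{print $1}' | xargs kubectl get pv -ojsonpath='{.spec.local.path}')
zip -r upgrade-pvc.zip "${volume_path}"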
- Run the cleanup script:
python3 dr-upgrade-cleanup.py
This script cleans up the following:
- Load job apiconnect-v10-upgrade-load
- Load PVC pv-claim-apiconnect-v10-upgrade
- 2018-extracted-data.zip
- Unpacked 2018-extracted-data.zip from the PV
- Restart all nats-server pods.
- SSH into the server:
- Run the following command to connect as the API Connect administrator, replacing ip_address with the appropriate IP address:
ssh ip_address -l apicadm
- When prompted, select Yes to continue connecting.
- When you are connected, run the following command to receive the necessary permissions for working directly on the appliance:
sudo -i
- Restart all nats-server pods by running the following command:
kubectl -n <namespace> delete po -l app.kubernetes.io/name=natscluster
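Optionally, confirm that replacement nats-server pods start before you continue; a quick check using the same namespace and label:
kubectl -n <namespace> get po -l app.kubernetes.io/name=natscluster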
- Next, upgrade the other subsystems. Return to Upgrading v2018 subsystems to v10.