OpenShift: Upgrading from 2018 to the latest 10.0.1.x-eus
Perform an online upgrade of IBM® API Connect 2018 to the latest version 10.0.1.x-eus on OpenShift.
Before you begin
Be sure to complete all of the steps in OpenShift: Preparing the 2018 deployment for upgrade.
If you need to perform a disaster recovery to revert to 2018, you will need the backed-up apicup project as well as the backups of all the API Connect subsystems.
About this task
Run the apicops fix on the database, install operators, create upgrade CRs, and then perform the upgrade.
Procedure
- On the 2018 deployment, use the apicops utility to resolve any known inconsistencies and synchronization problems with the database.
  The apicops preupgrade command must be run in fix mode just before you start the Management upgrade. Fix mode resolves any inconsistencies and synchronization problems in the database.
  Important: The upgrade fails if the apicops command is not completed successfully within 2 hours of starting the Management upgrade. If the Management upgrade must be run more than once, you must run the command in fix mode within 2 hours each time you run the upgrade.
  - Obtain the latest 2018 release of apicops from https://github.com/ibm-apiconnect/apicops#latest-v2018-release.
  - Set up the environment. See https://github.com/ibm-apiconnect/apicops#requirements.
  - Run the apicops report with the following command:
    apicops preupgrade -m report --upgrade_to v10 -l apicops_preupgrade_get_out.txt
  - Run the apicops fix with the following command:
    apicops preupgrade -m fix --upgrade_to v10 -l apicops_preupgrade_fix_out.txt
    - The -m option runs the utility in fix mode. Running the utility in fix mode resolves most problems. If the utility identifies a problem that it cannot resolve, contact IBM Support.
    - Set the -l option to send output to a log file.
    If apicops indicates that you need a KUBECONFIG environment variable set, run the following command before trying the fix again:
    export KUBECONFIG=~/.kube/config
- From the OpenShift web console, click + (plus sign), and create the CatalogSource resources by completing the following steps:
  - Create the ioc-catsrc.yaml file with the IBM Operator Catalog source; for example:
    apiVersion: operators.coreos.com/v1alpha1
    kind: CatalogSource
    metadata:
      name: ibm-operator-catalog
      namespace: openshift-marketplace
    spec:
      displayName: "IBM Operator Catalog"
      publisher: IBM
      sourceType: grpc
      image: icr.io/cpopen/ibm-operator-catalog
      updateStrategy:
        registryPoll:
          interval: 45m
  - Apply the IBM Operator Catalog source with the following command:
    oc apply -f ioc-catsrc.yaml -n openshift-marketplace
  - Create the cs-catsrc.yaml file with the IBM Cloud Pak foundational services (previously called Common Services) operator catalog source:
    apiVersion: operators.coreos.com/v1alpha1
    kind: CatalogSource
    metadata:
      name: opencloud-operators
      namespace: openshift-marketplace
    spec:
      displayName: IBMCS Operators
      image: icr.io/cpopen/ibm-common-service-catalog:latest
      publisher: IBM
      sourceType: grpc
      updateStrategy:
        registryPoll:
          interval: 45m
  - Apply the IBM Cloud Pak foundational services operator catalog source with the following command:
    oc apply -f cs-catsrc.yaml -n openshift-marketplace
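Before continuing, you can optionally confirm that both catalog sources were created and are reachable. This check is not part of the official procedure; it is a sketch that uses the standard CatalogSource status fields:
  oc get catalogsource ibm-operator-catalog opencloud-operators -n openshift-marketplace
  oc get catalogsource ibm-operator-catalog -n openshift-marketplace -o jsonpath='{.status.connectionState.lastObservedState}'
A healthy catalog source reports READY as its last observed connection state.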
- Install the API Connect V2018 Upgrade operator:
  - Open the OCP Operator Hub in the namespace where you previously installed API Connect.
  - Set the filter to "IBM API Connect V2018 Upgrade" to quickly locate the operators required for API Connect.
  - Click the IBM API Connect V2018 Upgrade tile.
  - Click Install and set the following options for the API Connect V2018 Upgrade operator:
    - Set the Update Channel to v1.0.12. Selecting a channel installs the latest upgrade operator version that is available for the channel.
    - For the Install Mode, select A specific namespace on the cluster and select the namespace where you previously installed API Connect.
    - Set the Approval Strategy to Automatic.
  - Click Install again and wait while the upgrade operator is installed.
  - Verify that the operator installed successfully by clicking Installed Operators, selecting the namespace where you installed API Connect, and making sure that the IBM API Connect V2018 Upgrade operator is listed.
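You can also verify the installation from the command line. This is an informal check, not part of the official procedure: list the ClusterServiceVersions in the namespace and confirm that the IBM API Connect V2018 Upgrade entry shows the Succeeded phase.
  oc get csv -n <APIC_namespace>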
- Create a cert-manager Issuer, for example:
  apiVersion: cert-manager.io/v1
  kind: Issuer
  metadata:
    name: selfsigning-issuer
    labels: {
      app.kubernetes.io/instance: "management",
      app.kubernetes.io/managed-by: "ibm-apiconnect",
      app.kubernetes.io/name: "selfsigning-issuer"
    }
  spec:
    selfSigned: {}
- Create an entitlement key; for example:
  oc create secret docker-registry ibm-entitlement-key \
    --docker-username=cp \
    --docker-password=<entitlement_key> \
    --docker-server=cp.icr.io \
    --namespace=<APIC_namespace>
- Create CRs and upgrade subsystems.
  Note: One of the major changes between Version 2018 and Version 10 is the change from Cassandra to Postgres for the Management database. The upgrade process uses the following steps to upgrade the Management subsystem database:
  - Back up the Management database (backup and restore must be configured for the Management subsystem on 2018).
  - Scale down the 2018 pods, except the Cassandra database.
  - Extract data from the Cassandra database by running an extract job (the extract job creates logs).
  - Install the Postgres database.
  - Load the extracted data into the Postgres database and perform the necessary transformations by running a load job (the load job creates a log).
For each subsystem, complete the following steps to create an upgrade CR that deploys the new version of the subsystem. If a subsystem upgrade fails, you can roll back the upgrade and try again. Make sure that each subsystem is successfully upgraded before proceeding to the next. Upgrade subsystems in the following sequence to ensure that dependencies are satisfied:
1. Management
2. Portal
3. Analytics
4. Gateway
- In the Operator Hub, click Installed Operators > IBM API Connect V2018 Upgrade.
- Select the subsystem Upgrade (for example, Management Upgrade).
- Click Create subsystemUpgradeFromV2018 (for example, Create ManagementUpgradeFromV2018).
- Select the YAML view and update the CR to customize the settings for your environment.
  In particular, provide values for the following settings:
  - license accept - set to true (installation fails if this is left as the default value of false)
  - license use - set to production or nonproduction according to your license purchase
  - namespace - the namespace where API Connect is installed
  - storageClassName - a storage class that is available in your cluster
  - certManagerIssuer name - the name of the cert-manager Issuer that you created in the previous step (or another issuer that you want to use)
  - profile - the deployment profile that you want to install for each subsystem:
    - Management subsystem: one node n1xc4.m16, or three nodes n3xc4.m16
    - Portal subsystem: one node n1xc2.m8, or three nodes n3xc4.m8
    - Analytics subsystem: one node n1xc2.m16, or three nodes n3xc4.m16
    - Gateway subsystem: one node n1xc4.m8, or three nodes n3xc4.m8
  - subsystemName - the name of your V2018 subsystem
- Click Create to apply the CR and upgrade the subsystem.
  If a subsystem upgrade fails, you can roll back the change and try again in step 7.
  Management Upgrade CR example:
  apiVersion: management.apiconnect.ibm.com/v1beta1
  kind: ManagementUpgradeFromV2018
  metadata:
    name: mgmt-upgrade
    namespace: apic
  spec:
    upgradeVolumeClaimTemplate:
      storageClassName: rook-ceph-block
      volumeSize: 50Gi
    license:
      accept: true
      use: production
    certManagerIssuer:
      name: selfsigning-issuer
      kind: Issuer
    profile: n3xc4.m16
    subsystemName: mgmt
    triggerRollback: false
    managementSpec:
      databaseVolumeClaimTemplate:
        storageClassName: rook-ceph-block
        volumeSize: 120Gi
      dbArchiveVolumeClaimTemplate:
        storageClassName: rook-ceph-block
        volumeSize: 30Gi
      dbBackupVolumeClaimTemplate:
        storageClassName: rook-ceph-block
        volumeSize: 120Gi
      messageQueueVolumeClaimTemplate:
        storageClassName: rook-ceph-block
        volumeSize: 10Gi
Management Upgrade CR with backup settings:
You can optionally configure backup settings for the Version 10 Management subsystem before upgrading. If you specify the database backup configuration in the Management upgrade CR, the backup configuration is carried forward to the Version 10 Management subsystem. If you do not include the backup configuration now, you can add it after the upgrade as explained in step 9.
For information on creating the backup secret and configuring backup settings, see Configuring backup settings for a fresh install of the Management subsystem on OpenShift or Cloud Pak for Integration.
To configure the backup settings now:
- Create a backup secret for the Management subsystem (required if you include backup settings in the CR).
- Add the databaseBackup section to the spec.managementSpec section of the CR, as shown in the following example.
apiVersion: management.apiconnect.ibm.com/v1beta1
kind: ManagementUpgradeFromV2018
metadata:
  name: mgmt-upgrade
  namespace: apic
spec:
  upgradeVolumeClaimTemplate:
    storageClassName: rook-ceph-block
    volumeSize: 50Gi
  license:
    accept: true
    use: production
  certManagerIssuer:
    name: selfsigning-issuer
    kind: Issuer
  profile: n3xc4.m16
  subsystemName: mgmt
  triggerRollback: false
  managementSpec:
    databaseVolumeClaimTemplate:
      storageClassName: rook-ceph-block
      volumeSize: 120Gi
    dbArchiveVolumeClaimTemplate:
      storageClassName: rook-ceph-block
      volumeSize: 30Gi
    dbBackupVolumeClaimTemplate:
      storageClassName: rook-ceph-block
      volumeSize: 120Gi
    messageQueueVolumeClaimTemplate:
      storageClassName: rook-ceph-block
      volumeSize: 10Gi
    databaseBackup:
      credentials: ibmdbcreds
      host: s3.us-east.cloud-object-storage.appdomain.cloud/us-east
      path: $BUCKET_PATH/$BUCKET_NAME
      protocol: objstore
      restartDB:
        accept: false
      s3provider: ibm
Notes:
- The databaseBackup.path setting for 10.0.1.x-eus must be different from the path that you used for 2018.
- The databaseBackup.restartDB.accept option must be set to false for upgrading.
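The secret named in databaseBackup.credentials (ibmdbcreds in the example) holds the object-store credentials. As a sketch only, assuming S3-style access keys and the key/keysecret field names used by Management backup secrets, it might be created like this; see the linked backup configuration topic for the authoritative field names:
  oc create secret generic ibmdbcreds \
    --from-literal=key='<access_key_id>' \
    --from-literal=keysecret='<secret_access_key>' \
    -n <APIC_namespace>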
Portal Upgrade CR example:
apiVersion: portal.apiconnect.ibm.com/v1beta1
kind: PortalUpgradeFromV2018
metadata:
  name: ptl-upgrade
  namespace: apic
spec:
  license:
    accept: true
    use: production
  profile: n3xc4.m16
  priorityList:
    - portal.apps.vlxocp-761.cp.fyre.ibm.com/e2etest-3/cat1
    - portal.apps.vlxocp-761.cp.fyre.ibm.com/e2etest-3/sandbox
    - portal.apps.vlxocp-761.cp.fyre.ibm.com/e2etest-3/spacecatalog
    - portal.apps.vlxocp-761.cp.fyre.ibm.com/e2etest-6/cat1
    - portal.apps.vlxocp-761.cp.fyre.ibm.com/e2etest-6/sandbox
    - portal.apps.vlxocp-761.cp.fyre.ibm.com/e2etest-6/spacecatalog
  certManagerIssuer:
    name: selfsigning-issuer
    kind: Issuer
  portalSpec:
    siteName: "20181012"
    adminVolumeClaimTemplate:
      storageClassName: rook-ceph-block
      volumeSize: 12Gi
    backupVolumeClaimTemplate:
      storageClassName: rook-ceph-block
      volumeSize: 30Gi
    databaseLogsVolumeClaimTemplate:
      storageClassName: rook-ceph-block
      volumeSize: 12Gi
    databaseVolumeClaimTemplate:
      storageClassName: rook-ceph-block
      volumeSize: 30Gi
    webVolumeClaimTemplate:
      storageClassName: rook-ceph-block
      volumeSize: 12Gi
    certVolumeClaimTemplate:
      storageClassName: rook-ceph-block
      volumeSize: 4Gi
  subsystemName: portal
  triggerRollback: false
Analytics Upgrade CR example:
apiVersion: analytics.apiconnect.ibm.com/v1beta1
kind: AnalyticsUpgradeFromV2018
metadata:
  name: a7s-upgrade
  namespace: apic
spec:
  license:
    accept: true
    use: production
  analyticsSpec:
    storage:
      data:
        volumeClaimTemplate:
          storageClassName: rook-ceph-block
          volumeSize: 50Gi
      master:
        volumeClaimTemplate:
          storageClassName: rook-ceph-block
          volumeSize: 50Gi
  certManagerIssuer:
    name: selfsigning-issuer
    kind: Issuer
  profile: n3xc4.m16
  subsystemName: analytics
  triggerRollback: false
Gateway Upgrade CR example:
apiVersion: gateway.apiconnect.ibm.com/v1beta1
kind: GatewayUpgradeFromV2018
metadata:
  name: gwv6-upgrade
  namespace: apic
spec:
  license:
    accept: true
    use: production
  subsystemName: gwv6
  triggerRollback: false
  profile: n1xc4.m8
  gatewaySpec:
    syncDelaySeconds: 600
    tokenManagementServiceVolumeClaimTemplate:
      storageClassName: rook-ceph-block
      volumeSize: 30Gi
- If needed, roll back a failed upgrade and try again.
  If all of the subsystems upgraded successfully, skip this step.
  - Perform a rollback of the failed subsystem upgrade:
    - Management and Analytics only: Run the following command to set the policy:
      oc adm policy add-scc-to-group anyuid system:serviceaccounts:<APIC_namespace>
      This step is not needed for a Portal rollback.
    - Edit the upgrade CR and change the triggerRollback setting to true.
    - Click Create subsystemUpgradeFromV2018 to apply the modified CR for the rollback.
  - Re-run the upgrade:
    - Management and Analytics only: Remove the policy that you set for the rollback by running the following command:
      oc adm policy remove-scc-from-group anyuid system:serviceaccounts:<APIC_namespace>
    - Edit the upgrade CR and change the triggerRollback setting to false.
    - Click Create subsystemUpgradeFromV2018 to apply the modified CR for the upgrade.
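If you prefer to toggle the rollback flag from the CLI instead of the web console, a merge patch along these lines works for custom resources; the resource kind, CR name, and namespace here are taken from the Management example above and are illustrative only:
  oc patch managementupgradefromv2018 mgmt-upgrade -n apic \
    --type=merge -p '{"spec":{"triggerRollback":true}}'
Set the value back to false in the same way before re-running the upgrade.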
- If you have not upgraded the OpenShift cluster to OpenShift version 4.10, do that now.
  - Change the channel in OpenShift to 4.10, and wait for the upgrade to finish.
  - Wait for all nodes to show the newer version of Kubernetes.
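To confirm the cluster and node versions from the command line (an informal check, not part of the official procedure):
  oc get clusterversion
  oc get nodes
The VERSION column of oc get nodes shows the kubelet version that each node is running; wait until every node reports the newer version before continuing.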
- If you chose not to include the backup configuration when you created the Management upgrade CR in step 6, configure backup settings on the new 10.0.1.x-eus Management subsystem now.
  For more information, see Backups on OpenShift.
What to do next
If the upgrade failed and cannot be corrected, revert to the API Connect 2018 deployment using the disaster recovery process. For information, see OpenShift: Recovering the V2018 deployment.
If the upgrade succeeded, complete the steps in OpenShift: Post-upgrade steps to remove 2018 files that were left over from the upgrade process.