OpenShift: Upgrading from 2018 to v10

Perform an online upgrade of IBM® API Connect 2018 to the latest release of v10 on OpenShift using a top-level upgrade CR.

Before you begin

Be sure to complete all of the steps in OpenShift: Preparing the 2018 deployment for upgrade. If you need to perform a disaster recovery to revert to 2018, you will need the backed-up apicup project as well as the backups of all the API Connect subsystems.

Attention: This task uses a single, top-level upgrade CR to upgrade all API Connect subsystems at once. Only use these instructions if all 4 API Connect v2018 subsystems are installed in a single namespace.

If your v2018 deployment installed subsystems into different namespaces, you must upgrade the subsystems individually using separate upgrade CRs. Follow the instructions for a Kubernetes deployment and upgrade the subsystems in the following sequence:

  1. Upgrading the Management subsystem from 2018 to v10
  2. Upgrading the Developer Portal subsystem from 2018 to v10
  3. Upgrading the Analytics subsystem from v2018 to v10
  4. Upgrading the gateway subsystem from 2018 to v10

About this task

Run the apicops fix on the database, install operators, create the top-level upgrade CR, and then perform the upgrade.

Procedure

  1. On the 2018 deployment, use the apicops utility to resolve any known inconsistencies and synchronization problems with the database:

    The apicops preupgrade command must be run in fix mode just before you start the Management upgrade. Fix mode resolves any inconsistencies and synchronization problems in the database.

    Important: The upgrade will fail if the apicops command did not complete successfully within the 2 hours before the Management upgrade starts. If you need to run the Management upgrade more than once, run the command in fix mode within 2 hours of each upgrade attempt.
    1. Obtain the latest 2018 release of apicops from https://github.com/ibm-apiconnect/apicops#latest-v2018-release.
    2. Set up the environment. See https://github.com/ibm-apiconnect/apicops#requirements.
    3. Run apicops in report mode with the following command:
      apicops preupgrade -m report --upgrade_to v10 -l apicops_preupgrade_get_out.txt
      
    4. Run apicops in fix mode with the following command:
      apicops preupgrade -m fix --upgrade_to v10 -l apicops_preupgrade_fix_out.txt
      
      • The -m fix option runs the utility in fix mode. Fix mode resolves most problems; if the utility identifies a problem that it cannot resolve, contact IBM Support.
      • The -l option sends the output to a log file.
      If apicops indicates that the KUBECONFIG environment variable must be set, run the following command before trying the fix again:
      export KUBECONFIG=~/.kube/config
  2. Take a full backup of the API Connect 2018 Management subsystem. This backup is in addition to the one taken in OpenShift: Preparing the 2018 deployment for upgrade, which was taken before the pre-upgrade operations. Because the pre-upgrade operations change some of the contents of the database, it is recommended to have backups from both before and after them.
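    If your 2018 deployment is managed with an apicup project, the Management backup is typically triggered with a command similar to the following sketch (run it from your apicup project directory; the subsystem name is a placeholder, so use the name defined in your project):
      apicup subsys exec <management_subsystem_name> backup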
  3. From the OpenShift web console, click + (the plus sign) to create the CatalogSource resources.

    Create the resources by completing the following steps:

    1. Create the ioc-catsrc.yaml file with the IBM Operator Catalog source; for example:
      apiVersion: operators.coreos.com/v1alpha1
      kind: CatalogSource
      metadata:
        name: ibm-operator-catalog
        namespace: openshift-marketplace
      spec:
        displayName: "IBM Operator Catalog" 
        publisher: IBM
        sourceType: grpc
        image: icr.io/cpopen/ibm-operator-catalog
        updateStrategy:
          registryPoll:
            interval: 45m
    2. Apply the IBM Operator Catalog source with the following command:
      oc apply -f ioc-catsrc.yaml -n openshift-marketplace
    3. Create the cs-catsrc.yaml file with the IBM Cloud Pak foundational services (previously called Common Services) operator catalog source:
      apiVersion: operators.coreos.com/v1alpha1
      kind: CatalogSource
      metadata:
        name: opencloud-operators
        namespace: openshift-marketplace
      spec:
        displayName: IBMCS Operators
        image: icr.io/cpopen/ibm-common-service-catalog:latest
        publisher: IBM
        sourceType: grpc
        updateStrategy:
          registryPoll:
            interval: 45m
    4. Apply the IBM Cloud Pak foundational services operator catalog source with the following command:
      oc apply -f cs-catsrc.yaml -n openshift-marketplace
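    Optionally, confirm that both catalog sources were created before you continue; for example:
      oc get catalogsource -n openshift-marketplace
    Both ibm-operator-catalog and opencloud-operators should appear in the list. You can inspect the connection status of a source with oc describe catalogsource <name> -n openshift-marketplace.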
  4. Install the API Connect V2018 Upgrade operator:
    1. Open the OCP Operator Hub in the namespace where you previously installed API Connect.
    2. Set the filter to "IBM API Connect V2018 Upgrade" to quickly locate the operators required for API Connect.
    3. Click the IBM API Connect V2018 Upgrade tile.
    4. Click Install and set the following options for the API Connect V2018 Upgrade operator:
      • Set the Update Channel to v2.0.6.

        Selecting a channel installs the latest upgrade operator version that is available for the channel.

      • For the Install Mode, select A specific namespace on the cluster and select the namespace where you previously installed API Connect.
      • Set the Approval Strategy to Automatic.
    5. Click Install and wait while the upgrade operator is installed.
    6. Verify that the operator installed successfully by clicking Installed Operators, selecting the namespace where you installed API Connect, and making sure that the IBM API Connect V2018 Upgrade operator is listed.
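    You can also verify the installation from the command line; for example:
      oc get csv -n <APIC_namespace>
    The ClusterServiceVersion for the IBM API Connect V2018 Upgrade operator should eventually report the Succeeded phase. (The placeholder <APIC_namespace> is the namespace where you previously installed API Connect.)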
  5. (10.0.5.2 and earlier) Create a cert-manager Issuer:
    Note: Skip this step if you are upgrading to version 10.0.5.3 or later because the cert-manager Issuer is created automatically.

    Example cert-manager Issuer

    apiVersion: cert-manager.io/v1
    kind: Issuer
    metadata:
      name: selfsigning-issuer
      labels:
        app.kubernetes.io/instance: "management"
        app.kubernetes.io/managed-by: "ibm-apiconnect"
        app.kubernetes.io/name: "selfsigning-issuer"
    spec:
      selfSigned: {}
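    If you save the Issuer to a file rather than pasting it into the console, apply it to the namespace where API Connect is installed and check that it becomes ready; for example (the file name selfsigning-issuer.yaml is an example):
      oc apply -f selfsigning-issuer.yaml -n <APIC_namespace>
      oc get issuer selfsigning-issuer -n <APIC_namespace>
    The READY column for the Issuer should report True.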
  6. Create the entitlement key secret; for example:
    oc create secret docker-registry ibm-entitlement-key \
        --docker-username=cp \
        --docker-password=<entitlement_key> \
        --docker-server=cp.icr.io \
        --namespace=<APIC_namespace>
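    Replace <entitlement_key> with your IBM entitlement key and <APIC_namespace> with the namespace where API Connect is installed. You can confirm that the secret was created with:
      oc get secret ibm-entitlement-key -n <APIC_namespace>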
  7. Create the top-level CR.
    1. In the Operator Hub, click Installed Operators > IBM API Connect V2018 Upgrade
    2. Select the API Connect Upgrade tab.
    3. Create the top-level CR for upgrade by clicking APIConnectUpgradeFromV2018.
    4. Select the YAML view and populate the CR with the settings for your environment.
      In particular, provide values for the following settings:
      • license accept - set to true (installation fails if this is left as the default value of false)
      • license use - set to production or nonproduction according to your license purchase
      • license license - set to the License ID for the version of API Connect that you purchased (see API Connect licenses)
      • namespace - namespace where API Connect is installed
      • storageClassName - a storage class that is available in your cluster (you must specify a default storage class)
      • certManagerIssuer name - the name of the cert-manager Issuer that you created in step 5 (or another issuer that you want to use)
      • profile - the deployment profile that you want to install; see API Connect deployment profiles for OpenShift and Cloud Pak for Integration.
      • subsystemName - the name of your V2018 subsystem

      Example top-level CR:

      apiVersion: apiconnect.ibm.com/v1beta1
      kind: APIConnectUpgradeFromV2018
      metadata:
        name: topcr-upgrade
        namespace: test1
      spec:
        license:
          accept: true
          license: L-GVEN-GFUPVE
          use: nonproduction
        management:
          subsystemName: mgmt
          triggerRollback: false
        gateway:
          subsystemName: gw
          triggerRollback: false
        analytics:
          analyticsSpec:
            storage:
              enabled: true
              shared:
                volumeClaimTemplate:
                  storageClassName: rook-ceph-block
                  volumeSize: 50Gi
              type: shared
          subsystemName: a7s
          triggerRollback: false
        storageClassName: rook-ceph-block
        profile: n3xc4.m16
        portal:
          adminVolumeClaimTemplate:
            storageClassName: rook-ceph-block
            volumeSize: 12Gi
          backupVolumeClaimTemplate:
            storageClassName: rook-ceph-block
            volumeSize: 30Gi
          databaseLogsVolumeClaimTemplate:
            storageClassName: rook-ceph-block
            volumeSize: 12Gi
          databaseVolumeClaimTemplate:
            storageClassName: rook-ceph-block
            volumeSize: 30Gi
          webVolumeClaimTemplate:
            storageClassName: rook-ceph-block
            volumeSize: 12Gi
          certVolumeClaimTemplate:
            storageClassName: rook-ceph-block
            volumeSize: 4Gi
          subsystemName: ptl
          triggerRollback: false
        v10InstanceName: topcr
      Note: To avoid issues with PVCs being sized too small, specify the PVC sizes of your existing 2018 deployment in the spec.portal section of the CR.

      The PVC sizes allocated might need to be adjusted further (beyond the minimum and existing 2018 values) depending on the number and content of the sites required.

      The minimum PVC sizes are large enough to cater for a small number of portal sites. As a rough estimate, for every 10 additional small portal sites (with a limited number of apps, APIs, consumer organizations, members, and portal content), increase the allocated space in line with the following estimates:
      • databaseVolumeClaimTemplate: +10 Gi
      • webVolumeClaimTemplate: +1 Gi
      • backupVolumeClaimTemplate: +2 Gi
      If your sites contain a large amount of provider organization content, documentation, images, or other content stored in the portal, you might need to allocate considerably more space to these PVCs. The following two PVCs do not need to be scaled based on the number of sites:
      • databaseLogsVolumeClaimTemplate
      • adminVolumeClaimTemplate
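      For example, if the databaseVolumeClaimTemplate PVC in your 2018 deployment is 30Gi and you expect roughly 20 additional small sites, these estimates suggest allocating about 50Gi for databaseVolumeClaimTemplate, plus an extra 2Gi for webVolumeClaimTemplate and an extra 4Gi for backupVolumeClaimTemplate.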
    5. Click Create to apply the CR and upgrade the deployment.
  8. If needed, roll back a failed upgrade and try again:
    1. Set triggerRollback to true in the top-level upgrade CR (in the section for the failed subsystem) to roll back the upgrade for that subsystem, as shown in the example below.
    2. Edit the top-level upgrade CR to correct the issue, and then change the value of triggerRollback back to false, which starts the upgrade of that subsystem again.
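    For illustration, while a rollback of only the gateway subsystem runs, the relevant section of the example CR above would look like the following sketch (revert triggerRollback to false after you correct the problem):
      gateway:
        subsystemName: gw
        triggerRollback: true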
  9. Verify that the upgrade completed.

    Keep checking the status using oc get apic and wait for the upgrade to complete.
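    For example, to watch the API Connect resources in the installation namespace:
      oc get apic -n <APIC_namespace>
    You can also inspect the status section of the top-level upgrade CR directly; the following sketch assumes the example CR above (if the resource name differs in your cluster, oc api-resources | grep -i apiconnect lists the exact names):
      oc get apiconnectupgradefromv2018 topcr-upgrade -n <APIC_namespace> -o yaml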

  10. Restart all nats-server pods by running the following command:
    oc -n <namespace> delete po -l app.kubernetes.io/name=natscluster
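    After the pods are deleted, confirm that the replacement nats-server pods return to the Running state; for example:
      oc -n <namespace> get po -l app.kubernetes.io/name=natscluster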
  11. Configure backup settings on the upgraded deployment as explained in Backups on OpenShift.
  12. Set the topCRName-mgmt-admin-pass secret to match the Cloud Manager admin password.

    The topCRName-mgmt-admin-pass secret (which contains the Cloud Manager admin password) still holds the password that was used when v10 was launched. This does not affect logging in to Cloud Manager, but after the upgrade, complete the following steps so that the secret matches the correct password:

    1. Log in to Cloud Manager using the admin password that you used in v2018.
    2. Edit the topCRName-mgmt-admin-pass secret and set the password to match the v2018 password, so that it is consistent.
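    How you edit the secret depends on your tooling. One possible approach is sketched below; it assumes the password is stored under a data key named password, so check the secret contents first, and note that <topCRName> is a placeholder for the name of your top-level CR:
      oc -n <APIC_namespace> get secret <topCRName>-mgmt-admin-pass -o yaml
      oc -n <APIC_namespace> patch secret <topCRName>-mgmt-admin-pass --type merge -p '{"stringData":{"password":"<v2018_admin_password>"}}'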

What to do next

If the upgrade failed and cannot be corrected, revert to the API Connect 2018 deployment using the disaster recovery process. For information, see OpenShift: Recovering the V2018 deployment.

If the upgrade succeeded, complete the steps in OpenShift: Post-upgrade steps to remove 2018 files that were left over from the upgrade process.