v10 migration requirements

Ensure that your deployment meets the migration prerequisites and requirements.

Subsystem requirements

All subsystems must be in a healthy state.
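
For example, on native Kubernetes you can check the overall state of each subsystem by listing the API Connect custom resources in the installation namespace. This is a minimal check; the short resource names (mgmt, ptl, gw, a7s) are assumed from the v10 operator and might differ in your release:

kubectl get mgmt,ptl,gw,a7s -n <NAMESPACE>

Each subsystem should report a Ready (or Running) state before you start the migration. On OpenShift and Cloud Pak for Integration, use oc instead of kubectl.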

You must manually create storage classes, such as block storage or file system storage. This requirement applies to all subsystems and, when installing on CP4I, to the Platform Navigator.

Ensure that any network route between the target installation and the source system is disabled. The target system must not be able to communicate with the source system.

  • Management subsystem:
    • You must configure the management subsystem with either S3 or SFTP backups (an example check of the backup configuration is shown after this list).
    • If S3 storage is being used, Postgres in the management subsystem must be stopped on the source system.
      Note:

      Shutting down Postgres is necessary in this scenario because two APIC deployments cannot both use the same S3 backup location. The target API Connect deployment will not start if Postgres on the source deployment is up and connecting to the same S3 backup location.

      When Postgres is stopped, there is no impact on the published APIs in the gateways. However, the Cloud Manager and API Manager UIs are not available because the underlying microservices are down. Users can still browse the APIs in the portal, but new consumer orgs, applications, or users cannot be created.

    • If the Postgres cluster name on the target system is different from the name on the source system, the restore will fail. For more information on configuring the target system, see Installing the target system on Kubernetes, OpenShift, and Cloud Pak for Integration.
  • Developer Portal subsystem:
    • Developer Portal subsystems must be configured with either S3 or SFTP backups.
    • For the Developer Portal subsystem, you must have all the mappings between the old and new endpoints and hostnames.
  • Gateway subsystem:
    • If a custom gateway image is being used on the source system, the same image must be used on the target system.
    • For the gateway subsystem, you must have all the mappings between the old and new endpoints and hostnames.
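
As a quick check of the management backup requirement noted above, you can inspect the management CR on the source system. This is a sketch; it assumes the v10 ManagementCluster short name mgmt and the databaseBackup section of its spec, which might differ in your release:

kubectl get mgmt -n <NAMESPACE> -o yaml | grep -A 12 databaseBackup

The output should show either an S3 (object store) or SFTP configuration, including the backup host or endpoint and the name of the backup credentials secret.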

Scripting requirements

  • Python 3 and the PyYAML Python module must be present on the system where the scripts are run.

    To install the PyYAML module:

    pip3 install pyyaml
  • Ensure that the full directory path where the scripts are executed does not contain any spaces.
  • Ensure that the API Connect version of the apic toolkit is the same as the version of the scripts, and that the toolkit is located in the path where the scripts are executed. Verify that the Cloud Manager and API Manager hostnames resolve and that basic apic toolkit commands run successfully before you run the scripts; example checks are shown after this list.
  • If you are using hostnames, you may need to update the /etc/hosts file for the apic toolkit to work.
  • When the scripts are run on the target system, the configuration (data directory) saved from the source system must be available in the location where the scripts are present.
  • Save the output logs for every script being executed.
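
The following pre-flight checks illustrate the scripting requirements in this list. They are a sketch only, not part of the migration scripts; replace the placeholder hostname, user, and password with your own values, and adjust the realm to match your deployment:

# Confirm that Python 3 and the PyYAML module are available
python3 --version
python3 -c "import yaml; print(yaml.__version__)"

# Confirm that the apic toolkit is on the path and can reach the management endpoints
apic version
apic login --server <CLOUD_MANAGER_HOSTNAME> --realm admin/default-idp-1 --username <ADMIN_USER> --password <ADMIN_PASSWORD>

If the hostnames do not resolve, add entries for them to the /etc/hosts file before you run the scripts.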

Credentials requirements

  • In all the scripts where the credentials are passed on the command line, these credentials are the admin org (Cloud Manager) credentials.
  • You must be able to use kubectl (for native Kubernetes) or oc (OpenShift) commands to access the source system cluster and the target system cluster.
  • To administer the admin org, you must have credentials with an administrative role to access Cloud Manager.
  • For provider orgs, if you do not use the migration user and instead plan to use provider org credentials, you must have the provider org credentials for the API Manager interface.
  • You must manually create the following (example commands are shown after this list):
    • Secrets for entitlement keys to download images from the Docker registry.
    • DataPower admin credentials and secret, if the target system is native Kubernetes.
  • After migration, the users on the target system will be in the same user registry as in the source system. For example, in CP4I, users from a local user registry (LUR) are not moved to a Zen user registry.
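
The following commands sketch how you might create these secrets on a native Kubernetes target system. The secret names and the DataPower credential key are examples only; use the names that your installation CRs reference, and the registry server and username that match your entitlement:

kubectl create secret docker-registry <ENTITLEMENT_SECRET_NAME> \
  --docker-server=cp.icr.io --docker-username=cp --docker-password=<ENTITLEMENT_KEY> -n <NAMESPACE>

kubectl create secret generic <DATAPOWER_ADMIN_SECRET_NAME> \
  --from-literal=password=<DATAPOWER_ADMIN_PASSWORD> -n <NAMESPACE>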

How to obtain hostnames

The hostnames used in the scripts (--server or -s or -api_manager_hostname), and also in the gateway_portal_mapping.yaml and provider_org_credentials.yaml files, can be obtained by using the following commands.

The command must be run on the source cluster or the target cluster, depending on the step for which the hostname is needed.

  • Kubernetes:
    kubectl get ingress -n <NAMESPACE>
  • OpenShift and Cloud Pak for Integration:
    oc get routes -n <NAMESPACE>

Example output on Kubernetes:

kubectl get ingress -n ask1
NAME                      CLASS    HOSTS                                     ADDRESS                                                                                             PORTS     AGE
analytics1-ac-endpoint    <none>   ac1.testuser1-master.fyre.example.com           1.2.104.133,1.2.104.137,1.2.104.147,1.2.104.157,1.2.104.158,1.2.104.161,1.2.104.166   80, 443   5d7h
analytics1-ai-endpoint    <none>   ai1.testuser1-master.fyre.example.com           1.2.104.133,1.2.104.137,1.2.104.147,1.2.104.157,1.2.104.158,1.2.104.161,1.2.104.166   80, 443   5d7h
apigw1-gateway            <none>   rgw1.testuser1-master.fyre.example.com          1.2.104.133,1.2.104.137,1.2.104.147,1.2.104.157,1.2.104.158,1.2.104.161,1.2.104.166   80, 443   5d5h
apigw1-gateway-manager    <none>   rgwd1.testuser1-master.fyre.example.com         1.2.104.133,1.2.104.137,1.2.104.147,1.2.104.157,1.2.104.158,1.2.104.161,1.2.104.166   80, 443   5d5h
management-admin          <none>   admin.testuser1-master.fyre.example.com         1.2.104.133,1.2.104.137,1.2.104.147,1.2.104.157,1.2.104.158,1.2.104.161,1.2.104.166   80, 443   5d7h
management-api-manager    <none>   manager.testuser1-master.fyre.example.com       1.2.104.133,1.2.104.137,1.2.104.147,1.2.104.157,1.2.104.158,1.2.104.161,1.2.104.166   80, 443   5d7h
management-consumer-api   <none>   consumer.testuser1-master.fyre.example.com      1.2.104.133,1.2.104.137,1.2.104.147,1.2.104.157,1.2.104.158,1.2.104.161,1.2.104.166   80, 443   5d7h
management-platform-api   <none>   api.testuser1-master.fyre.example.com           1.2.104.133,1.2.104.137,1.2.104.147,1.2.104.157,1.2.104.158,1.2.104.161,1.2.104.166   80, 443   5d7h
portal1-portal-director   <none>   api.portal1.testuser1-master.fyre.example.com   1.2.104.133,1.2.104.137,1.2.104.147,1.2.104.157,1.2.104.158,1.2.104.161,1.2.104.166   80, 443   5d7h
portal1-portal-web        <none>   portal1.testuser1-master.fyre.example.com       1.2.104.133,1.2.104.137,1.2.104.147,1.2.104.157,1.2.104.158,1.2.104.161,1.2.104.166   80, 443   5d7h
v5gw1-gateway             <none>   gw1.testuser1-master.fyre.example.com           1.2.104.133,1.2.104.137,1.2.104.147,1.2.104.157,1.2.104.158,1.2.104.161,1.2.104.166   80, 443   5d5h
v5gw1-gateway-manager     <none>   gwd1.testuser1-master.fyre.example.com          1.2.104.133,1.2.104.137,1.2.104.147,1.2.104.157,1.2.104.158,

How to obtain realm values

The value of realm (--realm or -r) that is used when running the scripts can be obtained as shown in the following examples. Realm values exist for Cloud Manager (the admin org) and for provider orgs.

  • Example for Cloud Manager (admin org):

    The value is passed as a command line option while running the script.

    apic identity-providers:list --scope admin --fields title,realm --server api.test1-master.fyre.example.com
    total_results: 3
    results:
      - title: Cloud Manager User Registry
        realm: admin/default-idp-1
      - title: googleoidc
        realm: admin/googleoidc
      - title: ldap1
        realm: admin/ldap1
    
  • Example for provider org realm values:

    The value is used in provider_org_credentials.yaml while updating portal and gateway information in the provider orgs, when the default migration user is not used. See Step 4 and Step 6 in Restoring the target system on Kubernetes, OpenShift, and Cloud Pak for Integration.

    apic identity-providers:list --scope provider --fields title,realm --server api.testuser1-master.fyre.example.com
    total_results: 3
    results:
      - title: API Manager User Registry
        realm: provider/default-idp-2
      - title: googleoidc
        realm: provider/googleoidc
      - title: ldap1
        realm: provider/ldap1
    

Backing up subsystems

You must back up the management and portal subsystem databases on the source system.
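
For example, on native Kubernetes you can confirm that backups have completed by listing the backup custom resources; the CR kinds ManagementBackup and PortalBackup are assumed from the v10 operator:

kubectl get managementbackup -n <NAMESPACE>
kubectl get portalbackup -n <NAMESPACE>

Check that a recent backup completed successfully for each subsystem before you continue. On OpenShift and Cloud Pak for Integration, use oc instead of kubectl.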

Limitations

  1. The Analytics service and its data are not migrated.
  2. If you are migrating from a v10 non-CP4I environment (source API Connect cluster) to a v10 CP4I environment (target API Connect cluster), make sure that the Cloud Manager password for the admin user on the source cluster is not 7iron-hide. If the default password 7iron-hide is used, the migration fails after step 3 in Restoring the target system on Kubernetes, OpenShift, and Cloud Pak for Integration, because the configurator job assumes that the installation is a fresh installation and changes the password automatically.