Restoring the target system

Restore the saved configuration and backups from the source system onto the target system.

Before you begin

  • Complete Installing the target system.
  • Ensure that any network route between the target installation and the source system is disabled. The target system must not be able to communicate with the source system.
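To verify the isolation, you can attempt a connection from the target to the source and confirm that it fails. The following sketch is illustrative only; the hostname is an assumption to replace with your source system's address:

```python
# Sketch: confirm the target cannot open a TCP connection to the
# source system. A timeout or refusal (False) is the desired outcome
# before starting the restore.
import socket

def can_reach(host: str, port: int = 443, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers timeouts, refusals, and DNS failures
        return False

# False means the source is unreachable, as required
print(can_reach("source-apic.example.com"))
```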

About this task

Use scripts to:

  • Restore the Management subsystem
  • Register Gateways and Portals on the target system
  • Update Portals in the Catalog to point to new Portal systems
  • Restore Portal backups for each site
  • Update Gateways in the Catalog to point to new Gateway systems

All scripts on the target system must be run from the directory where the data directory is present, so that the saved source configuration can be used.
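As a quick pre-flight check, you can confirm that the saved configuration is visible before invoking any script. The following is a minimal sketch, not part of the migration tooling; it only assumes the data directory created by save_v10_source_configuration.py:

```python
# Pre-flight sketch: verify that the saved "data" directory from the
# source system is present in the current working directory before
# running any restore or registration script.
from pathlib import Path

def data_dir_present(cwd: str = ".") -> bool:
    """Return True when the saved 'data' directory exists in cwd."""
    return (Path(cwd) / "data").is_dir()

if not data_dir_present():
    print("Run the scripts from the directory that contains 'data'")
```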

Note: You can optionally run the scripts in silent mode (--silent), which is particularly useful for the following scenarios:
  • Register Gateways and Portals in the target system. (Step 3)
  • Update Portals in the Catalog to point to new Portal systems. (Step 4)
  • Update Gateways in the Catalog to point to new Gateway systems. (Step 6)

If multiple Portal or Management subsystems are present in the data directory (as saved during Preparing the source system), the restore script prompts you to select the subsystem name to use for the migration. If you run in silent mode, supply that value as an additional flag.

Deployments with multiple Portals and Gateways are a common scenario. The saved data can have multiple Management subsystems only if you ran save_v10_source_configuration.py against multiple clusters with different Management subsystem names, without cleaning the data directory.

Procedure

  1. Before restoring the Management subsystem backup, determine whether there have been any delta changes to your source system since you backed up the Management subsystem database and Portal database.
    • If you shut down the Postgres database, as instructed in Installing the target system, there are no delta changes. Continue to step 2.
    • If you did not shut down the Postgres database, there could be delta changes to the Management database, because the Management subsystem on the source system remains in READY state. In this case, to ensure that you capture the latest configuration, repeat Preparing the source system. Doing so backs up the Management and Portal databases again, and the latest backup is then used to restore the target system.
  2. Restore a Management subsystem backup onto the target system.
    1. On the target API Connect system, ensure that the Management subsystem is up and running.
    2. Run restore_management_db.py. Examples:
      • OpenShift or Kubernetes target system:
        python3 restore_management_db.py -n <mgmt namespace>

        or:

        python3 restore_management_db.py -mgmt_ns <mgmt namespace>
      • Cloud Pak for Integration target system:
        python3 restore_management_db.py -n <namespace> -s <platform API hostname>
      • If the Management subsystem is not healthy, restore by using the command:
        python3 restore_management_db.py -n <mgmt namespace> -ignore_health_check
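The flag choices above can be summarized in a small helper. This is an illustrative sketch only; restore_command is not part of the migration scripts:

```python
# Illustrative helper (not part of the migration tooling): assemble the
# restore_management_db.py invocation for the different target types.
from typing import Optional

def restore_command(namespace: str, healthy: bool = True,
                    api_host: Optional[str] = None) -> str:
    cmd = f"python3 restore_management_db.py -n {namespace}"
    if api_host:      # Cloud Pak for Integration target
        cmd += f" -s {api_host}"
    if not healthy:   # bypass the health check when the subsystem is unhealthy
        cmd += " -ignore_health_check"
    return cmd

print(restore_command("apic-mgmt", healthy=False))
```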
  3. Register Gateways and Portals on the target system.
    1. Prerequisites:
      • The register_gateway_portals_in_target.py script must be run on the target system after the Management database from the source system has been restored.
      • The Gateway and Developer Portal subsystems that need to be registered must be in a healthy state.
    2. Usage notes:
      • The credentials provided in this step are the same as the credentials used on the source system, because the Management database from the source system is restored.
      • Keep the endpoints ready for the new Gateway and Portal services so that they can be registered by running register_gateway_portals_in_target.py. To view endpoints, use oc get routes.
      • If you run register_gateway_portals_in_target.py for a Cloud Pak for Integration target system, and you see the error The API Connect Gateway Service is already registered to another API Manager instance, restart all the gateway pods, wait for them to become healthy, and run register_gateway_portals_in_target.py again.
      • Make sure that the mapping of old URLs to new URLs is correct. If the values are wrong (for example, when the source system is a VMware appliance), communication errors occur during Gateway registration. Example error messages:
        • "Error: An error occurred communicating with the gateways subsystem"
        • "error: 'Client network socket disconnected before secure TLS connection was established'"

        On the appliance, the Gateway Manager URL uses port 3000.

    3. Run register_gateway_portals_in_target.py in either interactive mode or silent mode:

      • Interactive mode

        The register_gateway_portals_in_target.py script displays the Portal and Gateway endpoints from the source system, and prompts the user to enter the corresponding values from the target system. Examples:

        • If Management, Portal and Gateway subsystems are in the same namespace:
          python3 register_gateway_portals_in_target.py -s <platform API hostname> -u <username> -p <password> -r <realm name> -n <namespace>
        • If Management, Portal and Gateway subsystems are in different namespaces, use -mgmt_ns, -ptl_ns, and -gw_ns to specify each, for example:
          python3 register_gateway_portals_in_target.py -s <platform API hostname> -u <username> -p <password> -r <realm name> -mgmt_ns <mgmt namespace> -ptl_ns <portal namespace> -gw_ns <gateway namespace>
        • If you are using SSO authentication, replace:
          -u <username> -p <password>
          with:
          --sso
        • Multiple namespaces can alternatively be specified with "ns1|ns2|ns3" notation. For example:

          python3 register_gateway_portals_in_target.py -n "ns1|ns2|ns3" -u admin -p <password> -s <platform API hostname> -r admin/default-idp-1 -silent
          python3 register_gateway_portals_in_target.py -s <platform API hostname> -u admin -p <password> -r admin/default-idp-1 -mgmt_ns ns1 -ptl_ns "ns1|ns2" -gw_ns "ns1|ns2|ns3" -silent
      • Silent mode

        The gateway_portal_mapping.yaml file, which maps the old Gateway, Portal, and Analytics endpoints to the new ones, must be located in the same directory as register_gateway_portals_in_target.py.

        The gateway_portal_mapping.yaml file is generated when you save the source API Connect system configuration, with some property data already filled in. For every Gateway, Portal, and Analytics subsystem registered with the Management subsystem in the source system, the yaml file contains the endpoint mapping between the source and target systems.

        Fill in the remaining data before you use the yaml file in silent mode. All values must begin with https://.

        • Example:
          python3 register_gateway_portals_in_target.py -n <namespace> -u <username> -p <password> -s <platform API hostname> -r <realm name> -silent
        • Example: the subsystems are spread across multiple namespaces.

          For example, the ns1, ns2, and ns3 namespaces contain the subsystems. Both of the following commands work the same way:

          python3 register_gateway_portals_in_target.py -n "ns1|ns2|ns3" -u <username> -p <password> -s <platform API hostname> -r <realm name> -silent
          python3 register_gateway_portals_in_target.py -s <platform API hostname> -u <username> -p <password> -r <realm name> -mgmt_ns ns1 -ptl_ns "ns1|ns2" -gw_ns "ns1|ns2|ns3" -silent
        • Example gateway_portal_mapping.yaml file with:
          • Two Gateways
          • One Portal and one Analytics subsystem
          • All data mapped to the target API Connect system:
          analytics_mapping:
            registered_analytics_name:
              https://ac.source-apic.example.com: https://a1-a7s-ac-endpoint-apic1.apps.vlxocp-3278.cp.fyre.example.com
          gateway_mapping:
            registered_apigw_name:
              https://rgw.source-apic.fyre.example.com: https://a1-gw-gateway-apic1.apps.vlxocp-3278.cp.fyre.example.com
              https://rgwd.source-apic.fyre.example.com: https://a1-gw-gateway-manager-apic1.apps.vlxocp-3278.cp.fyre.example.com
            registered_v5gw_name:
              https://gw.source-apic.fyre.example.com: https://v5gw.gw.apps.vlxocp-3278.cp.fyre.example.com
              https://gwd.source-apic.fyre.example.com: https://v5gw.gwd.apps.vlxocp-3278.cp.fyre.example.com
          portal_mapping:
            registered_portal_name:
              https://api.portal.source-apic.fyre.example.com: https://a1-ptl-portal-director-apic1.apps.vlxocp-3278.cp.fyre.example.com
              https://portal.source-apic.fyre.example.com: https://a1-ptl-portal-web-apic1.apps.vlxocp-3278.cp.fyre.example.com
        • Example gateway_portal_mapping.yaml file with:
          • Two Gateways
          • One Portal and one Analytics subsystem
          • One Gateway not mapped

            Note that "not mapped" means that the data in the Management subsystem related to this Gateway is not mapped, and the Gateway will not be migrated during this migration phase.

            The Gateway could be migrated later; for example, after the second Gateway is created on the target system. Until the data related to the not-migrated Gateway or Portal is migrated or deleted, there will be stale data in the database.

            The same behavior can be achieved by leaving the values empty, or removing the entire section (name and mapping).

          analytics_mapping:
            registered_analytics_name:
              https://ac.source-apic.example.com: https://a1-a7s-ac-endpoint-apic1.apps.vlxocp-3278.cp.fyre.example.com
          gateway_mapping:
            registered_apigw_name:
              https://rgw.source-apic.fyre.example.com: https://a1-gw-gateway-apic1.apps.vlxocp-3278.cp.fyre.example.com
              https://rgwd.source-apic.fyre.example.com: https://a1-gw-gateway-manager-apic1.apps.vlxocp-3278.cp.fyre.example.com
            registered_v5gw_name:
              https://gw.source-apic.fyre.example.com: NEW_GATEWAY_API_ENDPOINT_BASE_IN_TARGET_SYSTEM
              https://gwd.source-apic.fyre.example.com: NEW_GATEWAY_ENDPOINT_IN_TARGET_SYSTEM
          portal_mapping:
            registered_portal_name:
              https://api.portal.source-apic.fyre.example.com: https://a1-ptl-portal-director-apic1.apps.vlxocp-3278.cp.fyre.example.com
              https://portal.source-apic.fyre.example.com: https://a1-ptl-portal-web-apic1.apps.vlxocp-3278.cp.fyre.example.com
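Because silent mode depends on every mapping value being present and well formed, it can help to sanity-check the file before running the script. The following is a hypothetical pre-check, not part of the migration tooling; load gateway_portal_mapping.yaml into a dict with any yaml parser first:

```python
# Hypothetical pre-check (not part of the migration scripts): report
# mapping entries whose values do not begin with https://, which either
# indicates a typo or an intentionally unmapped (not migrated) subsystem.
def check_mapping(mapping: dict) -> list:
    problems = []
    for section, subsystems in mapping.items():
        for name, endpoints in (subsystems or {}).items():
            for source, target in (endpoints or {}).items():
                if not source.startswith("https://"):
                    problems.append(f"{section}/{name}: bad source {source!r}")
                if not str(target or "").startswith("https://"):
                    problems.append(f"{section}/{name}: {source} is not mapped")
    return problems
```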
  4. Use update_to_new_portals.py to update Portals in the catalog to point to new Portal systems.
    Note: If there is no Portal subsystem on the source system, skip this step. Also skip this step if no Portal sites are created in any of the provider orgs. Continue with step 6.
    1. Review prerequisites:
      • The Management and Portal subsystems on the target API Connect system must be in a healthy state.
      • The update_to_new_portals.py script must be run on the target system after the following actions have been completed:
        • The Management database from the source system has been restored.
        • The new Gateway or Portal instance is registered in the Cloud Manager, by using the script or from the UI.
      • The credentials for accessing the admin org (Cloud Manager UI) and, if required, the provider org (API Manager UI) must be available for running the script.
    2. Run the update_to_new_portals.py script in one of two modes, based on how the provider org credentials are available. The modes are:
      • Using a common migration user created by the script
      • Using credentials provided by the user in a yaml file
      Important: The provider org credentials must have administrative access.

    Examples:

    • Using a common migration user created by the script:

      The update_to_new_portals.py script runs in this mode by default. The script creates a migration user (muser1) in the migrationur local user registry, and uses it to migrate Portal information in all the provider orgs. When the migration is done, the user and user registry are deleted. There is no change in the data after the migration is complete.

      Examples:

      • If the Management and Portal subsystems are in the same namespace
        python3 update_to_new_portals.py -s <platform API hostname> -u <username> -p <password> -r <realm name> -n <namespace> -api_manager_hostname <platform API hostname>  -silent
      • If using SSO authentication:
        python3 update_to_new_portals.py -s <platform API hostname> --sso -n <namespace> -api_manager_hostname <platform API hostname> -silent
      • If the Management and Portal subsystems are in different namespaces:
        python3 update_to_new_portals.py -s <platform API hostname> -u <username> -p <password> -r <realm name> -mgmt_ns <mgmt namespace> -ptl_ns <portal namespace> -api_manager_hostname <platform API hostname> -silent
    • Using credentials provided by the user in a provider_org_credentials.yaml file:

      The provider_org_credentials.yaml file, with some data already filled in, is generated by the save_v10_source_configuration.py script that you ran in Preparing the source system. The yaml file is available in the same directory as the script. Edit the file and fill in all details before you use it.

      Examples:

      • If Management and Portal subsystems are in the same namespace:
        python3 update_to_new_portals.py -s <platform API hostname> -u <username> -p <password> -r <realm name>  -n <namespace> -no_migration_user -silent
      • If using SSO authentication:
        python3 update_to_new_portals.py -s <platform API hostname> --sso -n <namespace> -no_migration_user -silent
      • If Management and Portal subsystems are in different namespaces:
        python3 update_to_new_portals.py -s <platform API hostname> -u <username> -p <password> -r <realm name>  -mgmt_ns <mgmt namespace> -ptl_ns <portal namespace> -no_migration_user -silent

      Example provider_org_credentials.yaml:

      • With credentials given for every provider org. In this example, only two orgs are present.
        provider_org_credentials:
          apiManagerHostName: <platform API hostname>
          PORG_NAME_1:
            password: PASSWORD_FOR_THIS_REALM_TO_BE_CHANGED
            realm: provider/default-idp-2
            username: USERNAME_FOR_THIS_REALM_TO_BE_CHANGED
          PORG_NAME_2:
            password: PASSWORD_FOR_THIS_REALM_TO_BE_CHANGED
            realm: provider/default-idp-2
            username: USERNAME_FOR_THIS_REALM_TO_BE_CHANGED
          useSameCredentialsForAllProviderOrgs: false
      • With credentials given for just one provider org, and useSameCredentialsForAllProviderOrgs set to true. You can use this method only when the same credentials are used for all provider orgs.
        Note: There can be multiple provider orgs in the Management database.
        provider_org_credentials:
          apiManagerHostName: <platform API hostname>
          PORG_NAME_1:
            password: PASSWORD_FOR_THIS_REALM_TO_BE_CHANGED
            realm: provider/default-idp-2
            username: USERNAME_FOR_THIS_REALM_TO_BE_CHANGED
          useSameCredentialsForAllProviderOrgs: true
      • With SSO credentials and just one provider org:
        provider_org_credentials:
          apiManagerHostName: <platform API hostname>
          useSameCredentialsForAllProviderOrgs: false
          porg_name:
            #sso authentication
            #the key (which can expire after some time) must be generated separately by using https://<apiManagerHostname>/auth/manager/sign-in/?from=TOOLKIT
            apiKey: some_key
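The choice between the two modes only changes which flags are passed. The following sketch summarizes it; portal_update_command is a hypothetical helper for illustration, not part of the tooling:

```python
# Illustrative sketch (not part of the migration tooling): assemble the
# update_to_new_portals.py invocation for the two modes described above.
from typing import Optional

def portal_update_command(api_host: str, namespace: str,
                          username: Optional[str] = None,
                          password: Optional[str] = None,
                          realm: Optional[str] = None,
                          sso: bool = False,
                          migration_user: bool = True) -> str:
    cmd = f"python3 update_to_new_portals.py -s {api_host}"
    if sso:
        cmd += " --sso"
    else:
        cmd += f" -u {username} -p {password} -r {realm}"
    cmd += f" -n {namespace} -api_manager_hostname {api_host}"
    if not migration_user:  # credentials come from provider_org_credentials.yaml
        cmd += " -no_migration_user"
    return cmd + " -silent"
```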
  5. Restore Portal backups for each site.
    Note: If there is no Portal subsystem on the source system, skip this step. Continue with step 6.

    To restore the Portal sites on the target system, you can either use the restore_portal_db.py script or complete a manual restore.

    • Restoring through the script:
      python3 restore_portal_db.py -n <portal namespace>
    • Restoring through manual steps:
      1. Log in to the Portal admin container:
        oc exec -it <www_pod> -c admin -- bash
      2. List the site backups:
        remote_transfer_backup -l

        The list of backups that need to be restored is reported in the output logs of step 4.

      3. Download each site backup; repeat for all sites:
        remote_transfer_backup -d <SITE_BACKUP_FILENAME>
      4. Restore each of the site backups:
        restore_site -f -u <SITE_URL> <SITE_BACKUP_FILENAME>
    Note: If there is more than one Portal subsystem on the target system, corresponding to the Portal subsystems on the source system, run the script for each matching pair of source and target Portal subsystems.

    For example, if there are two Portals on the target system, matching the source system, and these Portals are installed in the namespaces target1 (for the top-level CR) and ptl2 (for the second Portal), run the script twice, as follows:

    python3 restore_portal_db.py -n target1
    
    python3 restore_portal_db.py -n ptl2
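The per-subsystem runs can also be generated in a loop. A minimal sketch, using the namespace names from the example above:

```python
# Sketch: one restore_portal_db.py invocation per target Portal namespace.
def portal_restore_commands(namespaces):
    return [f"python3 restore_portal_db.py -n {ns}" for ns in namespaces]

for cmd in portal_restore_commands(["target1", "ptl2"]):
    print(cmd)
```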
  6. Run update_to_new_gateways.py to update Gateways in the Catalog to point to new Gateway systems.
    1. Review prerequisites:
      • The Gateway subsystem must be in a healthy state.
      • The update_to_new_gateways.py script must be run on the target system after the following actions have been completed:
        • The Management database from the source system has been restored.
        • The new Gateway instance is registered in the Cloud Manager, using script or from the UI.
      • The credentials for accessing the admin org (Cloud Manager UI) and, if required, the provider org (API Manager UI) must be available for running the script.
    2. Run the update_to_new_gateways.py script in one of two modes, based on how the provider org credentials are available. The modes are:
      • Using a common migration user created by the script
      • Using credentials provided by the user in a yaml file
      Important: The provider org credentials must have administrative access.
      • Using a common migration user created by the script:

        The update_to_new_gateways.py script runs in this mode by default. The script creates a migration user (muser1) in the migrationur local user registry, and uses it to migrate Gateway information in all the provider orgs. When the migration is done, the user and user registry are deleted. There is no change in the data after the migration is complete.

        Examples:

        • If the Management and Gateway subsystems are in the same namespace:
          python3 update_to_new_gateways.py -s <platform API hostname> -u <username> -p <password> -r <realm name>  -n <namespace> -api_manager_hostname <platform API hostname>  -silent
        • If using SSO authentication:
          python3 update_to_new_gateways.py -s <platform API hostname> --sso -n <namespace> -api_manager_hostname <platform API hostname> -silent
        • If the Management and Gateway subsystems are in different namespaces:
          python3 update_to_new_gateways.py -s <platform API hostname> -u <username> -p <password> -r <realm name> -mgmt_ns <mgmt namespace> -gw_ns <gateway namespace> -api_manager_hostname <platform API hostname> -silent
      • Using credentials provided by the user in a provider_org_credentials.yaml file:

        The provider_org_credentials.yaml file, with some data already filled in, is generated by the save_v10_source_configuration.py script that you ran in Preparing the source system. The yaml file is available in the same directory as the script. Edit the file and fill in all details before you use it.

        • If the Management and Gateway subsystems are in the same namespace:
          python3 update_to_new_gateways.py -s <platform API hostname> -u <username> -p <password> -r <realm name>  -n <namespace> -no_migration_user -silent
        • If using SSO authentication:
          python3 update_to_new_gateways.py -s <platform API hostname> --sso -n <namespace> -no_migration_user -silent
        • If the Management and Gateway subsystems are in different namespaces:

          Note: If multiple Gateway subsystems are present across namespaces, you can specify multiple namespaces separated by the "|" symbol for -gw_ns.

          python3 update_to_new_gateways.py -s <platform API hostname> -u <username> -p <password> -r <realm name> -mgmt_ns <mgmt namespace> -gw_ns <gateway namespace> -no_migration_user -silent
        • Example provider_org_credentials.yaml
          • With credentials given for every provider org, and only two orgs present:
            provider_org_credentials:
              apiManagerHostName: <platform API hostname>
              PORG_NAME_1:
                password: PASSWORD_FOR_THIS_REALM_TO_BE_CHANGED
                realm: provider/default-idp-2
                username: USERNAME_FOR_THIS_REALM_TO_BE_CHANGED
              PORG_NAME_2:
                password: PASSWORD_FOR_THIS_REALM_TO_BE_CHANGED
                realm: provider/default-idp-2
                username: USERNAME_FOR_THIS_REALM_TO_BE_CHANGED
              useSameCredentialsForAllProviderOrgs: false
            
          • With credentials given for just one provider org, and useSameCredentialsForAllProviderOrgs set to true.

            You can use this method only when the same credentials are used for all provider orgs.

            Note: There can be multiple provider orgs in the Management database.
            provider_org_credentials:
              apiManagerHostName: <platform API hostname>
              PORG_NAME_1:
                password: PASSWORD_FOR_THIS_REALM_TO_BE_CHANGED
                realm: provider/default-idp-2
                username: USERNAME_FOR_THIS_REALM_TO_BE_CHANGED
              useSameCredentialsForAllProviderOrgs: true
            
          • With SSO credentials and just one provider org:
            provider_org_credentials:
              apiManagerHostName: <platform API hostname>
              useSameCredentialsForAllProviderOrgs: false
              porg_name:
                #sso authentication
                #the key (which can expire after some time) must be generated separately by using https://<apiManagerHostname>/auth/manager/sign-in/?from=TOOLKIT
                apiKey: some_key
  7. Continue to Verifying the target system.