Restore the saved configuration and backups from the source deployment onto the target
deployment.
Before you begin
- Complete the installation of v10 for your target platform:
- If you want to migrate any changes on your v2018 deployment that happened since you did the
extract process, repeat the extract steps:
- Ensure that any network route between the target installation and the
source system is disabled. The target system must not be able to communicate with the source
system.
- OVA users: All operations are run from the apicv10install directory, which you copied to your VMs after installation: Installing the v10 target system on VMware. SSH to your target VMs as the apicadm user, run sudo -i, and cd to this directory:
ssh apicadm@<VM hostname>
sudo -i
cd /root/<apicv10install directory>
- Ensure that you have the v10 toolkit CLI set up in your apicv10install directory, see Installing the v10 toolkit. Set your PATH variable as follows:
export PATH=<full path of apicv10install directory>:$PATH
Verify that you can log in to the Management server as the Cloud Manager admin user with the toolkit CLI:
apic login --server <platform_api_endpoint> --username admin --realm admin/default-idp-1
where <platform_api_endpoint> is the FQDN of the platform API endpoint that is defined for your v10 Management subsystem (see the sketch at the end of this list for one way to find it). The password is the same as the admin user password for the Cloud Manager UI, and is set to the default for a new v10 installation on your platform. For more information, see v10 toolkit CLI login.
- Update your gateway_portal_mapping.yaml mapping file in your apicv10install directory, adding your v10 endpoints. For any subsystems that you do not want to migrate, leave their entries set to the placeholder value. The example below shows a mapping file where the gateway service called v5gway_service_1 is not migrated.
analytics_mapping:
  analytics_service1:
    https://analytics_ingestion.v2018.example.com: https://analytics_ingestion.v10.example.com
gateway_mapping:
  apigway_service1:
    https://gateway.v2018.example.com: https://gateway.v10.example.com
    https://gateway-manager.v2018.example.com: https://gateway-manager.v10.example.com
  v5gway_service_1:
    https://gw.source-apic.fyre.example.com: https://NEW_GATEWAY_ENDPOINT_HOST_IN_TARGET_SYSTEM
    https://gwd.source-apic.fyre.example.com: https://NEW_GATEWAY_API_ENDPOINT_BASE_HOST_IN_TARGET_SYSTEM
portal_mapping:
  portal_service_1:
    https://portal-director.v2018.example.com: https://portal-director.v10.example.com
    https://portal-web.v2018.example.com: https://portal-web.v10.example.com
You can find your new v10 endpoints as shown in the sketch after this list.
Note: On the gateway appliance, the Gateway Manager URL uses port 3000. The port number is appended to the v10 endpoint in the mappings file. For example, https://gateway-manager.v10.example.com:3000.
Unmigrated subsystems can be migrated later; see Migrating additional v2018 subsystems: Staged migration.
Note: If you want your v10 target deployment to use the same endpoint URLs as your v2018 source deployment, you can skip updating this file.
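The following is a minimal sketch of one way to list your new v10 endpoints, assuming a Kubernetes or OpenShift target where the subsystem endpoints are exposed as ingresses or routes (the namespace placeholder is illustrative):
# Kubernetes: the HOSTS column lists the endpoint hostnames for each subsystem
kubectl get ingress -n <apic_namespace>
# OpenShift and Cloud Pak for Integration: list routes instead
oc get routes -n <apic_namespace>
The platform API endpoint of the Management subsystem appears among these hostnames; it is the <platform_api_endpoint> value used by the toolkit login and the migration scripts in this topic.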
About this task
In this task, migration scripts are used to:
- Restore the Management subsystem.
- Register Gateways and Portals on the target system.
- Update Portals in the catalog to point to new Portal systems.
- Restore Portal backups for each site.
- Update Gateways in the catalog to point to new Gateway systems.
All operations in this task are run from your apicv10install
directory.
Note: It is recommended that you keep the console output of all Python scripts that you run. On most macOS and Linux environments, appending the following to the end of the command saves the output to the specified file:
2>&1 | tee <output filename>
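For example, a hypothetical invocation of the load script that keeps its console output in a log file (the filename is illustrative):
python3 load_v2018_data_to_v10.py <arguments> 2>&1 | tee load_v2018_data_to_v10.log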
Procedure
- Verify that all pods in your v10 target deployment are running and no errors are
reported:
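One way to check, assuming a Kubernetes or OpenShift target (on OVA deployments, run the command as root on each VM):
# All pods should be Running or Completed, with no Error or ImagePullBackOff entries
kubectl get pods -n <apic_namespace>
On OpenShift and Cloud Pak for Integration you can use oc get pods -n <apic_namespace> instead.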
- OpenShift and Cloud Pak for Integration only: Run the following command:
# oc adm policy add-scc-to-user anyuid system:serviceaccount:<apic namespace>:default
Where <apic namespace> is your v10 API Connect namespace.
- Run the load script:
python3 load_v2018_data_to_v10.py -storage_class <storage_class> -registry_secret <image_pull_secret> -load_storage_size 50 -load_image <load_image_url> -n <apic_namespace> -operator_namespace <apic operator namespace> -s <platform_api_endpoint> -silent
Where
- <storage_class> is the name of the storage class that your v10 API Connect deployment is using. OVA users: This must be set to local-storage.
- <image_pull_secret> is the secret that is used to pull the API Connect images. Not needed for OVA. For OpenShift, the cluster default is used if not specified.
- <load_image_url> is the full name and path of the ibm-apiconnect-management-v2018-to-v10-load image. On OVA this URL is obtained as described in the installation steps: Get OVA load image URL.
- <apic_namespace> is the namespace where your v10 API Connect subsystem instances are installed. OVA users: This must be set to default.
- <apic operator namespace> is the namespace where your v10 API Connect operators are installed. OVA users: This must be set to default.
- <platform_api_endpoint> is the FQDN of the platform API endpoint that is defined for your v10 Management subsystem (see the sketch at the end of Before you begin for one way to find it).
The script takes some time to complete. Check the status of the pods in your API Connect subsystem namespace and check for any pods in Error state. The script attempts to pull the API Connect migration load image from your image repository; a failure to pull the image can be observed by checking the pods. If you see output such as:
v2018-mgmt-load-bczdf 0/1 ImagePullBackOff 0 2m4s
terminate the load script and investigate why the image cannot be pulled (see the sketch at the end of this step). When the load script is rerun, it cleans up failed pods from previous attempts. OVA users: If the problem is not clear, for example an incorrect load image path, gather logs and open a support case.
Note: The v2018 load script overwrites any data that you configured on your v10 target deployment with the v2018 source data. For example, the admin password for your v10 Cloud Manager UI is replaced with the one from your v2018 source deployment.
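The following is a hedged sketch of how to investigate a load pod that is stuck in ImagePullBackOff; the pod name is taken from the sample output above and yours will differ:
# The pod events include the image reference and the reason the pull failed
kubectl describe pod v2018-mgmt-load-bczdf -n <apic_namespace>
# Recent events for the namespace, newest last
kubectl get events -n <apic_namespace> --sort-by=.lastTimestamp
Compare the image reference reported in the events with the <load_image_url> value that you passed to the load script.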
- If you want your v10 target deployment to use the same endpoint URLs as your v2018 source
deployment, skip to step 8.
- Register Gateways and Portals on the target system.
Run the
register_gateway_portals_in_target.py
script:
OVA users: On your Management VM, run:
python3 register_gateway_portals_in_target.py -s <platform_api_endpoint> -u admin -p <password> -r admin/default-idp-1 -n default -silent
Where
- <platform_api_endpoint> is the FQDN of the platform API endpoint that is defined for your v10 Management subsystem.
- <password> is the Cloud
Manager UI admin user password, from
your v2018 deployment.
Kubernetes, OpenShift, and
Cloud Pak for Integration users:
- If Management, Portal and Gateway subsystems are in the same namespace:
python3 register_gateway_portals_in_target.py -s <platform_api_endpoint> -u admin -p <password> -r admin/default-idp-1 -n <apic_namespace> -silent
Where
- <platform_api_endpoint> is the FQDN of the platform API endpoint that is defined for your v10 Management subsystem.
- <password> is the Cloud
Manager UI admin user password, from
your v2018 deployment.
- <apic_namespace> is the namespace where your v10 API Connect subsystem instances
are installed.
- If Management, Portal, and Gateway subsystems are in different namespaces, use -mgmt_ns, -ptl_ns, and -gw_ns to specify each, for example:
python3 register_gateway_portals_in_target.py -s <platform_api_endpoint> -u admin -p <password> -r admin/default-idp-1 -mgmt_ns <mgmt_namespace> -ptl_ns <portal namespace> -gw_ns <gateway_namespace> -silent
- If you are using SSO authentication, replace:
-u <username> -p <password>
with:
--sso
- Multiple namespaces can alternatively be specified with "ns1|ns2|ns3" notation.
For example:
python3 register_gateway_portals_in_target.py -n "ns1|ns2|ns3" -u admin -p <password> -s <platform_api_endpoint> -r admin/default-idp-1 -silent
python3 register_gateway_portals_in_target.py -s PLATFORM_API_HOSTNAME -u admin -p <password> -r admin/default-idp-1 -mgmt_ns ns1 -ptl_ns "ns1|ns2" -gw_ns "ns1|ns2|ns3" -silent
Note: If you run register_gateway_portals_in_target.py for a Cloud Pak for Integration target system and you see the error "The API Connect Gateway Service is already registered to another API Manager instance", restart all the gateway pods, wait for them to reach Ready state, and run register_gateway_portals_in_target.py again. A sketch of one way to restart the pods follows this note.
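A minimal sketch of one way to restart the gateway pods, assuming they are managed by a controller or operator that recreates deleted pods (the pod and namespace names are placeholders):
# Find the gateway pods
oc get pods -n <gateway_namespace>
# Delete each gateway pod; it is recreated automatically
oc delete pod <gateway_pod_name> -n <gateway_namespace>
# Watch until every gateway pod reports Ready, then rerun the registration script
oc get pods -n <gateway_namespace> -w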
- Run
update_to_new_portals.py
to update portals defined
in your catalogs to point to your new Portal subsystems.
Note: If your source system has no portal services, skip this step. Also, skip this step if no portal sites are created for any of your catalogs.
If there is no change in your portal endpoints, then include the flag -dont_use_toolkit_for_portal_backup when you run the update_to_new_portals.py script.
OVA users: On your Management VM (not your Portal VM),
run:
python3 update_to_new_portals.py -s <platform_api_endpoint> -u admin -p <password> -r admin/default-idp-1 -n default -api_manager_hostname <platform_api_endpoint> -silent
Kubernetes, OpenShift, and
Cloud Pak for Integration users:
- If the Management and Portal subsystems are in the same
namespace:
python3 update_to_new_portals.py -s <platform_api_endpoint> -u admin -p <password> -r admin/default-idp-1 -n <apic_namespace> -api_manager_hostname <platform_api_endpoint> -silent
- If the Management and Portal subsystems are in different namespaces:
python3 update_to_new_portals.py -s <platform_api_endpoint> -u admin -p <password> -r admin/default-idp-1 -mgmt_ns <mgmt_namespace> -ptl_ns <portal namespace> -gw_ns <gateway_namespace> -silent
- If you are using SSO authentication, replace:
-u USERNAME -p PASSWORD
with:
--sso
- Run
update_to_new_gateways.py
to update Gateways in
the catalog to point to new Gateway systems.
Note: The response might indicate that you should run a health check, but that is not needed because you will run a health check later as part of the verification task.
OVA users: On your Management VM,
run:
python3 update_to_new_gateways.py -s <platform_api_endpoint> -u admin -p <password> -r admin/default-idp-1 -n default -api_manager_hostname <platform_api_endpoint> -silent
Kubernetes, OpenShift, and
Cloud Pak for Integration users:
- If the Management and Gateway subsystems are in the same
namespace:
python3 update_to_new_gateways.py -s <platform_api_endpoint> -u admin -p <password> -r admin/default-idp-1 -n <apic_namespace> -api_manager_hostname <platform_api_endpoint> -silent
- If the Management and Gateway subsystems are in different namespaces:
python3 update_to_new_gateways.py -s <platform_api_endpoint> -u admin -p <password> -r admin/default-idp-1 -mgmt_ns <mgmt_namespace> -gw_ns <gateway_namespace> -silent
- If you are using SSO authentication, replace:
-u USERNAME -p PASSWORD
with:
--sso
- Restore Portal backups for each site.
Note: If your source system has no portal services, skip this step. Also, skip this step if no portal sites are created for any of your catalogs.
If there is no change in your portal endpoints, then include the flag -s <platform API hostname> when you run the restore_portal_db.py script.
To restore the Portal sites on the target system, you can either use the script
restore_portal_db.py to restore all sites, or restore sites individually with
the restore_site
command.
Restoring all sites with the restore script:
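A hypothetical invocation of the restore script; only the -s flag is described in this topic, and the namespace flag is an assumption based on the other migration scripts in this directory, so check the script's own help output for the exact options:
# Hypothetical example; verify the supported flags before running
python3 restore_portal_db.py -n <apic_namespace> -s <platform_api_endpoint>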
Restoring sites individually, using restore_site. For OVA users, these steps should be run from the Portal VM:
- Identify the portal www pod:
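One way to identify it, assuming the pod name contains www as in the default portal pod naming (on OVA, run this on the Portal VM and omit the namespace flag):
kubectl get pods -n <portal_subsystem_namespace> | grep www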
- Exec into the portal www pod's admin container:
- OpenShift:
oc exec -it -n <portal_subsystem_namespace> <www_pod> -c admin -- bash
- Kubernetes:
kubectl exec -it -n <portal_subsystem_namespace> <www_pod> -c admin -- bash
- OVA:
kubectl exec -it <www_pod> -c admin -- bash
- List the site backups:
remote_transfer_backup -l
- Download each site backup; run this command for each site backup listed:
remote_transfer_backup -d <site_backup_filename>
- Restore each of the site backups:
restore_site -f -u <site_url> <site_backup_filename>
Post-restore verification
- Check that your v10 migration was successful and remove any v2018 subsystems or data that
you do not want: Verifying the v10 target deployment.