Installing the target system on Kubernetes, OpenShift, and Cloud Pak for Integration
On the target system, configure a cluster, create the necessary secrets, and install API Connect.
Before you begin
- Complete Preparing the source system.
- Ensure that any network route between the target installation and the source system is disabled. The target system must not be able to communicate with the source system.
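For example, you can spot-check from the target side that the source system is unreachable. The host name in this sketch is a placeholder for one of your source system endpoints:
# Run from the target cluster side; the request should fail or time out
# if the network route to the source system is correctly disabled.
curl -k -m 5 https://source-mgmt.example.com || echo "No route to source system (expected)"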
About this task
Run the create_secrets_in_target.py script to create the necessary secrets. If necessary, shut down the Postgres database, and then install API Connect, either manually or by running the install_apic_on_ocp.py script.
Procedure
- Configure a target cluster, either Kubernetes, OpenShift, or Cloud Pak for Integration, where you intend to migrate the source system data.
  - Note that this configuration is for the cluster only. No API Connect software is installed yet.
- Make sure KUBECONFIG is set to this target cluster, so you can access the cluster using oc or kubectl, as shown in the following example.
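A minimal sketch of pointing the CLI at the target cluster and verifying access; the kubeconfig path is an example:
# Point oc/kubectl at the target cluster (path is a placeholder)
export KUBECONFIG=/path/to/target-cluster-kubeconfig
# Confirm that the target cluster responds
oc get nodes
# On plain Kubernetes, use: kubectl get nodes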
- Create the namespaces in the target cluster where you want the API Connect subsystems to run, as shown in the example after this list.
  - The namespaces can be different from the source system values.
  - If you have additional subsystems to be installed in different namespaces, create those namespaces also.
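For example, assuming hypothetical namespace names mgmt-ns and ptl-ns:
# Create the target namespaces (names are examples; substitute your own)
oc create namespace mgmt-ns
oc create namespace ptl-ns
# On plain Kubernetes, use: kubectl create namespace <name>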
- Run create_secrets_in_target.py to create the following secrets:
  - Database backup secrets for the Management and Portal subsystems
  - Application credentials
  - Encryption secrets from the source system
  After the script completes, you can verify the created secrets, as shown in the sketch that follows the usage examples.
  Usage examples:
  - If both Management and Portal are in the same namespace, or if Portal is not installed:
    python3 create_secrets_in_target.py -n NAMESPACE
  - If Management and Portal are in different namespaces:
    python3 create_secrets_in_target.py -mgmt_ns <MGMT_NAMESPACE> -ptl_ns <PTL_NAMESPACE>
  - If you are creating multiple Portal subsystems in the target API Connect system to match the configuration of the source system, you must run the script once for each Portal subsystem from the source system.
    The script detects that there are two Portal subsystems in the source system, and prompts you to select the subsystem name that matches the target Portal, as shown in the following commands:
- For Management and first Portal on the target system:
python3 create_secrets_in_target.py -mgmt_ns <MGMT_NAMESPACE> -ptl_ns <PTL_NAMESPACE1>
- For the second Portal on the target system:
python3 create_secrets_in_target.py -skip_mgmt -ptl_ns <PTL_NAMESPACE2>
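After the script completes, you can optionally confirm that the expected secrets exist in the target namespaces. The name patterns in this sketch follow the example top level CR later in this topic (enc-key, backup-secret, -cred); your secret names may differ:
# List migration-related secrets in the Management namespace
oc get secrets -n <MGMT_NAMESPACE> | grep -E 'enc-key|backup-secret|-cred'
# Repeat for the Portal namespace if it is different
oc get secrets -n <PTL_NAMESPACE> | grep -E 'enc-key|backup-secret'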
- If applicable to your deployment, shut down the Postgres database on your source system.
  You can shut down the Postgres database now, to ensure that there are no delta changes to your source system while the remainder of the migration process takes place.
- Launch the target API Connect system.
  - Review the prerequisites:
    - Have a new OpenShift cluster ready, with enough resources (CPU, memory, and disk) for the installation profile that you will use.
    - As needed, create an entitlement key to download the images related to API Connect, as shown in the example after this list.
    - The target system must match the topology (number of Gateways, Portals, and Analytics) of the source system, to ensure that all of the data in the Management database is migrated. If not all of the data is migrated, the stale (not migrated) data must be deleted manually. Alternatively, you can attempt the remaining migration later, after you install the additional Portals and Gateways. See v10 migration planning.
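For example, one common way to register an entitlement key is as a docker-registry pull secret; the secret name, server, and namespace here are typical values, not authoritative ones, so check the deployment instructions for your platform for the exact requirements:
# Register the IBM entitlement key as an image pull secret (values are typical examples)
oc create secret docker-registry ibm-entitlement-key \
  --docker-username=cp \
  --docker-password=<entitlement-key> \
  --docker-server=cp.icr.io \
  -n NAMESPACE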
  - Launch the target API Connect system, either manually or by using the install_apic_on_ocp.py script:
    - To manually launch from the OpenShift cluster UI:
      This method is the recommended way to launch the target API Connect system for production systems. It is also used for advanced customization, or when you use custom certificates while launching the target API Connect system.
- Install and launch the target API Connect cluster. See the instructions for your deployment type:
- API Connect on OpenShift: Deploying on OpenShift and Cloud Pak for Integration
- Cloud Pak for Integration: Installation procedures with IBM Cloud Pak for Integration
- Before you create the top level CR, use the yaml view to add the properties that you saved from the source system configuration.
  For the Management subsystem, the saved configuration is in data/config.yaml. For the Portal subsystem, the saved configuration is in data/config_portal.yaml.
  Important: If you do not use the configuration from the source system, the migration fails.
  - Management subsystem properties:
- encryptionSecret
- Management subsystem name
- site name
- database backup credentials
- custom application credentials
- originalUID
- Portal subsystem properties:
- encryptionSecret
- site name
- database backup credentials
- originalUID
Note: If the target system is Kubernetes, the properties must be in the Management and Portal subsystem CRs.
- The top level CR must look like the following example. The values for this top level CR come from the generated artifacts (data/config.yaml and data/config_portal.yaml) after running Preparing the source system. Note that the example shows a value for spec.management.name; you can obtain this value from config.yaml.
apiVersion: apiconnect.ibm.com/v1beta1
kind: APIConnectCluster
metadata:
  labels:
    app.kubernetes.io/instance: apiconnect
    app.kubernetes.io/managed-by: ibm-apiconnect
    app.kubernetes.io/name: apiconnect-minimum
  name: c1
  namespace: test1
spec:
  analytics:
    storage:
      enabled: true
      type: unique
  license:
    accept: true
    license: SOME_KEY
    use: nonproduction
  management:
    customApplicationCredentials:
    - name: atm-cred
      secretName: management-atm-cred
    - name: ccli-cred
      secretName: management-ccli-cred
    - name: cui-cred
      secretName: management-cui-cred
    - name: dsgr-cred
      secretName: management-dsgr-cred
    - name: juhu-cred
      secretName: management-juhu-cred
    - name: cli-cred
      secretName: management-cli-cred
    - name: ui-cred
      secretName: management-ui-cred
    databaseBackup:
      credentials: ftp-mgmt-backup-secret
      host: SFTP_SERVER_HOST
      path: /root/backup/v10_management
      port: 22
      protocol: sftp
      restartDB:
        accept: false
    encryptionSecret:
      secretName: management-enc-key
    name: management
    originalUID: d435b534-ce93-463a-9005-692c2af94df7
    siteName: be023200
  portal:
    encryptionSecret:
      secretName: portal1-enc-key
    originalUID: 1adc672a-0e77-4890-b476-c41fcedb0b64
    portalBackup:
      credentials: ftp-mgmt-backup-secret
      host: 1.2.3.4
      path: /root/backup/portal1
      port: 22
      protocol: sftp
    siteName: cb06099f
  profile: n1xc7.m48
  storageClassName: rook-ceph-block
  version: 10.0.8.0
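Before you create the CR, it can help to cross-check these values against the artifacts saved from the source system; a minimal sketch, assuming the data directory from Preparing the source system is in your current directory:
# Pull the values that must match the source system out of the saved artifacts
grep -E 'originalUID|siteName|encryptionSecret' data/config.yaml
grep -E 'originalUID|siteName|encryptionSecret' data/config_portal.yaml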
- You can also run the install_apic_on_ocp.py script with the -no_install flag to generate the top level CR yaml, and then use that yaml in the OpenShift UI to customize the deployment further.
- To launch by using the install_apic_on_ocp.py script:
  The script launches API Connect on OpenShift, or API Connect with Cloud Pak for Integration, by using a top level CR that specifies one of each subsystem type:
  - Management
  - Developer Portal
  - Analytics
  - API Gateway
  Prerequisite:
  - The OpenShift cluster must be ready, and accessible using oc.
  Important: The script launches only the specific version of API Connect that corresponds to the downloaded version of the artifacts.
Usage:
- Run the script with the --help option to show the additional available flags, such as -production and -profile.
- License use is set to nonproduction by default, unless the -production flag is used when you run the script.
- The profile defaults to the one replica profile n1xc7.m48; to specify a different profile, use the -profile flag. For details of the available profiles, see API Connect deployment profiles for OpenShift and Cloud Pak for Integration. The one replica profile deploys 1 replica of each pod, and is only for light, non-critical workloads such as basic development and testing.
- Optionally, you can use the -no_install flag to generate the yaml files (the top level CR and the other files that are needed in the installation) without installing. In the following example commands, add the -no_install flag to the existing command parameters. You can then use the generated top level CR from the OpenShift UI to create an API Connect cluster.
- Note: In a CP4I environment, check that the configurator job is completed, and also check the output of:
oc get apiconnectcluster -n NAMESPACE
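A sketch of those CP4I checks; the assumption here is that the configurator job name is derived from the top level CR name, so adjust the lookup for your deployment:
# The top level CR should report Ready when the installation completes
oc get apiconnectcluster -n NAMESPACE
# Find the configurator job and confirm that it shows COMPLETIONS 1/1
oc get jobs -n NAMESPACE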
Example command invocations:
Note: Each of the example commands includes the parameters for a three replica profile deployment: -production -profile n3xc16.m48. If you have a one replica profile, omit these parameters.
- API Connect on OpenShift:
- If the apiconnect and DataPower operators must be installed in all
namespaces:
python3 install_apic_on_ocp.py -license LICENSE_VALUE -n NAMESPACE -name topCRName -storageclass_apic STORAGE_CLASS_NAME -production -profile n3xc16.m48
- If the apiconnect and DataPower operators must be installed in a specific
namespace:
python3 install_apic_on_ocp.py -license LICENSE_VALUE -n NAMESPACE -name topCRName -storageclass_apic STORAGE_CLASS_NAME -operator_in_specific_namespace -production -profile n3xc16.m48
- API Connect in Cloud Pak for Integration:
- If operators are in all
namespaces:
python3 install_apic_on_ocp.py -license LICENSE_VALUE1 -n NAMESPACE -name topCRName -storageclass_apic STORAGE_CLASS_NAME1 -cp4i -production -profile n3xc16.m48
- If operators are in a specific
namespace:
python3 install_apic_on_ocp.py -license LICENSE_VALUE1 -n NAMESPACE -name topCRName -storageclass_apic STORAGE_CLASS_NAME1 -cp4i -operator_in_specific_namespace -production -profile n3xc16.m48
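Whichever launch method you use, you can watch the installation until the top level CR reports a Ready status. NAMESPACE and topCRName are the same values used in the example commands above:
# Watch the top level CR status until the installation completes
oc get apiconnectcluster topCRName -n NAMESPACE -w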
- Optional: Install any additional Gateways, Developer Portals, or Analytics subsystems that are needed.
  - Complete this step now if your target system will have additional Gateways, Developer Portals, or Analytics subsystems to match the source API Connect system, and you want to complete the data migration for all subsystems in one pass through the procedures in Restoring the target system on Kubernetes, OpenShift, and Cloud Pak for Integration.
  - Skip this step now, and proceed to step 7, if either of the following cases is true:
    - Your target system will not need any additional Gateways, Developer Portals, or Analytics subsystems.
    - Your target system will have additional Gateways, Developer Portals, or Analytics subsystems, but you want to first complete the data migration based on the top CR installation (1 Management, 1 Portal, 1 Gateway, 1 Analytics) before you install any additional subsystems.
  Note: To review the data migration choices, see v10 migration planning.
  If you choose to complete this step now, see:
- Next, restore the configuration from the source system. Continue with Restoring the target system on Kubernetes, OpenShift, and Cloud Pak for Integration.