Installing a two data center deployment

How to install a two data center disaster recovery (2DCDR) deployment on VMware.

Before you begin

Ensure that you understand the concepts of a 2DCDR deployment in API Connect. For more information, see Two data center warm-standby deployment strategy on VMware.

Review the information in VMware deployment overview and requirements.

Familiarize yourself with the instructions in Preparing to install API Connect in VMware. Complete these instructions with the additional steps in this topic that are specific to installing a 2DCDR deployment.

Restriction: It is not possible to use the Automated API behavior testing application (Installing the Automated API behavior testing application) in a 2DCDR configuration.

About this task

Installing API Connect as a 2DCDR deployment is similar to a stand-alone installation: the deployment in each data center is effectively an instance of the same API Connect deployment. The following endpoints must be the same on both data centers (see the example after this list):
Management subsystem endpoints
  • platform-api
  • consumer-api
  • cloud-admin-ui
  • api-manager-ui
  • consumer-catalog-ui
Portal subsystem endpoints
  • portal-admin
  • portal-www

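The endpoint values are apicup subsystem properties that you set when you prepare each subsystem for deployment. The following commands are an illustrative sketch only (the hostnames are placeholders, and mgmt_dallas, mgmt_raleigh, port_dallas, and port_raleigh are the subsystem names that are used in the procedure later in this topic); the point is that each endpoint is set to the same value in both data centers:
apicup subsys set mgmt_dallas platform-api=platform-api.example.com
apicup subsys set port_dallas portal-www=portal-www.example.com
apicup subsys set mgmt_raleigh platform-api=platform-api.example.com
apicup subsys set port_raleigh portal-www=portal-www.example.com
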
To install a 2DCDR deployment, you follow the normal stand-alone API Connect installation steps, but before you generate the ISO files you must set some additional 2DCDR properties.

The 2DCDR properties are described in the following table.
Table 1. 2DCDR properties
Setting Description
multi-site-ha-enabled Indicates whether 2DCDR deployment is enabled. Set to true to enable 2DCDR. Set to false to disable 2DCDR. The default value is false.
multi-site-ha-mode The 2DCDR mode of the data center. Possible values are:
  • active - indicates the primary data center.
  • passive - indicates the warm-standby data center.
management-replication Required only for the management subsystem. The external ingress name for the management subsystem in the current data center in the 2DCDR deployment. The name is a unique fully qualified hostname that the other data center uses to communicate with the current data center.
portal-replication Required only for the portal subsystem. The external ingress name for the portal subsystem in the current data center in the 2DCDR deployment. The name is a unique fully qualified hostname that the other data center uses to communicate with the current data center.
replication-peer-fqdn The ingress hostname for the other data center in the 2DCDR deployment. This information is required so that the two data centers can communicate with each other.
site-name Unique descriptive name for the API Connect data center. Must be different in each data center. This name is used in the hostnames, and so can contain only a-z and 0-9 characters. You can set site-name only when the subsystems are first deployed, and you cannot change the name after deployment. If you do not set site-name at first deployment, it is autogenerated.
Important: Use a single project directory for all subsystems so that the API Connect deployments in both data centers have the same certificate chains. If network access to both data centers from the same project directory is not possible, you can copy the project directory across to the other data center. If you are copying the project directory to your other data center, ensure that you keep backups of both project directories.
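
For example, one way to copy the project directory to the other data center (a sketch only, assuming SSH access between the apicup hosts; the directory path, user, and host name are placeholders):
# Keep a backup of the project directory before copying it
cp -a ~/apic-project ~/apic-project.bak
# Copy the project directory to the apicup host in the other data center
rsync -av ~/apic-project/ user@dc2-apicup-host:~/apic-project/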

Procedure

  • Installing on the active data center

    The following example shows how to configure the 2DCDR settings for deploying on the active data center (DC). In this example, the active data center is called dallas (DC1), and the warm-standby data center is called raleigh (DC2).

    1. Complete the steps in Preparing the management subsystem for deployment up to step 18 to configure the standard (non-2DCDR) settings for the management subsystem in Dallas (DC1).
    2. Configure the 2DCDR values to set Dallas to be active for the management subsystem in DC1.
      Run the following commands:
      apicup subsys set mgmt_dallas multi-site-ha-enabled=true
      apicup subsys set mgmt_dallas multi-site-ha-mode=active
      apicup certs set mgmt_dallas management-replication-ingress --clear
      apicup subsys set mgmt_dallas management-replication=mgrreplicationdallas.cluster1.example.com
      apicup subsys set mgmt_dallas replication-peer-fqdn=mgrreplicationraleigh.cluster2.example.com
      apicup subsys set mgmt_dallas site-name=dallas
      where
      • mgmt_dallas is the name of the management subsystem that you are configuring.
      • mgrreplicationdallas.cluster1.example.com is the ingress hostname for the current data center in the two data center deployment.
      • mgrreplicationraleigh.cluster2.example.com is the ingress hostname for the other data center in the two data center deployment.
      • dallas is the name of the site in the current data center.
    3. Complete the remainder of the steps in Preparing the management subsystem for deployment from step 18 to verify that the configuration settings are valid, create your ISO file, and deploy the OVF template.
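      For reference, the validation and ISO-generation commands in that procedure typically take the following form (a sketch that uses the subsystem name from this example and a placeholder plan directory; follow the referenced topic for the exact steps):
      apicup subsys get mgmt_dallas --validate
      apicup subsys install mgmt_dallas --out mgmt_dallas-plan-out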
    4. Complete the steps in Preparing the portal subsystem for deployment up to step 21 to configure the standard (non-2DCDR) settings for the portal subsystem in Dallas (DC1).
    5. Configure the 2DCDR values to set Dallas to be active for the portal subsystem in DC1.
      Run the following commands:
      apicup subsys set port_dallas multi-site-ha-enabled=true
      apicup subsys set port_dallas multi-site-ha-mode=active
      apicup certs set port_dallas portal-replication-ingress --clear
      apicup subsys set port_dallas portal-replication=ptlreplicationdallas.cluster1.example.com
      apicup subsys set port_dallas replication-peer-fqdn=ptlreplicationraleigh.cluster2.example.com
      apicup subsys set port_dallas site-name=dallas
      where
      • port_dallas is the name of the portal subsystem that you are configuring.
      • ptlreplicationdallas.cluster1.example.com is the ingress hostname for the current data center in the two data center deployment.
      • ptlreplicationraleigh.cluster2.example.com is the ingress hostname for the other data center in the two data center deployment.
      • dallas is the name of the site in the current data center.
    6. Complete the remainder of the steps in Preparing the portal subsystem for deployment from step 21 to verify that the configuration settings are valid, create your ISO file, and deploy the OVF template.
    7. Check the status of the installation by running:
      apicup subsys status subsystem_name
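      For example, to check both subsystems in Dallas:
      apicup subsys status mgmt_dallas
      apicup subsys status port_dallas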
    8. Set your dynamic router to direct all traffic to DC1. The endpoints that must be directed to DC1 are:
      Management subsystem endpoints
      • platform-api
      • consumer-api
      • cloud-admin-ui
      • api-manager-ui
      • consumer-catalog-ui
      Portal subsystem endpoints
      • portal-admin
      • portal-www
  • Installing on the warm-standby data center

    The following example shows how to set the 2DCDR values for the remote data center called raleigh (DC2), which is being installed as the warm-standby location.

    Important: Remember to use the same project directory as for the active data center. If you cannot use the same project directory because its location does not have network access to both data centers, you can copy the project directory over. Ensure that you keep backups of the project directories in both data centers.
    1. Complete the steps in Preparing the management subsystem for deployment up to step 18 to configure the standard (non-2DCDR) settings for the management subsystem in Raleigh (DC2).
    2. Configure the 2DCDR values to set the management subsystem to be the warm-standby in the Raleigh (DC2) data center.
      Run the following commands:
      apicup subsys set mgmt_raleigh multi-site-ha-enabled=true
      apicup subsys set mgmt_raleigh multi-site-ha-mode=passive
      apicup certs set mgmt_raleigh management-replication-ingress --clear
      apicup subsys set mgmt_raleigh management-replication=mgrreplicationraleigh.cluster2.example.com
      apicup subsys set mgmt_raleigh replication-peer-fqdn=mgrreplicationdallas.cluster1.example.com
      apicup subsys set mgmt_raleigh site-name=raleigh
      where
      • mgmt_raleigh is the name of the management subsystem that you are configuring.
      • mgrreplicationraleigh.cluster2.example.com is the ingress hostname for the current data center in the two data center deployment.
      • mgrreplicationdallas.cluster1.example.com is the ingress hostname for the other data center in the two data center deployment.
      • raleigh is the name of the site in the current data center.
    3. Set the encryption key manually for Raleigh (DC2) by copying it from Dallas (DC1).
      Run the following commands:
      apicup certs get mgmt_dallas encryption-secret -t key > mgmt-encryption-secret
      apicup certs set mgmt_raleigh encryption-secret mgmt-encryption-secret
    4. Check that the management encryption key is set the same on both DC1 and DC2.
      Run the following commands, and verify that the outputs match:
      apicup certs get mgmt_dallas encryption-secret -t key
      apicup certs get mgmt_raleigh encryption-secret -t key
    5. Complete the remainder of the steps in Preparing the management subsystem for deployment from step 18 to verify that the configuration settings are valid, create your ISO file, and deploy the OVF template.
    6. Complete the steps in Preparing the portal subsystem for deployment up to step 21 to configure the standard (non-2DCDR) settings for the portal subsystem in Raleigh (DC2).
    7. Configure the 2DCDR values to set Raleigh to be warm-standby for the portal subsystem on DC2.
      Run the following commands:
      apicup subsys set port_raleigh multi-site-ha-enabled=true
      apicup subsys set port_raleigh multi-site-ha-mode=passive
      apicup certs set port_raleigh portal-replication-ingress --clear
      apicup subsys set port_raleigh portal-replication=ptlreplicationraleigh.cluster2.example.com
      apicup subsys set port_raleigh replication-peer-fqdn=ptlreplicationdallas.cluster1.example.com
      apicup subsys set port_raleigh site-name=raleigh
      where
      • port_raleigh is the name of the portal subsystem that you are configuring.
      • ptlreplicationraleigh.cluster2.example.com is the ingress hostname for the current data center in the two data center deployment.
      • ptlreplicationdallas.cluster1.example.com is the ingress hostname for the other data center in the two data center deployment.
      • raleigh is the name of the site in the current data center.
    8. Copy the encryption key for the portal subsystem on DC1 over to the portal subsystem on DC2.
      Run the following command to get the encryption key for the portal subsystem on DC1, and put it into a file that is called port-encryption-secret:
      apicup certs get port_dallas encryption-secret -t key > port-encryption-secret
      Run the following command to use the port-encryption-secret file to set the same encryption key on DC2:
      apicup certs set port_raleigh encryption-secret port-encryption-secret
      Where port_dallas is the name of the portal subsystem on DC1, and port_raleigh is the name of the portal subsystem on DC2.
      To check that the encryption key is set the same on both DC1 and DC2, you can run the following commands and verify that the outputs match:
      apicup certs get port_dallas encryption-secret -t key
      apicup certs get port_raleigh encryption-secret -t key
    9. Complete the remainder of the steps in Preparing the portal subsystem for deployment from step 21 to verify that the configuration settings are valid, create your ISO file, and deploy the OVF template.
    10. Check the status of the installation by running:
      apicup subsys status subsystem_name
  • Converting a single data center deployment to two data centers
    Important:
    • Use the existing deployment as the active site. The new site must be set to warm-standby. If you set your existing deployment to warm-standby, all of your management and portal data is erased.
    • The ingress endpoint values from the existing deployment become the global ingress endpoints for the two data center deployment. The following endpoints must be set to the same values on your new deployment as on your existing deployment:
      Management subsystem endpoints
      • platform-api
      • consumer-api
      • cloud-admin-ui
      • api-manager-ui
      • consumer-catalog-ui
      Portal subsystem endpoints
      • portal-admin
      • portal-www
    • Configure your network so that the endpoints can be routed to either data center, depending on which data center is active.
    • You cannot configure a site-name for the existing management or portal cluster because the site-name property can be configured only at first deployment.

    The following example shows how to update the 2DCDR values to make the existing data center, Dallas, the active site, and how to install the new remote data center, Raleigh, as the warm-standby.

    1. Configure the 2DCDR values to set Dallas to be active for the management subsystem on DC1.
      Run the following commands:
      apicup subsys set mgmt_dallas multi-site-ha-enabled=true
      apicup subsys set mgmt_dallas multi-site-ha-mode=active
      apicup certs set mgmt_dallas management-replication-ingress --clear
      apicup subsys set mgmt_dallas management-replication=mgrreplicationdallas.cluster1.example.com
      apicup subsys set mgmt_dallas replication-peer-fqdn=mgrreplicationraleigh.cluster2.example.com
      where
      • mgmt_dallas is the name of the management subsystem that you are configuring.
      • mgrreplicationdallas.cluster1.example.com is the ingress hostname for the current data center in the two data center deployment.
      • mgrreplicationraleigh.cluster2.example.com is the ingress hostname for the other data center in the two data center deployment.
      Note: As the management subsystem already exists, you cannot set the site-name property.
    2. Configure the 2DCDR values to set Dallas to be active for the portal subsystem on DC1.
      Run the following commands:
      apicup subsys set port_dallas multi-site-ha-enabled=true
      apicup subsys set port_dallas multi-site-ha-mode=active
      apicup certs set port_dallas portal-replication-ingress --clear
      apicup subsys set port_dallas portal-replication=ptlreplicationdallas.cluster1.example.com
      apicup subsys set port_dallas replication-peer-fqdn=ptlreplicationraleigh.cluster2.example.com
      where
      • port_dallas is the name of the portal subsystem that you are configuring.
      • ptlreplicationdallas.cluster1.example.com is the ingress hostname for the current data center in the two data center deployment.
      • ptlreplicationraleigh.cluster2.example.com is the ingress hostname for the other data center in the two data center deployment.
      Note: As the portal subsystem already exists, you cannot set the site-name property.
    3. Run apicup to update the settings in each subsystem, for example:
      apicup subsys install subsystem_name
      where subsystem_name is the name of the management or portal subsystem that you want to update, for example mgmt_dallas.
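      For example, to apply the updated 2DCDR settings to both subsystems in Dallas:
      apicup subsys install mgmt_dallas
      apicup subsys install port_dallas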
    4. Check the status of the installation by running:
      apicup subsys status subsystem_name
    5. Add the new data center, Raleigh (DC2), by completing the steps in Preparing the management subsystem for deployment up to step 18 to create a management subsystem in Raleigh (DC2).
      Note:
      • Remember to use the same project directory as for the active data center.
      • Remember to set the same endpoints as for DC1:
        Management subsystem endpoints
        • platform-api
        • consumer-api
        • cloud-admin-ui
        • api-manager-ui
        • consumer-catalog-ui
        Portal subsystem endpoints
        • portal-admin
        • portal-www
    6. Configure the 2DCDR values to set Raleigh to be warm-standby for the management subsystem on DC2.
      Run the following commands:
      apicup subsys set mgmt_raleigh multi-site-ha-enabled=true
      apicup subsys set mgmt_raleigh multi-site-ha-mode=passive
      apicup certs set mgmt_raleigh management-replication-ingress --clear
      apicup subsys set mgmt_raleigh management-replication=mgrreplicationraleigh.cluster2.example.com
      apicup subsys set mgmt_raleigh replication-peer-fqdn=mgrreplicationdallas.cluster1.example.com
      apicup subsys set mgmt_raleigh site-name=raleigh
      where
      • mgmt_raleigh is the name of the management subsystem that you are configuring.
      • mgrreplicationraleigh.cluster2.example.com is the ingress hostname for the current data center in the two data center deployment.
      • mgrreplicationdallas.cluster1.example.com is the ingress hostname for the other data center in the two data center deployment.
      • raleigh is the name of the site in the current data center.
      Note: As this is a new deployment of the management subsystem, you can set the site-name property.
    7. Set the encryption key manually for Raleigh (DC2) by copying it from Dallas (DC1).
      Run the following commands:
      apicup certs get mgmt_dallas encryption-secret -t key > mgmt-encryption-secret
      apicup certs set mgmt_raleigh encryption-secret mgmt-encryption-secret
    8. Check that the management encryption key is set the same on both DC1 and DC2.
      Run the following commands, and verify that the outputs match:
      apicup certs get mgmt_dallas encryption-secret -t key
      apicup certs get mgmt_raleigh encryption-secret -t key
    9. Complete the remainder of the steps in Preparing the management subsystem for deployment from step 18 to verify that the configuration settings are valid, create your ISO file, and deploy the OVF template.
    10. Complete the steps in Preparing the portal subsystem for deployment up to and including step 21 to create a portal subsystem in Raleigh (DC2).
    11. Configure the 2DCDR values to set Raleigh to be warm-standby for the portal subsystem on DC2.
      Run the following commands:
      apicup subsys set port_raleigh multi-site-ha-enabled=true
      apicup subsys set port_raleigh multi-site-ha-mode=passive
      apicup certs set port_raleigh portal-replication-ingress --clear
      apicup subsys set port_raleigh portal-replication=ptlreplicationraleigh.cluster2.example.com
      apicup subsys set port_raleigh replication-peer-fqdn=ptlreplicationdallas.cluster1.example.com
      apicup subsys set port_raleigh site-name=raleigh
      where
      • port_raleigh is the name of the portal subsystem that you are configuring.
      • ptlreplicationraleigh.cluster2.example.com is the ingress hostname for the current data center in the two data center deployment.
      • ptlreplicationdallas.cluster1.example.com is the ingress hostname for the other data center in the two data center deployment.
      • raleigh is the name of the site in the current data center.
    12. Run the following command to generate the default portal certificates on the warm-standby site:
      apicup certs generate port_raleigh
    13. Copy the encryption key for the portal subsystem on DC1 over to the portal subsystem on DC2.
      Run the following command to get the encryption key for the portal subsystem on DC1, and put it into a file that is called port-encryption-secret:
      apicup certs get port_dallas encryption-secret -t key > port-encryption-secret
      Run the following command to use the port-encryption-secret file to set the same encryption key on DC2:
      apicup certs set port_raleigh encryption-secret port-encryption-secret
      Where port_dallas is the name of the portal subsystem on DC1, and port_raleigh is the name of the portal subsystem on DC2.
      To check that the encryption key is set the same on both DC1 and DC2, you can run the following commands and verify that the outputs match:
      apicup certs get port_dallas encryption-secret -t key
      apicup certs get port_raleigh encryption-secret -t key
    14. Complete the remainder of the steps in Preparing the portal subsystem for deployment from step 21 to verify that the configuration settings are valid, create your ISO file, and deploy the OVF template.
    15. Check the status of the installation by running:
      apicup subsys status subsystem_name
    16. Set your dynamic router to direct all traffic to DC1. The endpoints that must be directed to DC1 are:
    Management subsystem endpoints
    • platform-api
    • consumer-api
    • cloud-admin-ui
    • api-manager-ui
    • consumer-catalog-ui
    Portal subsystem endpoints
    • portal-admin
    • portal-www
  • Validating your two data center deployment
    Use the apicup subsys health-check command to confirm that the management subsystems are ready and synchronized:
    apicup subsys health-check <management subsystem>
    If the management subsystem is healthy, the health-check command returns silently with no output.
    If you want to see more information in the health-check output, add the -v flag:
    apicup subsys health-check <management subsystem> -v
    ok(true) - Expected apicup version(10.0.8.0) to match apic version(10.0.8.0)
    ...
    ok(true) - ManagementCluster (specified multi site ha mode: active, current ha mode: active, ha status: Ready, ha message: Active DB is ready) is Ready | State: 17/17 | Phase: Running | Message: All services ready. HA status Ready - see HAStatus in CR for details
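    You can run the same health check against the portal subsystems, for example, using the portal subsystem name from this topic:
    apicup subsys health-check port_dallas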

Example

Example of the management subsystem values that you can view by running the apicup subsys get mgmt_name command:
apicup subsys get mgmt_dallas
Appliance settings                                                                                                                            
==================                                                                                                                            
                                                                                                                                              
Name                          Value                                                                                                            Description 
----                          -----                                                                                                            ------
additional-cloud-init-file                                                                                                                     (Optional) Path to additional cloud-init yml file 
data-device                   sdb                                                                                                              VM disk device (usually `sdb` for SCSI or `vdb` for VirtIO) 
default-password              $6$rounds=4096$vtcqpAVK$dzqrOeYP33WTvTug38Q4Rld5l8TmdQgezzTnkX/PFwkzTiZ2S0CqNRr1S4b08tOc4p.OEg4BtzBe/r8RAk.gW/   (Optional) Console login password for `apicadm` user, password must be pre-hashed 
dns-servers                   [8.8.8.8]                                                                                                        List of DNS servers 
extra-values-file                                                                                                                              (Optional) Path to additional configuration yml file 
k8s-pod-network               172.16.0.0/16                                                                                                    (Optional) CIDR for pods within the appliance 
k8s-service-network           172.17.0.0/16                                                                                                    (Optional) CIDR for services within the appliance 
public-iface                  eth0                                                                                                             Device for API/UI traffic (Eg: eth0) 
search-domain                 [subnet1.example.com]                                                                                            List for DNS search domains 
ssh-keyfiles                  [/home/vsphere/.ssh/id_rsa.pub]                                                                                  List of SSH public keys files
traffic-iface                 eth0                                                                                                             Device for cluster traffic (Eg: eth0) 
                                                                                                                                              
                                                                                                                                              
Subsystem settings                                                                                                                            
==================                                                                                                                            
                                                                                                                                              
Name                          Value                                                                                                            Description 
----                          -----                                                                                                            ------
deployment-profile            n1xc4.m16                                                                                                        Deployment profile (n1xc2.m16/n3xc4.m16) for analytics, (n1xc4.m16/n3xc4.m16) for management, (n1xc2.m8/n3xc4.m8) for portal 
license-use                   production                                                                                                       License use (production/nonproduction) 
multi-site-ha-enabled         true                                                                                                            Multi site HA enabled 
multi-site-ha-mode            active                                                                                                           Multi site HA mode (active/passive) 
replication-peer-fqdn         mgrreplicationraleigh.cluster2.example.com                                                                                                                 Replication peer fully qualified name (replication endpoint of active mode site) 
site-name                     dallas                                                                                                                 Site name, used in k8s resource names 
test-and-monitor-enabled      false                                                                                                            Test and Monitor enabled 
                                                                                                                                              
                                                                                                                                              
Endpoints                                                                                                                                     
=========                                                                                                                                     
                                                                                                                                              
Name                          Value                                                                                                            Description 
----                          -----                                                                                                            ------
api-manager-ui                api-manager-ui.testsrv0231.subnet1.example.com                                                                   FQDN of API manager UI endpoint 
cloud-admin-ui                cloud-admin-ui.testsrv0231.subnet1.example.com                                                                   FQDN of Cloud admin endpoint 
consumer-api                  consumer-api.testsrv0231.subnet1.example.com                                                                     FQDN of consumer API endpoint 
hub                                                                                                                                            FQDN of Test and Monitor hub endpoint, only required if Test and Monitor is enabled 
management-replication        mgrreplicationdallas.cluster1.example.com                                                                                                                 FQDN of Management replication endpoint, only required if HA is enabled 
platform-api                  platform-api.testsrv0231.subnet1.example.com                                                                     FQDN of platform API endpoint 
Example of the portal subsystem values that you can view by running the apicup subsys get port_name command:
apicup subsys get port_dallas
Appliance settings                                                                                                                            
==================                                                                                                                            
                                                                                                                                              
Name                          Value                                                                                                            Description 
----                          -----                                                                                                            ------
additional-cloud-init-file                                                                                                                     (Optional) Path to additional cloud-init yml file 
data-device                   sdb                                                                                                              VM disk device (usually `sdb` for SCSI or `vdb` for VirtIO) 
default-password              $6$rounds=4096$vtcqpAVK$dzqrOeYP33WTvTug38Q4Rld5l8TmdQgezzTnkX/PFwkzTiZ2S0CqNRr1S4b08tOc4p.OEg4BtzBe/r8RAk.gW/   (Optional) Console login password for `apicadm` user, password must be pre-hashed 
dns-servers                   [192.168.1.201]                                                                                                  List of DNS servers 
extra-values-file                                                                                                                              (Optional) Path to additional configuration yml file 
k8s-pod-network               172.16.0.0/16                                                                                                    (Optional) CIDR for pods within the appliance 
k8s-service-network           172.17.0.0/16                                                                                                    (Optional) CIDR for services within the appliance 
public-iface                  eth0                                                                                                             Device for API/UI traffic (Eg: eth0) 
search-domain                 [subnet1.example.com]                                                                                            List for DNS search domains 
ssh-keyfiles                  [/home/vsphere/.ssh/id_rsa.pub]                                                                                  List of SSH public keys files 
traffic-iface                 eth0                                                                                                             Device for cluster traffic (Eg: eth0) 
                                                                                                                                              
                                                                                                                                              
Subsystem settings                                                                                                                            
==================                                                                                                                            
                                                                                                                                              
Name                          Value                                                                                                            Description 
----                          -----                                                                                                            ------
deployment-profile            n1xc2.m8                                                                                                         Deployment profile (n1xc2.m16/n3xc4.m16) for analytics, (n1xc4.m16/n3xc4.m16) for management, (n1xc2.m8/n3xc4.m8) for portal 
license-use                   production                                                                                                    License use (production/nonproduction) 
multi-site-ha-enabled         true                                                                                                            Multi site HA enabled 
multi-site-ha-mode            active                                                                                                           Multi site HA mode (active/passive) 
replication-peer-fqdn         ptlreplicationraleigh.cluster2.example.com                                                                                                                 Replication peer fully qualified name (replication endpoint of active mode site) 
site-name                     dallas                                                                                                                 Site name, used in k8s resource names 
                                                                                                                                              
                                                                                                                                              
Endpoints                                                                                                                                     
=========                                                                                                                                     
                                                                                                                                              
Name                          Value                                                                                                            Description 
----                          -----                                                                                                            ------
portal-admin                  portal-api.example.com                                                                                           FQDN of Portal admin endpoint 
portal-replication            ptlreplicationdallas.cluster1.example.com                                                                                                                 FQDN of Portal replication endpoint, only required if HA is enabled 
portal-www                    portal-www.example.com                                                                                           FQDN of Portal web endpoint