Installing a two data center deployment
How to install a two data center disaster recovery (2DCDR) deployment on VMware.
Before you begin
Ensure that you understand the concepts of a 2DCDR deployment in API Connect. For more information, see Two data center warm-standby deployment strategy on VMware.
Review the information in VMware deployment overview and requirements.
Familiarize yourself with the instructions in Preparing to install API Connect in VMware. Complete these instructions with the additional steps in this topic that are specific to installing a 2DCDR deployment.
Restriction: It is not possible to use the Automated API behavior testing application (see Installing the Automated API behavior testing application) in a 2DCDR configuration.
About this task
Installing API Connect as a 2DCDR deployment is similar to a stand-alone installation; the deployment in each data center is effectively an instance of the same API Connect deployment. The following endpoints must be the same in both data centers (a sketch of setting them follows the list):
- Management subsystem endpoints:
  - `platform-api`
  - `consumer-api`
  - `cloud-admin-ui`
  - `api-manager-ui`
  - `consumer-catalog-ui`
- Portal subsystem endpoints:
  - `portal-admin`
  - `portal-www`
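As a minimal sketch, assuming the standard `apicup subsys set` syntax, the same endpoint values would be configured in each data center. The subsystem name and hostnames here are placeholders, not values from this topic:

```
# Run with identical values in data center 1 and data center 2.
# "mgmt" and the *.example.com hostnames are placeholders for this sketch.
apicup subsys set mgmt \
  platform-api=platform-api.example.com \
  consumer-api=consumer-api.example.com \
  cloud-admin-ui=cloud-admin-ui.example.com \
  api-manager-ui=api-manager-ui.example.com
# Set the remaining endpoints (consumer-catalog-ui, and portal-admin and
# portal-www on the portal subsystem) in the same way.
```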
To install a 2DCDR deployment, you follow the normal stand-alone API Connect installation steps, but before you generate the ISO files you must set some additional 2DCDR properties. These properties are described in the following table; an example of setting them with `apicup` follows the table.
| Setting | Description |
|---|---|
| `multi-site-ha-enabled` | Indicates whether 2DCDR deployment is enabled. Set to `true` to enable 2DCDR, or `false` to disable it. The default value is `false`. |
| `multi-site-ha-mode` | The 2DCDR mode of the data center. Possible values are `active`, for the active data center, and `passive`, for the warm-standby data center. |
| `management-replication` | Required only for the management subsystem. The external ingress name for the management subsystem in the current data center in the 2DCDR deployment. The name is a unique fully qualified hostname that the other data center uses to communicate with the current data center. |
| `portal-replication` | Required only for the portal subsystem. The external ingress name for the portal subsystem in the current data center in the 2DCDR deployment. The name is a unique fully qualified hostname that the other data center uses to communicate with the current data center. |
| `replication-peer-fqdn` | The ingress hostname for the other data center in the 2DCDR deployment. This information is required so that the two data centers can communicate with each other. |
| `site-name` | Unique descriptive name for the API Connect data center. Must be different in each data center. This name is used in the hostnames, and so can contain only a-z and 0-9 characters. You can set `site-name` only when the subsystems are first deployed, and you cannot change the name after deployment. If you don't set `site-name` at first deployment, the name is autogenerated. |
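A minimal sketch of setting these properties with `apicup subsys set` before the ISO files are generated; the subsystem names, site names, and hostnames mirror the example output later in this topic and are illustrative only:

```
# Active data center (dallas): management subsystem.
apicup subsys set mgmt_dallas \
  multi-site-ha-enabled=true \
  multi-site-ha-mode=active \
  management-replication=mgrreplicationdallas.cluster1.example.com \
  replication-peer-fqdn=mgrreplicationraleigh.cluster2.example.com \
  site-name=dallas

# Active data center (dallas): portal subsystem.
apicup subsys set port_dallas \
  multi-site-ha-enabled=true \
  multi-site-ha-mode=active \
  portal-replication=ptlreplicationdallas.cluster1.example.com \
  replication-peer-fqdn=ptlreplicationraleigh.cluster2.example.com \
  site-name=dallas
```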
Important: Use a single project directory for all subsystems so that the API Connect deployments in
both data centers have the same certificate chains. If network access to both data centers from the
same project directory is not possible, you can copy the project directory across to the other data
center. If you are copying the project directory to your other data center, ensure that you keep
backups of both project directories.
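For example, one possible way to back up and copy the project directory to the other data center; the directory name, user, and host are placeholders:

```
# Keep a dated backup of the project directory before copying it.
tar -czf apic-project-$(date +%Y%m%d).tar.gz apic-project/

# Copy the project directory to a host in the other data center.
rsync -av apic-project/ deployer@dc2-host.example.com:~/apic-project/
```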
Procedure
Example

Example of the management subsystem values that you can view by running the `apicup subsys get mgmt_name` command:

```
apicup subsys get mgmt_dallas

Appliance settings
==================
Name Value Description
---- ----- ------
additional-cloud-init-file (Optional) Path to additional cloud-init yml file
data-device sdb VM disk device (usually `sdb` for SCSI or `vdb` for VirtIO)
default-password $6$rounds=4096$vtcqpAVK$dzqrOeYP33WTvTug38Q4Rld5l8TmdQgezzTnkX/PFwkzTiZ2S0CqNRr1S4b08tOc4p.OEg4BtzBe/r8RAk.gW/ (Optional) Console login password for `apicadm` user, password must be pre-hashed
dns-servers [8.8.8.8] List of DNS servers
extra-values-file (Optional) Path to additional configuration yml file
k8s-pod-network 172.16.0.0/16 (Optional) CIDR for pods within the appliance
k8s-service-network 172.17.0.0/16 (Optional) CIDR for services within the appliance
public-iface eth0 Device for API/UI traffic (Eg: eth0)
search-domain [subnet1.example.com] List for DNS search domains
ssh-keyfiles [/home/vsphere/.ssh/id_rsa.pub] List of SSH public keys files
traffic-iface eth0 Device for cluster traffic (Eg: eth0)
Subsystem settings
==================
Name Value Description
---- ----- ------
deployment-profile n1xc4.m16 Deployment profile (n1xc2.m16/n3xc4.m16) for analytics, (n1xc4.m16/n3xc4.m16) for management, (n1xc2.m8/n3xc4.m8) for portal
license-use production License use (production/nonproduction)
multi-site-ha-enabled true Multi site HA enabled
multi-site-ha-mode active Multi site HA mode (active/passive)
replication-peer-fqdn mgrreplicationraleigh.cluster2.example.com Replication peer fully qualified name (replication endpoint of active mode site)
site-name dallas Site name, used in k8s resource names
test-and-monitor-enabled false Test and Monitor enabled
Endpoints
=========
Name Value Description
---- ----- ------
api-manager-ui api-manager-ui.testsrv0231.subnet1.example.com FQDN of API manager UI endpoint
cloud-admin-ui cloud-admin-ui.testsrv0231.subnet1.example.com FQDN of Cloud admin endpoint
consumer-api consumer-api.testsrv0231.subnet1.example.com FQDN of consumer API endpoint
hub FQDN of Test and Monitor hub endpoint, only required if Test and Monitor is enabled
management-replication mgrreplicationdallas.cluster1.example.com FQDN of Management replication endpoint, only required if HA is enabled
platform-api platform-api.testsrv0231.subnet1.example.com FQDN of platform API endpoint
```

Example of the portal subsystem values that you can view by running the `apicup subsys get port_name` command:

```
apicup subsys get port_dallas

Appliance settings
==================
Name Value Description
---- ----- ------
additional-cloud-init-file (Optional) Path to additional cloud-init yml file
data-device sdb VM disk device (usually `sdb` for SCSI or `vdb` for VirtIO)
default-password $6$rounds=4096$vtcqpAVK$dzqrOeYP33WTvTug38Q4Rld5l8TmdQgezzTnkX/PFwkzTiZ2S0CqNRr1S4b08tOc4p.OEg4BtzBe/r8RAk.gW/ (Optional) Console login password for `apicadm` user, password must be pre-hashed
dns-servers [192.168.1.201] List of DNS servers
extra-values-file (Optional) Path to additional configuration yml file
k8s-pod-network 172.16.0.0/16 (Optional) CIDR for pods within the appliance
k8s-service-network 172.17.0.0/16 (Optional) CIDR for services within the appliance
public-iface eth0 Device for API/UI traffic (Eg: eth0)
search-domain [subnet1.example.com] List for DNS search domains
ssh-keyfiles [/home/vsphere/.ssh/id_rsa.pub] List of SSH public keys files
traffic-iface eth0 Device for cluster traffic (Eg: eth0)
Subsystem settings
==================
Name Value Description
---- ----- ------
deployment-profile n1xc2.m8 Deployment profile (n1xc2.m16/n3xc4.m16) for analytics, (n1xc4.m16/n3xc4.m16) for management, (n1xc2.m8/n3xc4.m8) for portal
license-use production License use (production/nonproduction)
multi-site-ha-enabled true Multi site HA enabled
multi-site-ha-mode active Multi site HA mode (active/passive)
replication-peer-fqdn ptlreplicationraleigh.cluster2.example.com Replication peer fully qualified name (replication endpoint of active mode site)
site-name dallas Site name, used in k8s resource names
Endpoints
=========
Name Value Description
---- ----- ------
portal-admin portal-api.example.com FQDN of Portal admin endpoint
portal-replication ptlreplicationdallas.cluster1.example.com FQDN of Portal replication endpoint, only required if HA is enabled
portal-www portal-www.example.com FQDN of Portal web endpoint
```
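For comparison, a sketch of the mirrored settings that the warm-standby (passive) data center might use for its management subsystem, with the replication endpoints reversed; the subsystem name follows the naming pattern of the example output and is illustrative only:

```
# Warm-standby data center (raleigh): management subsystem mirrors the active site,
# with multi-site-ha-mode set to passive and the replication endpoints swapped.
apicup subsys set mgmt_raleigh \
  multi-site-ha-enabled=true \
  multi-site-ha-mode=passive \
  management-replication=mgrreplicationraleigh.cluster2.example.com \
  replication-peer-fqdn=mgrreplicationdallas.cluster1.example.com \
  site-name=raleigh
```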