Installing a two data center deployment on VMware
How to install a two data center disaster recovery (2DCDR) deployment on VMware.
Before you begin
Ensure that you understand the concepts of a 2DCDR deployment in API Connect. For more information, see A two data center deployment strategy on VMware.
Review the information in Requirements for initial deployment on VMware and, if appropriate, What's new for v10 installation from previous versions.
Familiarize yourself with the instructions in First steps for deploying in a VMware environment. Complete these instructions with the additional steps in this topic that are specific to installing a 2DCDR deployment.
About this task
To install a 2DCDR deployment, you follow the normal stand-alone API Connect installation steps, but before you generate the ISO files you must set some additional 2DCDR properties.

The following endpoints must be the same in both data centers:

- Management subsystem endpoints:
  - platform-api
  - consumer-api
  - cloud-admin-ui
  - api-manager-ui
- Portal subsystem endpoints:
  - portal-admin
  - portal-www

The 2DCDR properties are described in the following table.
Setting | Description |
---|---|
multi-site-ha-enabled | Indicates whether the 2DCDR deployment is enabled. Set to `true` to enable 2DCDR, or `false` to disable it. The default value is `false`. |
multi-site-ha-mode | The 2DCDR mode of the data center. Possible values are `active` and `passive`. |
management-replication | Required only for the management subsystem. The external ingress name for the management subsystem in the current data center in the 2DCDR deployment. The name is a unique fully qualified hostname that the other data center uses to communicate with the current data center. |
portal-replication | Required only for the portal subsystem. The external ingress name for the portal subsystem in the current data center in the 2DCDR deployment. The name is a unique fully qualified hostname that the other data center uses to communicate with the current data center. |
replication-peer-fqdn | The ingress hostname for the other data center in the 2DCDR deployment. This information is required so that the two data centers can communicate with each other. |
site-name | A unique descriptive name for the API Connect data center, which must be different in each data center. The name is used in hostnames, so it can contain only a-z and 0-9 characters. You can set the site name only when the subsystems are first deployed, and you cannot change it after deployment. If you do not set a site name at first deployment, one is autogenerated. |
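For example, the 2DCDR properties for the management subsystem in the Dallas (active) data center could be set as follows before the ISO files are generated. This is a sketch that assumes a management subsystem named mgmt_dallas and the replication hostnames that are shown in the example output later in this topic; substitute your own subsystem name and fully qualified hostnames.

```
# Enable 2DCDR on the management subsystem and mark this data center as active
apicup subsys set mgmt_dallas multi-site-ha-enabled=true
apicup subsys set mgmt_dallas multi-site-ha-mode=active

# Replication ingress for this data center, and the replication FQDN of the other data center
apicup subsys set mgmt_dallas management-replication=mgrreplicationdallas.cluster1.example.com
apicup subsys set mgmt_dallas replication-peer-fqdn=mgrreplicationraleigh.cluster2.example.com

# Unique site name (a-z and 0-9 only); can be set only at first deployment
apicup subsys set mgmt_dallas site-name=dallas
```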
- Use a single APICUP project for all subsystems so that the API Connect deployments in both data centers have the same certificate chains. If network access to both data centers from the same APICUP project location is not possible, you can copy the project directory across to the other data center. If you copy the project directory to your other data center, ensure that you keep backups of both project directories (see the sketch after this list).
- The original project directory that is created with APICUP during the initial product installation is required both to restore the database and to upgrade your deployment. You cannot restore the database or complete an upgrade without the initial project directory because it contains pertinent information about the cluster.
- Subsystem endpoints for the components cannot be changed.
- The subsystem name of the management and portal subsystems must consist of lowercase alphanumeric characters or '-', contain no spaces, start with an alphabetic character, and end with an alphanumeric character.
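A minimal sketch of keeping a backup of the shared project directory and copying it to the other data center, assuming a project directory named apicup-project and a hypothetical bastion host in the Raleigh data center:

```
# Archive the single APICUP project directory (directory name is an assumption)
tar -czf apicup-project-backup.tar.gz apicup-project/

# Copy the archive to a host in the other data center (hostname is hypothetical),
# so that both data centers deploy with the same certificate chains
scp apicup-project-backup.tar.gz admin@bastion.raleigh.example.com:/opt/apic/
```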
Procedure
Follow the installation instructions in First steps for deploying in a VMware environment, and set the additional 2DCDR properties that are described in the previous table before you generate the ISO files for each subsystem.
Example
Example of the management subsystem values that you can view by running the apicup subsys get mgmt_name command:

apicup subsys get mgmt_dallas
Appliance settings
==================
Name Value Description
---- ----- ------
additional-cloud-init-file (Optional) Path to additional cloud-init yml file
data-device sdb VM disk device (usually `sdb` for SCSI or `vdb` for VirtIO)
default-password $6$rounds=4096$vtcqpAVK$dzqrOeYP33WTvTug38Q4Rld5l8TmdQgezzTnkX/PFwkzTiZ2S0CqNRr1S4b08tOc4p.OEg4BtzBe/r8RAk.gW/ (Optional) Console login password for `apicadm` user, password must be pre-hashed
dns-servers [8.8.8.8] List of DNS servers
extra-values-file (Optional) Path to additional configuration yml file
k8s-pod-network 172.16.0.0/16 (Optional) CIDR for pods within the appliance
k8s-service-network 172.17.0.0/16 (Optional) CIDR for services within the appliance
public-iface eth0 Device for API/UI traffic (Eg: eth0)
search-domain [subnet1.example.com] List for DNS search domains
ssh-keyfiles [/home/vsphere/.ssh/id_rsa.pub] List of SSH public keys files
traffic-iface eth0 Device for cluster traffic (Eg: eth0)
Subsystem settings
==================
Name Value Description
---- ----- ------
deployment-profile n1xc4.m16 Deployment profile (n1xc2.m16/n3xc4.m16) for analytics, (n1xc4.m16/n3xc4.m16) for management, (n1xc2.m8/n3xc4.m8) for portal
license-use production License use (production/nonproduction)
multi-site-ha-enabled true Multi site HA enabled
multi-site-ha-mode active Multi site HA mode (active/passive)
replication-peer-fqdn mgrreplicationraleigh.cluster2.example.com Replication peer fully qualified name (replication endpoint of active mode site)
site-name dallas Site name, used in k8s resource names
test-and-monitor-enabled false Test and Monitor enabled
Endpoints
=========
Name Value Description
---- ----- ------
api-manager-ui api-manager-ui.testsrv0231.subnet1.example.com FQDN of API manager UI endpoint
cloud-admin-ui cloud-admin-ui.testsrv0231.subnet1.example.com FQDN of Cloud admin endpoint
consumer-api consumer-api.testsrv0231.subnet1.example.com FQDN of consumer API endpoint
hub FQDN of Test and Monitor hub endpoint, only required if Test and Monitor is enabled
management-replication mgrreplicationdallas.cluster1.example.com FQDN of Management replication endpoint, only required if HA is enabled
platform-api platform-api.testsrv0231.subnet1.example.com FQDN of platform API endpoint
turnstile FQDN of Test and Monitor turnstile endpoint, only required if Test and Monitor is enabled
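After the management subsystem properties are set, you can check the configuration before generating the ISO files. A brief sketch, assuming that your version of apicup supports the --validate option of apicup subsys get:

```
# Re-display the configured values and validate them
apicup subsys get mgmt_dallas --validate
```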
Example of the portal subsystem values that you can view by running the apicup subsys get port_name command:

apicup subsys get port_dallas
Appliance settings
==================
Name Value Description
---- ----- ------
additional-cloud-init-file (Optional) Path to additional cloud-init yml file
data-device sdb VM disk device (usually `sdb` for SCSI or `vdb` for VirtIO)
default-password $6$rounds=4096$vtcqpAVK$dzqrOeYP33WTvTug38Q4Rld5l8TmdQgezzTnkX/PFwkzTiZ2S0CqNRr1S4b08tOc4p.OEg4BtzBe/r8RAk.gW/ (Optional) Console login password for `apicadm` user, password must be pre-hashed
dns-servers [192.168.1.201] List of DNS servers
extra-values-file (Optional) Path to additional configuration yml file
k8s-pod-network 172.16.0.0/16 (Optional) CIDR for pods within the appliance
k8s-service-network 172.17.0.0/16 (Optional) CIDR for services within the appliance
public-iface eth0 Device for API/UI traffic (Eg: eth0)
search-domain [subnet1.example.com] List for DNS search domains
ssh-keyfiles [/home/vsphere/.ssh/id_rsa.pub] List of SSH public keys files
traffic-iface eth0 Device for cluster traffic (Eg: eth0)
Subsystem settings
==================
Name Value Description
---- ----- ------
deployment-profile n1xc2.m8 Deployment profile (n1xc2.m16/n3xc4.m16) for analytics, (n1xc4.m16/n3xc4.m16) for management, (n1xc2.m8/n3xc4.m8) for portal
license-use production License use (production/nonproduction)
multi-site-ha-enabled true Multi site HA enabled
multi-site-ha-mode active Multi site HA mode (active/passive)
replication-peer-fqdn ptlreplicationraleigh.cluster2.example.com Replication peer fully qualified name (replication endpoint of active mode site)
site-name dallas Site name, used in k8s resource names
Endpoints
=========
Name Value Description
---- ----- ------
portal-admin portal-api.example.com FQDN of Portal admin endpoint
portal-replication ptlreplicationdallas.cluster1.example.com FQDN of Portal replication endpoint, only required if HA is enabled
portal-www portal-www.example.com FQDN of Portal web endpoint
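The portal subsystem for the same Dallas site can be configured in the same way. The following sketch assumes a portal subsystem named port_dallas and uses the replication hostnames from the example output above:

```
# Enable 2DCDR on the portal subsystem and mark this data center as active
apicup subsys set port_dallas multi-site-ha-enabled=true
apicup subsys set port_dallas multi-site-ha-mode=active

# Portal replication ingress for this data center, and the replication FQDN of the other data center
apicup subsys set port_dallas portal-replication=ptlreplicationdallas.cluster1.example.com
apicup subsys set port_dallas replication-peer-fqdn=ptlreplicationraleigh.cluster2.example.com

# Unique site name (a-z and 0-9 only); set only at first deployment
apicup subsys set port_dallas site-name=dallas
```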