Two data center deployment examples and diagrams
Examples of two data center deployment options.
The descriptions on this page refer to a deployment of API Connect across two data centers, which are referred to as DC1 and DC2. API Connect subsystems in each data center are referred to as <subsystemname>_<dc>; for example, management_dc1 refers to the management subsystem in DC1.
Remote gateway
With a remote gateway deployment, DC1 has a complete API Connect installation (management, portal, analytics, gateway), and DC2 has one or more gateways deployed that are registered with the management subsystem in DC1. All APIs published in DC1 are also published to the gateways in DC2. If DC1 has a failure, then the gateways in DC2 can continue processing API calls.
- Install all API Connect subsystems in DC1.
- Install one or more gateways in DC2.
- In the Cloud Manager UI, register your DC2 gateways with management_dc1.
- Associate your analytics_dc1 subsystem with gateway_dc1 and gateway_dc2.
- Provider organization owners must configure all catalogs to publish to both gateway_dc1 and gateway_dc2.
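The remote gateway topology relies on a load-balancer that can send API traffic to the gateways in both data centers and shift all traffic to DC2 if DC1 fails. As an illustrative sketch only (the choice of nginx, the hostnames, and the health-check thresholds are assumptions, not part of the product), a TCP pass-through configuration might look like:

```nginx
# Balance API calls across the gateway services in DC1 and DC2.
# If gateway_dc1 stops responding, nginx stops sending it traffic
# and gateway_dc2 handles all API calls.
stream {
    upstream apic_gateways {
        server gw.dc1.example.com:443 max_fails=3 fail_timeout=30s;
        server gw.dc2.example.com:443 max_fails=3 fail_timeout=30s;
    }
    server {
        listen 443;
        proxy_pass apic_gateways;  # TLS passes through to the gateways
    }
}
```

TCP pass-through is used here so that TLS terminates at the gateways themselves; your own load-balancer product and health-check strategy may differ.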
Cold-standby
In the cold-standby topology, all API Connect subsystems are installed normally in DC1. In DC2 the analytics and gateway subsystems are installed and running, but the management and portal subsystems are in a ready-to-install state (cold-standby). If a failure occurs in DC1, then the management and portal subsystems in DC2 are installed, and the latest database backups from the DC1 management and portal subsystems are restored to them.
- During a failure of DC1, gateway_dc2 continues to process API calls. Configure your load-balancer to direct all calls to gateway_dc2 if gateway_dc1 stops responding.
- Ensure that the latest management and portal subsystem backups are accessible from DC2. This includes your infrastructure configuration backups and your database backups. See Backing up, restoring, and disaster recovery.
- The recovery point for the management and portal subsystems is the last time a backup of these subsystems was taken. If you have a low recovery point objective (RPO) for management and portal, then schedule frequent backups.
- Portal sites cannot show analytics data from either data center.
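The recovery point consideration above is simple arithmetic: in the worst case, a failure occurs just before the next backup completes, so everything since the start of the previous backup is lost. A minimal sketch (the function name is illustrative, not part of API Connect):

```python
def worst_case_data_loss_hours(backup_interval_hours, backup_duration_hours=0.0):
    """Worst-case recovery point for a cold-standby failover: a failure
    just before the next backup completes loses all management and portal
    changes made since the previous backup started."""
    return backup_interval_hours + backup_duration_hours

# Backups every 6 hours that take 30 minutes each: up to 6.5 hours of
# configuration changes could be lost on failover.
print(worst_case_data_loss_hours(6, 0.5))  # 6.5
```

If that worst case exceeds your RPO, shorten the backup interval.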
- Install all API Connect subsystems in DC1.
- Prepare the API Connect installation in DC2, but do not install the management and portal subsystems. Install only the API Connect operator, the gateway, and the analytics subsystems in DC2.
- Copy the ingress-ca secret from DC1 to DC2. See Synchronizing the ingress-ca certificates between sites.
- In the Cloud Manager UI in DC1, register the DC1 subsystems and also the DC2 gateway and analytics subsystems. Associate analytics_dc2 with gateway_dc2.
- In the API Manager UI in DC1, configure all catalogs to use both gateway_dc1 and gateway_dc2. Ensure that all products are published to both gateway services.
- Configure scheduled backups of your management and portal subsystems in DC1, at a frequency sufficient for your recovery point objective. Ensure that backups are copied to a location that is accessible from DC2.
- Configure your load-balancer to route all API traffic to gateway_dc2 if gateway_dc1 fails to respond.
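On Kubernetes deployments, scheduled management backups are configured in the management subsystem CR. The following excerpt is an illustrative sketch only: the field names (databaseBackup, schedule, and the object-storage settings) and all values are assumptions that must be verified against the backup configuration documentation for your platform and version:

```yaml
# Illustrative management CR excerpt: daily 03:00 backup to object
# storage that both data centers can reach (all values are placeholders).
spec:
  databaseBackup:
    protocol: objstore
    host: s3.example.com/region
    path: apic-backups              # location accessible from DC1 and DC2
    credentials: mgmt-backup-secret # secret holding object-storage keys
    schedule: "0 3 * * *"           # cron format; set to meet your RPO
```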
Failover to DC2
- Ensure that all API Connect traffic is directed to DC2.
- Install the management and portal subsystems in DC2.
- Restore the management and portal subsystem databases in DC2, with the most recent backups taken from DC1.
Failback to DC1
- Ensure that you have recent backups from management and portal in DC2.
- Restore the recent DC2 database backups to the management and portal subsystems in DC1. If you modified any of the subsystem YAML files or certificates while running management and portal in DC2, then make those same updates on DC1.
- Update your network configuration to route management and portal traffic to DC1.
- Uninstall the management and portal subsystem from your cold-standby data center. See Uninstalling API Connect subsystems.
If you want to be able to see analytics data from both data centers in the Cloud Manager and API Manager UIs, then see Replicating analytics data to a remote data center.
Warm-standby with analytics in both data centers
The management and portal subsystems are deployed in both data centers using the active/warm-standby configuration: Two data center warm-standby deployment on Kubernetes and OpenShift. An analytics subsystem is deployed in each data center to receive API event data from the gateways in that data center. Unlike the management and portal subsystems in the warm-standby data center, the gateway and analytics subsystems are running and communicate with the management subsystem in the active data center.
- Follow the installation instructions for 2DCDR on your platform: Installing API Connect. Install the management and portal subsystems as active in DC1, and as warm-standby in DC2.
- Install gateway and analytics subsystems in both data centers.
- In the Cloud Manager UI of DC1, register all gateways and analytics subsystems from both data centers. Associate analytics_dc1 with gateway_dc1, and analytics_dc2 with gateway_dc2.
- Provider organization owners must set all catalogs to publish to both gateway_dc1 and gateway_dc2.
- API calls can be directed to both gateway_dc1 and gateway_dc2, and the API events they log are sent to the analytics subsystems that they are associated with.
Failover to DC2
- Promote management_dc2 and portal_dc2 to active.
- Direct all API Connect traffic to DC2.
- New APIs that are published are sent to gateway_dc2, and queued for gateway_dc1.
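On Kubernetes deployments, promoting the warm-standby subsystems is done by changing the 2DCDR mode in the management and portal subsystem CRs. The excerpt below is an assumed illustration; verify the exact field names against the two data center warm-standby documentation for your version:

```yaml
# Management CR on DC2 (the portal CR is updated the same way):
spec:
  multiSiteHA:
    mode: active   # changed from 'passive' to promote DC2 to active
```

The operator then brings the DC2 management and portal subsystems into the active role, after which traffic can be directed to DC2.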
If you want to be able to see analytics data from both data centers in the Cloud Manager and API Manager UIs, then see Replicating analytics data to a remote data center.
Advanced deployment scenarios
For more advanced multi-data center deployment options, see the API Connect white paper: IBM API Connect v10.0.8.x WhitePaper.