This topic describes how to configure a cluster of API Connect subsystems (Management, Analytics, and Developer Portal) with three VMs for each subsystem, for use with a load balancer, to support a high availability (HA) environment.
Before you begin
To create a cluster of hosts for each API Connect subsystem, use apicup to create a subsystem with all the required parameters, and to add as many hosts as needed to the configuration file. The configuration file is a .yml file in the project directory.
For each host of a subsystem added in the .yml file, a separate ISO file is created for the cluster member VM. The ISO file for each VM must remain attached for the entire lifetime of the VM.
In an API Connect cluster in a VMware environment, the first three nodes are master nodes; additional nodes are worker nodes.
Note: To add a new host to an existing cluster, create the host, regenerate the ISO, and attach it to the new virtual machine; the new host then joins the cluster automatically (see the example that follows this note).
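For example, assuming a hypothetical fourth management host named manager4.sample.example.com at 192.168.1.110 (both values are illustrative, not part of this topic), the flow reuses the same commands shown later in this topic:
apicup hosts create mgmt manager4.sample.example.com <disk_encryption_password>
apicup iface create mgmt manager4.sample.example.com eth0 192.168.1.110/255.255.255.0 192.168.1.2
apicup subsys install mgmt --out mgmtplan-out
Attach the newly generated ISO for the added host to the new VM when you deploy it.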
To use these instructions, you should first review the detailed configuration steps for each subsystem (see About this task).
About this task
This page presents a concise step-by-step flow, with sample commands, for the configuration of
each subsystem. As you step through the flow, you might want to refer back to the detailed
configuration steps for each subsystem. The detailed instructions provide additional considerations
for each step, and include optional configuration tasks for backups, logging, message queues
(analytics), and password hashing. Each of the subsystem pages describes how to use the VMware console to deploy the ISOs that you create here.
Detailed configuration steps for each subsystem are provided in the individual subsystem configuration topics.
The example commands use the following values for DNS server, internet gateway, host names, and IP addresses:
Component | Host name | IP address
DNS Name Server | (not applicable) | 192.168.1.1
Internet gateway | (not applicable) | 192.168.1.2
Manager on VM1 | manager1.sample.example.com | 192.168.1.101
Manager on VM2 | manager2.sample.example.com | 192.168.1.102
Manager on VM3 | manager3.sample.example.com | 192.168.1.103
Analytics on VM4 | analytics1.sample.example.com | 192.168.1.104
Analytics on VM5 | analytics2.sample.example.com | 192.168.1.105
Analytics on VM6 | analytics3.sample.example.com | 192.168.1.106
Developer Portal on VM7 | portal1.sample.example.com | 192.168.1.107
Developer Portal on VM8 | portal2.sample.example.com | 192.168.1.108
Developer Portal on VM9 | portal3.sample.example.com | 192.168.1.109
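Each host name in this table must resolve on the DNS name server (192.168.1.1 in this example), as must the load-balanced endpoint names used later. As a quick sanity check, not required by this topic, you could query the name server directly from a Linux workstation with the standard dig utility:
dig +short @192.168.1.1 manager1.sample.example.com
dig +short @192.168.1.1 analytics1.sample.example.com
dig +short @192.168.1.1 portal1.sample.example.com
Each command should print the corresponding IP address from the table.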
Procedure
- Create the Management subsystem
- Create the Management subsystem.
apicup subsys create mgmt management
- Set deployment-profile to n3xc4.m16 (three nodes, each with 4 cores and 16 GB of memory).
apicup subsys set mgmt deployment-profile=n3xc4.m16
If you omit this step, the deployment-profile defaults to n1xc4.m16, which is for non-HA environments and does not support three instances of the subsystem in one cluster.
- Specify the license version you purchased.
apicup subsys set mgmt license-use=<license_type>
The license_type must be either production or nonproduction. If not specified, the default value is nonproduction.
- Set management endpoints
Endpoints can point to VM host names, but in cluster deployments typically point to a load
balancer. The load balancer distributes requests over the 3 VMs. The following values point to a
sample load balancer URL.
Component | Command
Management REST API URL | apicup subsys set mgmt platform-api platform-api.sample.example.com
Consumer (Portal) REST API URL | apicup subsys set mgmt consumer-api consumer-api.sample.example.com
Cloud Manager UI | apicup subsys set mgmt cloud-admin-ui cloud-admin-ui.sample.example.com
API Manager UI | apicup subsys set mgmt api-manager-ui api-manager-ui.sample.example.com
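Configuring the load balancer itself is outside the scope of apicup. Purely as an illustrative sketch under assumed values (HAProxy, TCP passthrough, and port 443 are not specified by this topic), traffic for these endpoints could be spread across the three management VMs like this:
frontend apic_mgmt_https
    bind *:443
    mode tcp
    default_backend apic_mgmt_nodes

backend apic_mgmt_nodes
    mode tcp
    balance roundrobin
    server manager1 192.168.1.101:443 check
    server manager2 192.168.1.102:443 check
    server manager3 192.168.1.103:443 check
TCP passthrough leaves TLS termination on the API Connect VMs, so the endpoint certificates are presented unchanged.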
- Set the search domain for the VM.
apicup subsys set mgmt search-domain sample.example.com
- Set the DNS Name Server for the VM to look up endpoints.
apicup subsys set mgmt dns-servers 192.168.1.1
- Set a Public Keyfile.
This is the public key of the user account from which you want to ssh to the appliance. If you need to generate a key pair first, see the example after the command.
apicup subsys set mgmt ssh-keyfiles "id_rsa.pub"
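If you do not yet have a key pair, you can generate one with the standard OpenSSH utility; the key type, size, and file name here are only an example:
ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa
The ssh-keyfiles setting expects the public half of the pair (id_rsa.pub in this example).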
- Create the hosts for the subsystem.
You must specify a password that is used to encrypt the disks on the appliance. Replace the example password in the following commands with a strong password that meets your security requirements.
apicup hosts create mgmt manager1.sample.example.com password123
apicup hosts create mgmt manager2.sample.example.com password123
apicup hosts create mgmt manager3.sample.example.com password123
- Set the network interface. Note that the last parameter is the Internet Gateway.
apicup iface create mgmt manager1.sample.example.com eth0 192.168.1.101/255.255.255.0 192.168.1.2
apicup iface create mgmt manager2.sample.example.com eth0 192.168.1.102/255.255.255.0 192.168.1.2
apicup iface create mgmt manager3.sample.example.com eth0 192.168.1.103/255.255.255.0 192.168.1.2
- Set the network traffic interfaces.
apicup subsys set mgmt traffic-iface eth0
apicup subsys set mgmt public-iface eth0
- Verify the host configuration.
apicup hosts list mgmt
Note: This command might return the following messages, which you can
ignore:
* host is missing traffic interface
* host is missing public interface
- Set a hashed password to access the appliance VM through the VMware Remote Console.
Use an operating system utility to create a hashed password, and then use apicup to set the hashed password for your subsystem, for example:
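The following sketch assumes OpenSSL 1.1.1 or later for the hash, and assumes the subsystem setting is named default-password (verify the exact setting name for your release, for example with apicup subsys get mgmt):
openssl passwd -6
apicup subsys set mgmt default-password='<hashed_password>'
Use single quotation marks around the hash so that the shell does not expand any $ characters in it.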
- Validate the installation.
apicup subsys get mgmt --validate
- Create an ISO file in a plan folder. For example, mgmtplan-out.
apicup subsys install mgmt --out mgmtplan-out
If multiple hosts are listed for the subsystem, the --out command creates a separate ISO for each node. When you deploy the nodes from VMware, each node gets its own ISO file attached.
- To deploy the management ISOs on VMware, see Deploying the management subsystem OVA file.
- Create the Analytics subsystem
- Create the subsystem:
apicup subsys create analyt analytics
- Set deployment-profile to n3xc4.m16 (three nodes, each with 4 cores and 16 GB of memory).
apicup subsys set analyt deployment-profile=n3xc4.m16
If you omit this step, the deployment-profile defaults to n1xc2.m16, which is for non-HA environments and does not support three instances of the subsystem in one cluster.
- Specify the license version you purchased.
apicup subsys set analyt license-use=<license_type>
The license_type must be either production or nonproduction. If not specified, the default value is nonproduction.
- Set analytics endpoints
Endpoints can point to VM host names, but in cluster deployments typically point to a load
balancer. The load balancer distributes requests over the 3 VMs. The following values point to a
sample load balancer URL.
Component | Command
analytics-ingestion | apicup subsys set analyt analytics-ingestion=analytics-ingestion.sample.example.com
analytics-client | apicup subsys set analyt analytics-client=analytics-client.sample.example.com
- Set the search domain for the VM.
apicup subsys set analyt search-domain sample.example.com
- Set the DNS Name server for the VM to look up endpoints.
apicup subsys set analyt dns-servers 192.168.1.1
- Set a Public Keyfile.
This is the public key of the user account from which you want to ssh to the appliance.
apicup subsys set analyt ssh-keyfiles "id_rsa.pub"
- Set a hashed password to access the appliance VM through the VMware Remote Console.
Use an operating system utility to create a hashed password, and then use apicup to set the hashed password for your subsystem, for example:
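Mirroring the management sketch above (default-password remains an assumed setting name; verify it for your release):
apicup subsys set analyt default-password='<hashed_password>'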
- Create the hosts for the subsystem. You must specify a password that is used to encrypt the disks on the appliance. Replace the example password in the following commands with a strong password that meets your security requirements.
apicup hosts create analyt analytics1.sample.example.com password123
apicup hosts create analyt analytics2.sample.example.com password123
apicup hosts create analyt analytics3.sample.example.com password123
- Set the network interface.
Note that the last parameter is the Internet Gateway.
apicup iface create analyt analytics1.sample.example.com eth0 192.168.1.104/255.255.255.0 192.168.1.2
apicup iface create analyt analytics2.sample.example.com eth0 192.168.1.105/255.255.255.0 192.168.1.2
apicup iface create analyt analytics3.sample.example.com eth0 192.168.1.106/255.255.255.0 192.168.1.2
- Check the host configuration for problems.
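By analogy with the management subsystem step, you can list the hosts for the analytics subsystem; the same ignorable interface messages might appear:
apicup hosts list analyt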
- Validate the installation.
apicup subsys get analyt --validate
- Create an ISO file in a plan folder. For example, analytplan-out.
apicup subsys install analyt --out analytplan-out
If multiple hosts are listed for the subsystem, the --out command creates a separate ISO for each node. When you deploy the nodes from VMware, each node gets its own ISO file attached.
- To deploy the ISOs, see Deploying the Analytics subsystem OVA file.
- Create the Developer Portal subsystem
- Create the portal.
apicup subsys create port portal
- For production environments, set deployment-profile to n3xc4.m8 (three nodes, each with 4 cores and 8 GB of memory).
apicup subsys set port deployment-profile=n3xc4.m8
If you omit this step, the deployment-profile defaults to n1xc4.m8, which is for non-HA environments and does not support three instances of the subsystem in one cluster.
- Specify the license version you purchased.
apicup subsys set port license-use=<license_type>
The license_type must be either production or nonproduction. If not specified, the default value is nonproduction.
- Set the portal endpoints.
Endpoints can point to VM host names, but in cluster deployments typically point to a load
balancer. The load balancer distributes requests over the 3 VMs. The following values point to a
sample load balancer URL.
Component | Command
portal-admin | apicup subsys set port portal-admin=portal-admin.sample.example.com
portal-www | apicup subsys set port portal-www=portal-www.sample.example.com
- Set the search domain for the VM.
apicup subsys set port search-domain sample.example.com
- Set the DNS Name server for the VM to look up endpoints.
apicup subsys set port dns-servers 192.168.1.1
- Set a Public Keyfile.
This is the public key of the user account from which you want to ssh to the appliance.
apicup subsys set port ssh-keyfiles "id_rsa.pub"
- Set a hashed password to access the appliance VM through the VMware Remote Console.
Use an operating system utility to create a hashed password, and then use apicup to set the hashed password for your subsystem, for example:
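Mirroring the management sketch above (default-password remains an assumed setting name; verify it for your release):
apicup subsys set port default-password='<hashed_password>'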
- Create the hosts for the subsystem. You must specify a password that is used to encrypt the disks on the appliance.
apicup hosts create port portal1.sample.example.com password123
apicup hosts create port portal2.sample.example.com password123
apicup hosts create port portal3.sample.example.com password123
Replace the example password shown in these commands with a strong password that meets your security requirements.
- Set the network interface.
Note that the last parameter is the Internet Gateway.
apicup iface create port portal1.sample.example.com eth0 192.168.1.107/255.255.255.0 192.168.1.2
apicup iface create port portal2.sample.example.com eth0 192.168.1.108/255.255.255.0 192.168.1.2
apicup iface create port portal3.sample.example.com eth0 192.168.1.109/255.255.255.0 192.168.1.2
- Check the host configuration for problems.
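By analogy with the management subsystem step, you can list the hosts for the portal subsystem; the same ignorable interface messages might appear:
apicup hosts list port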
- Validate the installation.
apicup subsys get port --validate
- Create an ISO file in a plan folder. For example, portplan-out.
apicup subsys install port --out portplan-out
If multiple hosts are listed for the subsystem, the --out command creates a separate ISO for each node. When you deploy the nodes from VMware, each node gets its own ISO file attached.
- To deploy the ISOs, see Deploying the Developer Portal subsystem OVA file.