Configuring API Connect subsystems in a cluster on VMware

This topic describes how to configure a cluster of API Connect subsystems (management server, analytics, and Developer Portal) with three VMs for each subsystem, for use with a load balancer, to support a high availability (HA) environment.

Before you begin

To create a cluster of hosts for each API Connect subsystem, use apicup to create a subsystem with all the required parameters, and to add as many hosts as needed to the configuration file. The configuration file is a .yml file in the project directory.

For each host that you add to the .yml file, a separate ISO file is created for that cluster member VM. Note that the ISO file for each VM must remain attached for the whole lifetime of the VM.
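
For example, a minimal project setup looks like the following sketch (the configuration file name apiconnect-up.yml is typical for this release; treat it as an assumption if your version differs):

  mkdir apic-cluster && cd apic-cluster   # create an empty project directory
  apicup subsys create mgmt management    # writes the subsystem definition into the project's .yml file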

In an API Connect cluster in a VMware environment, the first three nodes are master nodes. Additional nodes are worker nodes.

Note: To add a new host to an existing cluster, create the host, regenerate the ISO, and attach it to the new virtual machine; the host automatically joins the cluster, as sketched in the example that follows.
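
For example, a sketch of adding a hypothetical fourth Management host, using only the apicup commands shown later in this topic (the host name, IP address, and disk password are placeholders):

  apicup hosts create mgmt manager4.sample.example.com <disk-encryption-password>
  apicup iface create mgmt manager4.sample.example.com eth0 192.168.1.110/255.255.255.0 192.168.1.2
  apicup subsys install mgmt --out mgmtplan-out   # regenerates the plan folder, including an ISO for the new host

Attach the new host's ISO to the new virtual machine and power it on; the host then joins the existing cluster.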

To use these instructions, you should first review the detailed configuration steps that are listed under About this task.

About this task

This page presents a concise step-by-step flow, with sample commands, for configuring each subsystem. As you step through the flow, you might want to refer back to the detailed configuration steps for each subsystem. The detailed instructions provide additional considerations for each step, and include optional configuration tasks for backups, logging, message queues (analytics), and password hashing. Each of the subsystem pages describes how to use the VMware console to deploy the ISOs that you create here.

Detailed configuration steps:

* Deploying the Management subsystem in a VMware environment
* Deploying the Analytics subsystem in a VMware environment
* Deploying the Developer Portal in a VMware environment

The example commands use the following values for the DNS server, internet gateway, host names, and IP addresses:

Component                 Host name / IP address
DNS name server           192.168.1.1
Internet gateway          192.168.1.2
Manager on VM1            manager1.sample.example.com / 192.168.1.101
Manager on VM2            manager2.sample.example.com / 192.168.1.102
Manager on VM3            manager3.sample.example.com / 192.168.1.103
Analytics on VM4          analytics1.sample.example.com / 192.168.1.104
Analytics on VM5          analytics2.sample.example.com / 192.168.1.105
Analytics on VM6          analytics3.sample.example.com / 192.168.1.106
Developer Portal on VM7   portal1.sample.example.com / 192.168.1.107
Developer Portal on VM8   portal2.sample.example.com / 192.168.1.108
Developer Portal on VM9   portal3.sample.example.com / 192.168.1.109

Procedure

  1. Create the Management subsystem
    1. Create the Management subsystem.
      apicup subsys create mgmt management 
    2. Set install mode to standard.
      apicup subsys set mgmt mode=standard

      If you omit this step, the install mode defaults to dev, which is for non-HA environments, and will not support three instances of the subsystem in one cluster.
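
      You can review the configured values at any time (apicup subsys get without the --validate flag is expected to simply print the current settings for the subsystem):

      apicup subsys get mgmt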

    3. Version 2018.4.1.10 or later: Specify the license version you purchased.

      apicup subsys set mgmt license-version=<license_type>

      The license_type must be either Production or Nonproduction. If not specified, the default value is Nonproduction.

    4. Set management endpoints

      Endpoints can point to VM host names, but in cluster deployments they typically point to a load balancer, which distributes requests across the three VMs. The following values point to sample load balancer URLs.

      Management REST API URL:
      apicup subsys set mgmt platform-api platform-api.sample.example.com

      Consumer (Portal) REST API URL:
      apicup subsys set mgmt consumer-api consumer-api.sample.example.com

      Cloud Manager UI:
      apicup subsys set mgmt cloud-admin-ui cloud-admin-ui.sample.example.com

      API Manager UI:
      apicup subsys set mgmt api-manager-ui api-manager-ui.sample.example.com
      
    5. Set the search domain for the VM.
      apicup subsys set mgmt search-domain sample.example.com
    6. Set the DNS Name Server for the VM to look up endpoints.
      apicup subsys set mgmt dns-servers 192.168.1.1
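
      Optionally, confirm that the endpoint host names resolve against this name server before you continue. This check uses a standard OS utility and assumes the load balancer DNS records already exist:

      nslookup platform-api.sample.example.com 192.168.1.1
      nslookup cloud-admin-ui.sample.example.com 192.168.1.1
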
    7. Set a Public Keyfile.

      This is the public key of the user account from which you want to ssh to the appliance.

      apicup subsys set mgmt ssh-keyfiles "id_rsa.pub"
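
      If you do not already have a key pair, you can generate one with ssh-keygen and point apicup at the resulting public key (the file path here is an example):

      ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa   # creates id_rsa and id_rsa.pub
      apicup subsys set mgmt ssh-keyfiles "~/.ssh/id_rsa.pub"
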
    8. Create the hosts for the subsystem.

      You must specify a password that is used to encrypt the disks on the appliance. Replace the example password in the following commands with a strong password that meets your security requirements.

      apicup hosts create mgmt manager1.sample.example.com password123
      apicup hosts create mgmt manager2.sample.example.com password123
      apicup hosts create mgmt manager3.sample.example.com password123
    9. Set the network interface. Note that the last parameter is the Internet Gateway.
      apicup iface create mgmt manager1.sample.example.com eth0 192.168.1.101/255.255.255.0 192.168.1.2
      apicup iface create mgmt manager2.sample.example.com eth0 192.168.1.102/255.255.255.0 192.168.1.2
      apicup iface create mgmt manager3.sample.example.com eth0 192.168.1.103/255.255.255.0 192.168.1.2
    10. Set the network traffic interfaces.
      apicup subsys set mgmt traffic-iface eth0
      apicup subsys set mgmt public-iface eth0
    11. Verify the host configuration.
      apicup hosts list mgmt
      Note: This command might return the following messages, which you can ignore:
      * host is missing traffic interface 
      * host is missing public interface 
    12. Set a hashed password to access the appliance VM through the VMware Remote Console. Use an operating system utility to create a hashed password, and then use apicup to set the hashed password for your subsystem:
      apicup subsys set mgmt default-password='$1$aTD7uXAO$kNoMAefjGKBwMFiu.8ctr0'
      Important: Review the requirements for creating and using a hashed password. See Setting and using a hashed default password.
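
      As one example, the openssl passwd utility generates a hash in the $1$ (MD5 crypt) format shown above; check the hashed-password topic referenced above for the formats your version accepts:

      openssl passwd -1 'MyStrongPassword!'   # prints a $1$... hash to pass to default-password
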
    13. Validate the installation.
      apicup subsys get mgmt --validate
    14. Create an ISO file in a plan folder. For example, mgmtplan-out.
      apicup subsys install mgmt --out mgmtplan-out 

      If multiple hosts are listed for the subsystem, the --out command creates a separate ISO for each node. When you deploy the nodes from VMware, attach each node's own ISO file.
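
      To confirm that an ISO was generated for each of the three hosts, you can list the plan folder (the exact directory layout can vary by release):

      ls -R mgmtplan-out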

    15. To deploy the ISOs on VMware, see step 17 in Deploying the Management subsystem in a VMware environment.
  2. Create the Analytics subsystem
    1. Create the subsystem:
      apicup subsys create analyt analytics
    2. Set install mode to standard.
      apicup subsys set analyt mode=standard

      If you omit this step, the install mode defaults to dev, which is for non-HA environments, and will not support three instances of the subsystem in one cluster.

    3. Version 2018.4.1.10 or later: Specify the license version you purchased.

      apicup subsys set analyt license-version=<license_type>

      The license_type must be either Production or Nonproduction. If not specified, the default value is Nonproduction.

    4. Set the Analytics endpoints.

      Endpoints can point to VM host names, but in cluster deployments they typically point to a load balancer, which distributes requests across the three VMs. The following values point to sample load balancer URLs.

      analytics-ingestion:
      apicup subsys set analyt analytics-ingestion=analytics-ingestion.sample.example.com

      analytics-client:
      apicup subsys set analyt analytics-client=analytics-client.sample.example.com
    5. Set the search domain for the VM.
      apicup subsys set analyt search-domain sample.example.com
    6. Set the DNS Name server for the VM to look up endpoints.
      apicup subsys set analyt dns-servers 192.168.1.1
    7. Set a Public Keyfile.

      This is the public key of the user account from which you want to ssh to the appliance.

      apicup subsys set analyt ssh-keyfiles "id_rsa.pub"
    8. Set a hashed password to access the appliance VM through the VMware Remote Console. Use an operating system utility to create a hashed password, and then use apicup to set the hashed password for your subsystem:
      apicup subsys set analyt default-password='$1$aTD7uXAO$kNoMAefjGKBwMFiu.8ctr0'
      Important: Review the requirements for creating and using a hashed password. See Setting and using a hashed default password.
    9. Create the hosts for the subsystem. You must specify a password that is used to encrypt the disks on the appliance. Replace the example password in the following commands with a strong password that meets your security requirements.
      apicup hosts create analyt analytics1.sample.example.com password123
      apicup hosts create analyt analytics2.sample.example.com password123
      apicup hosts create analyt analytics3.sample.example.com password123
    10. Set the network interface.

      Note that the last parameter is the Internet Gateway.

      apicup iface create analyt analytics1.sample.example.com eth0 192.168.1.104/255.255.255.0 192.168.1.2
      apicup iface create analyt analytics2.sample.example.com eth0 192.168.1.105/255.255.255.0 192.168.1.2
      apicup iface create analyt analytics3.sample.example.com eth0 192.168.1.106/255.255.255.0 192.168.1.2
    11. Check the host configuration for problems.
      apicup hosts list analyt
    12. Validate the installation.
      apicup subsys get analyt --validate
    13. Create an ISO file in a plan folder. For example, analytplan-out.
      apicup subsys install analyt --out analytplan-out 

      If multiple hosts are listed for the subsystem, the --out command creates a separate ISO for each node. When you deploy the nodes from VMware, attach each node's own ISO file.

    14. To deploy the ISOs, see step 17 in Deploying the Analytics subsystem in a VMware environment.
  3. Create the Developer Portal subsystem
    1. Create the Portal subsystem.
      apicup subsys create port portal
    2. For production environments, specify mode=standard.
      apicup subsys set port mode=standard

      If you omit this step, the install mode defaults to dev, which is for development and testing in non-HA environments only, and will not support three instances of the subsystem in one cluster.

    3. Version 2018.4.1.10 or later: Specify the license version you purchased.

      apicup subsys set port license-version=<license_type>

      The license_type must be either Production or Nonproduction. If not specified, the default value is Nonproduction.

    4. Set the portal endpoints.

      Endpoints can point to VM host names, but in cluster deployments they typically point to a load balancer, which distributes requests across the three VMs. The following values point to sample load balancer URLs.

      portal-admin:
      apicup subsys set port portal-admin=portal-admin.sample.example.com

      portal-www:
      apicup subsys set port portal-www=portal-www.sample.example.com
    5. Set the search domain for the VM.
      apicup subsys set port search-domain sample.example.com
    6. Set the DNS Name server for the VM to look up endpoints.
      apicup subsys set port dns-servers 192.168.1.1
    7. Set a Public Keyfile.

      This is the public key of the user account from which you want to ssh to the appliance.

      apicup subsys set port ssh-keyfiles "id_rsa.pub"
    8. Set a hashed password to access the appliance VM through the VMware Remote Console. Use an operating system utility to create a hashed password, and then use apicup to set the hashed password for your subsystem:
      apicup subsys set port default-password='$1$aTD7uXAO$kNoMAefjGKBwMFiu.8ctr0'
      Important: Review the requirements for creating and using a hashed password. See Setting and using a hashed default password.
    9. Create the hosts for the subsystem. You must specify a password that is used to encrypt the disks on the appliance.
      apicup hosts create port portal1.sample.example.com password123
      apicup hosts create port portal2.sample.example.com password123
      apicup hosts create port portal3.sample.example.com password123

      Replace the example password in the previous commands with a strong password that meets your security requirements.

    10. Set the network interface.

      Note that the last parameter is the Internet Gateway.

      apicup iface create port portal1.sample.example.com eth0 192.168.1.107/255.255.255.0 192.168.1.2
      apicup iface create port portal2.sample.example.com eth0 192.168.1.108/255.255.255.0 192.168.1.2
      apicup iface create port portal3.sample.example.com eth0 192.168.1.109/255.255.255.0 192.168.1.2
    11. Check the host configuration for problems.
      apicup hosts list port
    12. Validate the installation.
      apicup subsys get port --validate
    13. Create an ISO file in a plan folder. For example, portplan-out.
      apicup subsys install port --out portplan-out 

      If multiple hosts are listed for the subsystem, the --out command creates a separate ISO for each node. When you deploy the nodes from VMware, attach each node's own ISO file.

    14. To deploy the ISOs on VMware, see step 17 in Deploying the Developer Portal in a VMware environment.