Load balancer configuration in a VMware deployment

When deploying API Connect for High Availability, it is recommended that you configure a cluster with at least three nodes and a load balancer. A sample configuration is provided for placing a load balancer in front of your API Connect OVA deployment.

About this task

API Connect can be deployed on a single-node cluster. In that case, the ingress endpoints are host names whose DNS resolution points to the single IP address of the node hosting a particular subsystem, and no load balancer is required. For high availability, a cluster of at least three nodes is recommended. With three nodes, an ingress endpoint can no longer resolve to the address of a single node, so place a load balancer in front of each API Connect subsystem to route traffic.

Because it is difficult to add nodes after endpoints are configured, a good practice is to configure a load balancer even for a single-node deployment. With the load balancer in place, you can add nodes when needed simply by adding them to the list of servers that the load balancer points to, without changing the ingress endpoints that were defined during the installation of API Connect, as the sketch that follows illustrates.
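
For illustration, here is a minimal sketch of that practice, using the same sample host names as the HAProxy configuration later in this topic. The backend starts with one server line for a single-node deployment, and scaling out is only a matter of appending more server lines:

    # Hypothetical starting point: a single-node deployment behind the load balancer.
    backend be_management
        mode tcp
        balance roundrobin
        option ssl-hello-chk
        server management0 manager1.sample.example.com:443 check
        # Nodes added later; the ingress endpoints themselves do not change.
        server management1 manager2.sample.example.com:443 check
        server management2 manager3.sample.example.com:443 check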

To support mutual TLS communication between the API Connect subsystems, configure the load balancer with SSL passthrough and Layer 4 load balancing. For mutual TLS to be performed directly by the API Connect subsystems, the load balancer must leave the packets unmodified, which Layer 4 load balancing accomplishes. The following endpoints communicate with mutual TLS; a sketch for verifying passthrough follows the list:

  • API Manager (with the client certificate portal-client) communicates with the Portal Admin endpoint portal-admin (with the server certificate portal-admin-ingress)
  • API Manager (with the client certificate analytics-client-client) communicates with the Analytics Client endpoint analytics-client (with the server certificate analytics-client-ingress)
  • API Manager (with the client certificate analytics-ingestion-client) communicates with the Analytics Ingestion endpoint analytics-ingestion (with the server certificate analytics-ingestion-ingress)
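
Because the load balancer passes TLS through unmodified, the certificate a client sees through the load balancer is the subsystem's own ingress certificate. One quick way to confirm passthrough is an openssl probe; the host names below are the sample values used later in this topic, and the command itself is a generic check rather than an API Connect requirement:

    # Connect through the load balancer with the portal-admin SNI and inspect the
    # certificate that comes back; with SSL passthrough it is the portal-admin-ingress
    # certificate of the Portal subsystem, not a certificate from the load balancer.
    openssl s_client -connect ubuntu.sample.example.com:443 \
        -servername admin.portal.sample.example.com </dev/null 2>/dev/null \
        | openssl x509 -noout -subject -issuer
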
Set endpoints to resolve to the load balancer
When configuring a load balancer in front of the API Connect subsystems, the ingress endpoints are set to host names that resolve to a load balancer, rather than to the host name of any specific node. For an overview of endpoints, see Deployment overview for endpoints and certificates.
Use this example configuration as a guideline to determine the best way to configure the load balancer for your deployment.

Procedure

  • Appliance deployment
    In this example configuration, the API Connect Management, Portal, Analytics, and Gateway subsystems are deployed as three-node clusters in Standard mode. An HAProxy load balancer is used. The load balancer is configured with SSL passthrough and with upstream selection based on SNI. DNS resolution is configured to resolve the endpoints to the IP address of the load balancer. If a single HAProxy node is used, all endpoints must resolve to the IP address of that HAProxy node. The following endpoints require DNS resolution in this example (sample DNS records follow the list):
    • api-manager-ui.sample.example.com
    • cloud-admin-ui.sample.example.com
    • consumer-api.sample.example.com
    • platform-api.sample.example.com
    • admin.portal.sample.example.com
    • web.portal.sample.example.com
    • analytics.client.sample.example.com
    • analytics.ingestion.sample.example.com
    • api-gateway.sample.example.com
    • apic-gw-service.sample.example.com
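
    As a sketch of that DNS setup, assuming the load balancer's IP address is 192.0.2.10 (a placeholder from the documentation address range), the records in a zone file for sample.example.com could look like this:

    ; All API Connect ingress endpoints resolve to the one HAProxy node.
    api-manager-ui        IN A 192.0.2.10
    cloud-admin-ui        IN A 192.0.2.10
    consumer-api          IN A 192.0.2.10
    platform-api          IN A 192.0.2.10
    admin.portal          IN A 192.0.2.10
    web.portal            IN A 192.0.2.10
    analytics.client      IN A 192.0.2.10
    analytics.ingestion   IN A 192.0.2.10
    api-gateway           IN A 192.0.2.10
    apic-gw-service       IN A 192.0.2.10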

    Following is an example HAProxy configuration for one HAProxy node distributing traffic to Management and Portal clusters:

    Note:

    When you configure a load balancer in front of a Management subsystem, specify timeouts of at least 240 seconds. Large deployments might need larger values.

    The default timeout is typically 50 or 60 seconds, which is not long enough to avoid 409 Conflict or 504 Gateway Timeout errors. The 409 Conflict error can occur when an operation takes long enough to complete that a second request is issued before the first one finishes.

    For example, to specify 240 seconds when using HAProxy as a load balancer, set timeout client and timeout server to 240000 (HAProxy timeouts are expressed in milliseconds by default).

    # This sample HAProxy configuration file configures one HAProxy node to distribute traffic to
    # Management, Portal, Analytics, and Gateway clusters. Another option is to configure one HAProxy
    # node per cluster.
    #
    # Note: for readability, this sample defines one frontend per subsystem, and all of them
    # bind the same address and port (ubuntu.sample.example.com:443). In a real deployment,
    # either combine the use_backend rules into a single frontend, or give each frontend its
    # own address or port; four frontends cannot reliably share one bind address.
    
    global
    	log /dev/log	local0
    	log /dev/log	local1 notice
    	chroot /var/lib/haproxy
    	stats socket /run/haproxy/admin.sock mode 660 level admin
    	stats timeout 30s
    	user haproxy
    	group haproxy
    	daemon
    
    	# Default SSL material locations
    	ca-base /etc/ssl/certs
    	crt-base /etc/ssl/private
    
    	# Default ciphers to use on SSL-enabled listening sockets.
    	# For more information, see ciphers(1SSL). This list is from:
    	#  https://hynek.me/articles/hardening-your-web-servers-ssl-ciphers/
    	ssl-default-bind-ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS
    	ssl-default-bind-options no-sslv3
    
    defaults
    	log	 global
    	mode	http
    	option     httplog
    	option     dontlognull
    	timeout connect 5000
    	timeout client  240000
    	timeout server  240000
    	errorfile 400 /etc/haproxy/errors/400.http
    	errorfile 403 /etc/haproxy/errors/403.http
    	errorfile 408 /etc/haproxy/errors/408.http
    	errorfile 500 /etc/haproxy/errors/500.http
    	errorfile 502 /etc/haproxy/errors/502.http
    	errorfile 503 /etc/haproxy/errors/503.http
    	errorfile 504 /etc/haproxy/errors/504.http
    
    ######## MANAGEMENT CONFIGURATION ########
    frontend fe_management
        mode tcp
        option tcplog
        #
        # Map to the hostname and TCP port for the Management load balancer.
        # In this example, the hostname for the load balancer is ubuntu.sample.example.com.
        #
        bind ubuntu.sample.example.com:443
        tcp-request inspect-delay 5s
        tcp-request content accept if { req_ssl_hello_type 1 }
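        #
        # Wait up to 5 seconds for the TLS ClientHello to arrive so that the SNI
        # value can be read before a backend is selected; accept the connection
        # for routing once a ClientHello (ssl_hello_type 1) has been seen.
        #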
    
        #
        # The values for the Management endpoints as defined in the apiconnect-up.yml
        # file by the apicup installer. In this example, the endpoints are api-manager-ui.sample.example.com,
        # cloud-admin-ui.sample.example.com, consumer-api.sample.example.com, and
        # platform-api.sample.example.com. If the SNI value of the incoming request
        # names any of these endpoints, route the request to "be_management".
        #
        use_backend be_management if { req_ssl_sni -i api-manager-ui.sample.example.com } or { req_ssl_sni -i cloud-admin-ui.sample.example.com }
        use_backend be_management if { req_ssl_sni -i consumer-api.sample.example.com } or { req_ssl_sni -i platform-api.sample.example.com }
    
    #
    # be_management points Management traffic to the cluster
    # of three Management nodes.
    #
    
    backend be_management
        mode tcp
        option tcplog
        balance roundrobin
        option ssl-hello-chk
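        #
        # ssl-hello-chk health checks send an SSLv3-style client hello rather
        # than opening a full TLS session; servers that reject SSLv3 hellos may
        # need a different check method.
        #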
    
        #
        # One entry per Management node in the cluster.
        # Hostname and TCP Port for each Management node.
        #
        server management0 manager1.sample.example.com:443 check
        server management1 manager2.sample.example.com:443 check
        server management2 manager3.sample.example.com:443 check
    
    
    ######## PORTAL CONFIGURATION ########
    frontend fe_portal
        mode tcp
        option tcplog
        #
        # The Hostname and TCP Port for the Portal Load balancer
        #
        bind ubuntu.sample.example.com:443
        tcp-request inspect-delay 5s
        tcp-request content accept if { req_ssl_hello_type 1 }
    
        #
        # The value for both of the Portal subsystem endpoints as defined in the apiconnect-up.yml file
        #
        use_backend be_portal if { req_ssl_sni -i admin.portal.sample.example.com } or { req_ssl_sni -i web.portal.sample.example.com }
    
    
    backend be_portal
        mode tcp
        option tcplog
        balance roundrobin
        option ssl-hello-chk
    
        #
        # One entry per Portal node.
        # Hostname and TCP Port for the Portal node.
        #
        server portal0 portal1.sample.example.com:443 check
        server portal1 portal2.sample.example.com:443 check
        server portal2 portal3.sample.example.com:443 check
    
    ######## ANALYTICS CONFIGURATION ########
    frontend fe_analytics
        mode tcp
        option tcplog
        #
        # The Hostname and TCP Port for the Analytics Load balancer
        #
        bind ubuntu.sample.example.com:443
        tcp-request inspect-delay 5s
        tcp-request content accept if { req_ssl_hello_type 1 }
    
        #
        # The value for both of the Analytics subsystem endpoints as defined in the apiconnect-up.yml file
        #
        use_backend be_analytics if { req_ssl_sni -i analytics.client.sample.example.com } or { req_ssl_sni -i analytics.ingestion.sample.example.com }
    
    
    backend be_analytics
        mode tcp
        option tcplog
        balance roundrobin
        option ssl-hello-chk
    
        #
        # One entry per Analytics node.
        # Hostname and TCP Port for the Analytics node.
        #
        server analytics0 analytics1.sample.example.com:443 check
        server analytics1 analytics2.sample.example.com:443 check
        server analytics2 analytics3.sample.example.com:443 check
    
    ######## GATEWAY CONFIGURATION ########
    frontend fe_gateway
        mode tcp
        option tcplog
        #
        # The Hostname and TCP Port for the Gateway Load balancer
        #
        bind ubuntu.sample.example.com:443
        tcp-request inspect-delay 5s
        tcp-request content accept if { req_ssl_hello_type 1 }
    
        #
        # The values for the Gateway subsystem endpoints as defined in the apiconnect-up.yml file.
        #
        use_backend be_gateway if { req_ssl_sni -i api-gateway.sample.example.com } or { req_ssl_sni -i apic-gw-service.sample.example.com }
    
    
    backend be_gateway 
        mode tcp
        option tcplog
        balance roundrobin
        option ssl-hello-chk
    
        #
        # One entry per Gateway node.
        # Hostname and TCP Port for the Gateway node.
        #
        server gateway0 gateway1.sample.example.com:443 check
        server gateway1 gateway2.sample.example.com:443 check
        server gateway2 gateway3.sample.example.com:443 check
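
    After the configuration file is in place, you can check it and exercise the SNI routing before cutting over DNS. These are standard HAProxy and curl invocations rather than API Connect commands; 192.0.2.10 stands in for the load balancer's IP address:

    # Check the configuration file for syntax errors without starting HAProxy.
    haproxy -c -f /etc/haproxy/haproxy.cfg

    # Force resolution of a Management endpoint to the load balancer and confirm
    # that the request is routed through to a Management node. The -k option skips
    # verification of a self-signed ingress certificate.
    curl -k --resolve platform-api.sample.example.com:443:192.0.2.10 \
        https://platform-api.sample.example.com/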
  • The following diagram depicts a typical layout for a load balancer in front of three API Connect subsystems, with each subsystem containing three Appliance/OVA nodes:

    [Diagram: load balancer configuration in an OVA deployment]