Configuring a load balancer for a VMware deployment

When deploying IBM® API Connect for high availability, configure a cluster with at least three nodes and a load balancer. A sample configuration is provided for placing a load balancer in front of your IBM API Connect OVA deployment.

Before you begin

Note:
  • The procedure does not apply to the IBM DataPower® Gateway component that is part of the IBM API Connect deployment.
  • This procedure describes third-party software that IBM does not control, and details can change over time. Verify accuracy before implementation.

About this task

You can deploy IBM API Connect on a single-node cluster. In that case, DNS resolution points to the IP address of the node, and a load balancer is not required. For high availability, use a three-node cluster, in which the ingress endpoints must resolve to a load balancer that distributes traffic across the subsystem nodes.

Configuring a load balancer is a good practice even for single-node deployments, because it simplifies future scaling. To add a node, you add it to the list of servers that the load balancer points to, without changing the ingress endpoints that were defined during the installation of IBM API Connect.

To support mutual TLS communication between the IBM API Connect subsystems, configure the load balancer for SSL passthrough and Layer 4 load balancing. Because the load balancer operates at Layer 4 and forwards packets without modification, the subsystems can establish mutual TLS directly with each other.
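
In HAProxy, for example, passthrough means that the frontend and backend operate in TCP mode and that the bind line carries no certificate, so the encrypted traffic is forwarded without TLS termination. The following fragment is a minimal sketch with placeholder hostnames; the complete sample configuration appears later in this topic:

frontend fe_passthrough
    mode tcp                        # Layer 4: forward raw TCP, no TLS termination
    bind lb.sample.example.com:443  # no "ssl" keyword, so TLS passes through
    tcp-request inspect-delay 5s    # wait briefly for the TLS ClientHello
    tcp-request content accept if { req_ssl_hello_type 1 }
    # Route on the SNI value in the ClientHello without decrypting the traffic
    use_backend be_subsystem if { req_ssl_sni -i subsystem.sample.example.com }

backend be_subsystem
    mode tcp
    server node1 node1.sample.example.com:443 check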

If you do not want to use mTLS between the IBM API Connect subsystems, you can enable JWT instead of mTLS.

Set endpoints to resolve to the load balancer
When you configure a load balancer in front of the IBM API Connect subsystems, the ingress endpoints are set to hostnames that resolve to the load balancer, rather than to the hostname of any specific node. For an overview of endpoints, see Deployment overview for endpoints.
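
For example, if the load balancer has the address 192.0.2.10 (a placeholder address used only for illustration), the DNS records for the management endpoints might look like the following, with every endpoint resolving to the load balancer rather than to an individual node:

; All ingress endpoints resolve to the load balancer, not to individual nodes
api-manager-ui.sample.example.com.  IN A 192.0.2.10
cloud-admin-ui.sample.example.com.  IN A 192.0.2.10
platform-api.sample.example.com.    IN A 192.0.2.10
consumer-api.sample.example.com.    IN A 192.0.2.10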


Procedure

  1. Deploy the IBM API Connect subsystems as three-node clusters in Standard mode.
  2. Configure HAProxy for SSL passthrough and Layer 4 load balancing.
    Enable upstream selection based on SNI, as shown in the sample configuration later in this topic.
  3. Set DNS resolution so that ingress endpoints resolve to the IP address of the HAProxy node.
  4. Set the timeout values in HAProxy to 240000 milliseconds (240 seconds), as in the defaults section of the sample configuration.
    For large deployments, increase the timeouts to avoid 409 or 504 errors.
  5. Ensure that the load balancer supports the TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 (0xc030) cipher.
    This cipher is required for the TLS handshake between the portal and management subsystems. You can confirm cipher support with the verification sketch after this list.
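
As a quick check from a client machine, you can attempt a TLS handshake that is restricted to the required cipher by using OpenSSL. This sketch assumes the platform-api.sample.example.com endpoint from the examples in this topic; ECDHE-RSA-AES256-GCM-SHA384 is the OpenSSL name for cipher 0xc030:

# Attempt a handshake restricted to the required cipher.
# A successful handshake reports it on the "Cipher" line of the output.
openssl s_client -connect platform-api.sample.example.com:443 \
  -servername platform-api.sample.example.com \
  -cipher ECDHE-RSA-AES256-GCM-SHA384 < /dev/null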

Example

After you complete the procedure, your IBM API Connect deployment is ready for high availability, with a load balancer that can scale as nodes are added.

The following example endpoints must resolve in DNS to the load balancer:

  • api-manager-ui.sample.example.com
  • cloud-admin-ui.sample.example.com
  • consumer-api.sample.example.com
  • platform-api.sample.example.com
  • admin.portal.sample.example.com
  • web.portal.sample.example.com
  • analytics.ingestion.sample.example.com
  • api-gateway.sample.example.com
  • apic-gw-service.sample.example.com

Note: Timeouts and cipher support are critical for stable communication between subsystems. Ensure that HAProxy is configured as described in the preceding steps to avoid handshake failures and timeout errors.

# This sample HAProxy configuration file configures one HAProxy node to distribute traffic to the
# management, portal, analytics, and gateway clusters. Another option is to configure one HAProxy
# node per cluster.

global
	log /dev/log	local0
	log /dev/log	local1 notice
	chroot /var/lib/haproxy
	stats socket /run/haproxy/admin.sock mode 660 level admin
	stats timeout 30s
	user haproxy
	group haproxy
	daemon

	# Default SSL material locations
	ca-base /etc/ssl/certs
	crt-base /etc/ssl/private

	# Default ciphers to use on SSL-enabled listening sockets.
	# For more information, see ciphers(1SSL). This list is from:
	#  https://hynek.me/articles/hardening-your-web-servers-ssl-ciphers/
	ssl-default-bind-ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS
	ssl-default-bind-options no-sslv3

defaults
	log	 global
	mode	http
	option     httplog
	option     dontlognull
	timeout connect 5000
	timeout client  240000
	timeout server  240000
	errorfile 400 /etc/haproxy/errors/400.http
	errorfile 403 /etc/haproxy/errors/403.http
	errorfile 408 /etc/haproxy/errors/408.http
	errorfile 500 /etc/haproxy/errors/500.http
	errorfile 502 /etc/haproxy/errors/502.http
	errorfile 503 /etc/haproxy/errors/503.http
	errorfile 504 /etc/haproxy/errors/504.http

######## MANAGEMENT CONFIGURATION ########
frontend fe_management
    mode tcp
    option tcplog
    #
    # Map to the hostname and TCP port for the management load balancer.
    # In this example, the hostname for the load balancer is ubuntu.sample.example.com.
    #
    bind ubuntu.sample.example.com:443
    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_hello_type 1 }

    #
    # The values for the management endpoints as defined in the apiconnect-up.yml
    # file by using the apicup installer. In this example, the endpoints are
    # api-manager-ui.sample.example.com, cloud-admin-ui.sample.example.com,
    # consumer-api.sample.example.com, and platform-api.sample.example.com.
    # The SNI value in the incoming request identifies which of these endpoints
    # the request is for; in all four cases, the request uses "be_management".
    #
    use_backend be_management if { req_ssl_sni -i api-manager-ui.sample.example.com } or { req_ssl_sni -i cloud-admin-ui.sample.example.com }
    use_backend be_management if { req_ssl_sni -i consumer-api.sample.example.com } or { req_ssl_sni -i platform-api.sample.example.com }

    #
    # be_management is defined to point management traffic to the cluster
    # that contains the three management nodes.
    #

backend be_management
    mode tcp
    option tcplog
    balance roundrobin
    option ssl-hello-chk

    #
    # One entry per management node in the cluster.
    # Hostname and TCP Port for each management node.
    #
    server management0 manager1.sample.example.com:443 check
    server management1 manager2.sample.example.com:443 check
    server management2 manager3.sample.example.com:443 check


######## PORTAL CONFIGURATION ########
frontend fe_portal
    mode tcp
    option tcplog
    #
    # The hostname and TCP port for the portal load balancer.
    #
    bind ubuntu.sample.example.com:443
    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_hello_type 1 }

    #
    # The values for both of the portal subsystem endpoints as defined in the apiconnect-up.yml file.
    #
    use_backend be_portal if { req_ssl_sni -i admin.portal.sample.example.com } or { req_ssl_sni -i web.portal.sample.example.com }


backend be_portal
    mode tcp
    option tcplog
    balance roundrobin
    option ssl-hello-chk

    #
    # One entry per portal node.
    # Hostname and TCP Port for the portal node.
    #
    server portal0 portal1.sample.example.com:443 check
    server portal1 portal2.sample.example.com:443 check
    server portal2 portal3.sample.example.com:443 check

######## ANALYTICS CONFIGURATION ########
frontend fe_analytics
    mode tcp
    option tcplog
    #
    # The hostname and TCP port for the analytics load balancer.
    #
    bind ubuntu.sample.example.com:443
    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_hello_type 1 }

    #
    # The value for the analytics subsystem endpoint as defined in the apiconnect-up.yml file.
    #
    use_backend be_analytics if { req_ssl_sni -i analytics.ingestion.sample.example.com }


backend be_analytics
    mode tcp
    option tcplog
    balance roundrobin
    option ssl-hello-chk

    #
    # One entry per analytics node.
    # Hostname and TCP Port for the analytics node.
    #
    server analytics0 analytics1.sample.example.com:443 check
    server analytics1 analytics2.sample.example.com:443 check
    server analytics2 analytics3.sample.example.com:443 check

######## GATEWAY CONFIGURATION ########
frontend fe_gateway
    mode tcp
    option tcplog
    #
    # The hostname and TCP port for the gateway load balancer.
    #
    bind ubuntu.sample.example.com:443
    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_hello_type 1 }

    #
    # The values for the gateway subsystem endpoints as defined in the apiconnect-up.yml file.
    #
    use_backend be_gateway if { req_ssl_sni -i api-gateway.sample.example.com } or { req_ssl_sni -i apic-gw-service.sample.example.com }


backend be_gateway 
    mode tcp
    option tcplog
    balance roundrobin
    option ssl-hello-chk

    #
    # One entry per gateway node.
    # Hostname and TCP Port for the gateway node.
    #
    server gateway0 gateway1.sample.example.com:443 check
    server gateway1 gateway2.sample.example.com:443 check
    server gateway2 gateway3.sample.example.com:443 check

The following diagram shows a typical layout for a load balancer in front of three IBM API Connect subsystems, each of which consists of three appliance or OVA nodes:

[Diagram: load balancer configuration in an OVA deployment]