Load balancer configuration in a Kubernetes deployment

When you deploy API Connect for high availability, it is recommended that you configure a cluster with at least three nodes and a load balancer. This topic provides a sample configuration for placing a load balancer in front of your API Connect Kubernetes deployment.

About this task

API Connect can be deployed on a single-node cluster. In that case the ingress endpoints are host names whose DNS resolution points to the IP address of the single node that hosts a particular subsystem, and no load balancer is required. For high availability, a cluster of at least three nodes is recommended. With three or more nodes, the ingress endpoints can no longer resolve to the IP address of a single node, so a load balancer should be placed in front of the API Connect subsystems to route traffic across the nodes.

Because it is difficult to add nodes after the endpoints are configured, a good practice is to configure a load balancer even for a single-node deployment. With a load balancer in place, you can add nodes later as needed: add the new node to the list of servers that the load balancer points to, and the ingress endpoints defined during installation of API Connect remain unchanged.
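
For example, if HAProxy is the load balancer (as in the sample configuration later in this topic), adding a worker node is a single additional server line in the k8s_ingress backend, followed by a reload of HAProxy (for example, systemctl reload haproxy). The node name and IP address here are illustrative:

    # added to the existing k8s_ingress backend in the HAProxy configuration file
    server cluster1-worker-7 10.169.241.200:443 check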

To support Mutual TLS communication between the API Connect subsystems, configure the load balancer for SSL passthrough and Layer 4 load balancing. For Mutual TLS to be performed directly by the API Connect subsystems, the load balancer must leave the TLS traffic unmodified, which Layer 4 load balancing accomplishes. The endpoints that are configured with Mutual TLS communicate as follows:

  • API Manager (with the client certificate portal-client) communicates with the Portal Admin endpoint portal-admin (with the server certificate portal-admin-ingress)
  • API Manager (with the client certificate analytics-client-client) communicates with the Analytics Client endpoint analytics-client (with the server certificate analytics-client-ingress)
  • API Manager (with the client certificate analytics-ingestion-client) communicates with the Analytics Ingestion endpoint analytics-ingestion (with the server certificate analytics-ingestion-ingress)
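
One way to verify that the load balancer is passing TLS through unmodified is to connect to one of these endpoints and inspect the server certificate that is presented; with SSL passthrough you see the subsystem ingress certificate (for example, portal-admin-ingress) rather than a certificate owned by the load balancer. This is a minimal check, assuming the openssl command-line tool is available and the host name shown is one of your configured endpoints:

    openssl s_client -connect portal-admin.example.com:443 \
            -servername portal-admin.example.com </dev/null 2>/dev/null | \
            openssl x509 -noout -subject -issuer
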
Set endpoints to resolve to the load balancer
When you configure a load balancer in front of the API Connect subsystems, set the ingress endpoints to host names that resolve to the load balancer, rather than to the host name of any specific node. For an overview of endpoints, see Deployment overview for endpoints and certificates.
Use these example topologies as a guideline to determine the best way to configure the load balancer for your deployment.

Procedure

  • Kubernetes cluster with wildcard DNS

    In this example, API Connect is deployed to a Kubernetes cluster with three master nodes and six worker nodes. The master nodes run the Kubernetes API server, listening on port 6443; this is the endpoint that the kubectl command-line interface communicates with. Note that it is generally not necessary to expose the Kubernetes API server (port 6443 in this example) outside the cluster through the load balancer; doing so is just one possible setup, shown here for completeness.

    The kubernetes/ingress-nginx ingress controller is deployed as a DaemonSet, so that every worker node in the cluster has an ingress controller pod listening on port 443. The API Connect subsystems (API Manager, Developer Portal, Analytics, and Gateway) are all deployed on this same cluster.
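
    One possible way to get this layout, assuming the controller is installed with the official ingress-nginx Helm chart (the release name and namespace are illustrative), is to set the controller kind to DaemonSet, enable host ports, and enable SSL passthrough:

    helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
    helm install ingress-nginx ingress-nginx/ingress-nginx \
      --namespace ingress-nginx --create-namespace \
      --set controller.kind=DaemonSet \
      --set controller.hostPort.enabled=true \
      --set controller.extraArgs.enable-ssl-passthrough=true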

    Note:

    When you configure a load balancer in front of a Management subsystem, specify timeouts of at least 240 seconds. Note that large deployments might need larger values.

    The default timeout is typically 50 or 60 seconds, which is not long enough to avoid 409 Conflict or 504 Gateway Timeout errors. The 409 Conflict error can occur when an operation takes long enough that a second request is issued before the first one completes.

    For example, to specify 240 seconds when using HAProxy as a load balancer, set timeout client and timeout server to 240000 (HAProxy timeout values without a unit suffix are interpreted as milliseconds).

    Best practice is to specify the same timeout values for the load balancer and for the Kubernetes ingress controller. The ingress controller timeout settings are set in the ingress controller ConfigMap. For more information, see the proxy-read-timeout and proxy-send-timeout settings in Kubernetes ingress controller prerequisites.
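
    For example, with the kubernetes/ingress-nginx controller, the timeouts can be set in the controller ConfigMap; the ConfigMap name and namespace below depend on how the controller was installed, and the values are in seconds:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: ingress-nginx-controller
      namespace: ingress-nginx
    data:
      proxy-read-timeout: "240"
      proxy-send-timeout: "240"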

    A host running HAProxy acts as the load balancer, with proxy definitions in the HAProxy configuration file such as the following:

    defaults
            log     global
            mode    http
            option  httplog
            option  dontlognull
            timeout connect 5000
            timeout client  240000
            timeout server  240000
            errorfile 400 /etc/haproxy/errors/400.http
            errorfile 403 /etc/haproxy/errors/403.http
            errorfile 408 /etc/haproxy/errors/408.http
            errorfile 500 /etc/haproxy/errors/500.http
            errorfile 502 /etc/haproxy/errors/502.http
            errorfile 503 /etc/haproxy/errors/503.http
            errorfile 504 /etc/haproxy/errors/504.http
    
    # Layer 4 (TCP) frontends: TLS traffic is passed through unmodified
    frontend apiservers
            bind *:6443
            mode tcp
            option  tcplog
            default_backend k8s_apiservers
    
    frontend ingress
            bind *:443
            mode tcp
            option  tcplog
            default_backend k8s_ingress
    
    backend k8s_apiservers
            mode tcp
            option  tcplog
            option ssl-hello-chk
            option log-health-checks
            default-server inter 10s fall 2
            server cluster1-master-1 10.169.241.226:6443 check
            server cluster1-master-2 10.169.241.163:6443 check
            server cluster1-master-3 10.169.241.153:6443 check
    
    
    backend k8s_ingress
            mode tcp
            option  tcplog
            option ssl-hello-chk
            option log-health-checks
            default-server inter 10s fall 2
            server cluster1-worker-1 10.169.241.142:443 check
            server cluster1-worker-2 10.169.241.150:443 check
            server cluster1-worker-3 10.169.241.188:443 check
            server cluster1-worker-4 10.169.241.196:443 check
            server cluster1-worker-5 10.169.241.168:443 check
            server cluster1-worker-6 10.169.241.156:443 check

    A wildcard DNS record allows the resolution of *.example.com to the IP address of the HAProxy host.
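
    A BIND-style zone entry for such a wildcard record might look like the following (the IP address of the HAProxy host is illustrative):

    *.example.com.    300    IN    A    192.0.2.10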

    The ingress endpoints are defined as:

    # API Manager
    apicup subsys set mgmt api-manager-ui=apim.example.com
    apicup subsys set mgmt cloud-admin-ui=apim.example.com
    apicup subsys set mgmt consumer-api=apim.example.com
    apicup subsys set mgmt platform-api=apim.example.com
    
    # Developer Portal
    apicup subsys set ptl portal-www=portal.example.com
    apicup subsys set ptl portal-admin=portal-admin.example.com
    
    # Analytics
    apicup subsys set a7s analytics-client=a7s-client.example.com
    apicup subsys set a7s analytics-ingestion=a7s-ingestion.example.com
    
    # Gateway
    apicup subsys set gw api-gateway=gw.example.com
    apicup subsys set gw apic-gw-service=gwd.example.com

    The following diagram illustrates the wildcard DNS example:

    [Diagram: load balancer configuration in a Kubernetes deployment]
  • Kubernetes cluster without wildcard DNS

    This example uses the same topology as the previous one, except that there is no wildcard DNS resolution. Instead, individual DNS records are added, all pointing to the IP address of the load balancer; for example, portal.example.com and admin.portal.example.com both resolve to that address. A sample set of records is shown after the endpoint commands below.

    The Developer Portal ingress endpoints can be defined as:

    # Developer Portal
    apicup subsys set ptl portal-www=portal.example.com
    apicup subsys set ptl portal-admin=admin.portal.example.com
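
    For reference, the corresponding individual DNS records might look like the following BIND-style entries (the load balancer IP address is illustrative):

    apim.example.com.             300    IN    A    192.0.2.10
    portal.example.com.           300    IN    A    192.0.2.10
    admin.portal.example.com.     300    IN    A    192.0.2.10
    a7s-client.example.com.       300    IN    A    192.0.2.10
    a7s-ingestion.example.com.    300    IN    A    192.0.2.10
    gw.example.com.               300    IN    A    192.0.2.10
    gwd.example.com.              300    IN    A    192.0.2.10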
  • Kubernetes cluster with xip.io or nip.io

    This example uses the same topology as the first one, except that no specific DNS resolution is configured; instead, a service such as xip.io or nip.io provides wildcard DNS resolution. This approach is not recommended for a production setup, because the ingress endpoints configured during installation of API Connect are tied to the IP address of the load balancer. It can, however, be useful for a test deployment when access to DNS records is not practical or would delay the deployment. If the load balancer IP address is 192.168.100.100, the ingress endpoints are configured as:

    # Developer Portal
    apicup subsys set ptl portal-www=portal.192.168.100.100.xip.io
    apicup subsys set ptl portal-admin=portal-admin.192.168.100.100.xip.io