As of 15 December 2020, the PROXY protocol is now supported for load balancer and Ingress services in IBM Cloud Kubernetes Service clusters hosted on VPC infrastructure.
When designing modern software architectures, developers often include various proxy solutions in the application stack to solve different problems. This is especially true for cloud architectures and applications running in cloud environments, where network and application load balancers are frequently used. However, when connections are relayed through proxies, the original connection parameters, such as the source address, might get lost.
To address this problem, HAProxy developed the PROXY protocol, which enables backend applications to receive client connection information that is passed through proxy servers and load balancers.
The following examples show how you can use the PROXY protocol in IBM Cloud Kubernetes Service clusters to preserve the source information.
Configuring the PROXY protocol for load balancers
To make your application accessible from outside your Kubernetes cluster, you can expose it with a load balancer service. After you create the Kubernetes load balancer service object, the Cloud Controller Manager ensures that a load balancer is provisioned for you. On VPC infrastructure, this results in the creation of a VPC load balancer instance.
To specify additional configuration for load balancer creation, you can add annotations to your Kubernetes service object, such as service.kubernetes.io/ibm-load-balancer-cloud-provider-enable-features. If you add proxy-protocol to the list of enabled features, the generated VPC load balancer adds PROXY protocol information to the forwarded traffic.
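For example, adding the annotation to a load balancer service manifest looks roughly like the following sketch (the service and label names are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-loadbalancer            # illustrative name
  annotations:
    # enable the PROXY protocol feature on the generated VPC load balancer
    service.kubernetes.io/ibm-load-balancer-cloud-provider-enable-features: "proxy-protocol"
spec:
  type: LoadBalancer
  selector:
    app: my-app                    # assumes pods labeled app: my-app
  ports:
    - port: 80
      targetPort: 8080             # the backend must be prepared to parse PROXY protocol headers
```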
Example
The application in this example runs in a single-zone Kubernetes 1.19 cluster that uses VPC generation 2 compute. The application accepts HTTP connections and returns information about the received requests, such as the client address.
The application listens on two ports: 9080 for receiving traffic with PROXY protocol headers and 8080 for receiving traffic without the headers. Note that you cannot use one port for both types of traffic and process PROXY protocol headers only when they happen to be sent. As the PROXY protocol specification states, the receiver MUST be configured to only receive the protocol described in the specification and MUST NOT try to guess whether the protocol header is present or not.
To demonstrate the PROXY protocol functionality, expose the application with one load balancer service that has the PROXY protocol feature enabled and one that does not:
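A sketch of the two service definitions, assuming the application pods carry an illustrative app: echo-server label and listen on the two ports described above:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: echo-server-proxy-protocol        # illustrative name
  annotations:
    service.kubernetes.io/ibm-load-balancer-cloud-provider-enable-features: "proxy-protocol"
spec:
  type: LoadBalancer
  selector:
    app: echo-server
  ports:
    - port: 80
      targetPort: 9080                    # app port that parses PROXY protocol headers
---
apiVersion: v1
kind: Service
metadata:
  name: echo-server                       # illustrative name; no PROXY protocol feature
spec:
  type: LoadBalancer
  selector:
    app: echo-server
  ports:
    - port: 80
      targetPort: 8080                    # app port that expects plain traffic
```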
Now, test access to the application by sending requests to the generated load balancer hostnames.
First, test access through the load balancer that does not have the PROXY protocol feature enabled:
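For example (the hostname placeholder and the response shape are illustrative; the address is the one discussed below):

```
$ curl http://<hostname-of-the-lb-without-proxy-protocol>/
{
  "client_address": "10.240.64.4",
  ...
}
```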
The client address, 10.240.64.4, is the IP address of the worker node that received the request from the VPC load balancer; it is not the address of the original client.
Now test access to the application through the load balancer that uses the PROXY protocol feature:
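Again, the hostname placeholder and response shape are illustrative:

```
$ curl http://<hostname-of-the-lb-with-proxy-protocol>/
{
  "client_address": "169.53.74.35",
  ...
}
```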
This time, the client address, 169.53.74.35, is the actual public IP address of the original client.
Limitations
The PROXY protocol feature is only supported for VPC generation 2 clusters that run Kubernetes version 1.18 or later.
Configuring the PROXY protocol for Ingress ALBs
Behind the scenes, your managed Ingress Application Load Balancers (ALBs) are also exposed by regular Kubernetes load balancer services, similar to the load balancers that exposed the test application in the previous example.
When using Ingress ALBs to expose your HTTP applications, the ALB additionally proxies the traffic that was first proxied by the VPC load balancer. Sounds fun, right? If you want to preserve original client information in this architecture, there is good news: when your ALBs run the Kubernetes Ingress Controller image, you can preserve the client information by enabling the PROXY protocol.*
To quickly enable the PROXY protocol for your Ingress ALBs, you can use the new ibmcloud ks ingress lb proxy-protocol enable CLI command. The command accepts two optional flags: --cidr and --header-timeout. The --cidr flag can be specified multiple times to set the IP address range(s) of trusted load balancers, while the --header-timeout flag can be used to set the number of seconds to wait for PROXY protocol headers when the Ingress ALB receives a packet.
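For example, a sketch of enabling the feature with both flags set (the cluster name and values are illustrative, and the --cluster flag is assumed here as the usual ibmcloud ks target flag):

```
ibmcloud ks ingress lb proxy-protocol enable --cluster mycluster \
  --cidr 10.0.0.0/8 \
  --header-timeout 5
```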
After running this command, the public and private load balancers that expose your ALBs are recreated** with the PROXY protocol feature enabled. Also, your ALBs are configured to expect PROXY protocol headers on incoming requests.***
If you decide to turn off sending PROXY protocol headers, you can use the ibmcloud ks ingress lb proxy-protocol disable command.
The public and private load balancers are recreated** again, but this time the PROXY protocol feature is removed from the list of enabled cloud provider features.
To see your configuration, you can run the ibmcloud ks ingress load-balancer get command:
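For example (again assuming the --cluster flag), the command prints the current PROXY protocol configuration of your load balancers:

```
ibmcloud ks ingress load-balancer get --cluster mycluster
```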
* The Kubernetes Ingress Controller image for your ALBs supports consuming traffic with PROXY protocol headers from clients, but it does not use the PROXY protocol when talking to the upstream applications.
** Due to technical limitations and to minimize your network outage, new load balancers with the PROXY protocol configuration are created first. Then, after the new load balancers are online and active, the old load balancers are deleted. This process temporarily uses two additional IP addresses from your VPC network.
*** The ALBs use the kube-system/ibm-k8s-controller-config ConfigMap, in which we define the use-proxy-protocol, proxy-real-ip-cidr, and proxy-protocol-header-timeout configuration options.
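As a sketch, the relevant ConfigMap entries might look like the following when the PROXY protocol is enabled (the values shown are illustrative):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ibm-k8s-controller-config
  namespace: kube-system
data:
  use-proxy-protocol: "true"
  proxy-real-ip-cidr: "10.0.0.0/8"      # trusted load balancer range(s), set via --cidr
  proxy-protocol-header-timeout: "5"    # time to wait for PROXY protocol headers, set via --header-timeout
```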
Example
The same application from the previous example, which accepts HTTP connections and returns information about the received requests, is used in this example. This time, the following Ingress resource is defined so that the application is exposed by Ingress ALBs that run the Kubernetes Ingress controller image:
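A sketch of such an Ingress resource, reusing the illustrative echo-server service from before and a placeholder for the default Ingress subdomain (the ingress class annotation value is an assumption for selecting ALBs that run the Kubernetes Ingress controller image):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: echo-server                                         # illustrative name
  annotations:
    kubernetes.io/ingress.class: "public-iks-k8s-nginx"     # assumption: class of ALBs running the Kubernetes Ingress controller image
spec:
  rules:
    - host: <cluster>.<region>.containers.appdomain.cloud   # placeholder for the default Ingress subdomain
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: echo-server                           # illustrative service name
                port:
                  number: 80                                # maps to the plain 8080 app port; the ALB does not use PROXY protocol upstream
```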
Test access to the app through the Ingress ALB by sending a request to the default Ingress subdomain for the cluster:
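For instance (the subdomain placeholder and the response shape are illustrative; the addresses are the ones discussed below):

```
$ curl http://<ingress-subdomain>/
{
  "client_address": "172.17.24.71",
  "request_headers": {
    "x-forwarded-for": "10.240.64.10",
    "x-real-ip": "10.240.64.10",
    ...
  }
}
```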
The client address, 172.17.24.71, is the private IP address of an ALB pod that forwarded the incoming traffic to the application.
However, in the Request Headers section of the app response, some commonly used headers are added by the Ingress ALB. The x-forwarded-for and x-real-ip headers contain the IP address of the client that connected to the Ingress ALB. In this case, this is the IP address of the worker node (10.240.64.10) that received the incoming traffic from the VPC load balancer.
The Ingress ALB logged the request containing the client address:
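Roughly, in a standard NGINX-style access log format (the timestamp and remaining fields are illustrative):

```
10.240.64.10 - - [15/Dec/2020:13:37:00 +0000] "GET / HTTP/1.1" 200 ...
```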
Now enable the PROXY protocol for the ALBs:
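With the same illustrative cluster name and assumed --cluster flag as before:

```
ibmcloud ks ingress lb proxy-protocol enable --cluster mycluster
```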
After around 30 minutes, the PROXY protocol configuration is applied, and the ALB pods are restarted:
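For example, you can watch the ALB pods in the kube-system namespace restart:

```
kubectl get pods -n kube-system | grep alb
```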
Now, send another request to the Ingress subdomain to check whether the IP address of the original client, carried in the PROXY protocol headers, shows up in the forwarded request headers:
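As before, the response shape and the ALB pod address are illustrative; the header values are the ones discussed below:

```
$ curl http://<ingress-subdomain>/
{
  "client_address": "172.17.24.71",
  "request_headers": {
    "x-forwarded-for": "169.53.74.35",
    "x-real-ip": "169.53.74.35",
    ...
  }
}
```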
The client address is again the IP address of an ALB pod that forwarded the traffic to the application. However, this time, the x-forwarded-for and x-real-ip headers contain the actual client IP address, 169.53.74.35. Also, when the PROXY protocol is used, the access logs generated by the ALBs contain the actual client address:
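Again, an illustrative NGINX-style log line:

```
169.53.74.35 - - [15/Dec/2020:13:42:00 +0000] "GET / HTTP/1.1" 200 ...
```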
Limitations
To process PROXY protocol headers, your ALBs must run the Kubernetes Ingress controller image. Check out the announcement and the official documentation for the Kubernetes Ingress controller image.
Currently, you cannot configure the PROXY protocol separately for public and private ALBs. The PROXY protocol is simultaneously enabled or disabled for both ALB types.
More information
For more information, check out our official documentation about exposing apps with load balancers and about preserving source IP addresses with Ingress application load balancers.
Contact us
If you have questions, engage our team via Slack by registering here and join the discussion in the #general channel on our public IBM Cloud Kubernetes Service Slack.