December 17, 2020 By Attila Fábián
Balázs Szekeres
7 min read

As of 15 December 2020, the PROXY protocol is now supported for load balancer and Ingress services in IBM Cloud Kubernetes Service clusters hosted on VPC infrastructure.

When designing modern software architectures, developers often choose to use different kinds of proxy solutions as part of the application stack to solve different kinds of problems. This is especially true for cloud architectures and applications running in cloud environments, where network and application load balancers are often utilized. However, when relaying connections through proxies, the original connection parameters — such as the source address — might get lost.

To address this problem, HAProxy developed the PROXY protocol, which enables backend applications to receive client connection information that is passed through proxy servers and load balancers.
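For example, with version 1 of the protocol, the proxy prepends a single human-readable line to the TCP stream before the application data. The line names the transport protocol, then the original source and destination addresses and ports (the addresses below are illustrative):

PROXY TCP4 203.0.113.7 192.0.2.10 56324 80
GET / HTTP/1.1
Host: example.com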

The following examples show how you can use the PROXY protocol in IBM Cloud Kubernetes Service clusters to preserve the source information.

Configuring the PROXY protocol for load balancers

In order to make your application accessible outside of your Kubernetes cluster, you can expose it with a load balancer service. After creating the Kubernetes load balancer service object, the Cloud Controller Manager makes sure that a load balancer is provisioned for you. When using VPC infrastructure, this results in creating a VPC load balancer instance.

To specify additional configuration for load balancer creation, you can add annotations to your Kubernetes service object, such as the service.kubernetes.io/ibm-load-balancer-cloud-provider-enable-features annotation. If you add proxy-protocol to the list of enabled features, the generated VPC load balancer adds PROXY protocol information to the forwarded traffic.

Example

The application in this example runs in a single-zone Kubernetes 1.19 cluster that uses VPC generation 2 compute. The application accepts HTTP connections and returns information about the received requests, such as the client address.

The application listens on two ports: 9080 for receiving traffic with PROXY protocol headers and 8080 for receiving traffic without them. Note that you cannot use one port for both types of traffic and process PROXY protocol headers only when they happen to be sent. As the PROXY protocol specification puts it, the receiver MUST be configured to only receive the protocol described in the specification and MUST NOT try to guess whether the protocol header is present or not.
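To illustrate why guessing is unsafe, here is a minimal, hypothetical Go sketch of a receiver dedicated to PROXY protocol traffic (this is not the test application's actual code): it reads the version 1 header line first and rejects connections that do not start with one.

// Minimal sketch: a TCP listener that expects a PROXY protocol v1 header
// on every connection, then treats the rest of the stream as the payload.
package main

import (
	"bufio"
	"fmt"
	"log"
	"net"
	"strings"
)

func handle(conn net.Conn) {
	defer conn.Close()
	r := bufio.NewReader(conn)

	// The header is a single CRLF-terminated line, for example:
	// PROXY TCP4 203.0.113.7 192.0.2.10 56324 80
	line, err := r.ReadString('\n')
	if err != nil || !strings.HasPrefix(line, "PROXY ") {
		// Per the spec, a receiver configured for the PROXY protocol
		// must reject connections without a header, not guess.
		log.Printf("rejecting connection without PROXY header")
		return
	}

	fields := strings.Fields(line)
	if len(fields) == 6 {
		fmt.Printf("original client: %s:%s\n", fields[2], fields[4])
	}
	// ... read the actual application payload from r ...
}

func main() {
	ln, err := net.Listen("tcp", ":9080")
	if err != nil {
		log.Fatal(err)
	}
	for {
		conn, err := ln.Accept()
		if err != nil {
			continue
		}
		go handle(conn)
	}
}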

To demonstrate the PROXY protocol functionality, expose the application with one load balancer service that has the PROXY protocol feature enabled and with one load balancer that does not:

apiVersion: v1
kind: Service
metadata:
  name: test-application-pp
  annotations:
    service.kubernetes.io/ibm-load-balancer-cloud-provider-enable-features: "proxy-protocol"
spec:
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 9080
      protocol: TCP
      name: http
  selector:
    app: test-application
---
apiVersion: v1
kind: Service
metadata:
  name: test-application-nopp
spec:
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
      name: http
  selector:
    app: test-application
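
If you save both service manifests to a file, you can create them with kubectl (the file name here is illustrative):

$ kubectl apply -f test-application-services.yaml
service/test-application-pp created
service/test-application-nopp created

Once the VPC load balancer instances are provisioned, the services are assigned hostnames: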
$ kubectl get services
NAME                    TYPE           CLUSTER-IP      EXTERNAL-IP                            PORT(S)        AGE
test-application-pp     LoadBalancer   172.21.214.69   04e59846-us-south.lb.appdomain.cloud   80:30161/TCP   46s
test-application-nopp   LoadBalancer   172.21.221.74   5d706cab-us-south.lb.appdomain.cloud   80:30675/TCP   45s

Now, test access to the application by sending requests to the generated load balancer hostnames.

First, test access through the load balancer that does not have the PROXY protocol feature enabled:

$ curl 5d706cab-us-south.lb.appdomain.cloud

Hostname: test-application-5ccc6cd54-6lklm

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.240.64.4
	method=GET
	real path=/
	query=
	request_version=1.1
	request_scheme=http
	request_uri=http://5d706cab-us-south.lb.appdomain.cloud:8080/

Request Headers:
	accept=*/*
	host=5d706cab-us-south.lb.appdomain.cloud
	user-agent=curl/7.58.0

Request Body:
	-no body in request-

The client address, 10.240.64.4, is the IP address of the worker node that received the request from the VPC load balancer and is not the address of the original client:

$ kubectl get nodes
NAME          STATUS   ROLES    AGE   VERSION
10.240.64.4   Ready    <none>   20h   v1.19.3+IKS
10.240.64.5   Ready    <none>   20h   v1.19.3+IKS

Now test access to the application through the load balancer that uses the PROXY protocol feature:

$ curl 04e59846-us-south.lb.appdomain.cloud

Hostname: test-application-5ccc6cd54-6lklm

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=169.53.74.35
	method=GET
	real path=/
	query=
	request_version=1.1
	request_scheme=http
	request_uri=http://04e59846-us-south.lb.appdomain.cloud:9080/

Request Headers:
	accept=*/*
	host=04e59846-us-south.lb.appdomain.cloud
	user-agent=curl/7.58.0

Request Body:
	-no body in request-

This time, the client address, 169.53.74.35, is the actual public IP address of the original client.

Limitations

The PROXY protocol feature is only supported for VPC generation 2 clusters that run Kubernetes version 1.18 or later.

Configuring the PROXY protocol for Ingress ALBs

Behind the scenes, your managed Ingress Application Load Balancers (ALBs) are also exposed by regular Kubernetes load balancer services, similar to the load balancers that exposed the test application in the previous example.

When you use Ingress ALBs to expose your HTTP applications, the ALB additionally proxies the traffic that is first proxied by the VPC load balancer. Sounds fun, right? If you want to preserve original client information in this architecture, good news: when your ALBs run the Kubernetes Ingress controller image, you can enable the PROXY protocol to keep that information intact.*

To quickly enable the PROXY protocol for your Ingress ALBs, you can use the new ibmcloud ks ingress lb proxy-protocol enable CLI command. The command accepts two optional flags: --cidr and --header-timeout. The --cidr flag can be specified multiple times to set the IP address ranges of trusted load balancers, while the --header-timeout flag sets the number of seconds that the Ingress ALB waits to receive the PROXY protocol headers on a new connection.
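For example, a hypothetical invocation that trusts load balancers in the 192.0.2.0/24 range and waits up to 10 seconds for the headers could look like the following (the cluster name is illustrative):

$ ibmcloud ks ingress lb proxy-protocol enable --cluster mycluster --cidr 192.0.2.0/24 --header-timeout 10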

After running this command, the public and private load balancers that expose your ALBs are recreated** with the PROXY protocol feature enabled. Also, your ALBs are configured to expect PROXY protocol headers on incoming connections.***

$ ibmcloud ks ingress lb proxy-protocol enable --help
NAME:
        enable - [Beta] Enable the PROXY protocol so that client connection information is passed in request headers to ALBs.

USAGE:
        ibmcloud ks ingress lb proxy-protocol enable --cluster CLUSTER [--cidr CIDR ...] [-f] [--header-timeout TIMEOUT] [-q]

PARAMETERS:
    --cluster value, -c value  Specify the cluster name or ID.
    --cidr value               [Beta] The IP address ranges of your load balancers in CIDR format. PROXY headers that are forwarded by load balancers in other IP ranges are not processed.
    --header-timeout value     [Beta] ALBs that run Kubernetes Ingress image only: The timeout value, in seconds, for the load balancer to receive the PROXY protocol headers. (default: 5)
    -f                         Force the command to run with no user prompts.
    -q                         Do not show the message of the day or update reminders.

If you decide to turn off sending PROXY protocol headers, you can use the ibmcloud ks ingress lb proxy-protocol disable command.
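For the cluster used in this example, the invocation mirrors the enable command shown later in the walkthrough:

$ ibmcloud ks ingress lb proxy-protocol disable --cluster proxy-protocol-blog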

The public and private load balancers are recreated** again, but this time the PROXY protocol feature is removed from the list of enabled cloud provider features.

$ ibmcloud ks ingress lb proxy-protocol disable --help
NAME:
        disable - [Beta] Disable the PROXY protocol so that client connection information is no longer passed in request headers to ALBs.

USAGE:
        ibmcloud ks ingress lb proxy-protocol disable --cluster CLUSTER [-f] [-q]

PARAMETERS:
    --cluster value, -c value  Specify the cluster name or ID.
    -f                         Force the command to run with no user prompts.
    -q                         Do not show the message of the day or update reminders.

In order to see your configuration, you can run the ibmcloud ks ingress load-balancer get command:

$ ibmcloud ks ingress load-balancer get --help
NAME:
        get - [Beta] Get the configuration of load balancers that expose Ingress ALBs in your cluster.

USAGE:
        ibmcloud ks ingress load-balancer get --cluster CLUSTER [--output OUTPUT] [-q]

PARAMETERS:
    --cluster value, -c value  Specify the cluster name or ID.
    --output                   Prints the command output in the provided format. Available options: json
    -q                         Do not show the message of the day or update reminders.

* The Kubernetes Ingress controller image for your ALBs supports consuming traffic with PROXY protocol headers from clients, but it does not use the PROXY protocol when talking to the upstream applications.

** Due to technical limitations and to minimize network disruption, new load balancers with the PROXY protocol configuration are created first. Then, after the new load balancers are online and active, the old load balancers are deleted. This process temporarily uses two additional IP addresses from your VPC network.

*** The ALBs use the kube-system/ibm-k8s-controller-config ConfigMap, in which we define the use-proxy-protocol, proxy-real-ip-cidr, and proxy-protocol-header-timeout configuration options.
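Based on those option names, the relevant ConfigMap data after enabling the feature might look similar to this sketch (the values shown are illustrative; the timeout uses the NGINX duration syntax):

apiVersion: v1
kind: ConfigMap
metadata:
  name: ibm-k8s-controller-config
  namespace: kube-system
data:
  use-proxy-protocol: "true"
  proxy-real-ip-cidr: "192.0.2.0/24"
  proxy-protocol-header-timeout: "5s"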

Example

The same application from the previous example, which accepts HTTP connections and returns information about the received requests, is used in this example. This time, the following Ingress resource is defined so that the application is exposed by Ingress ALBs that run the Kubernetes Ingress controller image:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-application
  annotations:
    kubernetes.io/ingress.class: "public-iks-k8s-nginx"
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: ImplementationSpecific
        backend:
          service:
            name: test-application-nopp
            port:
              number: 80

Test access to the app through the Ingress ALB by sending a request to the default Ingress subdomain for the cluster:

$ curl proxy-protocol-blog.us-south.containers.appdomain.cloud

Hostname: test-application-5ccc6cd54-lzr9s

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=172.17.24.71
	method=GET
	real path=/
	query=
	request_version=1.1
	request_scheme=http
	request_uri=http://proxy-protocol-blog.us-south.containers.appdomain.cloud:8080/

Request Headers:
	accept=*/*
	host=proxy-protocol-blog.us-south.containers.appdomain.cloud
	user-agent=curl/7.58.0
	x-forwarded-for=10.240.64.10
	x-forwarded-host=proxy-protocol-blog.us-south.containers.appdomain.cloud
	x-forwarded-port=80
	x-forwarded-proto=http
	x-real-ip=10.240.64.10
	x-request-id=43fb5298f60dd965aa809d1be30b3a0f
	x-scheme=http

Request Body:
	-no body in request-

The client address, 172.17.24.71, is the private IP address of an ALB pod that forwarded the incoming traffic to the application:

$ kubectl get pods -n kube-system -o wide | grep 172.17.24.71
public-crbuls7lk20t9qglbsgskg-alb1-98c5d665c-m6pc4   1/1     Running   0          62m   172.17.24.71    10.240.64.9    <none>           <none>

However, in the Request Headers section of the app response, you can see some commonly used headers that the Ingress ALB added. The x-forwarded-for and x-real-ip headers contain the IP address of the client that connected to the Ingress ALB. In this case, this is the IP address of the worker node (10.240.64.10) that received the incoming traffic from the VPC load balancer.

The Ingress ALB logged the request containing the client address:

$ kubectl logs -n kube-system deployments/public-crbuls7lk20t9qglbsgskg-alb1 | tail -n 1 | jq .
{
  "time_date": "2020-11-12T21:27:27+00:00",
  "client": "10.240.64.10",
  "host": "proxy-protocol-blog.us-south.containers.appdomain.cloud",
  "scheme": "http",
  "request_method": "GET",
  "request_uri": "/",
  "request_id": "43fb5298f60dd965aa809d1be30b3a0f",
  "status": 200,
  "upstream_addr": "172.17.24.72:8080",
  "upstream_status": 200,
  "request_time": 0.001,
  "upstream_response_time": 0,
  "upstream_connect_time": 0,
  "upstream_header_time": 0
}

Now enable the PROXY protocol for the ALBs:

$ ibmcloud ks ingress lb proxy-protocol enable --cluster proxy-protocol-blog

After you run this command, existing load balancers are deleted and recreated, which can cause service disruptions. Two unused IP addresses for each new load balancer must be available in each subnet during the load balancer recreation. Modify the PROXY protocol configuration? [y/N]> y
OK
Note: It might take up to 30 minutes for your configuration to be fully applied.

After around 30 minutes, the PROXY protocol configuration is applied, and the ALB pods are restarted:

$ kubectl get pods -n kube-system --watch-only
NAME                                                 READY   STATUS    RESTARTS   AGE
public-crbuls7lk20t9qglbsgskg-alb1-98c5d665c-m6pc4   0/1     Pending   0          0s
public-crbuls7lk20t9qglbsgskg-alb1-668d79bffd-j7mfm   1/1     Terminating   0          33h
...

Now, send another request to the Ingress subdomain to check whether the original client IP address, which the VPC load balancer now passes to the ALB in PROXY protocol headers, shows up in the request headers:

$ curl proxy-protocol-blog.us-south.containers.appdomain.cloud

Hostname: test-application-5ccc6cd54-lzr9s

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=172.17.24.74
	method=GET
	real path=/
	query=
	request_version=1.1
	request_scheme=http
	request_uri=http://proxy-protocol-blog.us-south.containers.appdomain.cloud:8080/

Request Headers:
	accept=*/*
	host=proxy-protocol-blog.us-south.containers.appdomain.cloud
	user-agent=curl/7.58.0
	x-forwarded-for=169.53.74.35
	x-forwarded-host=proxy-protocol-blog.us-south.containers.appdomain.cloud
	x-forwarded-port=80
	x-forwarded-proto=http
	x-real-ip=169.53.74.35
	x-request-id=ab1c3fe0fb707fecd538bd5e073d787d
	x-scheme=http

Request Body:
	-no body in request-

The client address is again the IP address of an ALB pod that forwarded the traffic to the application. However, this time, the x-forwarded-for and x-real-ip headers contain the actual client IP address, 169.53.74.35. Also, when the PROXY protocol is used, the access logs that the ALBs generate contain the actual client address:

$ kubectl logs -n kube-system deployments/public-crbuls7lk20t9qglbsgskg-alb1 | tail -n 1 | jq .
{
  "time_date": "2020-11-12T22:34:40+00:00",
  "client": "169.53.74.35",
  "host": "proxy-protocol-blog.us-south.stg.containers.appdomain.cloud",
  "scheme": "http",
  "request_method": "GET",
  "request_uri": "/",
  "request_id": "ab1c3fe0fb707fecd538bd5e073d787d",
  "status": 200,
  "upstream_addr": "172.17.24.72:8080",
  "upstream_status": 200,
  "request_time": 0,
  "upstream_response_time": 0,
  "upstream_connect_time": 0,
  "upstream_header_time": 0
}

Limitations

In order to process PROXY protocol headers, your ALBs must run the Kubernetes Ingress controller image. Check out the announcement and the official documentation for the Kubernetes Ingress controller image.
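If you are unsure which image your ALBs run, you can list the ALBs in your cluster; for example (assuming the cluster from this walkthrough):

$ ibmcloud ks ingress alb ls --cluster proxy-protocol-blog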

Currently, you cannot configure the PROXY protocol separately for public and private ALBs. The PROXY protocol is simultaneously enabled or disabled for both ALB types.

More information

For more information, check out our official documentation about exposing apps with load balancers and about preserving source IP addresses with Ingress application load balancers.

Contact us

If you have questions, engage our team via Slack by registering here and join the discussion in the #general channel on our public IBM Cloud Kubernetes Service Slack.
