IBM Cloud Kubernetes Service Deployment Patterns #1: Single-Zone Cluster

5 min read

By: Arpad Kun

Deployment Patterns #1: Single-Zone Cluster, App exposed via LoadBalancer (NLB) and ALB (Ingress Controller)

In my previous post, “IBM Cloud Kubernetes Service: Deployment Patterns for Maximizing Throughput and Availability,” I briefly described a few cluster deployment patterns that should be considered when you are looking to deploy IBM Cloud Kubernetes Service clusters. When choosing the right pattern, you must consider the requirements of the application you are running (including scale), the SLA target, and the budget. The simplest pattern is deploying an IBM Cloud Kubernetes Service cluster in a single zone within a region, and we’ll go into more detail on this option here.

LoadBalancer vs. ALB/Ingress controller: When should I use each one?

The LoadBalancer is typically a Layer 4 (in the OSI model) load balancer and is implemented by using a Network Load Balancer (NLB). For a Kubernetes cluster, this typically means TCP and UDP (and, in some cases, SCTP). The LoadBalancer service has no concept of the higher layers (e.g., the application layer); it does not understand HTTP, for example, which is a Layer 7 protocol.
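
To make that concrete, here is a minimal sketch of a LoadBalancer Service definition; the name, label, and port below are placeholders rather than values from the example later in this post. Note that the spec only carries Layer 4 details (a protocol and port numbers):

    # Minimal sketch of a type: LoadBalancer Service (placeholder names and ports).
    # The spec only describes Layer 4 information: a protocol and ports.
    apiVersion: v1
    kind: Service
    metadata:
      name: my-tcp-service          # placeholder name
    spec:
      type: LoadBalancer
      selector:
        app: my-app                 # placeholder label selector
      ports:
        - protocol: TCP
          port: 1883                # e.g., a binary protocol such as MQTT
          targetPort: 1883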

Application Load Balancer (ALB)/Ingress controllers are fundamentally reverse proxies. They are typically used when the application speaks a protocol that the proxy understands, so the proxy can provide additional features, functionality, and value. Microservices are usually reached (and even talk to each other) over HTTP. If this is the case, a Layer 7 proxy that can make smart decisions based on HTTP headers, GET/POST parameters, cookies, etc. is a great tool for request routing, application-level load balancing, and incorporating higher protocol level (L7) information in the routing decisions. ALBs/Ingress controllers typically run as user-space daemons in Kubernetes pods.
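
As an illustration of the kind of Layer 7 routing an Ingress resource can express, here is a sketch that routes by hostname and URL path. The hostnames, paths, and backend service names are hypothetical, and the sketch uses the current networking.k8s.io/v1 Ingress API:

    # Sketch of Layer 7 (HTTP) routing rules; hosts, paths, and services are hypothetical.
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: example-routing
    spec:
      rules:
        - host: api.example.com
          http:
            paths:
              - path: /orders              # route this path to one microservice
                pathType: Prefix
                backend:
                  service:
                    name: orders-svc
                    port:
                      number: 8080
              - path: /users               # and this path to another
                pathType: Prefix
                backend:
                  service:
                    name: users-svc
                    port:
                      number: 8080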

If the protocol is unknown to the ALB (e.g., a binary protocol like MQTT, RTMP, MySQL, PostgreSQL, etc.), a proxy-like load balancer (such as the ALB) does not give much benefit over a Layer 4 load balancer (such as the LoadBalancer service). Therefore, if your ALB will not process HTTP requests (e.g., if it is not terminating the TLS connection for HTTPS), we suggest you use the IBM Cloud Kubernetes Service LoadBalancer, which is more efficient, faster at packet processing and forwarding, able to keep the source IP address of the connecting clients, and able to scale horizontally across multiple worker nodes.

Example deployment pattern

In this article, we are going to go through the steps to deploy an example application with this deployment pattern: a single-zone cluster with the app exposed both directly through the LoadBalancer (NLB) and through the ALB (Ingress controller).


Steps to expose app directly via LoadBalancer

  1. Sign up and create a single-zone IBM Cloud Kubernetes Service cluster using the IBM Cloud Console. Please follow the documentation on deploying a cluster and specifically how single-zone clusters work. Important: You have to use the paid tier in order to use the LoadBalancer (NLB).

  2. Download and apply the following example Deployment and Service resource YAML, which exposes the echoserver application via the LoadBalancer service on port 1884 (the key parts are sketched after the command). You can also apply it directly:

    $ kubectl apply -f https://raw.githubusercontent.com/IBM-Cloud/kube-samples/master/loadbalancer-alb/iks_single-zone_cluster_app_via_LoadBalancer.yaml
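
For reference, the resources defined in that file look roughly like the following abridged sketch; the image tag, replica count, and resource names here are illustrative, and the linked YAML is the authoritative version. The important parts are type: LoadBalancer, the external port 1884, and externalTrafficPolicy: Local, which comes up again in the test below:

    # Abridged sketch of the Deployment and LoadBalancer Service (illustrative names and image).
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: echoserver
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: echoserver
      template:
        metadata:
          labels:
            app: echoserver
        spec:
          containers:
            - name: echoserver
              image: gcr.io/google-containers/echoserver:1.10   # illustrative image tag
              ports:
                - containerPort: 8080
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: echoserver-lb
    spec:
      type: LoadBalancer
      externalTrafficPolicy: Local    # preserve the client source IP
      selector:
        app: echoserver
      ports:
        - protocol: TCP
          port: 1884                  # the external port used in the curl test below
          targetPort: 8080
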
  3. Check the IP address of the LoadBalancer service:

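Assuming the Service name from the sketch above (substitute the name from the YAML you actually applied), you can read the external IP from the EXTERNAL-IP column:

    $ kubectl get svc echoserver-lb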

Test the app

  1. To test, load the IP:port you specified in your browser or initiate curl commands (as in my example):

    $ curl http://{your IP here}:1884/
  2. You should see a response in which the echoserver echoes back the details of your request, including the request headers and the client address.

You can see the source IP address of the connecting client in the client_address field because we applied the externalTrafficPolicy: Local setting in the LoadBalancer Service resource.

Steps to expose app via the ALB/Ingress controller

  1. Sign up and create a single-zone IBM Cloud Kubernetes Service cluster using the IBM Cloud Console. Please follow the documentation on deploying a cluster and specifically how single-zone clusters work. Important: You have to use the paid tier in order to use ALBs.

  2. Check to see if everything came up and the ALBs are running fine; one quick check is shown below. You can find more useful commands on the IBM Cloud Kubernetes Service Ingress/ALB Cheat Sheets.
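
For example, a quick way to verify that the ALB pods are up (pod names vary by cluster, and this only checks the pods, not the full ALB configuration):

    $ kubectl get pods -n kube-system | grep alb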

  3. Download, edit, and apply the following example Deployment and Ingress resource YAML, which exposes the echoserver application via the ALB/Ingress controller on both port 80 (HTTP) and 443 (HTTPS):

    $ kubectl apply -f iks_single_or_multi-zone_cluster_app_via_ALB.yaml

Note: Do not forget to edit the Host and secretName values first (see the sketch below).
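
For reference, the Ingress portion of that file looks roughly like the following abridged sketch, written against the current networking.k8s.io/v1 API; the placeholders mark the host and secretName values you need to replace with your cluster's Ingress subdomain and TLS secret, and the backend service name should match the Service defined in the file:

    # Abridged Ingress sketch; replace the placeholder host and secretName values.
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: echoserver-ingress
    spec:
      tls:
        - hosts:
            - echoserver.<your-ingress-subdomain>     # edit: your cluster's Ingress subdomain
          secretName: <your-ingress-tls-secret>       # edit: your cluster's TLS secret
      rules:
        - host: echoserver.<your-ingress-subdomain>   # edit to match the TLS host above
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: echoserver                  # illustrative; match the Service in the file
                    port:
                      number: 8080
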
  4. To test, load the host you specified in your browser or initiate curl commands (as in my example):

    $ curl https://echoserver.arpad-ipvs-test-aug14.us-south.containers.appdomain.cloud/
  5. You should see a response similar to the one from the LoadBalancer test, this time delivered via the IBM Cloud Kubernetes Service ALB.

Notice that in the x-forwarded-for and x-real-ip headers, you see the IP address of the worker node. This happens because kube-proxy performs source NAT within the Kubernetes cluster and masks the original source IP of the client.

If you want to enable source IP preservation, you have to patch the IBM Cloud Kubernetes Service ALB (you can find further documentation about this step here). To set up source IP preservation for all public ALBs in your cluster, run the following command:

    $ kubectl get svc -n kube-system | grep alb | awk '{print $1}' | grep "^public" | while read alb; do kubectl patch svc $alb -n kube-system -p '{"spec": {"externalTrafficPolicy":"Local"}}'; done
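
If you only want to enable this for a single ALB, you can apply the same patch to that one service (the ALB name below is a placeholder; use the actual name from kubectl get svc -n kube-system):

    $ kubectl patch svc <public-alb-name> -n kube-system -p '{"spec": {"externalTrafficPolicy":"Local"}}'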

Once the patch is applied, you should see the original source IP address of the client in the x-forwarded-for and x-real-ip headers.

Finding the right pattern

As you learn more about your workload, you can adjust and even switch between patterns as needed. Different applications will require different patterns; please let us help you decide which is right!

You can learn more about the various deployment patterns in the other posts in this series.

Contact us

If you have questions, engage our team via Slack by registering here and joining the discussion in the #general channel on our public IBM Cloud Kubernetes Service Slack.
