IBM Cloud Kubernetes Service Deployment Patterns #2: Multi-Zone Cluster—ALB/Ingress Controller

5 min read

By: Arpad Kun


In my previous post, “IBM Cloud Kubernetes Service: Deployment Patterns for Maximizing Throughput and Availability,” I briefly described a few cluster deployment patterns that should be considered when you are looking to deploy IBM Cloud Kubernetes Service clusters. When choosing the right pattern, you must consider the requirements of the application you are running (including scale), the SLA target, and the budget. In this pattern, you can observe the default behavior of a multi-zone IBM Cloud Kubernetes Service cluster and the application load balancer (ALB).

Example deployment pattern

We are going to go through the steps to deploy an example application with the following deployment pattern:

screenshot

Steps

  1. Sign up and create a multi-zone IKS cluster using the IBM Cloud Console. Please read the documentation on deploying a cluster and specifically how multi-zone clusters work. Important: You have to use the paid tier in order to use ALBs.

  2. Verify that the cluster is up and the ALBs are running properly. You can find useful commands on the IBM Cloud Kubernetes Service Ingress Cheatsheets.

  3. Download, edit, and apply the following example Deployment and Ingress resource YAML, which will expose the echoserver application via the ALB/Ingress controller on both port 80 (HTTP) and port 443 (HTTPS):

    $ kubectl apply -f iks_single_or_multi-zone_cluster_app_via_ALB.yaml

    Note: Do not forget to edit the Host and secretName values.
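For orientation, the referenced YAML might look roughly like the following. This is a minimal sketch, not the actual file from the post: the replica count, echoserver image tag, and especially the host and TLS secret name are placeholders you must replace with your own cluster's Ingress subdomain and secret (it also assumes a Kubernetes version that supports the networking.k8s.io/v1 Ingress API):

```yaml
# Sketch of iks_single_or_multi-zone_cluster_app_via_ALB.yaml (placeholders marked)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echoserver
spec:
  replicas: 3                      # one pod per zone in a 3-zone cluster
  selector:
    matchLabels:
      app: echoserver
  template:
    metadata:
      labels:
        app: echoserver
    spec:
      containers:
      - name: echoserver
        image: k8s.gcr.io/echoserver:1.10
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: echoserver
spec:
  selector:
    app: echoserver
  ports:
  - port: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: echoserver
spec:
  tls:
  - hosts:
    - echoserver.<your-cluster-ingress-subdomain>   # edit Host
    secretName: <your-cluster-ingress-secret>       # edit secretName
  rules:
  - host: echoserver.<your-cluster-ingress-subdomain>
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: echoserver
            port:
              number: 8080
```

The ALB picks up the Ingress resource and routes traffic for the specified host to the echoserver pods across all zones.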

Test the app

  1. To test, load the host you specified in your browser or run curl commands (as in my example):

    $ curl https://echoserver.arpad-ipvs-test-aug14.us-south.containers.appdomain.cloud/
  2. You should see a response similar to the following:

screenshot

Response to a successful curl delivered via the IBM Cloud Kubernetes Service ALB

Notice that in the x-forwarded-for and x-real-ip headers, you see the IP address of the worker node rather than that of the client. This happens because kube-proxy performs source NAT within the Kubernetes cluster, masking the original source IP of the client.

If you want to enable source IP preservation, you have to patch the IBM Cloud Kubernetes Service ALB (you can find further documentation about this step here). To set up source IP preservation for all public ALBs in your cluster, run the following command:

$ kubectl get svc -n kube-system | grep alb | awk '{print $1}' | grep "^public" | while read alb; do kubectl patch svc "$alb" -n kube-system -p '{"spec": {"externalTrafficPolicy":"Local"}}'; done
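To confirm the patch took effect, you can print each public ALB service alongside its traffic policy; after the patch, every entry should read Local. This is a sketch that assumes kubectl access to your cluster and the same `public-*` ALB service naming used above:

```shell
# Print each public ALB service name with its externalTrafficPolicy;
# after the patch, every line should end with "Local".
for alb in $(kubectl get svc -n kube-system | awk '$1 ~ /^public/ && /alb/ {print $1}'); do
  kubectl get svc "$alb" -n kube-system \
    -o jsonpath='{.metadata.name}{"\t"}{.spec.externalTrafficPolicy}{"\n"}'
done
```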
screenshot

Once the patch is applied, you will see the original source IP address of the client showing up in the x-forwarded-for and x-real-ip headers:

screenshot

Finding the right pattern

As you learn more about your workload, you can adjust and even switch between patterns as needed. Different applications will require different patterns; please let us help you decide which is right!

You can learn more about the various deployment patterns in the following posts:

Contact us

If you have questions, engage our team via Slack by registering here and joining the discussion in the #general channel on our public IBM Cloud Kubernetes Service Slack.
