Deployment Patterns #1: Single-Zone Cluster, App exposed via LoadBalancer (NLB) and ALB (Ingress Controller)
In my previous post, “IBM Cloud Kubernetes Service: Deployment Patterns for Maximizing Throughput and Availability,” I briefly described a few cluster deployment patterns that should be considered when you are looking to deploy IBM Cloud Kubernetes Service clusters. When choosing the right pattern, you must consider the requirements of the application you are running (including scale), the SLA target, and the budget. The simplest pattern is deploying an IBM Cloud Kubernetes Service cluster in a single zone within a region, and we’ll go into more detail on this option here.
LoadBalancer vs. ALB/Ingress controller: When should I use each one?
The LoadBalancer is a Layer 4 (in OSI terms) load balancer and is implemented by using a Network Load Balancer (NLB). For a Kubernetes cluster, this typically means TCP and UDP (and in some cases SCTP). The LoadBalancer service has no concept of the higher layers (e.g., the application layer); it does not understand HTTP, for example, which is a Layer 7 protocol.
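For example, a minimal Service of type LoadBalancer exposes raw TCP through the NLB. This is a sketch; the name, label, and ports are illustrative, not from the original article:

```yaml
# Minimal Layer 4 exposure: the NLB forwards TCP as-is and never
# inspects anything above it (no HTTP awareness).
# "tcp-echo" and its label are assumed names for illustration.
apiVersion: v1
kind: Service
metadata:
  name: tcp-echo-lb
spec:
  type: LoadBalancer
  selector:
    app: tcp-echo
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
```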
Application Load Balancer (ALB)/Ingress controllers are fundamentally reverse proxies. They are typically used when the application speaks a protocol that the proxy understands, so the proxy can provide additional features, functionality, and value. Microservices are usually reached (and even talk to each other) over HTTP. If this is the case, a Layer 7 proxy that can make smart decisions based on HTTP headers, GET/POST parameters, cookies, etc., is a great tool for request routing, application-level load balancing, and incorporating higher protocol level (L7) information into the routing decisions. ALBs/Ingress controllers typically run as user-space daemons in Kubernetes pods.
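By contrast, a sketch of an Ingress resource shows the kind of Layer 7 decisions an ALB can make. The host, paths, and Service names below are assumptions for illustration:

```yaml
# Layer 7 routing: the ALB inspects the Host header and URL path,
# which a Layer 4 NLB cannot do.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
    - host: echo.example.com      # assumed host name
      http:
        paths:
          - path: /api            # /api traffic to one Service...
            pathType: Prefix
            backend:
              service:
                name: api-svc     # assumed Service name
                port:
                  number: 8080
          - path: /               # ...everything else to another
            pathType: Prefix
            backend:
              service:
                name: web-svc     # assumed Service name
                port:
                  number: 8080
```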
If the protocol is unknown to the ALB (e.g., a binary protocol like MQTT, RTMP, MySQL, or PostgreSQL), a proxy-like load balancer (like the ALB) does not give much benefit over a Layer 4 load balancer (like the LoadBalancer service). Therefore, if your ALB will not process HTTP requests (e.g., if it is not terminating the TLS connection for HTTPS), we suggest you use the IBM Cloud Kubernetes Service LoadBalancer instead: it is more efficient, faster at packet processing and forwarding, able to keep the source IP address of the connecting clients, and able to scale horizontally across multiple worker nodes.
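For instance, a broker speaking a binary protocol such as MQTT would be exposed directly through the NLB. In this sketch (names assumed), externalTrafficPolicy: Local is the standard Kubernetes setting that keeps the client source IP:

```yaml
# Binary protocol (MQTT on 1883) through the Layer 4 NLB.
# externalTrafficPolicy: Local routes only to local endpoints,
# so the broker sees the real client source IP.
apiVersion: v1
kind: Service
metadata:
  name: mqtt-lb             # assumed name
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  selector:
    app: mqtt-broker        # assumed label
  ports:
    - protocol: TCP
      port: 1883
      targetPort: 1883
```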
Example deployment pattern
In this article, we are going to go through the steps to deploy an example application with this deployment pattern: a single-zone cluster, with the app exposed via a LoadBalancer (NLB) and the ALB (Ingress controller).
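A condensed sketch of the steps follows; the image, resource names, and host name are assumptions for illustration, not the article's exact manifests:

```sh
# Deploy a simple echo app, expose it inside the cluster,
# then route to it through the ALB with an Ingress resource.
kubectl create deployment echo --image=k8s.gcr.io/echoserver:1.10
kubectl expose deployment echo --port=8080
kubectl apply -f echo-ingress.yaml   # an Ingress like the sketch above
curl http://echo.example.com/        # assumed host behind the ALB
```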
You should see a response similar to the following:
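(The output below is illustrative, assuming the echoserver image from the sketch above; real host names and IP addresses will differ.)

```
# illustrative values only
Hostname: echo-7d9f6c7b8-x2kqp

Request Headers:
    host=echo.example.com
    x-forwarded-for=10.76.202.13
    x-real-ip=10.76.202.13
```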
Notice that in the x-forwarded-for and x-real-ip headers, you see the IP address of the worker node. This happens because kube-proxy performs source NAT within the Kubernetes cluster, which masks the original source IP of the client.
If you want to enable source IP preservation, you have to patch the IBM Cloud Kubernetes Service ALB (you can find further documentation about this step here). To set up source IP preservation for all public ALBs in your cluster, run the following command:
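A sketch using kubectl patch, assuming your public ALB Services run in the kube-system namespace and have both "public" and "alb" in their names:

```sh
# Patch every public ALB Service to preserve the client source IP.
# The name filtering is an assumption about your cluster's ALB naming.
kubectl get svc -n kube-system -o name | grep public | grep alb \
  | xargs -I{} kubectl patch {} -n kube-system \
      -p '{"spec":{"externalTrafficPolicy":"Local"}}'
```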