How to deploy, manage, and secure your container-based workloads – Part 3
Kubernetes provides several options when applications in a cluster are ready to be consumed by the outside world. The IBM Bluemix Container Service based on Kubernetes has several logical networks inside the cluster that are not accessible outside of it, as discussed in my previous post . However, to expose our applications, we need to understand how the logical networks relate to the physical network that the cluster infrastructure is placed on. In this entry, we look at what options are available for the outside world to call our applications running in Kubernetes.
This blog series is based on my team’s experience deploying our Microservices reference architecture; you can find the code for our simple storefront application on GitHub.
Learn more about our Microservices architecture
This post is a continuation of a series on the networking topology of the IBM Bluemix Container Service. The previous entries are listed below:
- Kubernetes Cluster Networking Infrastructure on IBM Bluemix: Get an overview of the VMs and physical networks that are involved in creating your cluster on Bluemix Infrastructure.
- Kubernetes Application Networking on IBM Bluemix – Review: Review the logical networks inside Kubernetes and how the applications that run inside of Kubernetes communicate with each other.
- Kubernetes Application Networking on IBM Bluemix – Communication (this entry): Learn how the applications that run inside Kubernetes can communicate with the outside world.
- Connect it – Using a VPN to connect Kubernetes on IBM Bluemix to on-premises resources: Learn how to connect networks outside of Bluemix by using a secure VPN tunnel and a Vyatta Gateway Appliance.
- Protect it – Firewall and Network Policy for Kubernetes applications on IBM Bluemix: Learn how to define a firewall to restrict who can communicate with your applications that run on Kubernetes.
Avoid exposing your services directly on worker nodes, if possible
The simplest way to expose services running in Kubernetes is directly on the worker nodes, using the NodePort service type. This forwards traffic from a configurable (usually high-numbered) port on every interface of every worker node to the service’s target port.
Note that I wrote a port on every worker node. This means I can connect to the public-facing (or private-facing) IP address of any of the worker nodes in my cluster on the forwarded port and access my service. This is the “magic” of the kube-proxy running on every node, which writes iptables rules to intercept incoming traffic and forward requests to one of the pods backing the service, even if those pods are running on another node. Because the service is exposed this way, kube-proxy is also load balancing access among the pods that are part of the service.
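As a minimal sketch, a NodePort service for a hypothetical web-bff deployment might look like the following; the names and port numbers here are placeholders for illustration, not taken from our storefront application:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-bff                # hypothetical service name
spec:
  type: NodePort
  selector:
    app: web-bff               # matches the labels on the backing pods
  ports:
  - port: 80                   # cluster-internal service port
    targetPort: 8080           # container port on the pods
    nodePort: 30080            # exposed on every worker node interface
```

If nodePort is omitted, Kubernetes assigns one from the configured node port range (30000–32767 by default).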
The downside of using the NodePort service type is that clients need to know the IP addresses of all worker nodes if they want high availability. If a worker node goes down for some reason, the client code’s logic needs to try the next worker node and stop trying to connect to the unavailable worker node. Similarly, if I add a new worker node, the client code is responsible for discovering the node’s IP address.
I really don’t like the idea of mixing management traffic with application traffic on the same network, yet with NodePort my clients connect on the same interface that the Kubernetes master nodes use to talk to the worker nodes! Let’s look at another option in the next section.
Using floating IPs with the LoadBalancer service type
As mentioned in Part 1 of the series, when I select the public VLAN that the worker nodes are placed on, a public portable subnet (/29) is created on that VLAN to serve as a pool of Internet-facing IP addresses; 5 of its addresses are usable. If needed, additional subnets can also be added to the cluster. One of the IP addresses in the public portable subnet is automatically consumed by the Ingress controller, which we’ll discuss in the next section.
Another option is to deploy my named service using the LoadBalancer type, which gives me all of the NodePort service features discussed earlier plus more. In the IBM Bluemix Container Service implementation, a LoadBalancer service consumes one of the remaining addresses in my public portable subnet, and creates a deployment running two keepalived pods within the ibm-system namespace that manage the public IP address for the service. kube-proxy once again sets up iptables rules to forward the traffic from the load balancer service’s IP address and port(s) to one of the pods supporting the service.
For example, using the LoadBalancer service type, I can make my web app accessible directly to the Internet at the address 169.145.120.3 shown in the diagram. I can publish any port (for example, port 80 or port 443), and even multiple ports.
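As a sketch, switching the same hypothetical service to the LoadBalancer type could look like this; the loadBalancerIP field is optional, and the address shown is just the example from the diagram, so check which addresses in your portable subnet are actually free:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-bff                    # hypothetical service name
spec:
  type: LoadBalancer
  loadBalancerIP: 169.145.120.3    # optional; an address is assigned automatically if omitted
  selector:
    app: web-bff
  ports:
  - name: http
    port: 80
    targetPort: 8080
  - name: https
    port: 443
    targetPort: 8443
```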
One advantage to this is that my clients only need to know one IP address, which is static for my application. And since the keepalived pods are running on different worker nodes (via Kubernetes pod anti-affinity), I now have high availability for the service’s public IP address, and it’s transparent to my clients. My application traffic is also separate from my management traffic, as they are two distinct subnets.
If I have more than one application running in Kubernetes that I want to expose, I can use the LoadBalancer type for each one so they have different IP addresses and different DNS names. For example, I might have several backend-for-frontends (BFFs) that support different clients that call the same reusable microservices running in Kubernetes. I can use a LoadBalancer type and register different DNS names for each one, and scale out each of them independently depending on the load.
Exposing my REST APIs using Ingress Resources
We can also expose services as Ingress resources on the Ingress controller. The Ingress controller is a special LoadBalancer-type service automatically deployed with the cluster. It’s an nginx-based container deployment that can be used to expose one or more services to the Internet. The IBM Bluemix Container Service registers a unique public DNS name that resolves to the public IP address for my Ingress controller, similar to <my-cluster-name>.<region>.containers.mybluemix.net. Note that the public DNS name is truncated to 64 characters.
For example, I have my Orders service REST API (at /micro/orders) that I deployed to my awesome-kube Kubernetes cluster in the us-south region. I can expose it to the Internet using the Ingress controller by creating an Ingress resource at the path /micro/orders. Now my clients can call my Orders service REST API at https://awesome-kube.us-south.containers.mybluemix.net/micro/orders.
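A sketch of the corresponding Ingress resource might look like the following; the resource name, backing service name, and service port are illustrative, and the apiVersion reflects the Ingress API available at the time of writing:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: orders-api                 # hypothetical resource name
spec:
  rules:
  - host: awesome-kube.us-south.containers.mybluemix.net
    http:
      paths:
      - path: /micro/orders
        backend:
          serviceName: orders      # hypothetical service backing the REST API
          servicePort: 8080        # hypothetical port exposed by that service
```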
As mentioned above, the Ingress controller is exposed through the LoadBalancer service type using the reserved public IP, making it highly available. One advantage of using the Ingress controller is that the public DNS entry for it is automatically set up when the cluster is deployed. In addition to registering a public DNS name, the IBM Bluemix Container Service also provides a CA-signed certificate for the assigned DNS name. This certificate is created as a Kubernetes secret in the “default” namespace and can be used to terminate TLS connections for L7 routing. (Note: The Ingress resource and the corresponding Kubernetes secret must exist in the same namespace.) If I’m exposing multiple services using different paths, I can terminate the TLS connections for all of them at the Ingress controller instead of having to set up multiple endpoints and multiple certificates.
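To terminate TLS at the Ingress controller, a tls section referencing that secret can be added to the Ingress spec. This is only a sketch: the secret name shown (typically matching the cluster name) is an assumption worth verifying with kubectl get secrets in the default namespace:

```yaml
spec:
  tls:
  - hosts:
    - awesome-kube.us-south.containers.mybluemix.net
    secretName: awesome-kube       # assumed name of the IBM-provided certificate secret
  rules:
  # ... same path rules as in the example above ...
```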
The Ingress controller is a good option for an API gateway pattern when I have a lot of microservices in Kubernetes that I want to expose to third-party clients; if I want to expose the REST APIs for my services directly to the Internet, I can create Ingress resources for each service to publish them. This is also great for introducing new versions of APIs; developers can deploy Ingress resources for any new API versions they want to expose along with the Kubernetes deployments and container images. They can also retire old APIs by removing their Ingress resources, taking them out of public consumption.
Conclusion
Kubernetes provides several options when we want to expose our applications to clients outside of the cluster. Being able to manage which services are exposed using Kubernetes abstractions like the Ingress controller lets developers deliver features quickly without extra overhead. And we can separate our management traffic from our application traffic, allowing us to define robust firewall policies at the router level. For more information and examples for testing public access to applications in your cluster, see Allowing public access to apps in IBM Bluemix Container Service. For a broader introduction to microservices, check out the Architecture Center in the IBM Garage Method:
Learn more about our Microservices architecture
In my next post, we’ll look at how to connect on-premises networks to the applications running in a Kubernetes cluster on IBM Bluemix. In closing, I would like to recognize the timely contributions and corrections for this post from Richard Theis (IBM Cloud Network Development) and Shaival Chokshi (IBM Container Services). Thanks!
References
- Microservices Architecture (ibm.com/devops/method)
- NodePort: Publishing services – service types (kubernetes.io/docs)
- kube-proxy: Config reference (kubernetes.io/docs)