Hybrid Deployments

Kubernetes and IBM Bluemix: How to deploy, manage, and secure your container-based workloads – Part 2


As developers package individual microservices in containers to be deployed to the cloud, they know that each microservice is not an island: in an application composed of microservices, the services must be able to call one another. When deploying to the IBM Bluemix Container Service, which is based on Kubernetes, there’s not just the physical network that the cluster infrastructure is placed on, but also logical networks that Kubernetes exposes to containers.

In the previous entry, we briefly covered the physical networks involved in a Kubernetes cluster and described the networks for application and management-related traffic. In this entry, we look at the network details more closely to better understand how your container-based workloads communicate with each other in Kubernetes.

In our reference implementation, we created several backend microservices that need to communicate with each other. We will look at how the logical networks in Kubernetes made it easy for us to pull these microservices together into our web-based storefront application.

This post is a continuation of a series on the networking topology of the IBM Bluemix Container Service.

How IBM Bluemix Container Service controls networks in Kubernetes

IBM Bluemix Container Service uses the open-source Project Calico under the covers to control networks in Kubernetes. Calico is a Software Defined Networking (SDN) controller that can manage virtual networks across a cluster based on application requirements defined by an orchestrator like Kubernetes.

Subnets for container traffic are defined by Kubernetes, and routes for each of the container subnets in the cluster are distributed using the Border Gateway Protocol (BGP), the same scalable protocol that Internet Service Providers use to dynamically exchange routes between themselves. This keeps routing up to date as new worker nodes join the Kubernetes cluster and containers are created and destroyed. In addition, Kubernetes can use Calico to define a network policy per workload, which allows you to write fine-grained firewall rules to secure your applications. We will explore this in the “Protect It” entry of this series.
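As a small preview of that entry, per-workload rules are expressed as Kubernetes NetworkPolicy resources, which Calico enforces. The sketch below is hypothetical: the orders-allow-web name and the app labels are illustrative assumptions, not part of our reference implementation.

```yaml
# Hypothetical sketch: only pods labeled app=web may open connections
# to pods labeled app=orders; all other ingress to them is dropped.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: orders-allow-web
spec:
  podSelector:
    matchLabels:
      app: orders          # assumed label on the target pods
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: web         # assumed label on the allowed callers
```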

In the IBM Bluemix Container Service, all traffic to and from pods (groups of containers in Kubernetes) is encapsulated using IP-in-IP tunnels, and is routed through the kube-proxy process running on each worker node. Kube-proxy intercepts and controls where to forward the traffic, either to another worker node running your destination pod, or outside of the cluster.
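You rarely need to look at this plumbing yourself, but if you had shell access to a worker node, the result would look roughly like this (a sketch: tunl0 is the tunnel interface Calico typically creates for IP-in-IP, and all addresses are illustrative):

```
# Routes to pod subnets hosted on other worker nodes point at the
# tunl0 IP-in-IP tunnel interface managed by Calico (values illustrative):
$ ip route | grep tunl0
172.30.58.0/26   via 10.131.55.18 dev tunl0 proto bird onlink
172.30.99.64/26  via 10.131.55.22 dev tunl0 proto bird onlink
```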

As a developer, it’s enough to know that “it just works” and your containers can talk to any other container running in the same cluster over a flat subnet. Below we discuss how your microservices can leverage the networks exposed by Kubernetes.

Avoid direct pod-to-pod communication

Kubernetes is configured with a large, flat subnet (e.g., 172.30.0.0/16) that is reserved for internal application traffic inside the cluster. Each worker node in the Kubernetes cluster is assigned one or more non-overlapping slices of this network, coordinated by the Kubernetes master node. When a container is created in the cluster, it is assigned to a worker node and given an IP address from that worker node’s slice of the subnet. Any pod can communicate with any other pod using its assigned IP address, even if it’s on a different worker node.
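You can see these assignments for yourself with kubectl. A sketch, assuming kubectl is configured against your cluster (pod names and addresses are illustrative, and some columns are trimmed):

```
# The IP column shows the address each pod received from its worker
# node's slice of the flat 172.30.0.0/16 pod subnet:
$ kubectl get pods -o wide
NAME                      READY   STATUS    IP            NODE
orders-3510927xx-x8zvk    1/1     Running   172.30.58.5   10.131.55.18
catalog-2897514xx-q4mtp   1/1     Running   172.30.99.70  10.131.55.22
```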

[Figure: Pod-to-pod communication]

Pods can be assigned new IP addresses when they get restarted, replicated, or re-deployed. Since such IP reassignments are very likely to occur, it’s not a good idea for pods to talk directly to other pods this way. Instead, we should define a service resource that represents a group of pods.

Connecting to pods reliably using services

Each cluster has a second flat subnet, which we call the cluster IP subnet (e.g., 10.10.10.0/24), that is also not accessible from outside the worker nodes. When a service resource is created in Kubernetes, it is assigned a virtual cluster IP from this second subnet. Any communication with this cluster IP is intercepted by the kube-proxy daemon and forwarded to one of the pods exposed by the service.

[Figure: Cluster forwarding]

For example, in our reference implementation, we have a pod that implements an Orders microservice, so we created a service resource in Kubernetes so that other microservices running in the cluster can call the Orders microservice using its cluster IP. If the Orders service pod gets restarted or redeployed, the same cluster IP still forwards traffic to the new pod that implements the Orders service.
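Here is a minimal sketch of such a service definition. Only the orders-service name comes from our implementation; the app: orders label and the container port 8080 are assumptions for illustration:

```yaml
# Hypothetical Service manifest: gives the Orders pods one stable,
# load-balanced cluster IP, reachable under the name "orders-service".
apiVersion: v1
kind: Service
metadata:
  name: orders-service
spec:
  selector:
    app: orders        # assumed label carried by the Orders pods
  ports:
  - port: 80           # port that callers inside the cluster use
    targetPort: 8080   # assumed port the Orders container listens on
```

Applying this manifest with kubectl assigns the cluster IP, and kube-proxy takes care of the forwarding described above.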

High availability and horizontal scaling using services

The cluster IP is load balanced: if I have more than one pod implementing my Orders microservice, requests to the cluster IP are distributed across those pods in round-robin fashion. This enables high availability for my microservice, since I can deploy a replica set of identical pods spread across my worker nodes behind a single cluster IP. If one of my pods crashes, Kubernetes deploys another one somewhere else, and the cluster IP continues forwarding my requests.

Additionally, when it’s the holiday season and I need to handle more orders, I can add more Orders microservice pods, and the cluster IP transparently spreads requests across the new pods. Kubernetes lets us horizontally scale out the number of pods behind our Orders service when demand is high, without changing the way the service is published.
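Assuming the Orders pods are managed by a deployment named orders (a hypothetical name), scaling out is a single command, and the existing cluster IP picks up the new pods automatically:

```
# Raise the (hypothetical) orders deployment to 5 replicas; the
# orders-service cluster IP immediately round-robins across all 5.
$ kubectl scale deployment orders --replicas=5
deployment "orders" scaled
```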

[Figure: High availability and scaling using services]

Using the Cluster DNS to perform name lookups

The cluster IP chosen for a service is selected at random, but the service’s name and cluster IP are registered as a pair with kube-dns, the internal DNS system that each pod’s DNS resolver configuration points at by default. Calling a service from any pod in the Kubernetes cluster is then as simple as resolving the service name to its cluster IP.

[Figure: Using the cluster DNS to perform name lookups]

This way, another developer writing a microservice that calls my OrdersService REST API just points an HTTP client at http://orders-service/micro/orders to reach my API.
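You can verify both steps, name resolution and the call itself, from any pod in the cluster. A sketch, assuming a pod named web-12345 with nslookup and curl available (names and addresses illustrative):

```
# kube-dns resolves the service name to its cluster IP...
$ kubectl exec -it web-12345 -- nslookup orders-service
Name:      orders-service
Address 1: 10.10.10.143

# ...and an ordinary HTTP call to that name reaches an Orders pod.
$ kubectl exec -it web-12345 -- curl http://orders-service/micro/orders
```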

Conclusion

Kubernetes’ networking abstractions make things simpler for developers who write code packaged in containers: they can call any dependent service from their code and it “just works”. The internal network is presented to applications as a flat, routable subnet. In our next post, we’ll look at how clients outside of the cluster, like web browsers or mobile devices, can call our microservices application running on the Kubernetes cluster on IBM Bluemix over the Internet.

 

Learn more about IBM Cloud architectures
