
Kubernetes and IBM Bluemix: How to deploy, manage, and secure your container-based workloads


More developers are packaging their microservices in containers to improve productivity and accelerate deployment to production. However, managing a cluster of servers on the cloud to run those containers is difficult without a container orchestration platform. The next evolution of the IBM Bluemix Container Service is based on Kubernetes and can be deployed on IBM Bluemix Infrastructure in a few clicks. With the service, you design and launch a cluster into your Bluemix Infrastructure account and then deploy and manage your container-based workloads with the standard kubectl command-line tool.
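To make that concrete, here is a minimal sketch of the workflow using the Bluemix CLI container-service plugin and kubectl. The cluster name, registry namespace, and image are hypothetical, and the plugin commands reflect the CLI at the time of writing, so verify them with bx cs help.

    # Point kubectl at the cluster; the command prints an
    # "export KUBECONFIG=..." line to copy and run.
    bx cs cluster-config my-cluster

    # Deploy a hypothetical storefront image and verify that its pod starts.
    kubectl run storefront --image=registry.ng.bluemix.net/mynamespace/storefront:1.0 --port=8080
    kubectl get pods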

The service automates the full end-to-end installation of Kubernetes through the Bluemix portal. Before you create a cluster, though, you need to know which infrastructure components make up the cluster, where the cluster is going to be placed, how to connect it to your workloads and data sources, and how to secure it.

This blog series is based on my team’s experience deploying our Microservices reference architecture; you can find the code for our simple storefront application on GitHub.

If you’re not familiar with Bluemix Infrastructure, read on to learn about the infrastructure for the new IBM Bluemix Container Service. If you’re familiar with Bluemix Infrastructure or you have a Bluemix Infrastructure account with devices and networks, read on to learn how you can build new container-based applications in Kubernetes while using your current resources.

Overview of a Kubernetes cluster infrastructure

At a high level, a Kubernetes cluster consists of one or more master nodes that manage and monitor the cluster and one or more worker nodes that run the actual applications. On Bluemix, each cluster gets a dedicated master node that runs in a Bluemix Infrastructure account managed by IBM, while the worker nodes are provisioned into an isolated, customer-owned Bluemix Infrastructure account in one of the supported datacenters.
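One practical consequence of this split: because the master runs in IBM's account, kubectl only ever lists your workers. A quick check, assuming a cluster named my-cluster:

    # Lists only the worker nodes; the IBM-managed master does not appear.
    kubectl get nodes

    # The infrastructure view of those same workers, from the CLI plugin.
    bx cs workers my-cluster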

When you provision a new Kubernetes cluster, you choose not only the CPU and memory size of the worker nodes but also the location, which can be a datacenter where you already have infrastructure in your account. The master and the worker nodes communicate over the public internet through an OpenVPN tunnel that is initiated from the worker nodes.
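For example, before creating a cluster you can inspect what a location offers. The commands below are from the container-service CLI plugin as it existed when this was written, and dal10 is just an example location; check bx cs help for your plugin version.

    # List the datacenters where clusters can be placed.
    bx cs locations

    # Show the worker CPU/memory configurations offered in Dallas 10.
    bx cs machine-types dal10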

Kubernetes cluster with master and worker nodes

If your applications or databases run in VMs or on bare metal servers and you want to connect them to your containers that run in Kubernetes, reduce latency by placing the worker nodes in a datacenter that is as geographically close to those applications as possible. Fewer network hops means lower latency and better application performance. Bluemix Infrastructure has more than 40 datacenter locations to choose from.

I liked that the control plane and Kubernetes master are provisioned in a Bluemix Infrastructure account that is managed by IBM, while the worker nodes are provisioned in my own customer-owned account. That separation lets me observe the infrastructure that supports my Kubernetes cluster while IBM manages the control plane.

How Bluemix Infrastructure uses public and private networks

Bluemix Infrastructure places all servers on two networks by default. The public network allows servers to be reached from the internet. The private network allows servers to communicate with each other over a high-speed backbone that connects all customer-owned servers in all Bluemix Infrastructure datacenters. We were able to separate private back-end traffic, such as database access, monitoring, and logging, from public front-end traffic coming from clients. There's no cost to transfer data over the private network, and it's fast.


With Bluemix Infrastructure, you can optionally provision firewalls, routers, and load balancers to further protect the network segments (VLANs and subnets) that your servers are placed on. You can define rules about who can access your infrastructure. For example, you might want to expose certain applications to only your internal users through a VPN tunnel. In that case, you can define a set of firewall rules that block all traffic from the Internet to the servers that run your application, so it’s accessible from only the private network.
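As an illustration, a rule set like the following, written in Vyatta-style syntax, drops all inbound public traffic except established return flows. The rule-set name, interface, and VLAN tag are hypothetical; adapt them to your appliance and firmware version.

    configure
    # Drop everything arriving from the internet by default.
    set firewall name PUBLIC-IN default-action drop
    # Allow return traffic for connections initiated from the inside.
    set firewall name PUBLIC-IN rule 10 action accept
    set firewall name PUBLIC-IN rule 10 state established enable
    set firewall name PUBLIC-IN rule 10 state related enable
    # Apply the rule set inbound on the public interface for VLAN 1234 (hypothetical).
    set interfaces bonding bond1 vif 1234 firewall in name PUBLIC-IN
    commit
    save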

As part of the Kubernetes cluster creation, you select the public and private VLANs that the worker nodes will use; both must be in the same datacenter pod on Bluemix Infrastructure, and the cluster is then provisioned into that pod. Like other devices on Bluemix Infrastructure, each worker node consumes one IP address from the primary subnet on the public VLAN and one IP address from the primary subnet on the private VLAN.
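The CLI plugin can list the VLANs in a datacenter so you can pass the chosen IDs at creation time. The VLAN IDs, cluster name, and machine type below are placeholders, and the flags are from the plugin as it existed when this was written:

    # List the public and private VLANs available in Dallas 10.
    bx cs vlans dal10

    # Create a cluster whose workers land on specific VLANs.
    bx cs cluster-create --name my-cluster --location dal10 \
      --workers 3 --machine-type u1c.2x4 \
      --public-vlan 1502345 --private-vlan 1502346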

Because the worker nodes are in your infrastructure account, you can use VLANs that sit behind routers and firewalls you already use to protect servers on Bluemix Infrastructure. For example, you might have network traffic policies that you define at the gateway level. In my case, instead of reusing VLANs that already had servers on them, I ordered additional VLANs dedicated to my container-based workloads and used the Vyatta Gateway Appliance that I already had in the datacenter pod to route and protect the network traffic. If you don't have any VLANs in the datacenter where you want to provision the worker nodes, new ones are created for you.

Separating Kubernetes management and application traffic

The IBM Bluemix Container Service also provisions two portable subnets, one private and one public, on the selected VLANs; their addresses serve as floating IPs for exposing services that run in Kubernetes outside of the cluster. As a result, you can separate Kubernetes cluster management traffic from application traffic. You can further secure the cluster by defining a set of firewall rules that allow only the master node to communicate with the worker nodes on the management network. Those rules are distinct from the rules that allow clients to communicate with the applications that run in Kubernetes. We'll look at how to do that in part 5 of this series.
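To sketch what that looks like in practice, you can expose the hypothetical storefront deployment from earlier through a LoadBalancer service, which then receives an address from the portable public subnet:

    # Expose the deployment; the service is assigned a floating IP
    # from the portable public subnet provisioned with the cluster.
    kubectl expose deployment storefront --name=storefront-lb \
      --type=LoadBalancer --port=80 --target-port=8080

    # The EXTERNAL-IP column shows the assigned portable IP once it is ready.
    kubectl get service storefront-lb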

If you’re using a Vyatta Gateway Appliance to route traffic on the VLANs, be sure to configure it to route traffic to these portable subnets as well, or your clients won’t be able to connect to the cluster. In part 3 of the series, we will explore how the IBM Bluemix Container Service uses these portable subnets to expose services and applications that run in Kubernetes to clients.
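What that route looks like depends on your gateway, but in Vyatta-style syntax it might resemble the sketch below. The subnet is a documentation-range placeholder and the next hop is hypothetical, so take this only as the shape of the change:

    configure
    # Route the portable public subnet toward the worker nodes' VLAN
    # (both addresses below are placeholders).
    set protocols static route 203.0.113.0/29 next-hop 198.51.100.1
    commit
    save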

Traffic routing through the IBM Bluemix Container Service

Conclusion

While the IBM Bluemix Container Service makes it easy to provision a Kubernetes cluster, it’s important to understand the network topology of the infrastructure pieces, especially if you have servers on Bluemix Infrastructure like we did in our reference implementation. In my next post, I’ll expand on the internal Kubernetes networking concepts and how they are relevant in a microservices architecture.
