Kubernetes and IBM Bluemix: How to deploy, manage, and secure your container-based workloads – Part 4

While writing microservices in containers and deploying them to Kubernetes running on Bluemix is a great way to create greenfield applications, we frequently also have mountains of useful data hosted in on-premise systems of record. What if, for compliance reasons, we can’t migrate all of it to the cloud, but we still want our new microservice applications to leverage this data?

In the IBM Bluemix Container Service, we can use a Vyatta Gateway Appliance or FortiGate Appliance in Bluemix Infrastructure as an IPSec VPN endpoint to connect our Kubernetes cluster to our on-premise datacenter. This allows our services running in Kubernetes to communicate with on-premise applications through a securely encrypted tunnel, so they can leverage resources like mainframes, user directories, or databases that we can’t always put in a Kubernetes pod.

This blog series is based on my team’s experience deploying our Microservices reference architecture; you can find the code for our simple storefront application on GitHub.

This is the fourth entry in the series, and it continues our look at the networking topology of the IBM Bluemix Container Service; specifically, we will review how to establish a VPN/IPSec tunnel through a Vyatta Gateway Appliance.

Firewall/Router with Vyatta Gateway Appliance

One of the major network building blocks in IBM Bluemix Infrastructure is the Vyatta Gateway Appliance, a bare metal server running a special distribution of Linux that you can use to route and protect traffic on a set of VLANs. In this case, the Vyatta acts as both a firewall and a VPN gateway; it must be Internet-routable and reachable, but no other servers need to be exposed to the Internet.

All public and private network traffic that enters or leaves the VLANs is routed through this Vyatta Gateway Appliance, which lets us control who can communicate with our worker nodes. In the “Protect It” entry of this series, we will look at how we can use this device as a gateway-level firewall to secure application and management traffic to our worker nodes.

The Vyatta is also used as a VPN endpoint to create an encrypted IPSec tunnel between servers in Bluemix Infrastructure and servers on-premise. This connects the private network in Bluemix Infrastructure (one or more subnets in 10.0.0.0/8) to one or more on-premise private networks over an encrypted connection.

You can configure the Vyatta side of the VPN tunnel by following these IBM KnowledgeLayer instructions or the official Vyatta documentation. The steps to configure the on-premise side of the tunnel vary based on the firewall or router device used. Once configured, the VPN tunnel connects the on-premise subnets (e.g. 192.168.0.0/24 and 192.168.1.0/24) through an encrypted IPSec tunnel to the subnets in the Bluemix Infrastructure private network (10.0.0.0/8) where the Kubernetes cluster is running.
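To make the moving parts concrete, here is a minimal sketch of the Vyatta side in VyOS-style configuration syntax. The peer address 203.0.113.10, local address 198.51.100.5, pre-shared secret, group names, and subnet prefixes are placeholder assumptions, and exact option names and supported ciphers vary by firmware version (e.g. older releases use local-ip instead of local-address):

# Phase 1 (IKE) and phase 2 (ESP) proposals -- adjust ciphers to your policy
set vpn ipsec ike-group IKE-ONPREM proposal 1 encryption aes256
set vpn ipsec ike-group IKE-ONPREM proposal 1 hash sha256
set vpn ipsec esp-group ESP-ONPREM proposal 1 encryption aes256
set vpn ipsec esp-group ESP-ONPREM proposal 1 hash sha256

# Enable IPSec on the public-facing interface (assumed to be eth0 here)
set vpn ipsec ipsec-interfaces interface eth0

# Site-to-site peer: the on-premise VPN endpoint (placeholder address)
set vpn ipsec site-to-site peer 203.0.113.10 authentication mode pre-shared-secret
set vpn ipsec site-to-site peer 203.0.113.10 authentication pre-shared-secret 'CHANGE-ME'
set vpn ipsec site-to-site peer 203.0.113.10 ike-group IKE-ONPREM
set vpn ipsec site-to-site peer 203.0.113.10 local-address 198.51.100.5

# One tunnel per pair of subnets to connect (placeholder prefixes)
set vpn ipsec site-to-site peer 203.0.113.10 tunnel 1 esp-group ESP-ONPREM
set vpn ipsec site-to-site peer 203.0.113.10 tunnel 1 local prefix 10.0.1.0/27
set vpn ipsec site-to-site peer 203.0.113.10 tunnel 1 remote prefix 192.168.0.0/24

The on-premise device needs a mirror-image configuration with the same proposals and secret.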

Solving the Overlapping Subnet Problem

The IBM Bluemix Infrastructure side of the tunnel always assigns the Kubernetes cluster a subnet in the private class A network 10.0.0.0/8, as defined in RFC 1918. Some on-premise networks use this same address space for their own hosts, so their subnets can overlap with the ones the Kubernetes cluster uses. To remedy this, configure the routers on both sides of the tunnel to perform a one-to-one subnet mapping using bidirectional NAT, with each side translating its real subnet to a unique, unused subnet.

For example, pods running in Kubernetes can reach the on-premise servers using the matching address in the mapped subnet 192.168.1.0/24 (e.g. 192.168.1.1 maps to 10.0.1.1 on-premise), and the Kubernetes servers can be reached from on-premise using the mapped subnet 192.168.2.0/27 (e.g. 192.168.2.1 maps to 10.0.1.1 in the cloud).
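As a rough sketch, the cloud-side half of that mapping could look like the following in VyOS-style syntax; the rule numbers, interface name, and prefixes are assumptions, support for one-to-one network translation varies by firmware, and note that the IPSec tunnel's local/remote prefixes must then reference the mapped subnets rather than the real ones:

# Outbound: rewrite cloud sources 10.0.1.0/27 -> 192.168.2.0/27 for VPN-bound traffic
set nat source rule 100 outbound-interface eth0
set nat source rule 100 source address 10.0.1.0/27
set nat source rule 100 destination address 192.168.1.0/24
set nat source rule 100 translation address 192.168.2.0/27

# Inbound: rewrite destinations 192.168.2.0/27 -> 10.0.1.0/27
set nat destination rule 100 inbound-interface eth0
set nat destination rule 100 destination address 192.168.2.0/27
set nat destination rule 100 translation address 10.0.1.0/27

The on-premise router performs the analogous translation between its real 10.0.1.0/24 subnet and the mapped 192.168.1.0/24.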

Reverse Path Filtering

The Kubernetes worker nodes are configured on both public and private networks, so they have two ways of reaching devices outside of the VLAN, both of which go through the same Vyatta Gateway Appliance. Traffic destined for the private network (e.g. other worker nodes or servers running in Bluemix Infrastructure) is sent using the route for 10.0.0.0/8 on the private network. All other traffic goes through the default route using the public network. The Vyatta then decides where to forward the packets next according to its firewall rules and the networks it knows about.

There’s a security feature in the Linux kernel called reverse path filtering: when a packet arrives on one interface but the kernel’s route back to the packet’s source points out a different interface, the packet is dropped. This prevents private network traffic from being forwarded onto the Internet by mistake. The worker nodes don’t know that there’s a VPN tunnel, so routes back to on-premise networks fall through to the default route on the public network. Here is why that becomes a problem in our Bluemix Infrastructure setup:

The on-premise server 192.168.0.100 sends a request to the worker node; the request makes it through the VPN to the worker node’s private interface, but because 192.168.0.100 is not in the node’s route table, the node would route the response through its default route on the public interface. The reverse path check therefore fails, and the traffic is dropped by the worker node.

The solution we used is to add our on-premise subnets to each worker node’s static route table so it knows to send responses back through the private network. Because Kubernetes clusters can have hundreds of nodes, and it would be a pain to configure them all individually, we can use a Kubernetes DaemonSet, which runs one pod per worker node. When we run that pod on the host network with the NET_ADMIN capability, it can add entries for our on-premise subnets to each worker node’s static route table. I have included the DaemonSet and associated ConfigMap you can use to update each worker node’s static route tables here.
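As a hedged illustration (the manifest linked above may differ), here is a minimal sketch of such a DaemonSet and ConfigMap; the image, the on-premise subnet, and the private gateway address are assumptions you would adapt to your own VLAN:

apiVersion: v1
kind: ConfigMap
metadata:
  name: vpn-route-config
data:
  onprem_subnet: "192.168.0.0/24"   # assumed on-premise subnet
  private_gateway: "10.0.1.30"      # assumed private VLAN gateway (the Vyatta)
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: vpn-route-updater
spec:
  selector:
    matchLabels:
      name: vpn-route-updater
  template:
    metadata:
      labels:
        name: vpn-route-updater
    spec:
      hostNetwork: true             # operate on the node's own route table
      containers:
        - name: add-route
          image: alpine:3.9
          securityContext:
            capabilities:
              add: ["NET_ADMIN"]    # needed to modify routes
          command:
            - /bin/sh
            - -c
            # add the route (tolerating duplicates), then keep the pod alive
            - >
              ip route add "$ONPREM_SUBNET" via "$PRIVATE_GATEWAY" || true;
              while true; do sleep 3600; done
          env:
            - name: ONPREM_SUBNET
              valueFrom:
                configMapKeyRef: {name: vpn-route-config, key: onprem_subnet}
            - name: PRIVATE_GATEWAY
              valueFrom:
                configMapKeyRef: {name: vpn-route-config, key: private_gateway}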

Exposing on-premise services as Kubernetes Services

Once we’ve got the connectivity working, we want to leverage the Kubernetes way of doing things with our on-premise services. As we mentioned in our previous post about application networking, we can publish services in Kubernetes so our microservices can talk to each other using a stable DNS name and IP address. This isn’t limited to pods running in Kubernetes, though; we can also define a Service for something running outside of the cluster.

For example, in our reference implementation, we decided to move the Orders microservice off the Kubernetes cluster running in Bluemix and onto an on-premise server with a separate MySQL database.

We can create an Endpoints resource to manually tell Kubernetes about an endpoint; in our case it’s on the other side of the tunnel.

kind: Endpoints
apiVersion: v1
metadata:
  name: orders-service        # must match the Service name below
subsets:
  - addresses:
      - ip: 192.168.0.189     # on-premise server, reachable through the tunnel
    ports:
      - port: 8080

Then we create the Service resource, as usual, to get a cluster IP and a DNS name in kube-dns for the Orders service. Note that this Service has no pod selector; because of that, Kubernetes leaves the Endpoints alone and uses the ones we defined manually above.

apiVersion: v1
kind: Service
metadata:
  name: orders-service
  labels:
    name: orders-service
spec:
  # no selector here, so Kubernetes uses our manually created Endpoints
  ports:
    - protocol: TCP
      port: 8080

Now, other microservices in Kubernetes can call my Orders service at http://orders-service/micro/orders as if it were running inside of Kubernetes. Note also that I can have several addresses in the Endpoints resource, and Kubernetes will round-robin between those addresses just like between pods that are running inside the cluster!
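For instance, if we stood up a second on-premise Orders instance, the Endpoints resource could list both addresses; the second address below is a hypothetical replica:

kind: Endpoints
apiVersion: v1
metadata:
  name: orders-service
subsets:
  - addresses:
      - ip: 192.168.0.189
      - ip: 192.168.0.190   # hypothetical second on-premise instance
    ports:
      - port: 8080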

Conclusion

The Vyatta Gateway Appliance allows us to connect multiple private networks together over a secure encrypted tunnel, which means I don’t have to migrate all my data to the cloud in order to write new cloud-native applications that leverage it. Kubernetes Services also provide an abstraction, so my microservices running inside the cluster don’t even need to know that the services they’re calling aren’t pods in the same cluster.

Before closing, allow me to remind you that for a broader introduction to microservices, you should check out the Architecture Center in the IBM Garage Method.

In our final post in the series, we’ll look at how we can secure the networks for microservices applications running in Kubernetes.
