Bring your own private IP addresses to the cloud using IBM Container Service

By Jeffrey Kwong

The IBM Container Service enables rapid application development and deployment. In our previous series on Kubernetes and the IBM Container Service, we learned how to deploy and secure container-based workloads on the IBM Cloud. We used the Kubernetes LoadBalancer and Ingress resources to expose our Internet-facing services on a floating IP address, and we secured traffic to these services using a Gateway Appliance and Calico network policies.

Of course, not all applications and services are meant to be consumed from the public internet. For example, when standing up our microservices reference application on the IBM Container Service, we wanted to monitor the application with an on-premises Grafana instance, consolidate logging in our on-premises ELK stack, view the status of circuit breakers on a Hystrix dashboard, and allow developers to trace distributed requests using a Zipkin server.

In this post, we will discuss how to expose these services to an on-premises network, accessible only through a VPN tunnel.

Exposing services using the Private Network

Not all services should be accessible to the Internet at large. For example, the target audience for service endpoints created for monitoring, such as Prometheus, is my SRE (Site Reliability Engineering) team. Not all monitoring components support authentication or encrypted connections, so exposing them on Internet-facing endpoints adds an attack surface that I would then need to secure. In the use case below, I want to federate monitoring of my application running on the IBM Cloud Container Service to an on-premises Prometheus instance, where my SRE team already monitors several other applications running on VMs and in IBM Cloud Private.

In part 3 of our previous series, we learned about the different service types, and how services should be exposed on the application network instead of the management network. The application network can then be secured through a Gateway Appliance that filters traffic in and out of the cluster. The same principle can apply to private-facing applications.

In the IBM Container Service, every Kubernetes cluster is created with a private network portable subnet that can be used as the application network for private-facing applications. This is usually in the 10.0.0.0/8 range. Both the public and private portable subnets used by the LoadBalancer service type can be viewed using the following command:

# bx cs cluster-get <cluster-name> --showResources

When a LoadBalancer service type is created in Kubernetes, the cluster selects an address from a pool of IP addresses in the portable subnet. To have an address selected from the private portable subnet, we add the following annotation to the service resource, which makes the application private-facing:

kind: Service
metadata:
  annotations:
    service.kubernetes.io/ibm-load-balancer-cloud-provider-ip-type: private

If you recall, we recommended against exposing production services using the NodePort type, since it opens the same port on every worker node. A critical point is that the LoadBalancer service type also allocates a NodePort automatically, and that port is opened on all interfaces of every worker node in the cluster, including the public interface. It is highly recommended to configure the Gateway Appliance at the edge to drop all traffic on both the public and private management networks, so the private-facing application is not exposed to the internet on this port, and then to whitelist traffic to and from the master nodes in the IBM-managed account. For more information on which subnets to whitelist, see the documentation.
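As an additional safeguard at the worker level, a Calico pre-DNAT policy can drop traffic destined for the NodePort range on the workers' public interfaces. The following is a minimal sketch, assuming the Calico v3 policy API and the ibm.role == 'worker_public' host endpoint selector for public worker interfaces; the policy name, order, and port range are illustrative and should be adapted to your cluster:

apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: deny-nodeports-public
spec:
  # Evaluate before DNAT so the rule matches the original NodePort destination
  preDNAT: true
  applyOnForward: true
  # Host endpoints representing the public interfaces of worker nodes
  selector: ibm.role == 'worker_public'
  order: 1100
  ingress:
  - action: Deny
    protocol: TCP
    destination:
      ports:
      - "30000:32767"

The policy would be applied with calicoctl apply -f against the file above.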

Using a private subnet of my choosing

Many network administrators will want to expose services on a subnet of their choosing, without performing bi-directional Network Address Translation (NAT) at the edge, especially if the on-premises subnet overlaps with the IBM Cloud infrastructure private network of 10.0.0.0/8. To add a user-defined subnet, we used this command:

# bx cs cluster-user-subnet-add <cluster-name> <CIDR> <private VLAN ID>

This command causes the worker nodes to forward traffic on this subnet to the correct pod running the service. In my case, I would like my cloud services running in my cluster jkwong-kube to be exposed on subnet 192.168.2.0/24. The private VLAN ID to use for this command can be viewed using:

# bx cs cluster-get <cluster-name> --showResources
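For example, to register the 192.168.2.0/24 subnet on my jkwong-kube cluster (the VLAN ID below is a placeholder; substitute the private VLAN ID reported by the command above):

# bx cs cluster-user-subnet-add jkwong-kube 192.168.2.0/24 2234567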

Once the user-defined subnet is added, I can create LoadBalancer services with the annotation above so that they use the private network, and request an IP address from the user-defined subnet pool. In my case, I have a Prometheus server that I want to expose to the private network only, at http://192.168.2.14. I can create the LoadBalancer service with the following YAML:

apiVersion: v1
kind: Service
metadata:
  name: prometheus-server
  annotations:
    service.kubernetes.io/ibm-load-balancer-cloud-provider-ip-type: private
  labels:
    name: prometheus
spec:
  type: LoadBalancer
  loadBalancerIP: 192.168.2.14
  ports:
  - name: http
    port: 80
    protocol: TCP
  selector:
    name: prometheus

By convention, the first address in the user-defined subnet is assumed to be the gateway address for the subnet, and the last address in the range is assumed to be the broadcast address; these addresses should not be used for services. In my case, I configured my Vyatta Gateway Appliance to route traffic for this subnet on my VLAN with the gateway address 192.168.2.1, so I cannot use this address for applications.
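Once the service is created, I can verify that it was assigned the requested address by checking the EXTERNAL-IP column of the service (a quick sanity check; output omitted here):

# kubectl get service prometheus-server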

Providing controlled access to my on-premises app

If you recall from our previous blog series, we configured a VPN tunnel and used a DaemonSet to manage static routes on our worker nodes, so that response traffic could be routed through the private network, over the tunnel, and back to the on-premises client. This is still required for every on-premises subnet from which clients will call the cloud services, so that the worker nodes know to send responses through the private network interface and over the tunnel. In this example, that is the subnet the federated Prometheus server is running in.
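For reference, here is a minimal sketch of such a DaemonSet. It is not the manifest from the previous series; the on-premises client subnet (172.16.10.0/24), the next-hop address (the Vyatta's private, tunnel-side interface, 10.100.0.1 here), and the image are all placeholder values:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: onprem-static-route
spec:
  selector:
    matchLabels:
      name: onprem-static-route
  template:
    metadata:
      labels:
        name: onprem-static-route
    spec:
      # Host networking and privileges so the route lands on the worker node itself
      hostNetwork: true
      containers:
      - name: add-route
        image: alpine:3.8
        securityContext:
          privileged: true
        command: ["/bin/sh", "-c"]
        # Install the route (ignore failure if it already exists), then stay resident
        args:
        - ip route add 172.16.10.0/24 via 10.100.0.1 || true; while true; do sleep 3600; done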

On the on-premises side, I configured the edge device to route traffic destined for the user-defined subnet (192.168.2.0/24) through the VPN tunnel. The full traffic flow is now: a request from the on-premises network enters the VPN tunnel at the edge device, arrives on the cluster's private network, is forwarded by a worker node to the Prometheus pod, and the response returns along the same path through the tunnel.
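If the on-premises edge device happens to be a Vyatta or VyOS-style appliance as well, this amounts to adding the user-defined subnet as a remote prefix on the existing IPsec tunnel. The peer address and local prefix below are placeholders for your own tunnel configuration:

# set vpn ipsec site-to-site peer 203.0.113.10 tunnel 1 local prefix 172.16.10.0/24
# set vpn ipsec site-to-site peer 203.0.113.10 tunnel 1 remote prefix 192.168.2.0/24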

On-premises, I add 192.168.2.14 as a federation scrape target on my Prometheus instance (a sketch of the scrape configuration follows). My SRE team can now monitor my new application running on the cloud using their existing toolchain!
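On the Prometheus side, federation is typically configured as a scrape job against the /federate endpoint of the remote server. The job name and match[] selector below are illustrative, not taken from the original setup:

scrape_configs:
  - job_name: 'federate-ibm-cloud'
    honor_labels: true
    metrics_path: '/federate'
    params:
      'match[]':
        - '{job=~".+"}'
    static_configs:
      - targets: ['192.168.2.14:80']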

Conclusion

Using the IBM Container Service enables us to provision all sorts of new workloads, including applications on cloud infrastructure that I only want to expose to clients behind my company firewall. Our SRE team can now monitor these applications from on-premises through a secure, encrypted tunnel, while the traffic remains protected from the internet. For more information on the features above in the IBM Cloud Container Service, see the documentation.
