Introducing IBM Cloud Kubernetes Service gateway-enabled clusters for Classic
Today, IBM Cloud Kubernetes Service is announcing the availability of gateway-enabled clusters. Gateway-enabled clusters let you easily provision a gateway worker pool in your cluster that provides network connectivity separation between the internet (or a directly attached on-premises data center) and the compute workloads running in the cluster.
Until now, providing edge gateway and firewall support for workloads running in an IBM Cloud Kubernetes Service cluster required purchasing separate network appliances. With gateway-enabled clusters, network appliances are no longer required for edge firewall and gateway support in IBM Cloud Classic Infrastructure.
Gateway-enabled clusters build firewall and gateway routing functionality directly into an IBM Cloud Kubernetes Service cluster deployed on classic infrastructure. This saves time and money by removing the need for additional gateway and firewall devices, which can be expensive and difficult to configure.
How does it deploy?
Out of the box, gateway-enabled clusters provide two worker pools for network separation. The first worker pool, named gateway, provides public and private network connectivity by default. This worker pool provides an edge firewall for ingress and egress traffic, an L4 load balancer for ingress traffic to the cluster, and an ECMP gateway for egress traffic from the cluster. The nodes in this worker pool are tainted so that no compute workload can be scheduled to them, which ensures that no compute applications sit on the edge of the network.
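Once the cluster is up, you can verify the taint with standard kubectl tooling. This is a generic check; the exact taint key and value applied to gateway nodes are not specified here, so inspect the output for your own cluster:

# List every node together with the taint keys applied to it
kubectl get nodes -o custom-columns='NAME:.metadata.name,TAINTS:.spec.taints[*].key'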
Next, a worker pool named compute is created. The compute worker pool provides only private network connectivity and cannot be directly accessed from the public network. By default, Kubernetes pods are scheduled only to the compute worker pool.
Finally, you can optionally create a worker pool named edge that hosts the Ingress application load balancer (ALB). When deployed (explained in the next section), the edge worker pool hosts only ALBs and is tainted so that no other workload can be scheduled to its worker nodes. Edge worker nodes have only private network connectivity and cannot be accessed directly from the public network. The purpose of the edge worker pool is to provide another level of network separation between the public network and the compute worker pool, especially when network traffic is exposed through an ALB. Note that if no edge node is present, ALB pods are scheduled to workers in the compute worker pool.
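To see where ALB pods actually land, list them with kubectl and note the node column. The alb substring below matches the typical naming of IBM Cloud Kubernetes Service ALB pods in the kube-system namespace, but verify the names in your own cluster:

# Show ALB pods along with the worker node each one is scheduled on
kubectl get pods -n kube-system -o wide | grep alb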
How is traffic routed?
When an ALB is configured, an L4 load balancer is also created for that ALB. The load balancer for that ALB is scheduled to a worker in the gateway worker pool, while the ALB is scheduled to a worker in the edge worker pool. When a request is made to a path defined in an ingress resource, the request is first routed to the load balancer in the gateway worker pool. The traffic is load balanced over the private network to one of the ALBs deployed in the edge worker pool. The ALB in the edge worker pool proxies the request to one of the backend application pods in the compute worker pool.
When the backend application returns a response, it is returned to the ALB that proxied the initial request. The ALB responds to the initiator of the request. Equal Cost Multipath (ECMP) is then used to balance the response traffic through one of the workers in the gateway worker pool to the initiator of the request.
If you create a load balancer service instead of an ALB to direct traffic to an application pod, the load balancer routes traffic directly to the application pod in the compute worker pool over the private network. The compute worker uses ECMP to send the response back to the initiator through one of the workers in the gateway worker pool.
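As an illustration, a minimal sketch of such a load balancer service follows; the service name, app label, and ports are hypothetical:

# Expose pods labeled app=my-app (hypothetical) through an L4 load balancer
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: my-app-lb
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
EOF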
What about the firewall?
The Calico network plug-in is provisioned in a gateway-enabled cluster just as it is in any other IBM Cloud Kubernetes Service cluster. Calico global network policies that use the public host endpoint apply only to the workers in the gateway worker pool because those are the only workers attached to the public network. Additionally, you can create Kubernetes network policies to control traffic for pod-to-pod communication.
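As a minimal sketch of that pod-to-pod control (the policy name, namespace, and labels are hypothetical), the following Kubernetes network policy admits traffic to backend pods only from frontend pods:

# Allow ingress to pods labeled app=backend only from pods labeled app=frontend
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    ports:
    - protocol: TCP
      port: 8080
EOF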
How do I get started?
Run the following command to create a new gateway-enabled cluster:
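The invocation below is a sketch: exact flag availability depends on your version of the IBM Cloud Kubernetes Service CLI plug-in, and the cluster name and zone are placeholders.

# Create a gateway-enabled classic cluster with a six-node compute worker pool
ibmcloud ks cluster create classic \
  --name my-gateway-cluster \
  --zone dal10 \
  --flavor c3c.32x64 \
  --workers 6 \
  --gateway-enabled \
  --private-service-endpoint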
This creates a gateway-enabled cluster with two worker nodes in the gateway worker pool (the default), six private-only compute worker nodes of flavor c3c.32x64 in the compute worker pool, and no edge worker pool.
Note: To create a gateway-enabled cluster, the --private-service-endpoint flag is required. Additionally, gateway-enabled clusters are supported only at Kubernetes version 1.15 and later.
To add a worker pool of edge nodes to a gateway-enabled cluster, use the following command:
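This sketch assumes that the dedicated=edge label is what designates a pool's workers as edge nodes; the pool name, cluster name, flavor, and size are placeholders:

# Create an edge worker pool whose nodes are reserved for ALB pods
ibmcloud ks worker-pool create classic \
  --name edge \
  --cluster my-gateway-cluster \
  --flavor b3c.4x16 \
  --size-per-zone 2 \
  --label dedicated=edge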
To change the default worker pool sizes:
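For example, to resize the gateway worker pool to four workers per zone (the cluster name, pool name, and size are placeholders; repeat for the compute pool as needed):

# Resize an existing worker pool in the cluster
ibmcloud ks worker-pool resize \
  --cluster my-gateway-cluster \
  --worker-pool gateway \
  --size-per-zone 4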
Contact us
If you have questions, engage our team via Slack by registering here and joining the discussion in the #general channel on our public IBM Cloud Kubernetes Service Slack.