October 1, 2019 By Cale Rath 4 min read

Introducing the IBM Cloud Kubernetes Service gateway-enabled clusters for Classic

Today, IBM Cloud Kubernetes Service is introducing the availability of gateway-enabled clusters. Gateway-enabled clusters allow you to easily provision a gateway worker pool inside your cluster that provides network connectivity separation between the internet (or a directly attached on-premises data center) and the compute workload that is running in the cluster.

Until now, separately purchased network appliances were required to provide edge gateway and firewall support for workloads running in an IBM Cloud Kubernetes Service cluster. With gateway-enabled clusters, those network appliances are no longer required to provide edge firewall and gateway support in IBM Cloud Classic Infrastructure.

Gateway-enabled clusters build firewall and gateway routing functionality directly into an IBM Cloud Kubernetes Service cluster deployed on classic infrastructure. This saves time and money by reducing the need for additional gateway and firewall devices, which can be expensive and difficult to configure.

How does it deploy? 

Out of the box, gateway-enabled clusters include two worker pools that provide network separation. The first worker pool, named gateway, has both public and private network connectivity by default. This worker pool provides an edge firewall for ingress and egress traffic, an L4 load balancer for ingress traffic to the cluster, and an ECMP gateway for egress traffic from the cluster. The nodes in this worker pool are tainted so that no compute workload can be scheduled on them, which ensures that no compute applications sit on the edge of the network.
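
If you want to confirm that taint yourself, a quick check with kubectl might look like the following sketch. The cluster and node names are placeholders, and the exact taint key can vary, so check the Taints field in the output.

# Download the cluster's kubeconfig so that kubectl targets this cluster.
ibmcloud ks cluster-config --cluster <cluster_name>

# List the worker nodes and their labels.
kubectl get nodes --show-labels

# Inspect a gateway node's Taints field to confirm that compute workloads
# cannot be scheduled on it. <gateway_node_name> is a placeholder.
kubectl describe node <gateway_node_name> | grep -A 3 Taints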

Next, a worker pool named compute is created. The compute worker pool provides only private network connectivity and cannot be directly accessed from the public network. By default, Kubernetes pods are scheduled only to the compute worker pool.

Finally, you can optionally create a worker pool named edge that hosts the Ingress application load balancer (ALB). When deployed (explained in the next section), the edge worker pool hosts ALBs only and is tainted so that no other workload can be deployed to these worker nodes. Edge worker nodes have only private network connectivity and cannot be accessed directly from the public network. The purpose of the edge worker pool is to provide another level of network separation between the public network and the compute worker pool, especially in cases where network traffic is exposed through an ALB. Note that if no edge worker pool is present, ALB pods are scheduled to workers in the compute worker pool.
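
To see where the ALB pods have landed, one quick check is to look at the pods in the kube-system namespace. In IBM Cloud Kubernetes Service the ALB pod names typically contain "alb"; the exact names vary per cluster, so treat this as a rough sketch.

# Show the node that each ALB pod is scheduled on.
kubectl get pods -n kube-system -o wide | grep alb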

How is traffic routed?

When an ALB is configured, an L4 load balancer is also created for that ALB. The load balancer for that ALB is scheduled to a worker in the gateway worker pool, while the ALB is scheduled to a worker in the edge worker pool. When a request is made to a path defined in an ingress resource, the request is first routed to the load balancer in the gateway worker pool. The traffic is load balanced over the private network to one of the ALBs deployed in the edge worker pool. The ALB in the edge worker pool proxies the request to one of the backend application pods in the compute worker pool.

When the backend application returns a response, the response goes back to the ALB that proxied the initial request, and the ALB replies to the initiator of the request. Equal-Cost Multipath (ECMP) routing then balances the response traffic through one of the workers in the gateway worker pool on its way back to the initiator.
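
As a minimal sketch of the resource that drives this flow, an Ingress definition for a backend service might look like the following. The host, path, service name, and port are placeholder values, and Kubernetes 1.15 uses the networking.k8s.io/v1beta1 Ingress API.

# Minimal Ingress resource; replace the placeholder host, path, service name, and port.
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: myapp-ingress
spec:
  rules:
  - host: <your_ingress_subdomain>
    http:
      paths:
      - path: /myapp
        backend:
          serviceName: myapp-service
          servicePort: 8080
EOF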

If you create a load balancer service instead of an ALB to direct traffic to an application pod, the load balancer routes traffic directly to the application pod in the compute worker pool over the private network. The compute worker uses ECMP to send the response back to the initiator through one of the workers in the gateway worker pool.
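
For comparison, a minimal sketch of such a load balancer service follows; the selector and ports are placeholders for your own application.

# LoadBalancer service that sends traffic directly to application pods
# in the compute worker pool. Selector and ports are placeholders.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: myapp-lb
spec:
  type: LoadBalancer
  selector:
    app: myapp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
EOF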

What about the firewall?

The Calico network plug-in is provisioned in a gateway-enabled cluster just as it is in any other IBM Cloud Kubernetes Service cluster. Calico global network policies that use the public host endpoint are applied only to the workers in the gateway worker pool, because those are the only workers attached to the public network. Additionally, Kubernetes network policies can be created to control pod-to-pod traffic.
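
As a rough sketch of the latter, a basic Kubernetes network policy that limits which pods can reach an application might look like this. The labels and port are placeholders, not values that come from a gateway-enabled cluster.

# Allow only pods labeled role=frontend to reach pods labeled app=myapp on port 8080.
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-myapp
spec:
  podSelector:
    matchLabels:
      app: myapp
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: frontend
    ports:
    - protocol: TCP
      port: 8080
EOF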

How do I get started?

Run the following command to create a new gateway-enabled cluster:

ibmcloud ks cluster-create --workers 6 --gateway-enabled --machine-type c3c.32x64 --private-vlan <private_VLAN> --public-vlan <public_VLAN> --private-service-endpoint --public-service-endpoint --name gec-1 --location dal10 --kube-version 1.15

This creates a gateway-enabled cluster with two worker nodes in the gateway worker pool (the default), six private-only compute worker nodes of flavor c3c.32x64 in the compute worker pool, and no edge worker pool.
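
To verify the result, you can list the worker nodes in the new cluster; only the gateway workers should have public IP addresses.

ibmcloud ks workers --cluster gec-1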

Note: To create a gateway-enabled cluster, the --private-service-endpoint flag is required. Additionally, gateway-enabled clusters require Kubernetes version 1.15 or later.

To add a worker pool of edge nodes to a gateway-enabled cluster, use the following command:

ibmcloud ks worker-pool-create --cluster gec-1 --name edge --machine-type b2c.8x32 --size-per-zone 2 --labels dedicated=edge,node-role.kubernetes.io/edge=true,ibm-cloud.kubernetes.io/private-cluster-role=worker
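
After the edge workers are provisioned, you can list them by the dedicated=edge label applied in the command above:

kubectl get nodes -l dedicated=edge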

To change the default worker pool sizes, use the following commands:

ibmcloud ks worker-pool-resize --cluster gec-1 --size-per-zone 3 --worker-pool gateway
ibmcloud ks worker-pool-resize --cluster gec-1 --size-per-zone 8 --worker-pool compute
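
You can then confirm the new per-zone sizes by listing the worker pools:

ibmcloud ks worker-pools --cluster gec-1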

Contact us

If you have questions, engage our team via Slack by registering here and joining the discussion in the #general channel on our public IBM Cloud Kubernetes Service Slack.
