Today, IBM Cloud Kubernetes Service is introducing a new beta feature for gateway-enabled clusters.
Until now, the pod network in a cluster was isolated from other external entities, such as standalone virtual servers or bare metal servers in the same customer account. Connecting pods to services outside your cluster required additional network resources (load balancers, gateways, etc.).
You can now add classic VSIs (Virtual Server Instances or virtual machines) and BM (Bare Metal) instances to the existing network of your classic gateway-enabled clusters in IBM Cloud Kubernetes Service. This feature provides seamless integration of the instances into the pod network without joining these server instances to the cluster itself. The VSI or BM instance is assigned a pod IP address so that Kubernetes application pods can communicate with the instance using the 172.30.x.x pod networking range. The feature also provides domain name resolution in both directions.
You might have a mixed application set of containerized and non-containerized workloads. When one type of workload must consume resources from the other type, it is a challenge to integrate the workloads on a network level.
For example, imagine a latency-sensitive database cluster that has a fine-tuned set of configurations on the operating system level (buffer sizes, huge tables, etc.). These kinds of workloads might not have an effective containerized version or alternative but you can still deploy them in virtual machines or on bare metal servers in IBM Cloud.
In order for a containerized business logic app to consume information from this database, some kind of network integration is needed. If the business logic app is deployed in a classic, gateway-enabled cluster, you can integrate the app and the database in your cluster network.
How it works
The network integration is provisioned through automation via a Kubernetes job. You first provide the cluster details and necessary credentials in a Kubernetes secret and configmap, then set up the job to run in the cluster. During the provisioning, the job is able to gather every needed credential and environment detail from the cluster. The job uses the secret, configmap, and an SSH connection to the server instance(s) to provision the network integration.
Because all compute worker nodes in gateway-enabled clusters have only private network access, the integrated VSI or bare metal instance should also have only private VLAN connectivity. The gateway worker nodes, which have public network access, are configured as default routes during the provisioning.
Before you begin
Make sure you have (or create) a gateway-enabled cluster with the private service endpoint enabled. To create a gateway-enabled cluster, see "IBM Cloud Kubernetes Service Gateway-Enabled Clusters."
Next, check whether you have a VSI or bare metal instance that is provisioned on the same private VLAN as the worker nodes in your gateway-enabled cluster. If you do not have one, you can order new VSIs or bare metal instances by using the IBM Cloud console for Classic Infrastructure.
When you create a new instance, make sure you choose the following values:
- Select an appropriate name for the instance(s). This name is used as the DNS name in the Kubernetes DNS resolver.
- Select the same location (zone) where the cluster is deployed. If you use a multizone cluster, you can select any of the zones where you have worker nodes.
- The supported operating systems are CentOS 7.x, Red Hat 7.x, and Ubuntu 18.04. Other OS types and versions are not supported for network integration.
- Inject a public SSH key to the instance. The private key is used during the provisioning job to access the instance.
- Select private-only networking and the same private VLAN that is used by the worker nodes in your cluster.
- Select at least "allow_outbound" and "allow_ssh" in the private security group drop-down list.
How to use
All the command examples below are expected to work on Linux- or Darwin-based operating systems.
Get started by setting up the following:
Create SSH key secret
Assuming that you have the private part of the SSH key in the id_rsa file on your local machine, create a Kubernetes secret:
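A command along the following lines does this; the secret name ssh-key and the kube-system namespace are illustrative choices, not names confirmed by this post:

```shell
# Create a secret from the local private key file.
# The secret name "ssh-key" and namespace "kube-system" are illustrative.
kubectl create secret generic ssh-key \
  --namespace kube-system \
  --from-file=id_rsa=./id_rsa
```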
Set configmap options
Prepare the configuration options and export them as environment variables before creating the configmap.
Create a file called inventory that the provisioning job uses when it employs Ansible. Replace <TARGET_IP> with the private IP address of the VSI or bare metal instance. You can find the instance IP address in the IBM Cloud classic infrastructure console or by running ibmcloud sl vs list or ibmcloud sl hardware list in the IBM Cloud CLI:
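A minimal inventory file can be written as follows; the [target] group name and the exact layout are assumptions, and <TARGET_IP> is the placeholder you replace with the instance's private IP address:

```shell
# Write a minimal Ansible inventory; the "[target]" group name is illustrative.
# Replace <TARGET_IP> with the instance's private IP address, found via
# "ibmcloud sl vs list" or "ibmcloud sl hardware list".
cat > inventory <<'EOF'
[target]
<TARGET_IP>
EOF
```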
Export the path of the inventory file into an environment variable:
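For example (the variable name INVENTORY_PATH is an assumption for illustration):

```shell
# Export the absolute path of the inventory file created in the previous step.
# The variable name INVENTORY_PATH is illustrative.
export INVENTORY_PATH="$(pwd)/inventory"
```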
Set the IBM Cloud Container Registry domain for the region to which your cluster and server instance are deployed as an environment variable. When the job runs, container images are pulled from this registry domain. For example, use the following command for US South and US East:
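For instance (us.icr.io is the IBM Cloud Container Registry domain serving US South and US East; the variable name is an assumption):

```shell
# Registry domain for US South / US East; the variable name REGISTRY_DOMAIN
# is illustrative.
export REGISTRY_DOMAIN="us.icr.io"
```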
Choose from which Kubernetes namespace you want to reach the VSI or bare metal instance via a DNS name. Your app pods will be able to reach the server instance pod IP address from any Kubernetes namespace, but the provisioning job sets up a DNS name in this selected namespace:
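For example, to use the default namespace (both the namespace choice and the variable name are illustrative):

```shell
# Namespace in which the provisioning job sets up the DNS name for the
# instance; the value and variable name are illustrative.
export TARGET_NAMESPACE="default"
```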
If you want to be able to make DNS resolutions from the VSI or bare metal instance towards the services in the cluster, the provisioning job can set up the cluster's resolver on the server instance. To use this optional feature, use the following command:
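A sketch of enabling the option (the variable name SETUP_DNS_RESOLVER is an assumption):

```shell
# Enable cluster DNS resolution on the server instance; the variable name
# is illustrative.
export SETUP_DNS_RESOLVER="true"
```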
Otherwise, set it to false:
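Using the same illustrative variable name as above:

```shell
# Skip setting up the cluster resolver on the instance.
export SETUP_DNS_RESOLVER="false"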
Create the configmap
Using the options that you exported, create the configmap for the provisioning job:
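A sketch of the command is shown below. The configmap name, namespace, and key names here are placeholders: the real provisioning job expects a specific, documented object name and namespace (see the note that follows), so use the names from the official documentation:

```shell
# Create the configmap from the exported options. All object and key names
# below are placeholders, not the documented ones.
kubectl create configmap provisioner-config \
  --namespace kube-system \
  --from-file=inventory="$INVENTORY_PATH" \
  --from-literal=registry_domain="$REGISTRY_DOMAIN" \
  --from-literal=target_namespace="$TARGET_NAMESPACE" \
  --from-literal=setup_dns_resolver="$SETUP_DNS_RESOLVER"
```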
Note: Don’t modify the Kubernetes namespace and the name of the configmap object, as they will be referred to by other objects later.
Create ImagePullSecret in kube-system
Copy the default-us-icr-io image pull secret from the default namespace to the kube-system namespace. This secret is required so that the Kubernetes job, which runs in the kube-system namespace, can pull the necessary container images:
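One common way to copy a secret across namespaces is to export it, rewrite the namespace field, and recreate it; this is a general sketch, not necessarily the exact command from the original post:

```shell
# Read the secret from the default namespace, rewrite its namespace,
# and recreate it in kube-system.
kubectl get secret default-us-icr-io --namespace default -o yaml \
  | sed 's/namespace: default/namespace: kube-system/' \
  | kubectl create -f -
```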
For more information on ImagePullSecrets, see the IBM Cloud Kubernetes Service documentation.
Deploy the provisioning job
Save the following manifest content into a file.
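The exact manifest is published in the IBM Cloud documentation; the following is only a structural sketch with placeholder names, showing how a provisioning Job ties together the SSH key secret and the configmap from the earlier steps:

```yaml
# Structural sketch only: every name and the image reference are placeholders.
apiVersion: batch/v1
kind: Job
metadata:
  name: vsi-bm-provisioner        # placeholder job name
  namespace: kube-system
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: provisioner
        image: <REGISTRY_DOMAIN>/<provisioner-image>   # placeholder image
        envFrom:
        - configMapRef:
            name: provisioner-config    # placeholder configmap name
        volumeMounts:
        - name: ssh-key
          mountPath: /root/.ssh
      volumes:
      - name: ssh-key
        secret:
          secretName: ssh-key           # placeholder secret name
```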
To start the provisioning job, deploy the manifest file:
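For example (the filename is illustrative):

```shell
# Apply the saved manifest to start the provisioning job.
kubectl apply -f provisioner-job.yaml
```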
After the deployment of the manifest file, the job creates a pod that does the actual provisioning. To check the provisioning process, you can check the logs of the provisioner pod:
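For example, assuming the job name used in the manifest (the label value here is illustrative):

```shell
# Follow the logs of the pod created by the provisioning job.
kubectl logs --namespace kube-system -l job-name=vsi-bm-provisioner --follow
```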
If an error is reported in the logs, you can delete all the created resources via kubectl delete -f and change your configuration settings in the configmap before retrying.
Accessing the target
After the provisioning job completes, your server instance is integrated in your cluster’s private pod network.
You can test the connection by pinging the private IP address or the hostname of the server instance from an app pod. You can use the following command example for an existing pod (the ping command must be installed in the pod):
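A sketch of the test, with placeholder names for the pod and the target:

```shell
# Replace <your-app-pod> and <instance-name-or-pod-ip> with your own values.
kubectl exec -it <your-app-pod> -- ping -c 3 <instance-name-or-pod-ip>
```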
Note that the ping requires the "allow_all" security group to be enabled on the target server (or any other security group that enables ICMP traffic).
To access the server via SSH, you might create a pod with an SSH client, for example:
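One way to do this is a throwaway pod with an SSH client; the image choice and target address below are illustrative, not the exact example from the original post:

```shell
# Start a temporary pod, install an SSH client, and connect to the instance.
# Replace <TARGET_IP> with the instance's pod IP or hostname.
kubectl run ssh-client --rm -it --image=alpine -- \
  sh -c 'apk add --no-cache openssh-client && ssh root@<TARGET_IP>'
```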
This command accesses the target as the root user. You can find the root user's generated password in the IBM Cloud console.
For more information, check out the IBM Cloud Kubernetes Service documentation. If you have questions, engage our team via Slack by registering here and join the discussion in the #general channel on our public IBM Cloud Kubernetes Service Slack.