Adding Classic Infrastructure Servers to Gateway-Enabled Classic Clusters

Today, IBM Cloud Kubernetes Service is introducing a new beta feature for gateway-enabled clusters.

Until now, the pod network of a cluster was isolated from external entities, such as standalone virtual or bare metal servers in the same customer account. Connecting pods to services outside your cluster required additional network resources (load balancers, gateways, etc.).

You can now add classic VSIs (Virtual Server Instances or virtual machines) and BM (Bare Metal) instances to the existing network of your classic gateway-enabled clusters in IBM Cloud Kubernetes Service. This feature provides seamless integration of the instances into the pod network without joining these server instances to the cluster itself. The VSI or BM instance is assigned a pod IP address so that Kubernetes application pods can communicate with the instance using the 172.30.x.x pod networking range. The feature also provides domain name resolution in both directions.

Use case

You might run a mixed application set of containerized and non-containerized workloads. When one type of workload must consume resources from the other, integrating the workloads at the network level is a challenge.

For example, imagine a latency-sensitive database cluster with a fine-tuned set of operating-system-level configurations (buffer sizes, huge pages, etc.). These kinds of workloads might not have an effective containerized version or alternative, but you can still deploy them in virtual machines or on bare metal servers in IBM Cloud.

In order for a containerized business logic app to consume information from this database, some kind of network integration is needed. If the business logic app is deployed in a classic, gateway-enabled cluster, you can integrate the app and the database in your cluster network.

How it works

The network integration is provisioned through automation via a Kubernetes job. You first provide the cluster details and necessary credentials in a Kubernetes secret and configmap, then set up the job to run in the cluster. During provisioning, the job gathers every needed credential and environment detail from the cluster, and uses the secret, the configmap, and an SSH connection to the server instance(s) to provision the network integration.

Because all compute worker nodes in gateway-enabled clusters have only private network access, the integrated VSI or bare metal instance should also have only private VLAN connectivity. The gateway worker nodes, which have public network access, are configured as default routes during the provisioning.

Before you begin

Make sure you have (or create) a gateway-enabled cluster with the private service endpoint enabled. To create a gateway-enabled cluster, see "IBM Cloud Kubernetes Service Gateway-Enabled Clusters."

Next, check whether you have a VSI or bare metal instance that is provisioned on the same private VLAN as the worker nodes in your gateway-enabled cluster. If you do not have one, you can order new VSIs or bare metal instances by using the IBM Cloud console for Classic Infrastructure.

When you create a new instance, make sure you choose the following values (see the example order command after this list):

  • Select an appropriate name for the instance(s). This name is used as the DNS name in the Kubernetes DNS resolver.
  • Select the same location (zone) where the cluster is deployed. If you use a multizone cluster, you can select any of the zones where you have worker nodes.
  • The supported operating systems are CentOS 7.x, Red Hat Enterprise Linux 7.x, and Ubuntu 18.04. Other OS types and versions are not supported for network integration.
  • Inject a public SSH key into the instance. The private key is used by the provisioning job to access the instance.
  • Select private-only networking and the same private VLAN that is used by the worker nodes in your cluster.
  • Select at least "allow_outbound" and "allow_ssh" in the private security group drop-down list.
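
For reference, a hypothetical order for a private-only CentOS 7 VSI with the classic infrastructure CLI might look like the following. The hostname, datacenter, SSH key ID, and VLAN ID are placeholders; check ibmcloud sl vs create --help for the exact options in your CLI version, and attach the security groups in the console if your CLI version has no option for them:

ibmcloud sl vs create \
  --hostname mydb01 --domain example.com \
  --cpu 2 --memory 4096 \
  --datacenter dal10 \
  --os CENTOS_7_64 \
  --key <SSH_KEY_ID> \
  --private \
  --vlan-private <PRIVATE_VLAN_ID>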

How to use

All the command examples below are expected to work on Linux- or Darwin-based operating systems.

Get started by setting up the following:

Create SSH key secret

Assuming that you have the private part of the SSH key in the id_rsa file on your local machine, create a Kubernetes secret:

kubectl create secret -n kube-system generic ibm-external-compute-pk --from-file=id_rsa

Set configmap options

Prepare the configuration options and export them as environment variables before creating the configmap.

Inventory

Create a file called inventory that the provisioning job uses as its Ansible inventory. Replace <TARGET_IP> with the private IP address of the VSI or bare metal instance. You can find the instance IP address in the IBM Cloud classic infrastructure console or by running ibmcloud sl vs list or ibmcloud sl hardware list in the IBM Cloud CLI:

<TARGET_IP>:22 ansible_user=root ansible_connection=ssh

Export the path of the inventory file into an environment variable:

export INVENTORY=./inventory

Regional Registry

Export the IBM Cloud Container Registry domain for the region where your cluster and server instance are deployed. When the job runs, container images are pulled from this registry domain. For example, use the following command for US South and US East:

export REPO_NAME=us.icr.io
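
Other regions use their own registry domains. For a cluster in Frankfurt, for example, the export would presumably be the following (check the IBM Cloud Container Registry documentation for your region's exact domain):

export REPO_NAME=de.icr.io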

Service namespace

Choose the Kubernetes namespace from which you want to reach the VSI or bare metal instance via a DNS name. Your app pods can reach the server instance's pod IP address from any Kubernetes namespace, but the provisioning job sets up a DNS name only in this selected namespace:

export SERVICE_K8S_NS=<NAMESPACE>
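
The provisioning job has rights to create Kubernetes services and endpoints (see the manifest later in this post), so the DNS name is presumably published as a regular Kubernetes service in the chosen namespace. After provisioning, a quick resolution check from a throwaway pod might look like this (the pod name, busybox image, and placeholders are illustrative only):

kubectl run -i -t dns-test -n <NAMESPACE> --image=busybox --restart=Never --rm -- nslookup <INSTANCE_NAME>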

Cluster DNS

If you want the VSI or bare metal instance to be able to resolve DNS names of services in the cluster, the provisioning job can set up the cluster's DNS resolver on the server instance. To enable this optional feature, run:

export CLUSTERDNS_SETUP=true

Otherwise, set it to false:

export CLUSTERDNS_SETUP=false

Create the configmap

Using the options that you exported, create the configmap for the provisioning job:

kubectl create configmap -n kube-system ibm-external-compute-config \
  --from-file="inventory=${INVENTORY}" \
  --from-literal="repo_name=${REPO_NAME}" \
  --from-literal="service_k8s_ns=${SERVICE_K8S_NS}" \
  --from-literal="clusterdns_setup=${CLUSTERDNS_SETUP}"

Note: Don't change the Kubernetes namespace or the name of the configmap object, because other objects reference them later.

Create ImagePullSecret in kube-system

Copy the default-us-icr-io image pull secret from the default namespace to the kube-system namespace. This secret is required so that the Kubernetes job can pull the necessary images when it runs in the kube-system namespace:

kubectl get secret default-us-icr-io -n default -o yaml | sed -e 's/namespace: default/namespace: kube-system/' -e 's/default-us-icr-io/ibm-external-compute-image-pull/' | kubectl create -f -

For more information on ImagePullSecrets, see the IBM Cloud Kubernetes Service documentation.

Deploy the provisioning job

Save the following manifest content into a file named ibm-external-compute-job.yaml.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: ibm-external-compute-job
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: ibm-external-compute-job
rules:
- apiGroups: [""]
  resources: ["services"]
  verbs: ["get", "create", "update", "patch"]
- apiGroups: [""]
  resources: ["endpoints"]
  verbs: ["get", "create", "update", "patch"]
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: ibm-external-compute-job
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ibm-external-compute-job
subjects:
- kind: ServiceAccount
  namespace: kube-system
  name: ibm-external-compute-job
---
apiVersion: batch/v1
kind: Job
metadata:
  name: ibm-external-compute-job
  namespace: kube-system
spec:
  template:
    spec:
      imagePullSecrets:
      - name: ibm-external-compute-image-pull
      containers:
      - name: provision
        image: us.icr.io/armada-master/stranger:512
        env:
        - name: ETCD_HOST
          valueFrom:
            configMapKeyRef:
              name: cluster-info
              key: etcd_host
        - name: ETCD_PORT
          valueFrom:
            configMapKeyRef:
              name: cluster-info
              key: etcd_port
        - name: REPO_NAME
          valueFrom:
            configMapKeyRef:
              name: ibm-external-compute-config
              key: repo_name
        - name: ANSIBLE_HOST_KEY_CHECKING
          value: "false"
        - name: SERVICE_K8S_NS
          valueFrom:
            configMapKeyRef:
              name: ibm-external-compute-config
              key: service_k8s_ns
        - name: CLUSTERDNS_SETUP
          valueFrom:
            configMapKeyRef:
              name: ibm-external-compute-config
              key: clusterdns_setup
        command: ["ansible-playbook"]
        args:
        - "-i"
        - "/config/inventory"
        - "setup.yml"
        - "-e etcd_host=$(ETCD_HOST)"
        - "-e etcd_port=$(ETCD_PORT)"
        - "-e repo_name=$(REPO_NAME)"
        - "-e service_k8s_ns=$(SERVICE_K8S_NS)"
        - "-e clusterdns_setup=$(CLUSTERDNS_SETUP)"
        volumeMounts:
        - name: calico-etcd-secrets
          mountPath: /ansible/roles/calico-node/files
          readOnly: true
        - name: ibm-external-compute-pk
          mountPath: /root/.ssh
          readOnly: true
        - name: ibm-external-compute-config
          mountPath: /config
          readOnly: true
        - name: cluster-info
          mountPath: /ansible/roles/ibm-gateway-controller/files
          readOnly: true
      restartPolicy: Never
      volumes:
      - name: calico-etcd-secrets
        secret:
          secretName: calico-etcd-secrets
      - name: ibm-external-compute-pk
        secret:
          secretName: ibm-external-compute-pk
          defaultMode: 0400
      - name: ibm-external-compute-config
        configMap:
          name: ibm-external-compute-config
      - name: cluster-info
        configMap:
          name: cluster-info
      serviceAccountName: ibm-external-compute-job
  backoffLimit: 0

To start the provisioning job, deploy the manifest file:

kubectl create -f ibm-external-compute-job.yaml

After you deploy the manifest file, the job creates a pod that does the actual provisioning. To follow the provisioning process, check the logs of the provisioner pod:

kubectl logs -n kube-system -f $(kubectl get po -n kube-system --selector job-name=ibm-external-compute-job -o custom-columns=:.metadata.name --no-headers)

If an error is reported in the logs, you can delete the created resources with kubectl delete -f, correct your configuration settings in the configmap, and retry, as in the following example.
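
A cleanup-and-retry sequence, assuming you first re-export corrected values for the configmap options, might look like this:

kubectl delete -f ibm-external-compute-job.yaml
kubectl delete configmap -n kube-system ibm-external-compute-config
kubectl create configmap -n kube-system ibm-external-compute-config \
  --from-file="inventory=${INVENTORY}" \
  --from-literal="repo_name=${REPO_NAME}" \
  --from-literal="service_k8s_ns=${SERVICE_K8S_NS}" \
  --from-literal="clusterdns_setup=${CLUSTERDNS_SETUP}"
kubectl create -f ibm-external-compute-job.yaml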

Accessing the target

After the provisioning job completes, your server instance is integrated in your cluster’s private pod network.
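
One way to confirm that the job has finished is the standard kubectl wait command (the timeout value here is only an example):

kubectl wait --for=condition=complete --timeout=15m -n kube-system job/ibm-external-compute-job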

You can test the connection by pinging the private IP address or the hostname of the server instance from an app pod. You can use the following command for an existing pod (the pod must have the ping command installed):

kubectl exec <POD_NAME> -- ping <SERVER_IP_OR_HOSTNAME>

Note that the ping requires the "allow_all" security group to be enabled on the target server (or any other security group that enables ICMP traffic).
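
If you don't have a suitable pod at hand, a throwaway test pod works as well; for example (the pod name and busybox image are just an illustration, and the image must be pullable from your cluster):

kubectl run -i -t ping-test --image=busybox --restart=Never --rm -- ping -c 3 <SERVER_IP_OR_HOSTNAME>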

To access the server via SSH, you might create a pod with an SSH client, for example:

kubectl run -i -t ssh-pod --image=alpine --restart=Never -- sh -ic 'apk add --update openssh-client && ssh -o "StrictHostKeyChecking no" <SERVER_IP_OR_HOSTNAME>'

This command accesses the target as the root user. You can find the root user's generated password in the IBM Cloud console.

Contact us

For more information, check out the IBM Cloud Kubernetes Service documentation. If you have questions, engage our team via Slack by registering here and join the discussion in the #general channel on our public IBM Cloud Kubernetes Service Slack.
