Single Service Mesh with Istio Against Multiple Hybrid Clusters


By: Shao Jun Ding

Istio provides a central control plane for multiple clusters

More and more customers are using hybrid cloud environments: some legacy applications run in an on-premises cloud while others run in a public cloud. This can make it challenging to manage all of these workloads together. Istio provides a complete solution to connect, manage, and secure microservices (learn more about Istio by reading our post: "What is Istio?"). In version 0.8 and later, Istio supports multiple clusters by providing a central control plane. However, using multiple clusters requires different configurations for different cloud providers, especially when the clusters are hybrid.

In this blog post, I will demonstrate how you can use one Istio control plane to control both an IBM Cloud Private (ICP) cluster and an IBM Cloud Kubernetes Service (IKS) cluster.

Set up an ICP cluster

You can get detailed installation steps from the IBM Cloud Private Knowledge Center.

In ICP, you can configure the pod CIDR and the service CIDR in cluster/config.yaml. In this tutorial, I use the default values.
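For reference, the defaults I use look like the excerpt below. This is a sketch only: the key names are taken from ICP-era cluster/config.yaml files and may differ in your ICP version.

    # excerpt from cluster/config.yaml (key names assumed; values are the defaults used in this post)
    network_cidr: 10.1.0.0/16              # pod subnet CIDR
    service_cluster_ip_range: 10.0.0.0/24  # service subnet CIDR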

Request an IKS cluster

You can get detailed installation steps from the IBM Cloud documentation.

In this tutorial, I have requested a standard cluster with two worker nodes.

Note: By default, when you provision an IKS cluster, the CIDRs are as follows:

Pod subnet CIDR: 172.30.0.0/16

Service subnet CIDR: 172.21.0.0/16
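You can sanity-check these ranges against a live cluster; the pod IPs and cluster IPs you see should fall inside the two CIDRs above:

    kubectl get pods --all-namespaces -o wide   # pod IPs should be in 172.30.0.0/16
    kubectl get svc --all-namespaces            # cluster IPs should be in 172.21.0.0/16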

Connect ICP and IKS clusters through VPN

Because these two clusters are in isolated network environments, we need to set up a VPN connection between them. The aim is to make sure that the pod CIDRs in each cluster are routable from the other.

  1. Follow the steps listed in Set Up VPN. Note: If you require faster error recovery, a higher security level, or a more elaborate high-availability solution than strongSwan, consider using a VPN solution that runs outside of the cluster on dedicated hardware, or one of the IBM Cloud Direct Link service options.

  2. When you install strongSwan on IKS, you need to put both the pod CIDR and the service CIDR of the ICP cluster in remote.subnet:

    remote.subnet:  10.0.0.0/24,10.1.0.0/16
  3. When you install strongSwan on ICP, modify the two settings below:

    Local subnets: 10.0.0.0/24,10.1.0.0/16
    Remote subnets: 172.30.0.0/16,172.21.0.0/16
  4. Confirm that you can ping a pod IP address on ICP from a pod on IKS (a short sketch for locating a target pod IP follows this list):

    # ping 10.1.14.30
    PING 10.1.14.30 (10.1.14.30) 56(84) bytes of data.
    64 bytes from 10.1.14.30: icmp_seq=1 ttl=59 time=51.8 ms
    64 bytes from 10.1.14.30: icmp_seq=2 ttl=59 time=51.5 ms
    ...
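If you need a target for this test, the following sketch shows one way to find a pod IP on ICP and run the ping from a pod on IKS; the pod names are placeholders for whatever happens to be running in your clusters:

    # against the ICP cluster: list pod IPs and pick one as the ping target
    kubectl get pods -o wide

    # against the IKS cluster: run ping from inside any pod that ships the ping binary
    kubectl exec -it <iks-pod-name> -- ping <icp-pod-ip>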

Congratulations, you are now ready to enjoy the Istio multi-cluster journey!

Install Istio

Install Istio local control plane into ICP

  • kubectl apply -f install/kubernetes/helm/istio/templates/crds.yaml

  • kubectl apply -f install/kubernetes/istio-demo.yaml
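Before moving on, it is worth checking that the control-plane pods on ICP are all in the Running state, since the remote cluster will connect directly to them in the next step:

    kubectl get pods -n istio-system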

The Istio remote needs to connect back to the following services, which are running in the Istio local control plane on IBM Cloud Private:

  • istio-pilot

  • istio-policy

  • zipkin

  • istio-telemetry

  • istio-statsd-prom-bridge

So, let's record the pod IP address for each of these services:

  • export PILOT_POD_IP=$(kubectl -n istio-system get pod -l istio=pilot -o jsonpath='{.items[0].status.podIP}')

  • export POLICY_POD_IP=$(kubectl -n istio-system get pod -l istio-mixer-type=policy -o jsonpath='{.items[0].status.podIP}')

  • export STATSD_POD_IP=$(kubectl -n istio-system get pod -l istio=statsd-prom-bridge -o jsonpath='{.items[0].status.podIP}')

  • export TELEMETRY_POD_IP=$(kubectl -n istio-system get pod -l istio-mixer-type=telemetry -o jsonpath='{.items[0].status.podIP}')

  • export ZIPKIN_POD_IP=$(kubectl -n istio-system get pod -l app=jaeger -o jsonpath='{range .items[*]}{.status.podIP}{end}')

Install Istio remote into IBM Cloud Kubernetes Service

I configured Helm with IKS when I installed the strongSwan VPN. To do the same, follow the instructions in the "Use kubectl with Helm to connect the remote cluster to the local" section of the Istio multicluster installation guide.
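For reference, the Istio 1.0-era instructions render the remote manifest from the istio-remote Helm chart using the pod IPs recorded earlier, and then apply it against the IKS cluster. The chart path and value names below follow that guide and may differ in other Istio releases, so treat this as a sketch:

    # render the remote manifest with the local control plane's pod IPs
    helm template install/kubernetes/helm/istio-remote --namespace istio-system \
      --name istio-remote \
      --set global.remotePilotAddress=${PILOT_POD_IP} \
      --set global.remotePolicyAddress=${POLICY_POD_IP} \
      --set global.remoteTelemetryAddress=${TELEMETRY_POD_IP} \
      --set global.proxy.envoyStatsd.enabled=true \
      --set global.proxy.envoyStatsd.host=${STATSD_POD_IP} \
      --set global.remoteZipkinAddress=${ZIPKIN_POD_IP} > istio-remote.yaml

    # with kubectl pointing at the IKS cluster, install the rendered manifest
    kubectl create namespace istio-system
    kubectl apply -f istio-remote.yaml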

After the installation, you can see that only the Istio data plane is installed in the IKS cluster:

kubectl get pods -n istio-system

Instantiate the credentials for IKS on ICP

Create a secret in the Istio local control plane on ICP, based on the kubeconfig file for IKS, by following the detailed steps in multicluster-install.
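Condensed, those steps look roughly like the sketch below; the cluster name and kubeconfig path are placeholders, and the istio/multiCluster=true label is what tells the local control plane to pick the secret up:

    # run against the ICP cluster
    export CLUSTER_NAME=iks-remote   # placeholder name for the remote cluster
    kubectl create secret generic ${CLUSTER_NAME} \
      --from-file=${CLUSTER_NAME}=/path/to/iks-kubeconfig -n istio-system
    kubectl label secret ${CLUSTER_NAME} istio/multiCluster=true -n istio-system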

Make sure you have the secret ready on ICP:

kubectl get secret -n istio-system

By following the steps above, you have successfully set up Istio across these two clusters: the Istio local control plane on ICP and the Istio remote data plane on IKS. We'll now look at how to use only one Istio control plane to control traffic for the famous Bookinfo application across the two clusters.

Split Bookinfo between two clusters

In my demo, I have deployed review-v3 into IKS, and all other services run on ICP. You can get the updated bookinfo.yaml and review-v3.yaml from my GitHub repository.

Installation steps on ICP
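The ICP side follows the same pattern as the IKS steps below. Assuming the standard Bookinfo samples alongside my modified manifest (the sample's gateway and destination rules are needed as well), the commands are roughly:

    kubectl label namespace default istio-injection=enabled
    kubectl create -f bookinfo.yaml                  # everything except review-v3
    kubectl create -f virtual-service-all-v1.yaml    # route all traffic to v1 (referenced below)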

kubectl get pods

Installation steps on IKS

  • kubectl label namespace default istio-injection=enabled

  • kubectl create -f review-v3.yaml

kubectl get pods

Access our product page

You can see that because I have applied virtual-service-all-v1.yaml, all traffic is initially routed to review-v1:
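For reference, the relevant rule in virtual-service-all-v1.yaml from the standard Bookinfo sample pins the reviews service to the v1 subset:

    apiVersion: networking.istio.io/v1alpha3
    kind: VirtualService
    metadata:
      name: reviews
    spec:
      hosts:
      - reviews
      http:
      - route:
        - destination:
            host: reviews
            subset: v1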


Let’s apply another rule:

kubectl replace -f virtual-service-reviews-jason-v2-v3.yaml
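This rule, as shipped in the standard Bookinfo sample, sends requests from the end user "jason" to v2 and everyone else to v3:

    apiVersion: networking.istio.io/v1alpha3
    kind: VirtualService
    metadata:
      name: reviews
    spec:
      hosts:
      - reviews
      http:
      - match:
        - headers:
            end-user:
              exact: jason
        route:
        - destination:
            host: reviews
            subset: v2
      - route:
        - destination:
            host: reviews
            subset: v3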


If I now log in as “jason,” I will see review-v2, which is running on ICP:


From the logs in istio-proxy, I can confirm that the traffic has been routed to the reviews service running on ICP:

[2018-08-06T05:59:21.610Z] "GET /reviews/0 HTTP/1.1" 200 - 0 379 82 81 "-" "python-requests/2.18.4" "7f25c24a-9bbc-914b-8166-fa135d3a3a48" "reviews:9080" "10.1.14.8:9080"
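To pull these entries yourself, you can tail the sidecar container of the pod that made the call; the label selector here assumes the standard Bookinfo productpage labels:

    kubectl logs $(kubectl get pod -l app=productpage \
      -o jsonpath='{.items[0].metadata.name}') -c istio-proxy | tail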

If I sign out as “jason,” I can see review-v3, which is running on IKS:


Logs from istio-proxy:

[2018-08-06T05:53:24.686Z] "GET /reviews/0 HTTP/1.1" 200 - 0 375 1377 1376 "-" "python-requests/2.18.4" "f2239dda-3729-9fdb-8ef8-0f578d8eacfa" "reviews:9080" "172.30.125.41:9080"

I can confirm traffic has been routed to review-v3, which is running on IKS.

Future Steps

As you can see from the demo above, we rely on the Istio control plane's pod IPs for communication. A pod IP can change when the pod is restarted, in which case you will need to re-add the Istio remote. You can configure a load balancer or use a gateway to bypass this limitation, as described in https://istio.io/docs/tasks/traffic-management/ingress/.
