Single Service Mesh with Istio Against Multiple Hybrid Clusters

Istio provides a central control plane for multiple clusters

More and more customers are using hybrid cloud environments: some legacy applications may run in an on-premises cloud while others run in a public cloud. This, of course, presents a big challenge: how do you manage all of these workloads together? Istio provides a complete solution to connect, manage, and secure microservices (learn more about Istio by reading our post: “What is Istio?”). In version 0.8 and later, Istio supports multiple clusters by providing a central control plane. To use multiple clusters, however, you need different configurations for different cloud providers, especially if the clusters are hybrid.

In this blog post, I will demonstrate how you can use one Istio control plane to control both an IBM Cloud Private (ICP) cluster and an IBM Cloud Kubernetes Service (IKS) cluster.

Set up an ICP cluster

You can get detailed installation steps from the IBM Cloud Private Knowledge Center.

In ICP, you can configure the pod CIDR and service CIDR in cluster/config.yaml. In this tutorial, I use the default values.
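
For reference, the two settings and their defaults look like this in cluster/config.yaml (a sketch; the key names follow the ICP installer, and the values match the subnets used in the VPN configuration later in this post):

network_cidr: 10.1.0.0/16               # pod subnet CIDR
service_cluster_ip_range: 10.0.0.0/24   # service subnet CIDR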

Request an IKS cluster

You can get detailed installation steps from the IBM Cloud documentation.

In this tutorial, I have requested a standard cluster with two worker nodes.

Note: By default, when you provision an IKS cluster, the CIDRs are as follows:

pod subnet CIDR: 172.30.0.0/16.

service subnet CIDR: 172.21.0.0/16.

Connect ICP and IKS clusters through VPN

Because these two clusters are in isolated network environments, we need to set up a VPN connection between them. The aim is to make sure the pod CIDRs in each cluster are routable from the other.

  1. Follow the steps listed in Set Up VPN.
    Note: If you require faster error recovery, a higher security level, or a more elaborate high-availability solution than strongSwan provides, consider using a VPN solution that runs outside of the cluster on dedicated hardware, or one of the IBM Cloud Direct Link service options.
  2. When you install strongSwan on IKS, put both the pod CIDR and the service CIDR of ICP in remote.subnet (see the sketch after this list):
    remote.subnet: 10.0.0.0/24,10.1.0.0/16
  3. When you install strongSwan on ICP, modify the two settings below:
    Local subnets: 10.0.0.0/24,10.1.0.0/16
    Remote subnets: 172.30.0.0/16,172.21.0.0/16
  4. Confirm that you can ping a pod IP address on ICP from a pod on IKS:
    # ping 10.1.14.30
    PING 10.1.14.30 (10.1.14.30) 56(84) bytes of data.
    64 bytes from 10.1.14.30: icmp_seq=1 ttl=59 time=51.8 ms
    64 bytes from 10.1.14.30: icmp_seq=2 ttl=59 time=51.5 ms
    ...
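
As a concrete example, the strongSwan installation on IKS in step 2 might look like the Helm invocation below. This is a minimal sketch: the chart and value names (remote.gateway, remote.subnet, local.subnet, preshared.secret) follow the IBM strongSwan Helm chart, the ibm repo alias is assumed to be configured, and the gateway address and secret are placeholders you must supply:

# On IKS: route the ICP service and pod CIDRs (10.0.0.0/24, 10.1.0.0/16) through the tunnel
helm install ibm/strongswan --name vpn \
  --set remote.gateway=<ICP-VPN-public-IP> \
  --set "remote.subnet=10.0.0.0/24\,10.1.0.0/16" \
  --set "local.subnet=172.30.0.0/16\,172.21.0.0/16" \
  --set preshared.secret=<shared-secret>

Note that commas inside --set values must be escaped with a backslash because Helm otherwise treats them as value separators.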

Congratulations, you are now ready to enjoy the Istio multi-cluster journey!

Install Istio

Install Istio local control plane into ICP

  • kubectl apply -f install/kubernetes/helm/istio/templates/crds.yaml
  • kubectl apply -f install/kubernetes/istio-demo.yaml

Istio remote needs to connect back to the following services, which run in the Istio local control plane on IBM Cloud Private:

  • istio-pilot
  • istio-policy
  • zipkin
  • istio-telemetry
  • istio-statsd-prom-bridge

So, let’s record the pod IP addresses for those services:

  • export PILOT_POD_IP=$(kubectl -n istio-system get pod -l istio=pilot -o jsonpath='{.items[0].status.podIP}')
  • export POLICY_POD_IP=$(kubectl -n istio-system get pod -l istio-mixer-type=policy -o jsonpath='{.items[0].status.podIP}')
  • export STATSD_POD_IP=$(kubectl -n istio-system get pod -l istio=statsd-prom-bridge -o jsonpath='{.items[0].status.podIP}')
  • export TELEMETRY_POD_IP=$(kubectl -n istio-system get pod -l istio-mixer-type=telemetry -o jsonpath='{.items[0].status.podIP}')
  • export ZIPKIN_POD_IP=$(kubectl -n istio-system get pod -l app=jaeger -o jsonpath='{range .items[*]}{.status.podIP}{end}')

Install Istio remote into IBM Cloud Kubernetes Service

I configured Helm with IKS when I installed the strongSwan VPN. To do the same, follow the instructions in the “Use kubectl with Helm to connect the remote cluster to the local” section of the Istio Multicluster Instructions.
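
Putting the pieces together, the remote installation looks roughly like this (a sketch based on the Istio 1.0 istio-remote Helm chart and the pod IPs exported earlier; verify the chart path and value names against the Istio release you are using):

# Render the istio-remote chart with the local control plane's pod IPs, then apply it on IKS
helm template install/kubernetes/helm/istio-remote --namespace istio-system --name istio-remote \
  --set global.remotePilotAddress=${PILOT_POD_IP} \
  --set global.remotePolicyAddress=${POLICY_POD_IP} \
  --set global.remoteTelemetryAddress=${TELEMETRY_POD_IP} \
  --set global.proxy.envoyStatsd.enabled=true \
  --set global.proxy.envoyStatsd.host=${STATSD_POD_IP} \
  --set global.remoteZipkinAddress=${ZIPKIN_POD_IP} > istio-remote.yaml
kubectl create namespace istio-system
kubectl apply -f istio-remote.yaml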

After the installation, you can see that only the Istio data plane is installed in the IKS cluster:

kubectl get pods -n istio-system

NAME READY STATUS RESTARTS AGE
istio-citadel-78bb756b86-77klv 1/1 Running 0 1m
istio-cleanup-secrets-s4s77 0/1 Completed 0 1m
istio-sidecar-injector-77988bc694-qlk89 1/1 Running 0 1m

Instantiate the credentials for IKS on ICP

Create a secret in the Istio local control plane on ICP based on the kubeconfig file for IKS by following the detailed steps in multicluster-install.
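
A minimal sketch of those steps, assuming the IKS kubeconfig has been saved locally (the file path is a placeholder); the secret name becomes the cluster name, and the istio/multiCluster=true label is what tells Pilot to start watching the remote cluster:

# On ICP: store the IKS kubeconfig as a secret and mark it for multicluster discovery
kubectl create secret generic iriscluster --from-file=<path-to-iks-kubeconfig> -n istio-system
kubectl label secret iriscluster istio/multiCluster=true -n istio-system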

Make sure you have the secret ready on ICP:

kubectl get secret -n istio-system

NAME TYPE DATA AGE
iriscluster Opaque 1 12s

By following the steps above, you have successfully set up Istio across the two clusters: the Istio local control plane on ICP and the Istio remote data plane on IKS. We’ll now look at how to use a single Istio control plane to manage traffic for the famous Bookinfo application across the two clusters.

Split Bookinfo between two clusters

In my demo, I have deployed reviews-v3 into IKS, and all of the other services run on ICP. You can get the updated bookinfo.yaml and review-v3.yaml from my GitHub.

Installation steps on ICP
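
Deploying Bookinfo on ICP is a single apply of the modified manifest (a sketch, assuming the bookinfo.yaml from my GitHub above, which presumably omits reviews-v3, and that automatic sidecar injection is enabled for the namespace):

  • kubectl label namespace default istio-injection=enabled
  • kubectl apply -f bookinfo.yaml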

kubectl get pods

NAME READY STATUS RESTARTS AGE
details-v1-6865b9b99d-gtz9c 2/2 Running 0 21h
productpage-v1-f8c8fb8-mrnv5 2/2 Running 0 21h
ratings-v1-77f657f55d-5v87g 2/2 Running 0 21h
reviews-v1-6b7f6db5c5-hzhb2 2/2 Running 0 21h
reviews-v2-7ff5966b99-bptjk 2/2 Running 0 21h
vpn-strongswan-5db9659df6-v4rpz 1/1 Running 0 1d
vpn-strongswan-routes-clxmt 1/1 Running 0 1d

Installation steps on IKS

  • kubectl label namespace default istio-injection=enabled
  • kubectl create -f review-v3.yaml

kubectl get pods

NAME READY STATUS RESTARTS AGE
reviews-v3-5465dc97bc-pr6xk 2/2 Running 0 4h
vpn-strongswan-8db9f5f5-d5ddv 1/1 Running 0 1d
vpn-strongswan-routes-6xfqk 1/1 Running 0 1d
vpn-strongswan-routes-5vvvh 1/1 Running 0 1d

Access our product page

You can see that since I have applied virtual-service-all-v1.yaml, all traffic is initially routed to reviews-v1:
Traffic to reviews-v1

Let’s apply another rule:

kubectl replace -f virtual-service-reviews-jason-v2-v3.yaml
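
For reference, the rule in that file, as shipped with the Istio Bookinfo sample, sends requests that carry the end-user: jason header to the v2 subset of reviews and everything else to v3 (the subsets themselves are defined by the Bookinfo DestinationRules):

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - match:
    - headers:
        end-user:
          exact: jason
    route:
    - destination:
        host: reviews
        subset: v2
  - route:
    - destination:
        host: reviews
        subset: v3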

If I now log in as “jason,” I will see reviews-v2, which is running on ICP:

Traffic to reviews-v2
From the logs in istio-proxy, I can confirm the traffic has been routed to the reviews service running on ICP:
[2018-08-06T05:59:21.610Z] "GET /reviews/0 HTTP/1.1" 200 - 0 379 82 81 "-" "python-requests/2.18.4" "7f25c24a-9bbc-914b-8166-fa135d3a3a48" "reviews:9080" "10.1.14.8:9080"

If I sign out as “jason,” I can see reviews-v3, which is running on IKS:

Traffic to reviews-v3
Logs from istio-proxy:
[2018-08-06T05:53:24.686Z] "GET /reviews/0 HTTP/1.1" 200 - 0 375 1377 1376 "-" "python-requests/2.18.4" "f2239dda-3729-9fdb-8ef8-0f578d8eacfa" "reviews:9080" "172.30.125.41:9080"
The 172.30.125.41 pod IP falls within the IKS pod CIDR (172.30.0.0/16), which confirms that traffic has been routed to reviews-v3 running on IKS.

Future Steps

As you can see from the demo above, we rely on the Istio control plane’s pod IPs for communication. A pod IP can change when the pod restarts, and in that case you need to re-add the Istio remote. You can configure a load balancer or use a gateway to work around this limitation, as described in https://istio.io/docs/tasks/traffic-management/ingress/.
