Transitioning Your Service Mesh From IBM Cloud Kubernetes Service Ingress to Istio Ingress

Migrating a service mesh from Kubernetes Ingress resources to Istio’s ingress gateway

Through a tremendous collaborative effort between IBM, Google, Lyft, Red Hat, and other members of the open source community, Istio is officially ready for production. Istio adds layers of service mesh management on top of those available in Kubernetes, allowing developers to connect, secure, and manage microservices. You can learn more about the basics of Istio by reading our post, “What is Istio?”

This tutorial will provide steps for migrating a service mesh from Kubernetes Ingress resources to Istio’s ingress gateway in an IBM Cloud Kubernetes Service environment.

Prerequisites

The commands below assume the following are already in place:

- An IBM Cloud Kubernetes Service cluster
- The ibmcloud CLI with the Kubernetes Service plugin, plus kubectl and helm configured for the cluster
- A downloaded Istio release, with your working directory at its root (paths such as samples/httpbin/httpbin.yaml and install/kubernetes/helm/istio are relative to it)

1. Using IBM Kubernetes Service ALB and Kubernetes Ingress resources to access a service

IBM Cloud Kubernetes Service exposes applications deployed within your cluster using Kubernetes Ingress resources. The application load balancer (ALB) accepts incoming traffic and forwards the requests to the corresponding service. Your cluster ALB is assigned a URL to expose the deployed services using the name format <cluster_name>.<region>.containers.appdomain.cloud. Retrieve your cluster’s ingress subdomain and configure this value as an environment variable for the steps that follow:

export CLUSTER_NAME=<cluster>
export INGRESS_HOST=$(ibmcloud cs cluster-get $CLUSTER_NAME | grep Subdomain | awk '{print $3}')
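The grep/awk pipeline above simply pulls the third whitespace-separated field out of the Ingress Subdomain line of the cluster details. A quick local illustration of the same parsing, using a hypothetical sample line in the format that ibmcloud cs cluster-get prints:

```shell
# Illustrative line only -- not real cluster output
sample_line='Ingress Subdomain:   mycluster.us-south.containers.appdomain.cloud'

# Fields are: $1="Ingress", $2="Subdomain:", $3=the subdomain itself
echo "$sample_line" | grep Subdomain | awk '{print $3}'
# → mycluster.us-south.containers.appdomain.cloud
```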

For this tutorial, we will use a simple HTTP request and response service called httpbin. Begin by deploying a copy of this service to your cluster:

kubectl apply -f samples/httpbin/httpbin.yaml

Next, apply a Kubernetes Ingress resource to access the httpbin service and expose the /headers API:

kubectl apply -f - <<EOF
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: simple-ingress
  namespace: default
spec:
  rules:
    - host: $INGRESS_HOST
      http:
        paths:
          - path: /headers
            backend:
              serviceName: httpbin
              servicePort: 8000
EOF

The serviceName and servicePort match those specified in the Kubernetes service definition in samples/httpbin/httpbin.yaml. Verify that the httpbin service is now accessible using curl:

curl -i http://$INGRESS_HOST/headers
HTTP/1.1 200 OK
Date: Mon, 13 Aug 2018 21:43:04 GMT
Content-Type: application/json
Content-Length: 304
Connection: keep-alive
Access-Control-Allow-Origin: *
Access-Control-Allow-Credentials: true
{
  "headers": {
    "Accept": "*/*", 
    "Host": "istio-integration.us-south.containers.appdomain.cloud", 
    "User-Agent": "curl/7.54.0", 
    "X-Forwarded-Host": "istio-integration.us-south.containers.appdomain.cloud", 
    "X-Global-K8Fdic-Transaction-Id": "ba234fb0a36e57a20ee68e9da27ae6fd"
  }
}
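As noted above, the serviceName and servicePort in the Ingress must line up with the Service defined in samples/httpbin/httpbin.yaml. For reference, that definition looks roughly like the sketch below; the exact targetPort and labels come from the Istio sample and may differ across releases:

```yaml
# Sketch of the Service from samples/httpbin/httpbin.yaml (approximate -- check your Istio release)
apiVersion: v1
kind: Service
metadata:
  name: httpbin          # matches serviceName in the Ingress
  labels:
    app: httpbin
spec:
  ports:
  - name: http
    port: 8000           # matches servicePort in the Ingress
    targetPort: 8000     # port the httpbin container listens on (assumption)
  selector:
    app: httpbin
```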

2. Chain ALB to Istio ingress gateway

The latest installation steps and methods for Istio can be found in the Istio documentation. This blog follows the steps for Helm.

2.1 Install Istio using Helm

First, configure Tiller in your cluster:

kubectl apply -f install/kubernetes/helm/helm-service-account.yaml
helm init --service-account tiller

Next, install Istio’s custom resource definitions (CRDs) to your cluster:

helm install install/kubernetes/helm/istio-init --name istio-init --namespace istio-system

Then, install the Istio control plane:

helm install install/kubernetes/helm/istio --name istio --namespace istio-system \
    --set global.k8sIngress.enabled=true

Verify Istio is up and running:

kubectl get pods -n istio-system
NAME                                      READY   STATUS      RESTARTS   AGE
istio-citadel-5f886dc9b4-p9j68            1/1     Running     0          2m2s
istio-galley-689b548d98-2vff8             1/1     Running     0          2m2s
istio-ingressgateway-74484b55f4-fvf6m     1/1     Running     0          2m2s
istio-init-crd-10-b9scd                   0/1     Completed   0          3m23s
istio-init-crd-11-5j2j7                   0/1     Completed   0          3m23s
istio-pilot-6cd56bb6cb-2kmxb              2/2     Running     0          2m2s
istio-policy-546866485-7m725              2/2     Running     2          2m2s
istio-sidecar-injector-74666b458c-vnk5s   1/1     Running     0          2m2s
istio-telemetry-77b97f6547-6bwg6          2/2     Running     2          2m2s
prometheus-66c9f5694-dl98d                1/1     Running     0          2m2s

Note: If the cluster was created with limited resources, some pods may get stuck in the Pending state. In that case, specify the demo profile values file during the install command: helm install install/kubernetes/helm/istio --name istio --namespace istio-system --values install/kubernetes/helm/istio/values-istio-demo.yaml

2.2 Inject the httpbin service with Istio

Enable the default Kubernetes namespace for automatic sidecar injection:

kubectl label namespace default istio-injection=enabled
kubectl get namespace -L istio-injection
NAME             STATUS    AGE       ISTIO-INJECTION
default          Active    19d       enabled
ibm-cert-store   Active    19d       
ibm-system       Active    19d       
istio-system     Active    15m       
kube-public      Active    19d       
kube-system      Active    19d

With this configured, new pods in the namespace will automatically be deployed with Istio’s sidecar proxy. Since the httpbin service is already deployed, deleting the httpbin pod will cause it to be redeployed with the sidecar in place:

kubectl delete pod -l app=httpbin
kubectl get pods
NAME                       READY     STATUS        RESTARTS   AGE
httpbin-77647f7b59-bcp4q   2/2       Running       0          8s
httpbin-77647f7b59-gm7cc   0/1       Terminating   0          2h

The httpbin pod now has two containers: the httpbin app and the istio-proxy sidecar. Verify the new httpbin pod is running, then curl the httpbin service again:

curl -i http://$INGRESS_HOST/headers
HTTP/1.1 200 OK
Date: Tue, 14 Aug 2018 00:19:47 GMT
Content-Type: application/json
Content-Length: 533
Connection: keep-alive
access-control-allow-origin: *
access-control-allow-credentials: true
x-envoy-upstream-service-time: 3
x-envoy-decorator-operation: httpbin.default.svc.cluster.local:8000/*
{
  "headers": {
    "Accept": "*/*", 
    "Content-Length": "0", 
    "Host": "istio-integration.us-south.containers.appdomain.cloud", 
    "User-Agent": "curl/7.54.0", 
    "X-B3-Sampled": "1", 
    "X-B3-Spanid": "f1a011b53ba9cc4b", 
    "X-B3-Traceid": "f1a011b53ba9cc4b", 
    "X-Envoy-Internal": "true", 
    "X-Forwarded-Host": "istio-integration.us-south.containers.appdomain.cloud", 
    "X-Global-K8Fdic-Transaction-Id": "f9327e92e4aec58c04c19451a50967c8", 
    "X-Request-Id": "612a99ee-cf5e-94b7-b2ae-cbe1f98ef4a9"
  }
}

Observe the additional headers returned in the response this time around. The Envoy proxy running in the Istio sidecar captures all traffic to the httpbin service, applies any traffic management policies, and then routes the request to the httpbin service.

Note: If Istio was deployed with --set global.mtls.enabled=true, the curl command above will fail because the Kubernetes Ingress resource does not have mTLS configured between the ALB and the httpbin service. This option will be addressed in a follow-up blog.

2.3 Convert the Kubernetes Ingress resource to Istio Gateway and VirtualService rules

The ALB relies on Kubernetes Ingress resources to control how traffic is routed to services deployed in your cluster. In Istio, ingress traffic is configured via Gateways and VirtualServices, which provide a superset of the functionality of the Kubernetes Ingress resource in a more flexible format. In the same way that Kubernetes Ingress resources configure the routing behavior of the ALB, Gateways and VirtualServices configure Istio’s ingress gateway. Traffic from outside the Istio service mesh is routed by the ingress gateway:

kubectl get pods -n istio-system -l istio=ingressgateway
NAME                                   READY     STATUS    RESTARTS   AGE
istio-ingressgateway-f8dd85989-h5sb9   1/1       Running   0          44m

To make the transition easier for users who may already have Kubernetes Ingress resources defined for their services, Istio provides a converter tool as part of istioctl for migrating Ingress resource definitions to corresponding VirtualServices:

kubectl get ingress simple-ingress -o yaml > ingress.yaml
istioctl experimental convert-ingress -f ingress.yaml > vservice.yaml
kubectl apply -f vservice.yaml
cat vservice.yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
....
spec:
  gateways:
  - istio-system/istio-autogenerated-k8s-ingress
  hosts:
  - gihanson-test.us-south.containers.appdomain.cloud
  http:
  - match:
    - uri:
        exact: /headers
    route:
    - destination:
        host: httpbin.default.svc.cluster.local
        port:
          number: 8000
      weight: 100

The VirtualService tells Istio which service to route to based on the request host and request path. Because “istio-system/istio-autogenerated-k8s-ingress” is specified under gateways, these routing rules apply only to traffic arriving through the Gateway named istio-autogenerated-k8s-ingress in the istio-system namespace. A Gateway is required to open a port on the Istio ingress gateway for incoming requests, but this one was already created during the install process by the --set global.k8sIngress.enabled=true flag.

kubectl get gateway istio-autogenerated-k8s-ingress -n istio-system -o yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
....
spec:
  selector:
    istio: ingressgateway
  servers:
  - hosts:
    - '*'
    port:
      name: http
      number: 80
      protocol: HTTP2

Finally, the original Kubernetes Ingress resource needs to be deleted and recreated so that it points to the Istio ingress gateway rather than the httpbin service directly:

kubectl delete ingress simple-ingress
kubectl apply -f - <<EOF
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: simple-ingress
  namespace: istio-system
spec:
  rules:
    - host: $INGRESS_HOST
      http:
        paths:
          - path: /headers
            backend:
              serviceName: istio-ingressgateway
              servicePort: 80
EOF

Perform the curl command once more to verify that traffic is now routed from the ALB to the Istio ingress gateway and finally to the httpbin service:

curl -i http://$INGRESS_HOST/headers

3. Replace IBM Cloud Kubernetes Service ALB with Istio ingress gateway

IBM Cloud Kubernetes Service provides a bring-your-own-ingress option for developers who want to use their own ingress solution instead of the ALB. The steps below enable the Istio ingress gateway to accept web traffic directly, rather than routing it through the ALB first. The first step is to disable the current ALB for your cluster.

3.1 Disable the IKS ALB

To get the ALB ID for your cluster, run the following kubectl command:

export ALB_ID=$(kubectl get svc -n kube-system | grep alb | awk '{print $1}')

Then, disable the ALB deployment using the ID returned:

ibmcloud ks alb-configure --albID $ALB_ID --disable-deployment

Disabling the ALB deployment may take a few minutes to complete. Verify the corresponding ALB pods are deleted from the kube-system namespace before continuing:

kubectl get pods -n kube-system | grep alb
public-cr7591595a4fb54d839cf3a77d12147ba4-alb1-9d858db58-8k65p   2/2       Terminating   0          17m
public-cr7591595a4fb54d839cf3a77d12147ba4-alb1-9d858db58-qn8jh   2/2       Terminating   0          17m

3.2 Redeploy the Istio ingress gateway

Next, disable the current Istio ingress gateway in the istio-system namespace so that it can be recreated in the kube-system namespace:

helm upgrade istio install/kubernetes/helm/istio \
     --set gateways.istio-ingressgateway.enabled=false

Use Helm to deploy only the ingress gateway to the kube-system namespace:

helm install install/kubernetes/helm/istio \
    -f install/kubernetes/helm/istio/example-values/values-istio-gateways.yaml \
    --name=istio-ingress \
    --namespace kube-system \
    --set global.k8sIngress.enabled=true \
    --set gateways.custom-gateway.enabled=false

If you receive an error during the install command due to a resource already existing, run helm del --purge istio-ingress and then re-run the install command.

The original default Gateway in the istio-system namespace, which opened port 80 for incoming traffic into the mesh, was deleted. A new Gateway has been created for the ingress gateway in the kube-system namespace:

kubectl get gateway --all-namespaces
NAMESPACE     NAME                              AGE
kube-system   istio-autogenerated-k8s-ingress   17m

The original VirtualService created in the previous steps must be modified to point to kube-system/istio-autogenerated-k8s-ingress:

kubectl get virtualservices
NAME                                                           GATEWAYS                                         HOSTS             AGE
$INGRESS_HOST-simple-ingress-istio-autogenerated-k8s-ingress   [istio-system/istio-autogenerated-k8s-ingress]   [$INGRESS_HOST]   1h
kubectl edit virtualservice <name>
...
spec:
  gateways:
  - kube-system/istio-autogenerated-k8s-ingress
  hosts:
  - gihanson-test.us-south.containers.appdomain.cloud
 ...

3.3 Configure the IKS cluster to use the new ingress gateway

Update the ALB service to point to the new ingress. Under spec/selector, remove the ALB ID from the app label and add the label for istio-ingressgateway:

kubectl edit svc $ALB_ID -n kube-system
spec:
...
  selector:
    app: istio-ingressgateway
  sessionAffinity: None
  type: LoadBalancer
...
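If you prefer a scriptable alternative to kubectl edit, the same selector change can be expressed as a merge patch and applied with kubectl patch svc $ALB_ID -n kube-system --type merge -p '<body below>' (a sketch, not the only way to do this). Note that a merge patch merges map keys, so any other keys under selector would be left in place:

```json
{ "spec": { "selector": { "app": "istio-ingressgateway" } } }
```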

Now, you can curl the Istio ingress gateway directly using the ALB URL provided for your cluster:

curl -i http://$INGRESS_HOST/headers

Conclusions

With these instructions, existing services can be migrated from the default IBM Cloud Kubernetes Service ALB to using the Istio ingress gateway. After migration is complete, you can begin exploring the additional ingress configuration options for your service mesh available with Istio Gateways and VirtualServices.
