Transitioning Your Service Mesh From IBM Cloud Kubernetes Service Ingress to Istio Ingress

Migrating a service mesh from Kubernetes Ingress resources to Istio’s ingress gateway

Through a tremendous collaborative effort between IBM, Google, Lyft, Red Hat, and other members of the open source community, Istio is officially ready for production. Istio layers service mesh management on top of the capabilities available in Kubernetes and allows developers to connect, secure, and manage microservices. You can learn more about the basics of Istio by reading our post, “What is Istio?”

This tutorial will provide steps for migrating a service mesh from Kubernetes Ingress resources to Istio’s ingress gateway in an IBM Cloud Kubernetes Service environment.

Prerequisites

To follow along, you will need an IBM Cloud Kubernetes Service cluster, the ibmcloud CLI with the Kubernetes Service plug-in, kubectl and helm configured for your cluster, and a local copy of the Istio release (the samples/ and install/ paths used in the commands below are relative to the Istio release directory).

1. Using the IBM Cloud Kubernetes Service ALB and Kubernetes Ingress resources to access a service

IBM Cloud Kubernetes Service exposes applications deployed within your cluster using Kubernetes Ingress resources. The application load balancer (ALB) accepts incoming traffic and forwards each request to the corresponding service. Your cluster’s ALB is assigned a subdomain for exposing deployed services, in the format <cluster_name>.<region>.containers.appdomain.cloud. Retrieve your cluster’s Ingress subdomain and set it as an environment variable for the steps that follow:

export CLUSTER_NAME=<cluster>
export INGRESS_HOST=$(ibmcloud cs cluster-get $CLUSTER_NAME | grep Subdomain | awk '{print $3}')
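
As a quick optional check (not part of the original steps), echo the variable to confirm it was populated; the value should look like <cluster_name>.<region>.containers.appdomain.cloud:

echo $INGRESS_HOST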

For this tutorial, we will use a simple HTTP request and response service called httpbin. Begin by deploying a copy of this service to your IBM Cloud Kubernetes Service cluster (the samples/httpbin/httpbin.yaml file ships with the Istio release):

kubectl apply -f samples/httpbin/httpbin.yaml

Next, apply a Kubernetes Ingress resource to access the httpbin service and expose the /headers API:

kubectl apply -f - <<EOF
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: simple-ingress
  namespace: default
spec:
  rules:
    - host: $INGRESS_HOST
      http:
        paths:
          - path: /headers
            backend:
              serviceName: httpbin
              servicePort: 8000
EOF

The serviceName and servicePort match those specified in the Kubernetes service definition in samples/httpbin/httpbin.yaml. Verify that the httpbin service is now accessible using curl:

curl -i http://$INGRESS_HOST/headers
HTTP/1.1 200 OK
Date: Mon, 13 Aug 2018 21:43:04 GMT
Content-Type: application/json
Content-Length: 304
Connection: keep-alive
Access-Control-Allow-Origin: *
Access-Control-Allow-Credentials: true
{
  "headers": {
    "Accept": "*/*", 
    "Host": "istio-integration.us-south.containers.appdomain.cloud", 
    "User-Agent": "curl/7.54.0", 
    "X-Forwarded-Host": "istio-integration.us-south.containers.appdomain.cloud", 
    "X-Global-K8Fdic-Transaction-Id": "ba234fb0a36e57a20ee68e9da27ae6fd"
  }
}
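
As an optional aside (not part of the original walkthrough), you can confirm the port exposed by the httpbin Service; it should print 8000, matching the servicePort above:

kubectl get service httpbin -o jsonpath='{.spec.ports[0].port}'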

2. Chain ALB to Istio ingress gateway

The latest installation steps and methods for Istio can be found in the Istio documentation. This blog will follow the Helm installation steps.

2.1 Install Istio using Helm

First, configure Tiller in your cluster:

kubectl apply -f install/kubernetes/helm/helm-service-account.yaml
helm init --service-account tiller

Next, install Istio’s custom resource definitions (CRDs) in your cluster:

helm install install/kubernetes/helm/istio-init --name istio-init --namespace istio-system
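
If you want to verify that the CRDs were registered before proceeding (an optional check; the exact number varies by Istio release), list them:

kubectl get crds | grep 'istio.io'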

Then, install the Istio control plane:

helm install install/kubernetes/helm/istio --name istio --namespace istio-system \
    --set global.k8sIngress.enabled=true

Verify that Istio is up and running; all pods in the istio-system namespace should reach the Running or Completed state:

kubectl get pods -n istio-system

Note: If the cluster was created with limited resources, some pods may get stuck in the Pending state. In that case, install with the demo profile values file instead:

helm install install/kubernetes/helm/istio --name istio --namespace istio-system --values install/kubernetes/helm/istio/values-istio-demo.yaml

2.2 Inject the httpbin service with Istio

Label the default namespace for automatic sidecar injection, then confirm the label was applied:

kubectl label namespace default istio-injection=enabled
kubectl get namespace -L istio-injection

With this label in place, pods in the default namespace are automatically deployed with Istio’s sidecar proxy. Since the httpbin service is already deployed, delete the httpbin pod so that it is recreated with the sidecar injected:

kubectl delete pod -l app=httpbin
kubectl get pods
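
To see the two containers in the new pod by name (an optional check, not in the original steps), you can query the pod spec directly; it should print httpbin and istio-proxy:

kubectl get pod -l app=httpbin -o jsonpath='{.items[0].spec.containers[*].name}'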

The httpbin pod now runs two containers: the httpbin app and the istio-proxy sidecar. Verify that the new httpbin pod is running, then curl the httpbin service again:

curl -i http://$INGRESS_HOST/headers
HTTP/1.1 200 OK
Date: Tue, 14 Aug 2018 00:19:47 GMT
Content-Type: application/json
Content-Length: 533
Connection: keep-alive
access-control-allow-origin: *
access-control-allow-credentials: true
x-envoy-upstream-service-time: 3
x-envoy-decorator-operation: httpbin.default.svc.cluster.local:8000/*
{
  "headers": {
    "Accept": "*/*", 
    "Content-Length": "0", 
    "Host": "istio-integration.us-south.containers.appdomain.cloud", 
    "User-Agent": "curl/7.54.0", 
    "X-B3-Sampled": "1", 
    "X-B3-Spanid": "f1a011b53ba9cc4b", 
    "X-B3-Traceid": "f1a011b53ba9cc4b", 
    "X-Envoy-Internal": "true", 
    "X-Forwarded-Host": "istio-integration.us-south.containers.appdomain.cloud", 
    "X-Global-K8Fdic-Transaction-Id": "f9327e92e4aec58c04c19451a50967c8", 
    "X-Request-Id": "612a99ee-cf5e-94b7-b2ae-cbe1f98ef4a9"
  }
}

Observe the additional headers returned in the response this time around. The Envoy proxy running in the Istio sidecar captures all traffic to the httpbin service, applies any traffic management policies, and then routes the request to the httpbin service.

Note: If Istio was deployed with --set global.mtls.enabled=true, the curl command above will fail because the Kubernetes Ingress resource does not configure mTLS between the ALB and the httpbin service. This option will be addressed in a follow-up blog.

2.3 Convert the Kubernetes Ingress resource to Istio Gateway and VirtualService rules

The ALB relies on Kubernetes Ingress resources to control how traffic is routed to services deployed in your cluster. In Istio, ingress traffic is configured via Gateways and VirtualServices, which provide a superset of the functionality of the Kubernetes Ingress resource in a more flexible format. In the same way that Kubernetes Ingress resources configure the routing behavior of the ALB, Gateways and VirtualServices configure Istio’s ingress gateway, which routes traffic entering the service mesh from outside:

kubectl get pods -n istio-system -l istio=ingressgateway

To make the transition easier for users who may already have Kubernetes Ingress resources defined for their services, Istio provides a converter tool as part of istioctl for migrating Ingress resource definitions to corresponding VirtualServices:

kubectl get ingress simple-ingress -o yaml > ingress.yaml
istioctl experimental convert-ingress -f ingress.yaml > vservice.yaml
kubectl apply -f vservice.yaml
cat vservice.yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
....
spec:
  gateways:
  - istio-system/istio-autogenerated-k8s-ingress
  hosts:
  - gihanson-test.us-south.containers.appdomain.cloud
  http:
  - match:
    - uri:
        exact: /headers
    route:
    - destination:
        host: httpbin.default.svc.cluster.local
        port:
          number: 8000
      weight: 100

The VirtualService tells Istio which service to route to based on the request host and request path. Because “istio-system/istio-autogenerated-k8s-ingress” is listed under gateways, this routing configuration only applies to the corresponding Gateway rule named istio-autogenerated-k8s-ingress in the istio-system namespace. The Gateway is required to open a port on the Istio ingress gateway for incoming requests, but it was already created during the install by the --set global.k8sIngress.enabled=true flag.

kubectl get gateway istio-autogenerated-k8s-ingress -n istio-system -o yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
....
spec:
  selector:
    istio: ingressgateway
  servers:
  - hosts:
    - '*'
    port:
      name: http
      number: 80
      protocol: HTTP2

Finally, the original Kubernetes Ingress resource needs to be deleted and recreated so that it points to the Istio ingress gateway rather than directly to the httpbin service. Note that the new Ingress resource lives in the istio-system namespace, because its backend, the istio-ingressgateway service, is deployed there:

kubectl delete ingress simple-ingress
kubectl apply -f - <<EOF
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: simple-ingress
  namespace: istio-system
spec:
  rules:
    - host: $INGRESS_HOST
      http:
        paths:
          - path: /headers
            backend:
              serviceName: istio-ingressgateway
              servicePort: 80
EOF
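
As an optional sanity check (not part of the original steps), you can confirm that the istio-ingressgateway Service referenced above exists in the istio-system namespace and exposes port 80:

kubectl -n istio-system get service istio-ingressgateway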

Perform the curl command once more to verify that traffic is now routed from the ALB to the Istio ingress gateway and finally to the httpbin service:

curl -i http://$INGRESS_HOST/headers

3. Create a host name to access the Istio ingress gateway directly

IBM Cloud Kubernetes Service allows users to configure a host name that resolves directly to a Kubernetes network load balancer (NLB) IP. The steps below enable the Istio ingress gateway to accept web traffic directly, instead of requests first passing through the ALB.

3.1 Create a host name for your cluster

First, retrieve the NLB IP for the Istio ingress gateway:

export INGRESS_IP=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
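
If the load balancer is still provisioning, the IP may not be populated yet; echoing the variable is a quick optional check before continuing:

echo $INGRESS_IP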

Then, create the host name for your cluster and list the registered NLB host names:

ibmcloud ks nlb-dns-create --cluster $CLUSTER_NAME --ip $INGRESS_IP
ibmcloud ks nlb-dnss --cluster $CLUSTER_NAME

This will create a host name using the following format: 

<cluster_name>-<globally_unique_account_HASH>-0001.<region>.containers.appdomain.cloud.

3.2 Update Istio with the new host name

Replace the original INGRESS_HOST defined in Section 1 with the host name created above:

export INGRESS_HOST=$(ibmcloud ks nlb-dnss --cluster $CLUSTER_NAME -s | grep $CLUSTER_NAME | awk '{print $1}')

Modify the VirtualService rule with the updated INGRESS_HOST value:

kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: gihanson-test-us-south-containers-appdomain-cloud-simple-ingress-istio-autogenerated-k8s-ingress
  namespace: default
spec:
  gateways:
  - istio-system/istio-autogenerated-k8s-ingress
  hosts:
  - $INGRESS_HOST
  http:
  - match:
    - uri:
        exact: /headers
    route:
    - destination:
        host: httpbin.default.svc.cluster.local
        port:
          number: 8000
      weight: 100
EOF

Now, you can curl the Istio ingress gateway directly using the NLB host name provided for your cluster:

curl -i http://$INGRESS_HOST/headers
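
For a rough spot-check that the request went through the new host name (optional, not in the original steps), the Host header echoed back by httpbin should now show the NLB host name:

curl -s http://$INGRESS_HOST/headers | grep -i host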

Conclusions

With these instructions, existing services can be migrated from the default IBM Cloud Kubernetes Service ALB to the Istio ingress gateway. After the migration is complete, you can begin exploring the additional ingress configuration options that Istio Gateways and VirtualServices make available for your service mesh.
