
How to Deploy a gRPC Mode Istio Mixer Adapter into Kubernetes


Overview of Istio and Mixer

Istio is a platform that provides a way for developers to seamlessly connect, manage, and secure networks of different microservices. It was jointly developed by IBM, Google, and Lyft. Istio can be divided into two sections: the data plane and the control plane.

The data plane is implemented by the Envoy proxy, which is injected as a sidecar into each microservice pod. The Envoy proxy handles inbound and outbound traffic between services. The control plane is composed of Pilot, Mixer, and Citadel. Pilot is the component that configures the proxies at runtime, and Mixer is the central component used by the proxies and microservices to enforce policies (e.g., quotas, authorization, authentication, rate limits, request tracing, and telemetry collection).

JJ Asghar, an IBM Cloud Developer Advocate, further explains the basics of Istio in the video below:

What is a gRPC mode adapter?

Policy checking and telemetry data reporting are the two major functions of Istio Mixer, but Mixer itself doesn't store any telemetry data. Instead, it provides a mechanism to upload the metrics collected by the Envoy proxy to various backends. Prometheus is supported by default, and adapters exist for other backends as well (e.g., DogStatsD, Stackdriver).

Before Istio v1.0, all adapters were compiled into Mixer. That meant you had to add your adapter code to the istio/mixer/adapter directory as part of Mixer and wait for the Istio community to review and merge your code. The only other option was to build a customized Istio, so neither option was particularly easy.

It is also important to note that a poorly implemented adapter can hurt the performance or stability of Mixer, possibly causing it to crash. Fortunately, the gRPC mode adapter resolves this problem.

The gRPC adapter runs as a separate process, which means Mixer can simply pass the raw telemetry data to the corresponding adapter. All the formatting and uploading work is handled by the gRPC adapter itself, so it no longer affects Mixer.

Develop a gRPC adapter

There is already an exhaustive guide on how to develop a gRPC adapter, so we won’t reiterate that process here.

You can write an adapter and test it using mixc, a client that simulates traffic in the service mesh. Eventually, however, you will need to deploy the adapter somewhere, either as a binary file or as a Kubernetes pod. We will show you how to do that in this post.

When you walk through the guide "Mixer Out of Process Adapter Walkthrough," you create a directory that contains some configuration files:

mkdir testdata
cp sample_operator_cfg.yaml $MIXER_REPO/adapter/mygrpcadapter/testdata
cp config/mygrpcadapter.yaml $MIXER_REPO/adapter/mygrpcadapter/testdata
cp $MIXER_REPO/testdata/config/attributes.yaml $MIXER_REPO/adapter/mygrpcadapter/testdata
cp $MIXER_REPO/testdata/config/metrictemplate.yaml $MIXER_REPO/adapter/mygrpcadapter/testdata

You will use these files to start up your mixs server:

$GOPATH/out/linux_amd64/release/mixs server --configStoreURL=fs://$(pwd)/mixer/adapter/mygrpcadapter/testdata

This simplifies the testing process for your adapter (otherwise you would have to build and deploy the adapter again and again for every small functional test). When you deploy the adapter into Kubernetes, those YAML files need to be applied there as well.
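With the mixs server running, you can drive it with mixc to exercise your adapter. The commands below are a sketch; the attribute names and values are just examples, so substitute the attributes your adapter actually consumes:

```shell
# Send a simulated "report" call to the local mixs server.
# -s sets string attributes, -i sets int64 attributes.
$GOPATH/out/linux_amd64/release/mixc report \
  -s destination.service="svc.cluster.local" \
  -i response.size=1234
```

If everything is wired up correctly, your adapter's handler code should receive the generated metric instance.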

Build an image for your adapter

Once you have finished developing and testing the adapter, you need to build the Docker image. There are two key points when building the image:

Compile the adapter

We need to build a statically linked binary to avoid dynamic library dependencies:

CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -a -installsuffix cgo -o mygrpcadapter

We disabled CGO (which you may not need, depending on the libraries you use) so that the binary can run inside a minimal base Docker image (i.e., scratch or alpine).

Use a small base image

You can choose scratch or alpine as your base image. They are very small and can be pulled quickly. You might need ca-certificates if you want your adapter to access TLS-enabled endpoints:

FROM scratch
COPY ca-certificates.crt /etc/ssl/certs/ca-certificates.crt
COPY mygrpcadapter /
ENTRYPOINT ["/mygrpcadapter"]
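If you prefer alpine over scratch, you can install the CA certificates from the package manager instead of copying the file in yourself. This Dockerfile is a sketch, and the alpine version tag is just an example:

```
# alpine variant: pull CA certificates from the package index
FROM alpine:3.9
RUN apk add --no-cache ca-certificates
COPY mygrpcadapter /
ENTRYPOINT ["/mygrpcadapter"]
```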

You can push the image to Docker Hub or to a private Docker registry.
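Building and pushing might look like the following; the repository name matches the image used in the Deployment later in this post, but substitute your own:

```shell
# Build the image in the directory containing the Dockerfile and the binary
docker build -t wentaozhang/mygrpcadapter:0.1.0 .

# Push it so the Kubernetes cluster can pull it
docker push wentaozhang/mygrpcadapter:0.1.0
```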

Apply a metric template for Istio

The template is the foundation of Istio telemetry; it defines the data that Mixer dispatches to adapters. You can find it at the following location: $ISTIO_REPO/mixer/template/metric/template.yaml
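Assuming kubectl is configured against your cluster, applying the template is a single command:

```shell
# Register the metric template with the cluster
kubectl apply -f $ISTIO_REPO/mixer/template/metric/template.yaml
```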

Apply the definition of the adapter

This file is generated by mixer_codegen.sh. It tells Mixer that there is an adapter to which Mixer can pass telemetry data.
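The generated definition was copied into testdata earlier, so applying it is one more kubectl command (the path below is the one used in the walkthrough steps above):

```shell
# Register the adapter definition with the cluster
kubectl apply -f $MIXER_REPO/adapter/mygrpcadapter/testdata/mygrpcadapter.yaml
```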

Deploy the adapter as a pod

Running the adapter as a pod is better than running it as a binary outside the Kubernetes cluster, because Mixer can communicate with the adapter directly over the private network and resolve its address via Kubernetes DNS. If the adapter runs as a binary file outside Kubernetes, you have to make sure the network policy allows Mixer to talk to the external adapter.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mygrpcadapter
spec:
  selector:
    matchLabels:
      app: mygrpcadapter
  replicas: 1
  template:
    metadata:
      labels:
        app: mygrpcadapter
    spec:
      containers:
      - name: mygrpcadapter
        image: wentaozhang/mygrpcadapter:0.1.0
        command: ["/mygrpcadapter"]
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: mygrpcadapter
  name: mygrpcadapter
spec:
  selector:
    app: mygrpcadapter
  ports:
  - protocol: TCP
    port: 8888
    targetPort: 41165
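If you save the Deployment and Service above to a file (the filename here is arbitrary), you can create them and verify that the pod is running:

```shell
# Create the Deployment and Service
kubectl apply -f mygrpcadapter-deploy.yaml

# Confirm the pod and service are up
kubectl get pods -l app=mygrpcadapter
kubectl get svc mygrpcadapter
```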

Configure handler/instance/rule

This is the most important part. The definition of the adapter tells Mixer that the adapter exists; this configuration tells Mixer which address and port to connect to. It is important to note that the address should follow the Kubernetes DNS pattern:

$servicename.$namespace.svc.cluster.local:$port

apiVersion: "config.istio.io/v1alpha2"
kind: handler
metadata:
  name: h1
  namespace: istio-system
spec:
  adapter: mygrpcadapter
  connection:
    address: "mygrpcadapter.default.svc.cluster.local:8888"

apiVersion: "config.istio.io/v1alpha2"
kind: instance
metadata:
  name: i1metric
  namespace: istio-system
spec:
  template: metric
  params:
    value: request.size | 0
    dimensions:
      response_code: response.code | 400
      source_service: source.service | "unknown"
      destination_service: destination.service | "unknown"
      connection_mtls: connection.mtls | false
      response_duration: response.duration

apiVersion: "config.istio.io/v1alpha2"
kind: rule
metadata:
  name: r1
  namespace: istio-system
spec:
  actions:
  - handler: h1.istio-system
    instances:
    - i1metric
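Saving the handler, instance, and rule above into one file (the filename is arbitrary) and applying it wires everything together. You can then confirm the resources exist in the istio-system namespace:

```shell
# Apply the operator configuration
kubectl apply -f operator-cfg.yaml

# Check that the resources were created
kubectl get handlers.config.istio.io -n istio-system
kubectl get rules.config.istio.io -n istio-system
```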

Summary

By following the process outlined in this post, you can easily deploy a gRPC mode Istio Mixer adapter into a Kubernetes cluster. I have a repository containing a gRPC adapter for New Relic that uploads telemetry data to the New Relic backend. If you are interested in the implementation details of a Mixer adapter, check out newrelic-istio-adapter.

How do I get started?

Create your IBM Cloud account and install Istio on IBM Cloud Kubernetes Service. If you want more information about Istio, the official Istio guide is a good place to start.


Contact us

If you have questions or suggestions, please reach out to us via email (zwtzhang@cn.ibm.com or bjyangyy@cn.ibm.com).

Software Developer

Yang Yang

Advisory Software Engineer
