
How to Deploy a gRPC Mode Istio Mixer Adapter into Kubernetes


Overview of Istio and Mixer

Istio is a platform that provides a way for developers to seamlessly connect, manage, and secure networks of different microservices. It is a joint project of IBM, Google, and Lyft. Istio can be divided into two parts: the data plane and the control plane.

The data plane is implemented by the Envoy proxy, which is injected as a sidecar into each microservice POD. The Envoy proxy handles inbound and outbound traffic between services. The control plane is composed of Pilot, Mixer, and Citadel. Pilot configures the proxies at runtime, and Mixer is the central component used by the proxies and microservices to enforce policies (e.g., quotas, authorization, authentication, rate limits, request tracing, and telemetry collection).

JJ Asghar, an IBM Cloud Developer Advocate, further explains the basics of Istio in an introductory video.

What is a gRPC mode adapter?

Policy checking and telemetry data reporting are the two major functions of Istio Mixer, but Mixer itself doesn't store any telemetry data. Instead, it provides a mechanism to upload the metrics collected by the Envoy proxy to various backends. Prometheus is supported by default, and adapters for other backends exist as well (e.g., DogStatsD, Stackdriver).

Before Istio v1.0, all adapters were compiled into Mixer. That meant you had to add your adapter code into the istio/mixer/adapter directory as part of Mixer and wait for the Istio community to review and merge your code. The only other option was to build a customized Istio, so neither option was particularly easy for the user.

It is also important to note that a poorly implemented adapter can degrade the performance or stability of Mixer, possibly causing it to crash. Fortunately, the gRPC mode adapter resolves this problem.

The gRPC adapter runs as a separate process, which means that Mixer simply passes the raw telemetry data to the corresponding adapter. All the formatting and uploading is handled by the gRPC adapter itself, so none of it affects Mixer.

Develop a gRPC adapter

There is already an exhaustive guide on how to develop a gRPC adapter, so we won’t reiterate that process here.

You can write an adapter and test it using mixc, which is a client that simulates traffic in the service mesh. Eventually, however, you will need to deploy the adapter somewhere as a binary file or a Kubernetes POD. We will show you how to do that in this post.

As you walk through the guide "Mixer Out of Process Adapter Walkthrough," you will create a directory that contains some configuration files:

mkdir testdata
cp sample_operator_cfg.yaml $MIXER_REPO/adapter/mygrpcadapter/testdata
cp config/mygrpcadapter.yaml $MIXER_REPO/adapter/mygrpcadapter/testdata
cp $MIXER_REPO/testdata/config/attributes.yaml $MIXER_REPO/adapter/mygrpcadapter/testdata
cp $MIXER_REPO/testdata/config/metrictemplate.yaml $MIXER_REPO/adapter/mygrpcadapter/testdata

You will use these files to start up your mixs server:

$GOPATH/out/linux_amd64/release/mixs server --configStoreURL=fs://$(pwd)/mixer/adapter/mygrpcadapter/testdata

This simplifies the testing process for your adapter (otherwise you would have to build and deploy the adapter again and again for every small function test). When you deploy the adapter into Kubernetes, you will need to apply those YAML files to the cluster instead.
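
For example, with mixs running against the testdata directory, you can simulate a report call with mixc. The attribute flags below are illustrative, not taken from the walkthrough verbatim; adjust them to match the attributes your template actually uses:

```shell
# Simulate a service-mesh report; Mixer dispatches the resulting metric
# instance to the adapter. -s sets string attributes, -i sets int64 attributes.
$GOPATH/out/linux_amd64/release/mixc report \
  -s destination.service="svc.cluster" \
  -i response.size=1234
```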

Build an image for your adapter

Once you have finished developing and testing the adapter, you need to build a Docker image for it. There are two key points when building the image:

Compile the adapter

We need to build a statically linked binary to avoid shared library dependencies:

CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -a -installsuffix cgo -o mygrpcadapter

We disabled CGO so that the binary can run inside a minimal base Docker image (i.e., scratch or alpine); depending on the libraries you use, you may not need to.
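
Before baking the binary into a scratch image, it is worth verifying that it really is statically linked (a dynamically linked binary would fail to start in scratch):

```shell
# For a static binary, file reports "statically linked" and ldd refuses
# with "not a dynamic executable" (and exits non-zero, hence the || true).
file mygrpcadapter
ldd mygrpcadapter || true
```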

Use a small base image

You can choose scratch or alpine as your base image. They are very small and can be pulled quickly. You might need ca-certificates if you want your adapter to access TLS-enabled endpoints:

FROM scratch
COPY ca-certificates.crt /etc/ssl/certs/ca-certificates.crt
ADD mygrpcadapter /
ENTRYPOINT ["/mygrpcadapter"]

You can then push the image to Docker Hub or a private Docker registry.
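
For example, with the Dockerfile and the static binary in the current directory, the build and push might look like this (substitute your own registry and repository for wentaozhang/mygrpcadapter):

```shell
# Build the image from the Dockerfile above and push it to a registry.
docker build -t wentaozhang/mygrpcadapter:0.1.0 .
docker push wentaozhang/mygrpcadapter:0.1.0
```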

Apply a metric template for Istio

The template is the foundation of Istio telemetry, and it defines the data that Mixer will dispatch to adapters. You can find it at the following location: $ISTIO_REPO/mixer/template/metric/template.yaml

Apply the definition of the adapter

This file is generated during the adapter's code-generation step. It tells Mixer that there is an adapter to which it can pass telemetry data.
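
Both the template and the adapter definition can be applied with kubectl. The paths below assume the walkthrough's repository layout and the testdata directory created earlier:

```shell
# Register the metric template and the adapter definition with Istio.
kubectl apply -f $ISTIO_REPO/mixer/template/metric/template.yaml
kubectl apply -f $MIXER_REPO/adapter/mygrpcadapter/testdata/mygrpcadapter.yaml
```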

Deploy adapter as a POD

Running the adapter as a POD is better than running it as a binary outside of the Kubernetes cluster because Mixer can communicate with the adapter directly over the private network and resolve its address via Kubernetes DNS. If the adapter runs as a binary outside Kubernetes, you have to make sure the network policy allows Mixer to talk to the outside adapter.
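
As a quick sketch, the in-cluster name Mixer dials is composed of the Service name, its namespace, and the Service port (the values here are the ones used in this walkthrough):

```shell
# Compose the in-cluster address for a Service named "mygrpcadapter"
# in the "default" namespace exposing port 8888.
SERVICE=mygrpcadapter
NAMESPACE=default
PORT=8888
echo "${SERVICE}.${NAMESPACE}.svc.cluster.local:${PORT}"
# prints mygrpcadapter.default.svc.cluster.local:8888
```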

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mygrpcadapter
  labels:
    app: mygrpcadapter
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mygrpcadapter
  template:
    metadata:
      labels:
        app: mygrpcadapter
    spec:
      containers:
      - name: mygrpcadapter
        image: wentaozhang/mygrpcadapter:0.1.0
        command: ["/mygrpcadapter"]

apiVersion: v1
kind: Service
metadata:
  name: mygrpcadapter
  labels:
    app: mygrpcadapter
spec:
  selector:
    app: mygrpcadapter
  ports:
  - protocol: TCP
    port: 8888
    targetPort: 41165
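
Assuming you save the two manifests above as deployment.yaml and service.yaml (the file names are arbitrary), deploying and checking the adapter looks like this:

```shell
kubectl apply -f deployment.yaml
kubectl apply -f service.yaml

# Verify the adapter POD is running and the Service exposes port 8888.
kubectl get pods -l app=mygrpcadapter
kubectl get svc mygrpcadapter
```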

Configure handler/instance/rule

This is the most important part. The definition of the adapter tells Mixer that the adapter exists; this configuration tells Mixer which address and port to connect to. It is important to note that the address should follow the Kubernetes DNS naming pattern (<service>.<namespace>.svc.cluster.local:<port>):


apiVersion: config.istio.io/v1alpha2
kind: handler
metadata:
  name: h1
  namespace: istio-system
spec:
  adapter: mygrpcadapter
  connection:
    address: "mygrpcadapter.default.svc.cluster.local:8888"

apiVersion: config.istio.io/v1alpha2
kind: instance
metadata:
  name: i1metric
  namespace: istio-system
spec:
  template: metric
  params:
    value: request.size | 0
    dimensions:
      response_code: response.code | 400
      source_service: source.service | "unknown"
      destination_service: destination.service | "unknown"
      connection_mtls: connection.mtls | false
      response_duration: response.duration

apiVersion: config.istio.io/v1alpha2
kind: rule
metadata:
  name: r1
  namespace: istio-system
spec:
  actions:
  - handler: h1.istio-system
    instances:
    - i1metric
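
Save the handler, instance, and rule to a single file and apply it to the cluster. The file name below matches the one used earlier in this walkthrough:

```shell
kubectl apply -f sample_operator_cfg.yaml
```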


By following the process outlined in this post, you can easily deploy a gRPC mode Istio Mixer adapter into a Kubernetes cluster. I have a repository containing a gRPC adapter for New Relic, which uploads telemetry data to the New Relic backend. If you are interested in the implementation details of a Mixer adapter, feel free to explore newrelic-istio-adapter.

How do I get started?

Create your IBM Cloud account and install Istio on IBM Cloud Kubernetes Service. If you want more information about Istio, the official Istio guide is a good place to start.


Software Developer

Yang Yang

Advisory Software Engineer
