What is Istio?

Istio is a configurable, open source service-mesh layer that connects, monitors and secures the containers in a Kubernetes cluster.

At this writing, Istio works natively with Kubernetes only, but its open source nature makes it possible for anyone to write extensions enabling Istio to run on any cluster software. Today, we focus on using Istio with Kubernetes, its most popular use case.

Kubernetes is a container orchestration tool, and one core unit of Kubernetes is a pod. A pod consists of one or more containers, along with file systems or other components. A microservices architecture might have a dozen different pods, each representing a different microservice. Kubernetes manages the availability and resource consumption of those pods, adding replicas as demand increases with the pod autoscaler. Istio injects additional containers into the pod to add security, management and monitoring.

Because it is open source, Istio can run on any public cloud provider that supports it and any private cloud with willing administrators.




The network service mesh

When organizations move to microservices, they need to support dozens or hundreds of specific applications. Managing those endpoints separately means supporting many virtual machines (VMs) and handling their demand. Cluster software like Kubernetes can create pods and scale them up, but Kubernetes does not provide routing, traffic rules, or strong monitoring or debugging tools.

Enter the service mesh.

As the number of services increases, the number of potential communication paths grows much faster: two services have only two communication paths, three services have six, and 10 services have 90 (in general, n services have n × (n − 1) possible paths). A service mesh provides a single way to configure those communication paths by creating a policy for the communication.

A service mesh instruments the services and directs communication traffic according to a predefined configuration. That means that instead of configuring a running container (or writing code to do so), an administrator can provide the configuration to the service mesh and have it complete that work. Previously, this logic always had to be built into the web servers and into each service's own communication code.

The most common way to do this in a cluster is to use the sidecar pattern. A sidecar is a new container inside the pod that routes and observes communications traffic between services and containers.
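In Kubernetes, Istio can inject that sidecar automatically: labeling a namespace is enough to have every new pod in it receive an istio-proxy container alongside the application container. A minimal sketch (the namespace name here is hypothetical):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: claims                  # hypothetical namespace
  labels:
    istio-injection: enabled    # Istio's injection webhook adds the sidecar proxy to new pods here
```

Pods created in that namespace then carry the extra sidecar container without any change to the application's own deployment files.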

Istio and Kubernetes

As mentioned earlier, Istio layers on top of Kubernetes, adding containers that are invisible to the programmer and administrator. Called sidecar containers, these act as a person in the middle, directing traffic and monitoring the interactions between components. The two work in combination in three ways: configuration, monitoring and management.

Configuration

The primary method to set configuration with Kubernetes is the kubectl command, commonly "kubectl apply -f <filename>", where the file is a YAML file. Istio users can either apply new and different types of YAML files with kubectl or use the optional istioctl command.
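As a sketch of what one of those YAML files can look like (the service name and file name are hypothetical), an Istio routing rule is applied with the same kubectl workflow:

```yaml
# reviews-route.yaml (hypothetical): send all traffic for the "reviews"
# service to the pods in subset v1. Apply with: kubectl apply -f reviews-route.yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews-route
spec:
  hosts:
  - reviews               # Kubernetes service name (assumed)
  http:
  - route:
    - destination:
        host: reviews
        subset: v1        # subset defined in a matching DestinationRule
```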

Monitoring

With Istio, you can easily monitor the health of your applications running with Kubernetes. Istio's instrumentation can manage and visualize the health of applications, providing more insight than just the general monitoring of clusters and nodes that Kubernetes provides.

Management

Because the interface for Istio is essentially the same as Kubernetes, managing it takes almost no additional work. In fact, Istio allows the user to create policies that impact and manage the entire Kubernetes cluster, reducing the time to manage each cluster while eliminating the need for custom management code.

Benefits of Istio

The major benefits of a service mesh include capabilities for improved debugging, monitoring, routing, security and ease of use. That is, with Istio, it takes less effort to manage a wider group of services.

Improved debugging

Say, for example, that a service has multiple dependencies. The pay_claim service at an insurance company calls the deductible_amt service, which calls the is_member_covered service, and so on. A complex dependency chain might have 10 or 12 service calls. When one of those 12 is failing, there will be a cascading set of failures that result in some sort of 500 error, 400 error or possibly no response at all.

To debug that set of calls, you can use something like a stack trace. On the front end, client-side developers can see what elements are pulled back from web servers, in what order, and examine them. Frontend programmers can get a waterfall diagram to aid in debugging.

What a browser waterfall does not show is what happens inside the data center: how a single incoming call fans out to four other web services and which of them respond more slowly. Later, we see how Istio provides tools to trace those service-to-service calls in a similar waterfall-style diagram.

Monitoring and observability

DevOps teams and IT administrators might want to observe this traffic to see latency, time-in-service, errors as a percentage of traffic, and so on. Often, they want to see a dashboard. A dashboard provides a visualization of the sum or average of those metrics over time, perhaps with the ability to drill down to a specific node, service or pod. Kubernetes does not provide these functions natively.

Policy

By default, Kubernetes allows every pod to send traffic to every other pod. Istio allows administrators to create a policy to restrict which services can work with each other, so that, for example, services can only call other services that are true dependencies. Another policy that keeps services up is a rate limit, which stops excess traffic from clogging a service and helps prevent denial-of-service attacks.
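As an illustration of the first kind of policy (the namespace, labels and service account names are assumptions, borrowing the insurance example above), an Istio AuthorizationPolicy can limit which callers a service accepts:

```yaml
# Hypothetical policy: only the deductible_amt workload's service account may
# call is_member_covered; once an ALLOW policy selects a workload, requests
# that match no rule are denied.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: is-member-covered-callers
  namespace: claims
spec:
  selector:
    matchLabels:
      app: is-member-covered
  action: ALLOW
  rules:
  - from:
    - source:
        principals: ["cluster.local/ns/claims/sa/deductible-amt"]
```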

Routing and load balancing

By default, Kubernetes provides round-robin load balancing. If there are six pods that provide a microservice, Kubernetes provides a load balancer, or service, that sends requests to each pod in turn and then starts over. However, sometimes a company deploys different versions of the same service in production.

The simplest example of this might be a blue-green deployment. In that case, the team can deploy an entirely new version of the application into production without sending production users to it. After promoting the new version, the company can keep the old servers around to make the switch back quick in the event of failure.

With Istio, this is as simple as using tagging in a configuration file. Administrators can also use labels to indicate what type of service to connect to and build rules based on headers. So, for example, beta users can route to a canary pod with the latest and greatest build, while regular users go to the stable production build.
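A sketch of such a rule follows (the service name, version labels and header are assumptions): a DestinationRule names the two versions as subsets, and a VirtualService sends beta users, identified by a request header, to the canary subset while everyone else stays on the stable one.

```yaml
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: reviews-versions
spec:
  host: reviews
  subsets:
  - name: v1
    labels:
      version: v1        # pod label on the stable deployment
  - name: v2
    labels:
      version: v2        # pod label on the canary deployment
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews-canary
spec:
  hosts:
  - reviews
  http:
  - match:
    - headers:
        x-beta-user:     # assumed header set for beta users
          exact: "true"
    route:
    - destination:
        host: reviews
        subset: v2
  - route:
    - destination:
        host: reviews
        subset: v1
```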

Circuit breaking

If a service is overloaded or down, additional requests fail while continuing to overload the system. Because Istio tracks errors and delays, it can force a pause after a specific number of failed requests (set by policy), allowing the service to recover. You can enforce this policy across the entire cluster by creating a small text file and directing Istio to use it as a new policy.
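That small text file can be a DestinationRule with outlier detection; the service name and thresholds below are illustrative assumptions:

```yaml
# Hypothetical circuit breaker: after 5 consecutive 5xx errors, eject the
# failing pod from the load-balancing pool for 30 seconds so it can recover.
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: reviews-circuit-breaker
spec:
  host: reviews
  trafficPolicy:
    connectionPool:
      http:
        http1MaxPendingRequests: 100   # cap on queued requests before new ones are rejected
    outlierDetection:
      consecutive5xxErrors: 5
      interval: 10s
      baseEjectionTime: 30s
      maxEjectionPercent: 50
```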

Security

Istio provides identity, policy and encryption by default, along with authentication, authorization and audit (AAA). Any pods under management that communicate with others use encrypted traffic, preventing eavesdropping. The identity service, combined with encryption, helps to ensure that no unauthorized user can fake or "spoof" a service call. AAA provides security and operations professionals with the tools they need to monitor, with less overhead.
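For example, mutual TLS between services can be required mesh-wide with a single PeerAuthentication resource; this is a minimal sketch, assuming Istio's control plane runs in the default istio-system namespace:

```yaml
# Hypothetical mesh-wide policy: workloads only accept mutual-TLS traffic,
# so service-to-service calls are encrypted and both sides are authenticated.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system    # placing it in the root namespace applies it mesh-wide
spec:
  mtls:
    mode: STRICT
```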

Simplified administration

Traditional applications still need the identity, policy and security features that Istio offers. That has programmers and administrators working at the wrong level of abstraction, reimplementing the same security rules over and over for every service. Istio allows them to work at the right level, setting policies for the cluster through a single control panel. 
