Learn more about Istio—open technology that provides a way for developers to seamlessly connect, manage, and secure networks of different microservices.
Istio is a configurable, open source service-mesh layer that connects, monitors, and secures the containers in a Kubernetes cluster. At this writing, Istio works natively with Kubernetes only, but its open source nature makes it possible for anyone to write extensions enabling Istio to run on any cluster software. Today, we'll focus on using Istio with Kubernetes, its most popular use case.
Kubernetes is a container orchestration tool, and one core unit of Kubernetes is a pod. A pod consists of one or more containers, along with file systems or other components. A microservices architecture might have a dozen different pods, each representing a different microservice. Kubernetes manages the availability and resource consumption of pods, adding pods as demand increases with the pod autoscaler. Istio injects additional containers into the pod to add security, management, and monitoring.
Because it is open source, Istio can run on any public cloud provider that supports it and any private cloud with willing administrators.
When organizations move to microservices, they need to support dozens or hundreds of specific applications. Managing those endpoints separately means supporting a large number of virtual machines (VMs) and scaling them to meet demand. Cluster software like Kubernetes can create pods and scale them up, but Kubernetes does not provide routing, traffic rules, or strong monitoring or debugging tools.
Enter the service mesh.
As the number of services increases, the number of potential communication paths increases quadratically. Two services have only two communication paths. Three services have six, while 10 services have 90 (in general, n services have n × (n − 1) directed paths). A service mesh provides a single way to configure those communication paths by creating a policy for the communication.
A service mesh instruments the services and directs communications traffic according to a predefined configuration. That means that instead of configuring a running container (or writing code to do so), an administrator can provide configuration to the service mesh and have it complete that work. Previously, that configuration work always had to be done by hand, both for web servers and for service-to-service communication.
The most common way to do this in a cluster is to use the sidecar pattern. A sidecar is a new container, inside the pod, that routes and observes communications traffic between services and containers.
As mentioned earlier, Istio layers on top of Kubernetes, adding containers that are essentially invisible to the programmer and administrator. Called "sidecar" containers, these act as a "person in the middle," directing traffic and monitoring the interactions between components. The two work in combination in three ways: configuration, monitoring, and management.
The primary method to set configuration with Kubernetes is the kubectl command, commonly "kubectl apply -f <filename>", where the file is a YAML file. Istio users can either apply new and different types of YAML files with kubectl or use the new, optional istioctl command.
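For example (a minimal sketch; the filename and the "demo" profile here are illustrative), Istio resources are applied exactly like any other Kubernetes manifest, while istioctl adds its own validation and installation commands:

```shell
# Apply an Istio configuration file just like any Kubernetes manifest
kubectl apply -f virtual-service.yaml

# Or use Istio's own CLI: check the configuration in the current namespace
istioctl analyze

# istioctl can also install Istio itself, using a built-in configuration profile
istioctl install --set profile=demo
```

Because Istio resources are ordinary Kubernetes custom resources, they fit into whatever YAML-based workflow a team already uses.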
With Istio, you can easily monitor the health of your applications running with Kubernetes. Istio's instrumentation can manage and visualize the health of applications, providing more insight than just the general monitoring of cluster and nodes that Kubernetes provides.
Because the interface for Istio is essentially the same as Kubernetes, managing it takes almost no additional work. In fact, Istio allows the user to create policies that impact and manage the entire Kubernetes cluster, reducing time to manage each cluster while eliminating the need for custom management code.
The major benefits of a service mesh include capabilities for improved debugging, monitoring, routing, security, and leverage. That is, with Istio, it will take less effort to manage a wider group of services.
Say, for example, that a service has multiple dependencies. The pay_claim service at an insurance company calls the deductible_amt service, which calls the is_member_covered service, and so on. A complex dependency chain might have 10 or 12 service calls. When one of those 12 is failing, there will be a cascading set of failures that result in some sort of 500 error, 400 error, or possibly no response at all.
To debug that set of calls, you can use something like a stack trace. On the frontend, client-side developers can see what elements are pulled back from web servers, in what order, and examine them. Frontend programmers can get a waterfall diagram to aid in debugging.
What a frontend waterfall does not show is what happens inside the data center: how a single request fans out to other web services, and which of those respond more slowly. Later, we will see how Istio provides tools to trace those backend calls in a similar waterfall diagram.
Monitoring and observability
DevOps teams and IT administrators may want to observe the traffic to see latency, time-in-service, errors as a percentage of traffic, and so on. Often, they want to see a dashboard. A dashboard provides a visualization of the sum or average of those metrics over time, perhaps with the ability to "drill down" to a specific node, service, or pod. Kubernetes does not provide this functionality natively.
Security
By default, Kubernetes allows every pod to send traffic to every other pod. Istio allows administrators to create a policy to restrict which services can work with each other, so that, for example, services can only call other services that are true dependencies. Another policy that keeps services up is a rate limit, which stops excess traffic from clogging a service and prevents denial-of-service attacks.
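As a sketch of such a policy (the namespace, service names, labels, and service account here are hypothetical, echoing the insurance example above), an Istio AuthorizationPolicy can restrict a service so that only its true dependencies may call it:

```yaml
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: deductible-amt-allow
  namespace: claims
spec:
  # Apply only to pods labeled as the deductible_amt service
  selector:
    matchLabels:
      app: deductible-amt
  action: ALLOW
  rules:
  - from:
    - source:
        # Only the pay_claim service account may call this workload
        principals: ["cluster.local/ns/claims/sa/pay-claim"]
```

Once an ALLOW policy like this is in place, requests from any other source are denied by default.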
Routing and load balancing
By default, Kubernetes provides round-robin load balancing. If there are six pods that provide a microservice, Kubernetes will provide a load balancer, or "service," that sends requests to each pod in increasing order, then it will start over. However, sometimes a company will deploy different versions of the same service in production.
The simplest example of this may be a blue/green deploy. In that case, the team might stand up an entirely new version of the application in production without sending production users to it. After promoting the new version, the company can keep the old servers around to make switching back quick in the event of failure.
With Istio, this is as simple as using tagging in a configuration file. Administrators can also use labels to indicate what type of service to connect to and build rules based on headers. So, for example, beta users can route to a ‘canary’ pod with the latest and greatest build, while regular users go to the stable production build.
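A routing rule of that kind might look like the following sketch (the "reviews" service name, the header, and the subset labels are hypothetical; the subsets themselves would be defined in a matching DestinationRule):

```yaml
apiVersion: networking.istio.io/v1
kind: VirtualService
metadata:
  name: reviews-route
spec:
  hosts:
  - reviews
  http:
  # Beta users, identified by a request header, route to the canary build
  - match:
    - headers:
        x-beta-user:
          exact: "true"
    route:
    - destination:
        host: reviews
        subset: canary
  # Everyone else goes to the stable production build
  - route:
    - destination:
        host: reviews
        subset: stable
```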
If a service is overloaded or down, additional requests will fail while continuing to overload the system. Because Istio is tracking errors and delays, it can force a pause—allowing a service to recover—after a specific number of requests set by policy. You can enforce this policy across the entire cluster by creating a small text file and directing Istio to use it as a new policy.
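One way to express such a circuit-breaking policy is outlier detection in a DestinationRule (a sketch; the host name and the thresholds are illustrative, not recommendations):

```yaml
apiVersion: networking.istio.io/v1
kind: DestinationRule
metadata:
  name: reviews-circuit-breaker
spec:
  host: reviews
  trafficPolicy:
    outlierDetection:
      # Eject a pod from the load-balancing pool after 5 consecutive 5xx errors...
      consecutive5xxErrors: 5
      # ...checking every 30 seconds...
      interval: 30s
      # ...and keep it out for at least 60 seconds so it can recover
      baseEjectionTime: 60s
```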
Istio provides identity, policy, and encryption by default, along with authentication, authorization, and audit (AAA). Any pods under management that communicate with others use encrypted traffic, preventing observation by third parties. The identity service, combined with encryption, ensures that no unauthorized user can fake (or "spoof") a service call. AAA gives security and operations professionals the tools they need to monitor, with less overhead.
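Encryption between pods, for instance, can be enforced with a single PeerAuthentication resource (a sketch; placing it in the istio-system namespace makes it the mesh-wide default):

```yaml
apiVersion: security.istio.io/v1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system
spec:
  # STRICT mode requires mutual TLS for all service-to-service traffic
  mtls:
    mode: STRICT
```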
Traditional applications still need the identity, policy, and security features that Istio offers. Without a mesh, programmers and administrators work at the wrong level of abstraction, reimplementing the same security rules over and over for every service. Istio allows them to work at the right level: setting policy for the cluster through a single control plane. At the same time, Istio's access controls, dashboards, and debugging tools, described below, can be added at the command line rather than through a web page.
Istio 1.1 includes a new add-on called Kiali, which provides a web-based visualization. You can use it to track service requests, drill into details, or even export the service request history as JSON to query and format in your own way. Kiali's workload graph offers a real-time dependency graph of the services that actually call each other, generated from observed traffic.
Trace service calls
The Jaeger service, a component of Istio, provides tracing for any given service. In this example, we’ve traced the product page. Every dot in the trace view represents a service call. By clicking on a dot, we can “drill down” into the waterfall diagram to follow the exact service requests and responses.
We can also look more closely at the product page itself. There we can see that the errors are in the product page, while the details service returned successfully.
Istio comes with many dashboards out of the box to monitor system health and performance. These can measure CPU and memory utilization, traffic demand, the number of 400 and 500 errors, time to serve requests, and more. Best of all, they are available by simply installing and running Istio and adding Grafana, one of the open source dashboard tools included with Istio. Istio also provides two other visualization tools: Kiali and Jaeger.
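Once Istio is installed with those add-ons, istioctl can open each dashboard locally (it sets up the port-forwarding for you):

```shell
istioctl dashboard grafana   # metrics dashboards
istioctl dashboard kiali     # workload graph and service visualization
istioctl dashboard jaeger    # distributed tracing
```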
The Istio website includes helpful documentation and instructions for getting started with Istio.
An enterprise container platform, built around Kubernetes and open source technologies such as Istio, provides orchestration across multiple public and private clouds that unifies your environments for improved business and operational performance. It’s a key component of an open hybrid cloud strategy that lets you avoid vendor lock-in, build and run workloads anywhere with consistency, and optimize and modernize all of your IT.