December 15, 2020 | Written by: Fabio Oliveira
Categorized: Hybrid Cloud | Open Source
IBM Research has partnered with Red Hat to bring iter8 into Kiali. Iter8 lets developers automate the progressive rollout of new microservice versions. From Kiali, developers can launch these rollouts interactively, watch their progress while iter8 shifts user traffic to the best microservice version, gain real-time insights into how competing versions (two or more) perform, and uncover trends on service metrics across versions.
Iter8 is an open source project initiated by IBM Research, enabling developers to deliver high-quality code frequently and safely. By “high-quality code” we mean software whose features will best resonate with users. “Frequently and safely” implies peace of mind when delivering code to production. These lofty goals are made possible by iter8’s state-of-the-art analytics engine, powered by novel Machine Learning (ML) techniques developed and scientifically validated by the IBM Research team that created the iter8 project.
Kiali, an open source project initiated by Red Hat, is Istio’s de facto User Interface (UI). It allows Istio users to manage the service mesh, validate and change its configuration, and visualize both microservice-level metrics and inter-service topological connectivity patterns.
IBM Research and Red Hat set out to join forces and develop an iter8 extension to Kiali. Bringing iter8 into Kiali benefits the Istio and Kiali communities as well as the iter8 community. The symbiotic iter8-Kiali integration allows Istio users to rely on Kiali for the developer-centric observability concern of assessing the quality of new code delivered to the cloud. In turn, iter8 users can benefit from a well-known Istio-specific tool to examine the insights produced by iter8’s analytics engine.
What exactly can users do with iter8’s extension to Kiali?
Iter8 supports three types of online experiments: canary releases, A/B rollouts, and A/B/n rollouts. These experiments are typically initiated from CI/CD or GitOps pipelines, but they can also be launched interactively from Kiali. Importantly, no matter how an experiment is started, users can go to Kiali to watch its progress and gain insights into the behavior of the competing microservice versions with respect to the metrics of choice.
Automated canary rollouts
A canary rollout experiment allows developers to verify the behavior of a new microservice version (the canary) based on a set of Service-Level Objectives (SLOs). These SLOs can be expressed in absolute terms or relative to the current microservice version (the baseline). For example, an absolute SLO could state that “the mean latency must be below 200 milliseconds,” whereas a relative SLO would stipulate that “the canary error rate must be at most 1.05 times that of the baseline.”
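The two example SLOs above can be made concrete with a small sketch. This is purely illustrative Python, not iter8's actual API: the class and function names, and the sample metric values, are assumptions for the sake of the example; only the thresholds (200 ms, 1.05x) come from the text.

```python
# Illustrative sketch of absolute vs. relative SLOs (not iter8's real API).
from dataclasses import dataclass

@dataclass
class VersionMetrics:
    mean_latency_ms: float  # observed mean latency of this version
    error_rate: float       # observed fraction of failed requests

def absolute_slo_met(canary: VersionMetrics, max_latency_ms: float = 200.0) -> bool:
    """Absolute SLO: 'the mean latency must be below 200 milliseconds.'"""
    return canary.mean_latency_ms < max_latency_ms

def relative_slo_met(canary: VersionMetrics, baseline: VersionMetrics,
                     max_ratio: float = 1.05) -> bool:
    """Relative SLO: 'the canary error rate must be at most 1.05 times the baseline's.'"""
    return canary.error_rate <= max_ratio * baseline.error_rate

# Hypothetical observations for the two competing versions:
baseline = VersionMetrics(mean_latency_ms=180.0, error_rate=0.020)
canary = VersionMetrics(mean_latency_ms=150.0, error_rate=0.019)

print(absolute_slo_met(canary))            # True: 150 ms < 200 ms
print(relative_slo_met(canary, baseline))  # True: 0.019 <= 1.05 * 0.020
```

A relative SLO needs live baseline measurements, which is why iter8 keeps the baseline in the experiment rather than comparing the canary against fixed thresholds alone.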
During a canary rollout experiment, iter8 treats the current version and the canary version as competitors. Iter8 periodically assesses the canary’s quality and adjusts how the user traffic is split between the two competing versions. As more data becomes available, iter8’s confidence in declaring whether the canary will succeed or fail increases; accordingly, iter8 will gradually shift the traffic towards either the current version or the canary. At the end, the winner version takes over. The canary wins if it meets the SLOs with statistical confidence; otherwise, the current version (the baseline) is declared the winner.
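The shape of that traffic-shifting loop can be sketched as follows. This is a deliberately simplified stand-in, not iter8's ML-based algorithm: the function name, the confidence input, and the 95% cap are all assumptions made for illustration.

```python
# Simplified sketch of confidence-driven traffic shifting (not iter8's actual
# algorithm): as confidence that the canary will win grows, so does its share.

def split_traffic(confidence_canary_wins: float,
                  max_canary_share: float = 0.95) -> tuple[float, float]:
    """Return (baseline_share, canary_share) of user traffic.

    confidence_canary_wins: assessed probability, in [0, 1], that the canary
    will be declared the winner. A real engine derives this statistically
    from the metrics observed so far.
    """
    canary_share = min(confidence_canary_wins, max_canary_share)
    return 1.0 - canary_share, canary_share

# As confidence rises with more data, traffic shifts gradually to the canary;
# if confidence falls instead, traffic shifts back to the baseline.
for confidence in (0.1, 0.4, 0.8):
    print(split_traffic(confidence))
```

In an Istio mesh, the resulting shares would be applied as route weights, which is also what Kiali visualizes while the experiment runs.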
One can see in Kiali, in real time, iter8’s confidence in declaring a winner, the user traffic split between the current and canary versions, how the two versions compare with respect to the metrics chosen for the SLOs, which SLOs are being violated, if any (and by how much), and how far along the experiment is.
Automated A/B and A/B/n rollouts
The goal of A/B and A/B/n rollouts is different from that of canary rollouts. In A/B and A/B/n rollouts, the winner version is the one that maximizes a reward metric, which is typically related to business concerns. For instance, developers might want to experiment with different microservice versions to identify the one that generates more revenue.
Note that iter8 can compare two (A/B) or more (A/B/n) competing versions. Even more importantly, iter8’s formulation of A/B and A/B/n rollouts is unique in that it considers both a reward metric to be maximized as well as SLOs that must be satisfied. As an example, an experiment could specify that “the conversion rate must be maximized” subject to an SLO stipulating that “the mean latency must be below 200 milliseconds.” The winner version must necessarily meet the SLO. Among the versions satisfying this requirement in our example, the one yielding the highest conversion rate wins. Throughout the experiment, iter8 gradually shifts the user traffic towards the winning version, which eventually takes over.
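The winner criterion just described, maximize the reward among the versions that satisfy every SLO, can be sketched in a few lines. The version names, metric values, and function below are hypothetical; only the criterion itself comes from the text.

```python
# Illustrative sketch of the A/B/n winner criterion described above:
# maximize the reward metric among versions that satisfy the SLO.

versions = {
    "v1": {"conversion_rate": 0.032, "mean_latency_ms": 140.0},
    "v2": {"conversion_rate": 0.041, "mean_latency_ms": 250.0},  # violates the SLO
    "v3": {"conversion_rate": 0.037, "mean_latency_ms": 180.0},
}

def pick_winner(versions, reward="conversion_rate", max_latency_ms=200.0):
    # Only versions meeting the SLO are eligible to win...
    eligible = {name: m for name, m in versions.items()
                if m["mean_latency_ms"] < max_latency_ms}
    if not eligible:
        return None  # no version satisfies the SLO; keep the baseline
    # ...and among those, the one maximizing the reward metric wins.
    return max(eligible, key=lambda name: eligible[name][reward])

print(pick_winner(versions))  # v3: v2 has a higher reward but breaks the SLO
```

Note how v2 loses despite the best conversion rate: a pure reward maximization would have promoted it, which is exactly the failure mode the SLO constraint prevents.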
In Kiali, users can watch the experiment as it progresses. Kiali shows how iter8 is splitting the traffic across versions as it explores them, the probability that each version will become the winner, whether or not a winner has been declared, how the competing versions compare with respect to the reward metric and the SLOs, and how much progress has been made in the experiment.
Trend analysis across versions
During an experiment, only the participating versions are analyzed, assessed, and compared through the lenses of iter8’s sophisticated algorithms so that a winner version can be identified and promoted. Over time, a microservice can undergo multiple experiments, leading to the promotion of many successive versions. A question that might arise is: shouldn’t the successive winners also be compared and analyzed?
As a microservice evolves with newly released code, certain undesirable patterns might be slowly and steadily emerging. For example, an increase in the microservice’s resource utilization might not be concerning (or even noticed) in the context of a single experiment, but it might turn out to be prohibitive if it steadily continues over time. Iter8 has the ability to uncover these trends on metrics across winner versions of a microservice.
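One simple way to surface such a creeping pattern is to fit a trend line to a metric across successive winners. The sketch below uses a plain least-squares slope and made-up utilization numbers; it illustrates the idea, not iter8's actual trend-analysis technique.

```python
# Illustrative cross-version trend detection (not iter8's actual algorithm):
# fit a least-squares slope to a metric across successive winner versions.

def trend_slope(values: list[float]) -> float:
    """Least-squares slope of metric values over version order (0, 1, 2, ...)."""
    n = len(values)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(values) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, values))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

# Hypothetical CPU utilization of five successive winner versions: each single
# step looks harmless, but the overall trend is steadily upward.
cpu_by_version = [0.42, 0.44, 0.45, 0.48, 0.51]
print(trend_slope(cpu_by_version) > 0)  # True: utilization is trending up
```

A per-experiment comparison would only ever see one of those small steps; it is the slope across the whole sequence of winners that exposes the problem.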
Kiali’s UI is a good medium for iter8 to present this trend analysis to users. Kiali shows all the details of experiments in which a microservice has participated and the sequence of winner versions that were promoted. Through iter8’s trend analysis, Kiali reveals how successive versions compare and how their utilization and SLO metrics are trending.
Iter8 solves the problem of automating canary releases as well as A/B and A/B/n rollouts in a principled, data-driven way through novel ML-based algorithms. The advanced statistics computed by iter8 enable users to get insights into why iter8 makes its decisions and how multiple versions of a microservice behave and trend. Developers adopting Istio can now consume these developer-centric insights in Kiali to learn about their code and their user preferences and how to improve their businesses with less risk.