February 13, 2019 | By Doug Davis and Ram Vennam | 6 min read

Istio and Knative are poised to change how application developers use and view Kubernetes

Most people already know Kubernetes as the de facto hosting platform for container-based applications. If you manage a Kubernetes cluster, you probably already know about many of its extensibility points from the customizations you may have installed. You may even have developed one yourself, such as a custom scheduler, or gone so far as to extend the Kubernetes resource model by creating your own Custom Resource Definition (CRD) along with a controller to manage those new resources.

While many options are available for extending Kubernetes, most of them tend to be developed for the benefit of Kubernetes itself as a hosting environment (i.e., to help manage the applications running within it). This recently changed, however, with the introduction of two new projects, Istio and Knative, which, when combined, will radically change how application developers use and view Kubernetes.

Let’s explore these two projects and explain why they could cause a significant shift in Kubernetes application developers’ lives.

Istio: Next-gen microservice network management

Istio, a collaboration between IBM, Google, and Lyft, was introduced in 2017 as an open source project that provides a language-agnostic way to connect, secure, manage, and monitor microservices. Built on open technologies such as Envoy, Prometheus, Grafana, and Jaeger, it provides a service mesh that includes the following capabilities:

  • Traffic management, such as canary deployments and A/B testing

  • Gathering, visualizing, and exporting detailed metrics and traces across your microservices

  • Service authentication, authorization, and automatic traffic encryption

  • Enforcement of mesh-wide policies, such as rate limiting and white/blacklisting

Istio does all of this (and more) without any modifications to the application itself. It extends Kubernetes with new CRDs and injects Envoy proxy sidecars that run next to your application to deliver this control and management functionality.
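To give a sense of what this looks like in practice, below is a minimal sketch of one of those CRDs, an Istio VirtualService that splits traffic between two versions of a service; the service name, subsets, and weights are purely illustrative.

```yaml
# Hypothetical example: send 90% of requests to the v1 subset of a
# "reviews" service and 10% to v2 (a simple canary split). The subsets
# themselves would be defined in a DestinationRule (not shown).
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 90
    - destination:
        host: reviews
        subset: v2
      weight: 10
```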

Istio architecture and components

Looking under the covers, the Istio architecture is split into two planes:

  • Data plane: Composed of a set of intelligent proxies (Envoy) deployed as sidecars that mediate and control all network communication among microservices.

  • Control plane: Responsible for managing and configuring the proxies to route traffic and for enforcing policies at runtime.

Istio is made up of the following components:

  • Envoy: The sidecar proxies running alongside your applications

  • Pilot: Manages configuration and propagates it to the proxies across the mesh

  • Mixer: Handles policy and access control and gathers telemetry data

  • Citadel: Handles identity, encryption, and credential management

  • Galley: Validates user-authored Istio API configuration

Istio as a building block in the stack—enabling new technologies to be built on top

While all of this by itself is pretty exciting, and Istio is generating quite a buzz and seeing real adoption in the industry, it's still targeted toward a DevOps engineer/operator persona: someone responsible for administrative tasks on your Kubernetes cluster and applications. Yes, mere mortal application developers could configure Istio routing and policies themselves, but in practice, it's not clear they will do so (or will want to). They just want to focus on their application's code, not all of the details associated with managing their network configuration.

Istio adds to Kubernetes many of the missing features required for managing microservices, and it moves the needle closer to a seamless platform where developers can deploy their code without any configuration. Just like Kubernetes, Istio has a clearly defined focus, and it delivers on it well. If you view Istio as a building block, or a layer in the stack, it enables new technologies to be built on top.

And, that’s where Knative comes into the picture.

Knative: A new way to manage your application

Like Istio, Knative extends Kubernetes to add some new key features, most notably the following:

  • A new abstraction for defining the deployment of your application, enabling a set of rich features aimed at optimizing its resource utilization—in particular, “scale to zero”

  • The ability to build container images within your Kubernetes cluster

  • Easy registration of event sources and management of events to enable your loosely coupled event-driven applications

Knative Serving

Starting with the first item, there’s a Knative component called “serving” that is responsible for running, exposing, and scaling your applications. To achieve this, a new resource called a Knative Service is defined (not to be confused with the core Kubernetes Service resource). The Knative Service is actually more akin to the Kubernetes Deployment in that it defines what image to run for your application, along with some metadata controlling how to manage it.
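As a rough illustration of how compact that definition can be, here is a minimal sketch of a Knative Service; the name and image are placeholders, and the exact schema has shifted across Knative releases (early versions used a v1alpha1 API with a different shape), so treat it as the general idea rather than a definitive manifest.

```yaml
# Hypothetical example: a Knative Service pointing at a container image.
# Knative creates the underlying Deployment, Route, and autoscaling
# machinery on your behalf.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: helloworld
spec:
  template:
    spec:
      containers:
      - image: docker.io/example/helloworld   # placeholder image
        env:
        - name: TARGET
          value: "world"
```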

The key difference between a Knative Service and a Deployment is that a Knative Service can be scaled down to zero instances when the system detects that it is not being used. For those familiar with serverless platforms, the concept here is the same: you are saved from the cost of continually running at least one instance. For this reason, Knative is often discussed as a serverless hosting environment, but, in reality, it can be used to host any type of application (not just functions); serverless is simply one of the bigger use cases driving its design.

Within the Knative Service, there's also the ability to specify a roll-out strategy for switching from one version of your application to another. For example, you can specify that only a small percentage of incoming requests be routed to the new version of the application and then slowly increase that percentage over time. Istio is leveraged to manage this dynamic network routing. Along with this, the Service includes its Route, or endpoint URL, which means that Knative sets up all of the Kubernetes and Istio networking, load-balancing, and traffic-splitting associated with that endpoint for you.
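As a sketch of what that roll-out configuration can look like, the traffic block below sends 90% of requests to an existing revision and 10% to the latest one; the revision name, percentages, and tag are invented for illustration, and the field names follow the later serving.knative.dev/v1 API.

```yaml
# Hypothetical example: a traffic split inside a Knative Service spec.
spec:
  traffic:
  - revisionName: helloworld-00001   # placeholder name of the known-good revision
    percent: 90
  - latestRevision: true             # the most recently created revision
    percent: 10
    tag: candidate                   # also exposes the new revision at a named URL for testing
```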

Knative Build

One of the other big features available in the Knative Service is the ability to specify how the image used for deployment should be built. In a Kubernetes Deployment, the image is assumed to already be built and available via some container image registry. However, this requires the developer to have a build process that is separate from their application deployment. The Knative Service allows for this all to be combined into one—saving the developer time.

This “build” component that is referenced from the Service is the second key component of the Knative project. While you have the flexibility to define pretty much any type of build process you want, typically the build steps will look very similar to what developers do today: extract the application source code from a repository (e.g., GitHub), build it into a container image, and push it to an image registry. The key aspect here, though, is that this is now all done within the definition of the Knative Service resource and does not require a separately managed workflow.
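A hedged sketch of what such a build definition looked like at the time is shown below (the Knative Build component has since been superseded by Tekton Pipelines); the repository URL, build template, and image name are placeholders.

```yaml
# Hypothetical example: pull source from Git, build it with the Kaniko
# build template, and push the resulting image to a registry.
apiVersion: build.knative.dev/v1alpha1
kind: Build
metadata:
  name: example-build
spec:
  source:
    git:
      url: https://github.com/example/app.git   # placeholder repository
      revision: master
  template:
    name: kaniko                                 # assumes the Kaniko build template is installed
    arguments:
    - name: IMAGE
      value: docker.io/example/app               # placeholder image destination
```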

Knative Eventing

This brings us to the third, and final, component of the Knative project: “eventing.” With this component, you can define and manage subscriptions to event producers and control how the received events are choreographed through your applications. For example, an incoming event could be sent directly to a single application, sent to multiple interested applications, or processed as part of a complicated workflow involving multiple event consumers.
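As a rough sketch of how such a subscription might be registered (the eventing API was still in alpha at the time of writing and has evolved since), a GitHub event source could be wired to a Knative Service along these lines; the repository, secret, and target Service names are all placeholders.

```yaml
# Hypothetical example: subscribe to GitHub "push" events and deliver
# them to a Knative Service named "revision-creator".
apiVersion: sources.eventing.knative.dev/v1alpha1
kind: GitHubSource
metadata:
  name: app-push-events
spec:
  eventTypes:
  - push
  ownerAndRepository: example/app        # placeholder owner/repository
  accessToken:
    secretKeyRef:
      name: github-secret                # placeholder Secret holding a GitHub access token
      key: accessToken
  secretToken:
    secretKeyRef:
      name: github-secret
      key: secretToken
  sink:
    apiVersion: serving.knative.dev/v1alpha1
    kind: Service
    name: revision-creator               # placeholder Knative Service that handles the event
```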

Defining an entire workflow

Bringing this all together, it should be clear how these three components, working together, could be leveraged to define the entire workflow for an application’s lifecycle. A simplistic example might be as follows:

  • A developer pushes a new version of their code to a GitHub repository.

  • A GitHub event is generated as a result of the push.

  • The push event is received by Knative and passed along to some code that causes a new revision/version of the application to be defined.

  • This new revision then causes the building of a new version of the container image for the application.

  • Once built, this new image is deployed to the environment for some canary testing, and the load on the new version is slowly increased over time until the old version of the application can be removed from the system.

This entire workflow can be executed and managed within Kubernetes, and it can be version-controlled right alongside the application. And, from the developer’s point of view, all they ever deal with is a single Knative Service resource to define their application—not the numerous resource types they would normally need to define when using Kubernetes alone.

While the above set of Knative features is pretty impressive, Knative itself (like Kubernetes) is just another set of building blocks available for the community to leverage. Knative is being designed with a set of extensibility points to allow for customizations and future higher-order tooling to be developed.

Where will we go next?

What’s different about Istio and Knative is that, when combined, they’re focused on making life easier for the application developer. As good as Kubernetes is, many people’s first exposure to it (especially if they’re coming from other platforms like Cloud Foundry) is probably a bit daunting. Between Pods, ReplicaSets, Deployments, Ingress, Endpoints, Services, and Helm, there are a lot of concepts to learn when, in many cases, all a developer really wants to do is host some code. With Knative and its use of Istio, we’ve taken a big step toward helping developers get back to being application developers instead of DevOps experts. It’ll be exciting to see how the community reacts as these projects mature.

Join the conversation

For general questions, engage our team via Slack by registering here and join the discussion in the #managed_istio_knative channel on our public IBM Cloud Kubernetes Service Slack.
