Knative aims to tackle the biggest pain point of Kubernetes adoption—the steep learning curve to start using it. Developers want to focus on developing applications, not wrangling with deployment and operations concerns. Knative not only tackles these concerns by providing the tools to streamline your workflow, but it also brings the power of running serverless workloads to your Kubernetes clusters.
In this video, I’ll break down the key features of Knative by highlighting its primitives: building, serving, and eventing. These primitives are what make Knative so powerful, opening up capabilities like source-to-URL deployments, serverless functions, robust build templates, and much more. Check out my video or head to the Knative docs to learn more.
Hi everyone, my name is Sai Vennam, and I’m a developer advocate with IBM. Today, I want to talk about Knative, one of the fastest-growing open source projects in the cloud-native landscape.
A platform installed on Kubernetes
Knative was recently announced and was developed by engineers from IBM, Google, and a number of other industry leaders. Essentially, Knative is a platform installed on top of Kubernetes that brings the capabilities of running serverless workloads to your Kubernetes cluster. In addition, it provides a number of utilities that make working with your cloud-native apps on Kubernetes feel truly native.
The three major components of Knative: Build, Serve, and Event
I’d say there are three major components that make up Knative on top of Kubernetes. So, the first is going to be Build. Next, we’ve got Serve. And finally, we have Event.
These three components are actually called primitives because they are the building blocks that make up Knative. They are what allow it to run serverless workloads within Kubernetes, but they also provide endpoints and tools that make working with Knative feel more natural and easy.
Knative Primitive #1: Build
So let’s get started with Build. I like to start with an example. So, what does every developer need to do when pushing their application to Kubernetes? Well, first, they need to start with code, right? So every developer has code—you can imagine it’s probably hosted up on GitHub.
So we’ve got that, and the next thing we want to do is take that code and turn it into a container. Because the first step is code, the second step is always going to be turning it into a container—something that Kubernetes and Docker (or whatever container technology you might be using) can understand.
So, to do that, it might be something really simple like a Docker build. Or, depending on how complex your build is, it could be a set of steps to end up with that final container image. And, by the way, to actually make that process happen, you’ll need to pull the code down to a local machine or have something like Travis or Jenkins do that container build for you.
So, once that image is created, you want to push it to a cloud registry—something like Docker Hub or maybe a private image registry. Once it’s up there, Kubernetes is able to actually find it and deploy it, and to do that, you’ll probably want to create some much-loved manifest YAML files. Depending on how complex your deploy is, you might have multiple YAML files to make that deployment happen.
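To make that manual flow concrete, here’s a minimal sketch of the kind of manifest YAML file you’d write by hand for a deployment. The app name, image, labels, and port below are all placeholders:

```yaml
# A bare-bones Deployment manifest; every name and value here is illustrative
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          # The image you built and pushed to a registry in the earlier steps
          image: docker.io/my-user/my-app:v1
          ports:
            - containerPort: 8080
```

And that’s just the Deployment; a real setup usually needs at least a Kubernetes Service (and often an Ingress) on top of it.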
You can imagine that for a developer who is iteratively developing on top of Kubernetes, this is a lot of steps, and it can be quite tedious. With Knative, we can bring this entire process onto your Kubernetes cluster—everything from source code management to complex or custom builds. There are also a number of build templates out there; for example, if you like Cloud Foundry buildpacks, there’s a template that uses them to build your application.
So, with Knative Build, you can do all of that within your cluster itself. It makes things a lot easier for developers doing iterative development, and because all these steps can be simplified into just a single manifest deploy, developing applications becomes faster and more agile.
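For a rough idea of what that single manifest can look like, here’s a sketch of a Knative Build resource as the API looked around the time of this video. Treat the `v1alpha1` API version, the `buildpack` template name, and the repo and image names as assumptions for illustration:

```yaml
apiVersion: build.knative.dev/v1alpha1
kind: Build
metadata:
  name: my-app-build
spec:
  # Pull the source straight from GitHub; no local clone needed
  source:
    git:
      url: https://github.com/my-user/my-app.git
      revision: master
  # Use an installed build template (e.g., a Cloud Foundry buildpack template)
  template:
    name: buildpack
    arguments:
      - name: IMAGE
        value: docker.io/my-user/my-app:latest
```

One `kubectl apply` of a manifest like this replaces the clone-build-push loop from before.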
Knative Primitive #2: Serve
So, we’ve talked about Build. The next thing I want to talk about is Serve. Serve has a very important role here, and I think it’s one of the more exciting parts of Knative. It actually comes with Istio components built in. If you’re not familiar with Istio, check out the link in the description below for more information. To summarize, Istio brings a number of capabilities—things like traffic management, intelligent routing, automatic scaling, and scale-to-zero. Scale-to-zero is a pretty cool concept: with serverless applications, you want to be able to scale up to, say, 1,000 pods and then bring it all the way back down to zero if no one is accessing that service.
So let’s take a look at what a sample service that’s managed by Knative Serve would look like. At the top, we’ll start with a service. This can be your traditional kind of microservice, or it can be a function as well.
So that service points to and manages two different things: 1) a Route and 2) a Config. There’s one really cool thing about Knative Serve that I haven’t mentioned yet: every time you do a push, it’ll actually keep that revision stored.
So let’s say we’ve done a couple pushes to the service. So we’ve got Revision 1 as well as Revision 2. Revision 2 is the newer version of the app, and Config is actually going to manage both of those.
The Route, essentially, manages all the traffic and routes it to one or more of those revisions. So, using Istio’s traffic-management capabilities, we could say that 10 percent of all traffic gets routed to Revision 2 and 90 percent stays on Revision 1. That way, we can do a staged rollout or even some A/B testing.
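As a sketch, a percentage-based split like that can be declared right in the Knative Service manifest. The service and revision names here are placeholders, and exact field names may vary across Knative Serving versions:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: my-app
spec:
  template:
    metadata:
      # Naming the template pins the revision name created by the new push
      name: my-app-revision-2
    spec:
      containers:
        - image: docker.io/my-user/my-app:v2
  # Keep 90% of traffic on the old revision; canary 10% to the new one
  traffic:
    - revisionName: my-app-revision-1
      percent: 90
    - revisionName: my-app-revision-2
      percent: 10
```

To finish the rollout, you’d simply shift the percentages until the new revision takes 100 percent of the traffic.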
So, just to summarize, Knative Serve provides us with snapshots, intelligent routing, and scaling—all really cool features. I think Build and Serve together are going to solve a lot of the problems people run into when doing CI/CD and deploying microservices to Kubernetes.
Knative Primitive #3: Event
The last thing I want to talk about is Eventing. This is still a work in progress in Knative; the project was released fairly recently, and at the time we’re creating this video, Eventing is still maturing, but there are a number of capabilities already available.
So, Eventing is an integral part of any serverless platform. You need the ability to create triggers—some sort of event that gets responded to by the platform itself.
Let’s say, for example, that you have a delivery re-routing algorithm, and anytime inclement weather is detected, you want to trigger that algorithm as a serverless action. That’s something that Eventing allows you to set up—triggers.
Another thing you can do with Eventing—it’s a different use case—is hook it into your CI/CD pipeline. Let’s say that once you have this whole flow created, you want to kick it off automatically anytime there’s a new push to master. Or maybe anytime there’s a new push to master, you say that 10 percent of traffic gets routed to that version of the app.
So, with Eventing, you can make that a reality; creating pipelines with Eventing is also an option. And as this feature becomes more developed and more robust, we’ll see a number of new options and opportunities for taking advantage of Knative Eventing.
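To give a feel for what a trigger looks like, here’s a sketch of a Knative Eventing Trigger for the weather scenario above. Since Eventing was still in flux when this video was made, treat the API version, the event type, and the subscriber service name as assumptions:

```yaml
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: weather-alert-trigger
spec:
  broker: default
  # Only deliver events whose CloudEvent type matches this (hypothetical) value
  filter:
    attributes:
      type: com.example.weather.alert
  # Send matching events to the (hypothetical) delivery re-routing service
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: delivery-rerouter
```

The broker fans events out, and each trigger filters them down to the ones its subscriber cares about.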
These building blocks make Knative powerful
So these three components together are what make Knative so powerful. Knative is definitely shaping up to be one of the biggest players in the cloud-native and Kubernetes landscape.
I hope you enjoyed my explanation of Knative today. Definitely stay tuned for more lightboarding sessions in the future, and, again, if you want to learn more, check out the IBM Cloud Blog.