Knative (pronounced Kay-NAY-tive) is an extension of the Kubernetes container orchestration platform that enables serverless workloads to run on Kubernetes clusters. It provides tools and utilities that make building, deploying and managing containerized applications within Kubernetes a simpler and more 'native-to-Kubernetes' experience (hence the name: 'K' for 'Kubernetes' plus 'native').
Like Kubernetes, Knative is open-source software. It was originally developed by Google in collaboration with IBM, Pivotal, Red Hat, SAP and nearly 50 other companies. Today, the Knative open source project is hosted by the Cloud Native Computing Foundation (CNCF).
Kubernetes automates and schedules the deployment, management and scaling of containers - lightweight, executable application components that combine source code with all the operating system (OS) libraries and dependencies required to run the code in any environment.
Containers allow application components to share the resources of a single instance of an OS, in much the same way that virtual machines (VMs) allow applications to share the resources of a single physical computer. Smaller and more resource-efficient than VMs, and better suited to the incremental release cycles of Agile and DevOps development methodologies, containers have become the de facto compute units of modern cloud-native applications. Companies using containers report other benefits as well, including improved application quality and greater levels of innovation.
As cloud-native development becomes more popular and containers proliferate within an organization, Kubernetes' container orchestration capabilities – scheduling, load balancing, health monitoring and more – make that proliferation much easier to manage. However, Kubernetes is a complex tool that requires developers to perform or template many repetitive tasks - pulling application source code from repositories, building and provisioning a container image around the code, configuring network connections - outside of Kubernetes, using different tools. And incorporating Kubernetes-managed containers into an automated continuous integration/continuous delivery (CI/CD) pipeline requires special tools and custom coding.
Knative eliminates this complexity with tools that automate these tasks, from within Kubernetes. A developer can define the container's contents and configuration in a single YAML manifest file, and Knative does the rest, creating the container and performing the network programming to set up a route, ingress, load balancing and more. (Knative also offers a command line interface, Knative CLI, that allows developers to access Knative features without editing YAML files.)
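As a concrete illustration of the single-manifest approach described above, here is a minimal Knative Service definition. The service name, environment variable and sample image are illustrative only; applying a manifest like this causes Knative to build out the route, ingress and autoscaling configuration automatically.

```yaml
# Minimal Knative Service manifest (illustrative names and image).
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello            # hypothetical service name
spec:
  template:
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go  # Knative sample image
          env:
            - name: TARGET
              value: "World"
```

A developer would apply this with `kubectl apply -f service.yaml`, or achieve the same result without YAML via the Knative CLI (`kn service create`).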
Serverless computing is a cloud-native execution model that makes applications even easier to develop, and more cost-effective to run. In the serverless computing model, application code runs only in response to requests or events, scales up automatically with demand, scales down to zero when idle, and incurs charges only for the compute resources it actually consumes.
On its own, Kubernetes can't run serverless applications without specialized software that integrates Kubernetes with a specific cloud provider's serverless platform. Knative enables any container to run as a serverless workload on any Kubernetes cluster - whether the container is built around a serverless function, or other application code (e.g., microservices) - by abstracting away the code and handling the network routing, event triggers and autoscaling.
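The autoscaling behavior described above is controlled through annotations on the service's revision template. This sketch assumes a hypothetical application image and shows standard Knative Pod Autoscaler annotations for scale-to-zero and concurrency limits:

```yaml
# Knative Service with autoscaling tuned for serverless behavior.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: event-handler    # hypothetical service name
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/min-scale: "0"   # scale to zero when idle
        autoscaling.knative.dev/max-scale: "10"  # cap replicas under load
        autoscaling.knative.dev/target: "50"     # target concurrent requests per replica
    spec:
      containers:
        - image: example.com/my-app:latest       # placeholder image
```

With `min-scale: "0"`, the container consumes no cluster resources until a request or event arrives, at which point Knative spins up a replica and routes traffic to it.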
Knative sits on top of Kubernetes and adds three main components, or primitives: Build, Serving, and Eventing.
The Knative Build component automates the process of turning source code into a container. This process typically involves multiple steps, including:

- Pulling the source code from a repository
- Resolving and installing the required dependencies
- Building the container image
- Pushing the image to a container registry where Kubernetes can access it
Knative uses Kubernetes APIs and other tools for its Build process. A developer can create a single manifest (typically a YAML file) that specifies all the variables - location of the source code, required dependencies, etc. - and Knative uses the manifest to automate the container build.
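A manifest of the kind described above might look like the following sketch, which follows the original Knative Build API (in later Knative releases, the Build component was spun out into the separate Tekton Pipelines project). The repository URL and image destination are placeholders:

```yaml
# Illustrative Knative Build manifest (original Build API; placeholder names).
apiVersion: build.knative.dev/v1alpha1
kind: Build
metadata:
  name: example-build
spec:
  source:
    git:
      url: https://github.com/example/app.git   # placeholder source repository
      revision: main
  steps:
    - name: build-and-push
      image: gcr.io/kaniko-project/executor     # Kaniko builds images in-cluster
      args: ["--destination=example.com/app:latest"]  # placeholder registry target
```

Everything the build needs - source location, revision, build steps, destination registry - lives in this one file, and Knative runs the build inside the cluster.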
The Serving component deploys and runs containers as scalable Knative services. Serving provides the following important capabilities:

- Rapid deployment of serverless containers
- Autoscaling, including scaling pods down to zero
- Routing and network programming
- Point-in-time snapshots (revisions) of deployed code and configuration
Knative Serving borrows intelligent service routing from Istio, an open-source service mesh for Kubernetes. Istio also provides authentication for service requests, automatic traffic encryption for secure communication between services, and detailed metrics about microservice and serverless function operations that developers and administrators can use to optimize infrastructure. (For more detail on how Knative uses Istio, read “Istio and Knative: Extending Kubernetes for a New Developer Experience.")
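The revision snapshots and intelligent routing come together in Knative Serving's traffic splitting: a service can send a chosen percentage of requests to each revision, which is how canary and blue-green rollouts work. A sketch, assuming hypothetical revision names and a placeholder image:

```yaml
# Canary rollout: split traffic between two revisions of one Knative Service.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello                     # hypothetical service name
spec:
  template:
    metadata:
      name: hello-v2              # names the new revision
    spec:
      containers:
        - image: example.com/hello:v2   # placeholder image for the new version
  traffic:
    - revisionName: hello-v1      # stable revision keeps most traffic
      percent: 90
    - revisionName: hello-v2      # canary revision receives 10%
      percent: 10
```

Shifting the percentages over time promotes the new revision gradually; setting the old revision to 0 percent completes the rollout without deleting the snapshot, so rollback remains a one-line change.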
The Eventing component of Knative enables events to trigger container-based services and functions. Knative queues and delivers those events to the appropriate containers, so there's no need to write scripts or implement middleware to achieve this functionality. Knative also handles channels, which are queues of events that developers can choose from, and the bus, a messaging platform that delivers events to containers. It also enables developers to set up feeds, which connect an event to an action for their containers to perform.
Knative Event sources make it easier for developers to create connections to third-party event producers. Knative Eventing automatically creates the connection to the event producer and routes the generated events. There's no need to figure out how to do it programmatically—Knative does all the work.
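As one concrete example of an event source, Knative Eventing ships a PingSource that emits events on a cron schedule and delivers them to a sink. The schedule, payload and target service name below are illustrative:

```yaml
# PingSource: emit a CloudEvent on a schedule and deliver it to a Knative Service.
apiVersion: sources.knative.dev/v1
kind: PingSource
metadata:
  name: heartbeat                  # hypothetical source name
spec:
  schedule: "*/5 * * * *"          # cron syntax: fire every five minutes
  contentType: "application/json"
  data: '{"message": "ping"}'      # event payload
  sink:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: event-handler          # placeholder target service
```

Once this manifest is applied, Knative handles scheduling, event delivery and retries; the target service simply receives an HTTP request carrying the event every five minutes.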
To recap, Knative supports several use cases for Kubernetes users who want to simplify containerized app development or take their use of containers to the next level.
Containers are ideal for modernizing your applications and optimizing your IT infrastructure. Container services from IBM Cloud, built on open source technologies like Kubernetes, Knative and Istio, can facilitate and accelerate your path to cloud-native application development, and to an open hybrid cloud approach that integrates the best features and functions from private cloud, public cloud and on-premises IT infrastructure.