Introducing Managed Knative on IBM Cloud Kubernetes Service
Today, IBM is announcing Knative support as an experimental managed add-on to our IBM Cloud Kubernetes Service. With this support, along with our managed Istio support, which is also being announced, customers can experience a new and innovative way of deploying and managing their Kubernetes-hosted applications.
What is Knative?
Knative is a relatively new open source project being developed by some of the key cloud innovators, including IBM, Red Hat, Google, Pivotal, and SAP (see our “What is Knative?” video and our “Knative: An Essential Guide” page for more info). There are multiple reasons and goals behind the creation of Knative, and one of the biggest is the influence of serverless computing on the industry.
The serverless paradigm
The serverless paradigm has had quite an impact on the industry—mainly by making people rethink how they design their applications and what they should expect from their hosting platforms. In particular, some of the key aspects of serverless computing include the following:
Source-to-image development chain: Rather than developers crafting the details of how to build and host their code, there is a shift toward a model where they simply provide code to the platform and the platform manages all of the hosting aspects (e.g., building, hosting, scaling, etc.) for them.
Auto-scaling/scale-to-zero: As part of the platform managing the lifecycle of the application, it also scales the application based on the load it is experiencing. This includes scaling the application down to zero instances when it is not in use, meaning that the owner of the application is not charged if the application is not being used.
Short-lived functions: Following the established path of “breaking up the monolith” that the “container revolution” started, splitting microservices into even smaller “functions” allows for a more fine-grained hosting model and, in turn, better resource utilization.
Event-driven: Complementing the optimized scaling is the notion of functions responding to events rather than always running and waiting for something to happen. This allows for a much more loosely coupled architecture.
Most of these aspects of serverless computing serve the higher-level goals of reducing the cost of managing your applications and increasing the velocity of your development teams. However, when trying to meet these goals within the Kubernetes platform, a couple of issues present themselves.
First, the existing resource model that Kubernetes exposes (while quite powerful) isn’t necessarily designed to address these specific needs. In order to achieve the above goals, a newer, simplified resource model is needed: one that allows users to work at a higher level without needing to worry about all of the lower-level technical details. Basically, a more opinionated model.
Second, there has been some work on developing such models and experiences on top of Kubernetes, but what the industry really needed was a common, shared implementation that everyone could rally around. Many independent solutions, even really good ones, do not necessarily help the community at large if there is no interoperability between them and customers feel locked into one solution.
The building blocks of Knative
Knative solves these concerns through its three main components:
Build: A component that integrates the building of container images into the specification of the application configuration. This allows for the source-to-image model where developers can encapsulate the configuration of their application alongside the specification of how to build their application images all at once.
Serving: A hosting model where an event-driven hosting scheme is leveraged to ensure that the applications are scaled based on actual need, including scaling down to zero when appropriate. This component will also automatically manage the rolling-out of newer versions of the code and allow for advanced traffic routing (such as A/B testing), relying on Istio for these features.
Eventing: A set of core eventing primitives that allows for the specification of interest in events from event sources (both internal and external to the cluster), as well as simple orchestration of delivery of those events to consuming applications.
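To make the Serving model concrete, here is a minimal sketch of a Knative Service manifest. The service name, container image, and environment variable are hypothetical, and the exact apiVersion depends on the Knative release installed in your cluster:

```yaml
# Minimal Knative Serving Service (a sketch; the name and image are placeholders).
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
spec:
  template:
    spec:
      containers:
        - image: docker.io/example/hello:latest  # hypothetical image
          env:
            - name: TARGET
              value: "world"
```

Applying this single resource (for example, with `kubectl apply -f service.yaml`) is enough for Knative to create the underlying route, configuration, and revision objects, and the resulting revision scales with traffic, including down to zero when idle.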
A new user-experience for hosting applications on Kubernetes
Along with these components is one of the most important aspects—the Knative community itself. With the key cloud computing players working together and jointly addressing consumers’ needs in an open source, open-collaborative fashion, a common higher-level programming model can be offered where there’s freedom of choice and movement while still providing some base-level interoperability and portability.
Putting all of these elements together, the net result is a new user experience for hosting applications on Kubernetes where developers can focus on what they really want to do: write code, not manage infrastructure.
Because the project is still in the early stages of development, our managed Knative offering is still experimental, but we expect the Knative community to enhance and solidify its features very quickly.
Managed Knative add-on
With today’s announcement of managed Knative support for IBM Cloud Kubernetes Service, you can quickly and easily get Knative (and Istio) deployed to your Kubernetes cluster. Installing and using Knative couldn’t be easier: you can use the one-click install in the IBM Cloud console UI.
Or you can utilize the one-command install via the command line:
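The exact syntax may evolve along with the CLI, but at the time of this announcement, enabling the add-on looked roughly like the following (the add-on name and cluster name shown here are illustrative):

```shell
# Enable the managed Knative add-on on an existing cluster
# (this also enables the managed Istio add-on it depends on).
ibmcloud ks cluster-addon-enable knative --cluster mycluster
```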
Managed Knative on IBM Cloud Kubernetes Service is in the experimental stage because the project is continuing to mature and evolve, and as it does, IBM Cloud will provide even deeper integration with the rest of the IBM Cloud platform. We want to hear from you, so join the conversation with our team via Slack by registering here, and then find them in the #general or #managed_istio_knative channels on our public IBM Cloud Kubernetes Service Slack.