July 18, 2019 By Doug Davis 5 min read

This week, IBM Cloud Kubernetes Service upgraded to support Knative version 0.7, and I wanted to take this opportunity to discuss some of the recent changes of which people should be aware.

Rather than just mention some of the changes, I wanted to point to a new Knative demo that I’ve put together to help people jumpstart their education.

Knative v0.7: What’s new?

In my previous blog, “IBM Cloud Kubernetes Service Knative Now Supports v0.6,” I mentioned that the Knative Build feature was being deprecated. In Knative v0.7, it is now unsupported, which means you’ll have to use some other mechanism to build your container images, such as Tekton. More on this later.

Additionally, the old syntax for defining a Knative Service (e.g., using the “runLatest” or “manual” service types) is no longer supported. This is because the new, simpler Knative Service syntax should be able to support everything you need (most of the time). For those rare occasions where you really do need to manually manipulate the underlying revisions, routes, or config resources, you can still do so, but you should just skip creating the Knative Service object; it’s no longer needed in those cases. In all honesty, though, I’d love to hear from you if you ever do need to go that route, because that implies to me a deficiency in the current design of the Service object, which should be fixed.
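
To make this concrete, here is a minimal sketch of the simplified Service syntax. The service and image names are hypothetical, and the exact apiVersion depends on which API versions your Knative installation serves:

    apiVersion: serving.knative.dev/v1alpha1        # or a newer version, if your install provides one
    kind: Service
    metadata:
      name: hello                                   # hypothetical name
    spec:
      template:
        spec:
          containers:
            - image: docker.io/example/hello:latest # hypothetical image
              env:
                - name: TARGET
                  value: "world"

Knative takes care of creating the underlying Configuration, Revision, and Route resources for you.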

From a user-experience (UX) perspective, the other big change is the ability to mount ConfigMaps and Secrets into your containers as Volumes. Unfortunately, you can’t mount generic Volumes yet, but that feature is still under discussion.
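
For example, surfacing a ConfigMap to your container looks roughly like this (the names are hypothetical, and the same pattern works for a Secret by using a “secret” volume source instead):

    apiVersion: serving.knative.dev/v1alpha1
    kind: Service
    metadata:
      name: hello
    spec:
      template:
        spec:
          containers:
            - image: docker.io/example/hello:latest # hypothetical image
              volumeMounts:
                - name: app-config
                  mountPath: /etc/app-config        # the ConfigMap's keys show up as files here
                  readOnly: true
          volumes:
            - name: app-config
              configMap:
                name: my-config                     # an existing ConfigMap in the same namespace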

There have, of course, been lots of other changes; you can see the complete list for Serving here and for Eventing here, but most are things like new Event Sources or behind-the-scenes improvements (e.g., better cold-start performance).

What is Tekton and how does it relate to Knative Build?

As mentioned above, Knative Build support is now gone. While you can technically use any other container image build mechanism you want, there is definitely a push in the Knative community towards Tekton. As a reminder, Tekton is a relatively new project that was derived from Knative Build. It has similar concepts (such as “tasks”), but rather than being linked with Knative Serving, it is designed to be more generic so it can be used for purposes beyond just building images for hosting cloud-native apps. You can almost think of it as a Jenkins type of project, but specifically designed to run on Kubernetes.

In preparation for this migration away from Knative Build, IBM Cloud Kubernetes Service now includes Tekton as part of its Knative install process. This means that as you start to experiment with Tekton, you do not need to do anything special beyond installing Knative into your IBM Cloud Kubernetes Service cluster.

To learn more, check out “Tekton: A Modern Approach to Continuous Delivery.”

Using Tekton

The move from Knative Build to Tekton isn’t actually too bad. Jason Hall from Google put together a nice migration guide that covers the basics of what’s involved. If you’ve used Knative Build in the past, the migration path for the steps needed to build and push your container images to an image repository should be relatively obvious.
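
To give a feel for what that looks like, here is a rough sketch of a Tekton Task that builds and pushes an image using Kaniko. The resource and parameter names are made up, and the exact apiVersion and variable-substitution syntax depend on which Tekton release you have installed:

    apiVersion: tekton.dev/v1alpha1
    kind: Task
    metadata:
      name: build-and-push
    spec:
      inputs:
        resources:
          - name: source                 # a git PipelineResource pointing at your app's repo
            type: git
        params:
          - name: IMAGE
            description: Fully-qualified name of the image to build and push
      steps:
        - name: build-and-push
          image: gcr.io/kaniko-project/executor:latest
          args:
            - --dockerfile=/workspace/source/Dockerfile
            - --context=/workspace/source
            - --destination=$(inputs.params.IMAGE)

You would then create a TaskRun that binds the git resource and the IMAGE parameter, and make your registry credentials available to the run (typically via a Secret linked to the service account that executes it).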

However, there is one very important aspect that is missing now: the integration with Knative Serving. With Knative Build, once the image was uploaded to a registry, the system would automatically deploy (or update) your Knative Service to use that new image. That automated process is now gone, so you need to make it happen explicitly.

This could be done by adding an additional “step” to your Tekton “task” that does a deploy or update, as appropriate. This isn’t too hard, but it is a bit sad that it’s no longer automatic. There are talks underway about ways to help here, though; for example, perhaps the new “kn” CLI will be enhanced to make this process less painful.
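
Continuing the Task sketched above, such an extra step might look something like this. The Service name is hypothetical, kubectl can come from any image that bundles it, and the service account running the TaskRun needs RBAC permission to update Knative Services:

    steps:
      # ...the build-and-push step from above...
      - name: deploy
        image: lachlanevenson/k8s-kubectl             # any image that includes kubectl
        command: ["kubectl"]
        args:
          - patch
          - ksvc
          - hello                                     # hypothetical Knative Service name
          - --type=merge
          - -p
          - '{"spec":{"template":{"spec":{"containers":[{"image":"$(inputs.params.IMAGE)"}]}}}}'
      # Note: a JSON merge patch replaces the whole containers list, so this simple
      # approach only suits a single-container Service with no other container-level
      # settings that need to be preserved.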

A new demo: Trying out Tekton

To coincide with this new Knative release I wanted to put together a new demo that showcased a few topics that have been of interest to me recently. You can find the demo in this GitHub repo.

First up is the move to Tekton. I actually haven’t had a lot of time to play with it, so this gave me an opportunity (or “forced” me) to do so. I was curious to see how much work was really needed to do the basic (and probably most common) process of building a new container image from source code. However, if you look at the Tekton samples that do this, they most often build from a git repo. That is obviously going to be one of the most popular ways to do it, but I wanted to play around with building from source code that resides on my laptop instead.

While there are many ways to get source code into the containers that are used during a Tekton task, I decided to take the path of least resistance and simply put the source code into a ConfigMap and mount it as a Volume into the task’s container. This has many obvious limitations (which are discussed in the demo’s README), but it was quick, and for many cases where you’re running a small function (which often is just a single source file), it can work just fine.
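
Roughly, the idea looks like this (the names are hypothetical, and the demo’s actual Task is a bit more involved). The ConfigMap would be created from the local source first, for example with “kubectl create configmap app-source --from-file=.”, and then mounted into the build step:

    apiVersion: tekton.dev/v1alpha1
    kind: Task
    metadata:
      name: build-from-configmap
    spec:
      inputs:
        params:
          - name: IMAGE
            description: Fully-qualified name of the image to build and push
      steps:
        - name: build
          image: gcr.io/kaniko-project/executor:latest
          args:
            - --dockerfile=/workspace/src/Dockerfile
            - --context=/workspace/src
            - --destination=$(inputs.params.IMAGE)
          volumeMounts:
            - name: src
              mountPath: /workspace/src     # the source files from the ConfigMap appear here
      volumes:
        - name: src
          configMap:
            name: app-source                # holds the source files from my laptop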

Knative: One *aas to rule them all

One of the things that really excites me about Knative is that it’s basically a merging of many different concepts into one. By that I mean: often when people talk about hosting their cloud-native applications, they’re faced with the choice of which platform to use. Cloud Foundry? Docker? Kubernetes? Is their app more like a “function,” meaning they should use OpenWhisk or OpenFaaS? What if they don’t want to manage the infrastructure themselves and want something more “serverless”? If so, they have to add things like IBM Cloud Functions and Lambda to the list of options.

This choice of which platform to use, which really means deciding which type of application your code is, is a very important decision. It doesn’t just impact the tooling you’re going to use to manage your app (i.e., which CLI: cf vs. docker vs. kubectl); it also impacts which features you have available to actually do that management. For example, some platforms make it easier to access your running app (by providing the networking and an endpoint automatically), and some will do auto-scaling for you based on load. And, of course, some will not offer those features at all, leaving you to manage those aspects yourself, which can be non-trivial.

With Knative, however, I think you get the best of all worlds. While it’s not there yet, I think Knative is on track to provide the following:

  • A simplified UX, like the one you get from a PaaS such as Cloud Foundry or a simple CaaS platform such as Docker.
  • Automatic infrastructure management from FaaS and Serverless platforms, with features such as autoscaling (even down to zero) and automatic network management.
  • A simplified Kubernetes UX, which means you can host “normal” Kubernetes applications but with a nicer/PaaS-like UX. As powerful as Kubernetes is, it’s not the most user-friendly, and Knative offers relief for that.
  • The ability to still integrate those apps with the rest of your Kubernetes infrastructure/apps, regardless of whether those other components are Knative-based or not.

Why do I mention all of this? Because in my demo, I decided to showcase this aspect by using Knative to host a Docker Registry. Most people would not consider an image registry to be something that a FaaS/Serverless platform should host, but that’s the point. Knative can support long-lived applications just as easily as a traditional CaaS platform can, and by doing so, I get all of the benefits of Knative, such as an easier UX and automatic infrastructure management.
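
As a rough illustration of the idea (not necessarily the demo’s exact configuration), hosting the registry could look something like this, assuming your Knative version supports setting the container port and a minimum scale; you would also need to sort out persistent storage for the pushed images:

    apiVersion: serving.knative.dev/v1alpha1
    kind: Service
    metadata:
      name: registry                              # hypothetical name
    spec:
      template:
        metadata:
          annotations:
            autoscaling.knative.dev/minScale: "1" # keep one instance around instead of scaling to zero
        spec:
          containers:
            - image: registry:2                   # the stock Docker Registry image
              ports:
                - containerPort: 5000             # the registry listens on 5000, not Knative's default 8080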

The demo will go into more details about some of the trade-offs I did have to make due to the current limitations of Knative, but I’m hopeful that by the time Knative reaches v1.0, those will be resolved.

Questions or comments on the demo

While it would be great if you actually ran through the demo yourself, it’s not necessary to do so to see it “in action.” The GitHub repo’s README provides instructions for how to run the demo without installing anything or even needing internet connectivity.

Check out the demo

Ping me if you have any questions or comments about the demo or my commentary in there—I’d love to hear your thoughts.

For general questions, engage our team via Slack by registering here and join the discussion in the #managed_istio_knative channel on our public IBM Cloud Kubernetes Service Slack.
