Don’t let your cloud tell you what you can and can’t do.
Different types of workloads require different runtime environments. Functions, 12-factor apps, batch jobs: each has its own runtime characteristics, so developers should expect to host each type on a purpose-built platform, right? Wrong. See why IBM Cloud Code Engine is revolutionizing how we think about hosting your cloud native workloads.
What is IBM Cloud Code Engine?
On March 31, 2021, IBM announced its new flagship runtime hosting environment, IBM Cloud Code Engine (see the announcement here). Code Engine is a fully managed, cloud-based runtime that allows you to deploy any cloud native workload in seconds, without having to manage any of the infrastructure necessary to host it.
Yes, Kubernetes has won the container orchestration wars, and yes, Code Engine uses Kubernetes (among other open source technologies) behind the scenes, but Code Engine took a different approach when thinking about how developers should manage their cloud native workloads — they shouldn’t worry about it!
With other platforms that will “manage” Kubernetes for you, there’s still the (non-trivial) task of learning Kubernetes because, in the end, you still need to learn about Pods, ReplicaSets, Deployments, load balancers and much more. Why is this necessary? We don’t typically use those concepts in our applications unless we’re interfacing with the hosting environment, so learning them is just a distraction from our true goal: writing code and, ultimately, producing value for our customers. With Code Engine, we not only manage Kubernetes for you, we actively hide it, removing this learning curve.
This leads to the obvious conclusion that Code Engine is a simplified user experience on top of Kubernetes, and that is certainly true. See my previous blog post and video for just how easy it is to deploy an application. What I want to lay out in this post is how Code Engine unifies the various “as-a-service” platforms.
If, at the end of the day, the code you write runs in a container, does it really make sense to have to choose between a PaaS environment, Kubernetes or a functions/serverless platform? Each imposes a different set of runtime constraints, and each has its own user experience, leading to different CI/CD workflows. That’s a lot of work, learning curves, runtime integrations and pain that takes you away from your real job: delivering customer value in the form of code.
With Code Engine, you don’t have those concerns. Regardless of the runtime characteristics of your workloads, you can run them all on a single platform with a single user experience, and they can all seamlessly work together. In fact, you can also integrate them with your traditional Kubernetes workloads, because Code Engine supports those (more complex) APIs too, if you just can’t let go of the past yet. For example, for each application you can choose whether it automatically scales based on load (even down to zero), control the resources it uses and control whether it is externally accessible. The choice is yours.
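To make those per-application choices concrete, here is a sketch of what deploying an application with the Code Engine CLI might look like. The application name is illustrative, and the image is one of the public Code Engine sample images; exact flag names and defaults may vary by CLI version, so treat this as an assumption-laden example rather than a definitive recipe:

```shell
# Deploy an app that scales down to zero when idle and up to 10 instances
# under load, with explicit CPU/memory per instance and a public endpoint.
# (Names are illustrative; check "ibmcloud ce application create --help"
# for the flags supported by your CLI version.)
ibmcloud ce application create \
  --name my-app \
  --image icr.io/codeengine/helloworld \
  --min-scale 0 \
  --max-scale 10 \
  --cpu 0.5 \
  --memory 1G \
  --visibility public
```

Setting `--min-scale 0` is what enables scale-to-zero: when no requests arrive, no instances run and you pay nothing for idle capacity.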
Batch jobs are cloud native workloads, too
As great as all of that is, there’s one other key thing that is often overlooked when talking about cloud native workloads: not all of them are HTTP (event) driven. In other words, not all workloads react to incoming messages. A classic example of this is “batch jobs” (sometimes referred to as “run-to-completion” workloads).
Unlike application instances, each of which might process multiple requests at the same time or sequentially, a batch job instance is meant to be executed exactly once: it does some processing and then exits. A common example is a job that runs nightly to process data gathered during the day.
Batch jobs, like applications, can be scaled, but rather than scaling based on HTTP requests, they are typically scaled based on the amount of work that needs to be done. Using our “data processing” example, perhaps each batch job invocation needs to complete in an hour, and each job instance can only process 1,000 records in that time. If we needed to process 10,000 records, we’d need to scale our batch job up to 10 instances each time the job is run.
In Code Engine, this is trivial to do. We can simply tell Code Engine how to scale our job upon each execution, and it will take care of deploying and scaling out the infrastructure to support its needs (and then, of course, scale back down to zero when it’s done). You’ll only pay for the time the job is actually running.
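As a sketch of how that “tell Code Engine how to scale” step might look, the CLI lets you define a job with an array of instance indices and then submit runs of it. The job name is made up, the image is a public Code Engine sample, and flag names may vary by CLI version, so verify against your installed CLI:

```shell
# Define a job that runs as 10 parallel instances (indices 0 through 9).
# Each instance can read its own index from the JOB_INDEX environment
# variable and use it to pick which slice of the records to process.
ibmcloud ce job create \
  --name nightly-process \
  --image icr.io/codeengine/firstjob \
  --array-indices "0-9"

# Kick off one execution of the job; all 10 instances run to completion,
# then the infrastructure scales back down to zero.
ibmcloud ce jobrun submit --job nightly-process
```

In our 10,000-record example, instance 0 would process records 0–999, instance 1 records 1,000–1,999, and so on, with the slice computed from `JOB_INDEX`.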
Additionally, the user experience between managing applications and batch jobs is the same — with the obvious exception of there not being any configuration options for incoming requests for batch jobs. But, because all of your workloads are hosted in the same runtime environment, this means they’re automatically connected and can securely communicate with each other — no extra integrations are needed.
To see this in action, a new video is available where Gaby Moreno Cesar shows you just how easily and quickly you can deploy and execute a batch job in Code Engine:
In the video, she’ll walk through a sample very much like the one I described above. She’ll kick off the execution of the job manually, but you can just as easily invoke it via our APIs or even as a result of an event. For example, you can connect it up to our Ping (think “cron”) event producer to have your batch job executed at well-defined times during the day.
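For the cron-style case, here is a hedged sketch of wiring the Ping event producer to a job with a cron subscription. The subscription and job names are illustrative, and the exact subcommand and flag names may differ by CLI version:

```shell
# Create a cron (Ping) subscription that triggers the job every night
# at 2:00 AM UTC, using standard cron schedule syntax.
# (Names are illustrative; see "ibmcloud ce subscription cron create --help".)
ibmcloud ce subscription cron create \
  --name nightly-trigger \
  --destination nightly-process \
  --destination-type job \
  --schedule "0 2 * * *"
```

Once the subscription exists, the job runs on schedule with no manual invocation and no extra scheduler infrastructure to operate.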
After that, I encourage you to go try Code Engine yourself. We offer a free tier so that you can play with it before you commit to purchasing. Check out the samples and tutorials that can help you jump-start your migration.