IBM Cloud Code Engine: Build, Deployment and Scaling Aspects

Part 2: Best practices for migrating your Cloud Foundry app to IBM Cloud Code Engine.

This is the second part of a series of blog posts covering the migration of apps from IBM Cloud Foundry to IBM Cloud Code Engine. In the first article, "From Cloud Foundry to Code Engine: Service Bindings and Code," I discussed service binding and code migration. Next up in my journey, I focus on building (and thereby containerizing) the app and then deploying it. I also take a look at how to automatically scale the app resources (autoscaling) and high availability and traffic management aspects:

"Blue" instance of a blue-green deployment on IBM Cloud Code Engine.

Build and deployment aspects

With Cloud Foundry, you push your app as part of the deployment process. The source code is either on your local drive and you deploy your app manually, or, typically, sources are managed in a version control system like GitLab or GitHub and you utilize a toolchain. 

The build and deployment process consists of several steps. It includes packing the source code together with the required stack (the operating system) and buildpack (programming language support, libraries and more) into a droplet. This process is called staging. Next, the droplet is distributed to virtual machines (VMs) for later execution. For the latter, the VM needs to unpack the droplet, possibly compile (build) the app from the provided source code and, finally, run the app.

With Code Engine, the overall process is similar. There are differences in the terminology and in how the app is prepared for execution. Because every workload is based on container images, the overall process is as follows:

  • Take the source code and build a layered container image that consists of the actual app and the required stack. 
  • Publish ("push") that image to a public or private container registry so that it can be accessed by Code Engine.
  • Pull that container image into the assigned runtime instance ("pod") and inject the app configuration.
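Assuming the image is built locally with Docker and published to IBM Cloud Container Registry (the image name icr.io/mynamespace/myapp is a placeholder), the first two steps could be sketched as follows; the third step is then handled by Code Engine itself:

```shell
# Build a layered container image from the source code in the current directory
docker build -t icr.io/mynamespace/myapp:latest .

# Log in to IBM Cloud Container Registry and publish ("push") the image
ibmcloud cr login
docker push icr.io/mynamespace/myapp:latest
```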

Build and containerize

Similar to Cloud Foundry, there are different ways to build the app and its container image. One way is to utilize a buildpack for the programming language or to define a Dockerfile.

My recommendation is to go with Dockerfiles if you can. You have greater control over the entire process, resulting in, among other benefits, smaller image sizes (think "performance benefit"). Furthermore, Dockerfiles and containerized apps can run almost anywhere, including on a Kubernetes cluster. Typically, I try to define multi-stage builds to have only the bare minimum inside the images, for greater performance and security. See the Dockerfile of my sample app for what a multi-stage build for Node.js looks like.
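As an illustration (not the exact Dockerfile from my sample repository), a multi-stage build for a Node.js app could look like the following sketch; the entry file server.js and the port are assumptions:

```dockerfile
# Stage 1: install production dependencies on a full Node.js image
FROM node:18 AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .

# Stage 2: copy only the app and its dependencies onto a slim runtime image
FROM node:18-slim
WORKDIR /app
COPY --from=build /app .
EXPOSE 8080
CMD ["node", "server.js"]
```

Only the slim runtime image is shipped; the build tooling from the first stage never reaches the registry, which keeps the image small and its attack surface low.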

To build the container image, one option is to leverage Code Engine's build/buildrun feature. First, you define a build by specifying the source, the build strategy, destination and other parameters. Then, the actual build is performed by a buildrun. I often use the build/buildrun when starting a new project because it is simple to set up and it supports the Dockerfile strategy (using my own Dockerfiles, see above).
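As a sketch, defining a build with the Dockerfile strategy and then executing it could look like this (the repository URL, names and registry path are placeholders):

```shell
# Define the build: source repository, build strategy, and target image
ibmcloud ce build create --name myapp-build \
  --source https://github.com/myorg/myapp \
  --strategy dockerfile \
  --image icr.io/mynamespace/myapp

# Perform the actual build as a buildrun
ibmcloud ce buildrun submit --build myapp-build
```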

Another approach to build and containerize my app and its components is to utilize a toolchain with a continuous integration (CI) pipeline. Following DevSecOps best practices, I could use another pipeline for continuous delivery (CD) (i.e., the deployment of my app to Code Engine).

Deployment

To deploy the app, its container image needs to be available in a registry. For the first deployment, you need to create an app as part of a Code Engine project. Later on, to deploy a new revision of the app, you just need to update the app. Then, the Code Engine runtime environment automatically pulls the latest version of the container image and runs it. 
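A minimal sketch of that first deployment could look like this (project, app and image names are placeholders):

```shell
# Create the Code Engine project that hosts the app
ibmcloud ce project create --name myproject

# Create the app from the published container image
ibmcloud ce app create --name myapp --image icr.io/mynamespace/myapp:latest
```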

As with the build process, I usually start by manually creating the app and its first revision. It can be done in the IBM Cloud console or using the command-line interface (CLI). Following the "crawl, walk, run" approach, I then try to automate the process for the next code changes. With the Code Engine project and the app set up, the actual deployment is a single CLI command:

ibmcloud ce app update --name MYAPP

The GitHub Workflow definition in the 3codeengine_target branch of my sample repository shows a very simple, but complete automation. The workflow reacts to code changes pushed to a defined branch, builds a new container image and pushes it to the container registry and then deploys the container as a new app revision (see that step in the following screenshot):
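A stripped-down workflow of that kind could look like the following sketch; the authentication steps (for example, ibmcloud login and ibmcloud cr login), repository secrets and all names are placeholders or omitted for brevity:

```yaml
name: deploy-to-code-engine
on:
  push:
    branches: [ 3codeengine_target ]
jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Build the container image and push it to the registry
      - name: Build and push image
        run: |
          docker build -t icr.io/mynamespace/myapp:${{ github.sha }} .
          docker push icr.io/mynamespace/myapp:${{ github.sha }}
      # Deploy the image as a new app revision
      - name: Deploy new revision
        run: |
          ibmcloud ce project select --name myproject
          ibmcloud ce app update --name myapp \
            --image icr.io/mynamespace/myapp:${{ github.sha }}
```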

Step in GitHub Workflow to deploy the new app revision to Code Engine.

I have to emphasize that the focus is on showing the core steps. Therefore, the workflow does not follow DevSecOps security best practices, such as separate pipelines for build/containerize and deployment, code risk analysis, vulnerability scans and more.

When working with Code Engine, you can take any existing toolchain for containerized workloads, and you only need to adapt the actual delivery/deployment step. It is even possible to use the Kubernetes CLI and the Knative CLI with Code Engine for access to advanced configuration and deployment options, such as blue-green deployment. I am going to touch on this later below, and you can also take a look at the screenshot at the top of this post.

Scaling, high availability and traffic management

With the app deployed to Code Engine, it is accessible and available for tests and (maybe) production. Because Code Engine provides a serverless platform for containerized workloads, you don't need to care about runtime details. But if you are experienced, you might want to exercise more control. In the following sections, I touch on aspects of scaling the app, high availability and how to manage traffic and split it between different revisions.

Scaling your app

When working with Cloud Foundry, you might have used the autoscaling feature. It allows you to automatically add and remove app instances depending on performance metrics or date and time. By default, autoscaling is off, and some setup work is required to use it. In contrast, Code Engine has app autoscaling activated by default. The standard configuration is shown in the screenshot below. It means that Code Engine scales an app up to 10 runtime instances and down to zero (i.e., the app does not consume any resources when not in use):

IBM Cloud Code Engine defaults for autoscaling.

Those settings can also be changed later on within the limits and quotas via the Code Engine UI, CLI or API. A max scale of 0 (zero) even means that the app should scale up to the maximum possible, if necessary. By adapting the concurrency and request timeout, you can tune the autoscaler to your needs. My approach and recommendation is to go with the defaults and enjoy the serverless approach of not caring too much about configuration. 
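If the defaults do not fit, those settings can be adjusted with a single CLI command; the values below are examples, not recommendations:

```shell
# Keep at least one instance warm, allow up to 20 instances, and
# let each instance handle up to 50 parallel requests
ibmcloud ce app update --name myapp \
  --min-scale 1 --max-scale 20 --concurrency 50
```

Setting --min-scale to 1 avoids cold starts at the cost of one always-on instance, a trade-off worth considering for latency-sensitive apps.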

High availability

For aspects of high availability (and disaster recovery), Cloud Foundry and Code Engine are similar.

Traffic management and rolling updates

To keep your app available when deploying a new version on Cloud Foundry, you needed to use special CLI command options for so-called rolling updates or zero-downtime deployments. With Code Engine, rolling updates are performed automatically. Once the new app revision indicates readiness, traffic is moved from the old revision to the new one.

To perform blue-green deployments (i.e., to gradually move traffic from the old to the new revision), some tricks are necessary in Cloud Foundry. With Code Engine, it is possible to utilize the capabilities built into its underlying Kubernetes and Knative layers. The blog post "Blue-green deployment with IBM Cloud Code Engine and Knative" shows how the Knative CLI command to update the app is used to configure the traffic split between revisions. Given an app named "bluegreen" and two revisions "rev-old" and "rev-new", the following command would split the traffic 80/20 between them:

kn service update bluegreen --traffic rev-old=80 --traffic rev-new=20

By changing the assigned percentages, you can test a new code version with some traffic first and, once it has proven mature enough, move the revision fully into production. The screenshot at the beginning of this article shows a small demo app in "blue mode" during a blue/green test.

Conclusions

In this blog post, I looked at build/deployment aspects and how to serve the app using autoscaling, considering high availability and traffic management. Coming from IBM Cloud Foundry to IBM Cloud Code Engine, there are many similarities, which cuts down the required learning effort. Often, I can benefit from Code Engine as serverless platform because it automatically handles many runtime aspects. In cases where it is not enough, I can adapt it, either directly in Code Engine or in the Kubernetes and Knative layers.

If you have feedback, suggestions, or questions about this post, please reach out to me on Twitter (@data_henrik) or LinkedIn.
