March 19, 2021 | By Henrik Loeser | 5 min read

Lessons learned from migrating an app to IBM Cloud Code Engine.

Recently, I blogged about the updated IBM Cloud solution tutorial showcasing a serverless web app and eventing. It is an older tutorial in which, initially, the web app was deployed to IBM Cloud Foundry. The daily data collection was kicked off by an IBM Cloud Functions trigger and then performed by a serverless action. We migrated the code to IBM Cloud Code Engine, and both the app and the data collection are now deployed in a serverless way.

In this post, we are going to discuss the lessons learned from migrating the existing Python code from a mix of Cloud Foundry/Cloud Functions deployments to Code Engine. This article will focus on Cloud Foundry, and a future post will look at the Cloud Functions part. 

Did we encounter any obstacles (spoiler: no)? What are the differences in the build or configuration stages? What code changes were necessary, if any?

Figure: Solution architecture.

Overview: IBM Cloud Code Engine

IBM Cloud Code Engine is a fully managed, serverless platform that runs your containerized workloads, including web apps, microservices, event-driven functions or batch jobs. The containers run on Kubernetes, but in a serverless way. As such, a great benefit of using Code Engine is that you don’t need to care about cluster management or any Kubernetes-related setup and scaling. But, you can still have a look under the hood, if you care.

As we will see, Code Engine can either take your existing container images or build them for you based on your code provided in a repository. It supports both public and private code repositories and offers different build strategies. 

Code Engine is integrated into IBM Cloud. As such, you can easily integrate cloud services into your microservices-based app. If you want, Code Engine takes care of the binding; but it also allows you to be fully in charge and customize it (see below). 

Last, but not least, Code Engine features support for event triggers: your app subscribes to event producers (through so-called subscriptions) and is then invoked by an API call whenever an event occurs.
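For example, a daily data collection could be kicked off by a cron subscription that periodically sends an event to the app. The following is only a minimal sketch; the subscription name and schedule are made up for illustration, and the eventing part of our migration is the topic of the follow-up post:

ibmcloud ce subscription cron create --name ghstats-daily --destination ghstats-app --schedule '0 6 * * *'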

Given the above features and benefits, Code Engine is a great alternative to many of the Cloud Foundry- and Cloud Functions-based deployments. It is the reason we picked the existing solution and migrated it.

Building the app

The previous version of our app was deployed to Cloud Foundry. Thus, the instructions simply asked you to execute ibmcloud cf push to deploy the code as an app. IBM Cloud Code Engine requires a containerized app and offers two options (build strategies) to obtain it. You can either define a Dockerfile and have Code Engine use it to build a container image or, similar to Cloud Foundry, point to the sources and have Code Engine utilize a buildpack to turn the code into something runnable. An alternative is for you to take care of the build process on your own and just point Code Engine to the container registry, so it can deploy the image you provide.

After reviewing both options offered by Code Engine, we went with writing a Dockerfile. Buildpacks might be easier and feel more familiar to Cloud Foundry users (the buildpack approach is well known from Cloud Foundry), but building the container image directly from our own Dockerfile gives us more control over the app runtime and lets us deploy the app locally, in Code Engine and in other cloud environments.

To create the container image with Code Engine and based on the Dockerfile, we use the following commands. First, create the build specification, then submit a buildrun based upon it:

ibmcloud ce build create --name ghstats-build --source https://github.com/IBM-Cloud/github-traffic-stats  --context-dir /backend --commit master --image us.icr.io/ghstats/codeengine-ghstats --registry-secret usicr
ibmcloud ce buildrun submit --build ghstats-build

The new image is stored in the referenced container registry that we configured for Code Engine. The image is then pulled from the registry during the actual container deployment:

ibmcloud ce app create --name ghstats-app --image us.icr.io/ghstats/codeengine-ghstats:latest --registry-secret usicr
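Once the app is created, its details, including the public URL, can be retrieved with the app get command:

ibmcloud ce app get --name ghstats-app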

See the solution tutorial “Serverless web app and eventing for data retrieval and analytics” for all the setup steps.

Configure runtime environment

Typically, when deploying (“pushing”) an app in Cloud Foundry, the runtime environment is configured through a manifest.yml file. It allows you to set the app name, buildpack, resource configuration and much more.

An alternative is to use parameters for the cf push command. Right now, Code Engine also uses command-line parameters as an alternative to the interactive deployment in the IBM Cloud console. Similar to cf push, the app create command has reasonable defaults. Thus, in the tutorial, we go with the defaults.
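Should the defaults not fit, the same settings can be passed when creating the app or changed later with an update. A sketch with illustrative values; the environment variable here is only a placeholder:

ibmcloud ce app update --name ghstats-app --memory 2G --cpu 1 --env LOG_LEVEL=info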

Service bindings

Cloud Foundry allows you to bind apps to cloud service instances. Through this feature, it is possible to create a new set of service credentials and make them available to the app. Within the app, the credentials can be accessed via the VCAP_SERVICES environment variable.

Similarly, Code Engine supports service bindings. The credentials are made available in two ways. The first is through a CE_SERVICES environment variable; the second is by transforming the individual parts of a credential set into a set of prefixed environment variables. The CE_SERVICES method allows easier code migration from Cloud Foundry, although it still requires a few changes because the service names might differ between Cloud Foundry services and IAM-controlled resources. Regular environment variables are more flexible and are used for other configuration, too. They can also be passed in using Kubernetes configmaps and secrets.

We changed our code to accept credentials from both CE_SERVICES and regular environment variables. First, we check for the Code Engine service bindings; then, we look for anything configured via other environment variables. This allows us to override settings and make use of DOTENV configuration:

Code sample: Checking for service credentials.
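The actual code sample is part of the tutorial repository. To illustrate the idea, here is a minimal sketch in Python; the service key ('dashdb') and the variable names are assumptions for illustration and not necessarily what the app uses:

import json
import os

def read_db_credentials():
    """Read database credentials from the Code Engine service binding,
    then let individual environment variables override them."""
    creds = {}

    # 1) Code Engine service binding: CE_SERVICES holds a JSON document with
    #    one entry per bound service, similar to VCAP_SERVICES in Cloud Foundry.
    #    The key 'dashdb' is an assumption and depends on the bound service.
    ce_services = json.loads(os.getenv('CE_SERVICES', '{}'))
    if 'dashdb' in ce_services:
        creds = ce_services['dashdb'][0].get('credentials', {})

    # 2) Override with regular environment variables, e.g., set via a DOTENV
    #    file, a configmap or a secret. The variable names are placeholders.
    for field, envvar in (('username', 'DB_USERNAME'),
                          ('password', 'DB_PASSWORD'),
                          ('uri', 'DB_URI')):
        if os.getenv(envvar):
            creds[field] = os.getenv(envvar)

    return creds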

Memory, CPU and scaling

By default, your app runs as a single instance in Cloud Foundry. This can be changed through the manifest file, for example. In Code Engine, autoscaling of your app is enabled by default with a minimum of 0 and a maximum of 10 instances. Similar configuration values exist for the amount of assigned memory and CPU shares. All of them can be specified when creating the app and during an update.

The minimum scaling of 0 (zero) causes the app to be shut down when not in use and to be restarted when there is workload again. For an app with a database backend like ours, this restart might take a few (milli)seconds. We found it acceptable in terms of the user experience. The big benefit of the complete shutdown is the cost factor: Code Engine charges only for the actual runtime consumption. If there is no workload, there are no costs.
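The scaling behavior can be adjusted at any time. If the restart delay ever became an issue, the minimum instance count could be raised; to cap costs, the maximum could be lowered. A sketch with illustrative values:

ibmcloud ce app update --name ghstats-app --min-scale 1 --max-scale 5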

Code changes and testing

As discussed earlier, we changed our code to accept service credentials in two ways. Cloud Foundry also provides more and different environment variables than Code Engine, which required a few code changes for the app initialization. Otherwise, there were no code changes, and the app was up and running quickly.

While migrating the app, I made use of the documented troubleshooting tips. They include obtaining details on the builds and app and accessing the application logs.

Conclusions

Only a few code changes were necessary to migrate the existing Python app to IBM Cloud Code Engine, and they were caused by differences in the service binding. We had to change the way the app is deployed, adding a build step to create a container image before the actual rollout. The runtime is configured slightly differently than in Cloud Foundry, but it offers similar settings for assigned memory, instance scaling and CPU resources. Overall, we did not encounter any bigger issues, and the app runs stably, reliably collecting GitHub traffic data and presenting it for analytics.

Learn more

If you have feedback, suggestions, or questions about this post, please reach out to me on Twitter (@data_henrik) or LinkedIn.
