Compute Services

One microservice, three compute options: Cloud Foundry, containers & OpenWhisk – the choice is yours


Cloud platforms support an array of programming models and deployment options. Having that choice is great. In this blog we will explore three compute options for a single microservice: Cloud Foundry, Kubernetes containers, and OpenWhisk.

Which compute option should I pick? We often get this question from our clients, and there is no one-size-fits-all answer. When thinking about your deployment options, you must take into account the use case, project requirements, project size, team skills, and budget. To meet this range of needs, IBM Bluemix offers several compute options – Cloud Foundry, containers, and OpenWhisk – all based on open technologies.

Need more information on the compute options?

Phil Estes recently published a comparison of the different deployment models. He presented this talk at InterConnect, and you can find the slides here. He goes through the options and even provides a sample application that makes use of Cloud Foundry, Docker, and OpenWhisk. I encourage you to skim through the slide deck as a preamble.

The microservice

As a developer, I like to get first-hand experience with the technologies I need to use. Therefore, to better understand the differences between the deployment models, let's consider one simple microservice.

The service computes Fibonacci numbers. According to Wikipedia, in mathematics the Fibonacci numbers are the numbers in the following integer sequence, called the Fibonacci sequence, characterized by the fact that every number after the first two is the sum of the two preceding ones:

0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, ...

We define three API methods for this service:

  • given a number of iterations n, return the Fibonacci number at position n in the sequence;
  • given a duration t in milliseconds, compute Fibonacci numbers for that duration and return the last value computed together with the iteration reached;
  • and a special endpoint that crashes the microservice. Not really useful in itself, but it will let us highlight how each compute option behaves in case of failure (example calls are sketched below).
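To make the API concrete, here is a hypothetical example of calling the three endpoints once the service is deployed. The hostname, endpoint paths, query parameters, and response shapes shown here are assumptions for illustration; the README documents the actual API.

// Hypothetical example calls (hostname, paths and response shapes are
// assumptions for illustration; see the README for the actual API).
const http = require('http');

const host = 'fibonacci-service.example.com'; // placeholder hostname

function call(path) {
  http.get({ host: host, path: path }, (res) => {
    let body = '';
    res.on('data', (chunk) => { body += chunk; });
    res.on('end', () => console.log(path, '->', body));
  });
}

// Fibonacci number at position 10 -> something like { "value": "55" }
call('/fibonacci?iteration=10');

// Compute as many iterations as possible within 5 seconds
call('/fibonacci?duration=5000');

// Crash the instance backing the service
call('/fibonacci/crash');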

The microservice source code together with all the steps to deploy it across all compute options is available on GitHub. In this post I’ll give an overview of the deployment steps. I’ve kept the nitty-gritty for the README.

Whether the service is deployed as a Cloud Foundry application, as a container in Kubernetes, or as an action in OpenWhisk, the same code is used to compute the Fibonacci number.

Computing Fibonacci numbers is simple, and there are many example implementations online. In our case, we use Node.js and a big-number library so we can compute longer sequences than the language's primitive number type allows. The source code for the computation is here.
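To give a feel for it, here is a minimal sketch of such a computation. I am assuming the bignumber.js package and hypothetical function names here; the real implementation lives in lib/fibonacci.js in the GitHub project.

// A minimal sketch of the sequence computation (bignumber.js and the
// function names are assumptions; see lib/fibonacci.js for the real code).
const BigNumber = require('bignumber.js');

// Return the Fibonacci number at position n (0-indexed) as a string.
function compute(n) {
  let previous = new BigNumber(0);
  let current = new BigNumber(1);
  if (n === 0) {
    return previous.toString();
  }
  for (let i = 1; i < n; i++) {
    const next = previous.plus(current);
    previous = current;
    current = next;
  }
  return current.toString();
}

// Keep computing for the given duration in milliseconds and return the
// last value reached together with the number of iterations performed.
function computeFor(durationMs) {
  const deadline = Date.now() + durationMs;
  let previous = new BigNumber(0);
  let current = new BigNumber(1);
  let iteration = 1;
  while (Date.now() < deadline) {
    const next = previous.plus(current);
    previous = current;
    current = next;
    iteration++;
  }
  return { iteration: iteration, value: current.toString() };
}

module.exports = { compute: compute, computeFor: computeFor };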

Learn more about the Fibonacci service project

Deployment option 1: Fibonacci as a Cloud Foundry application

To expose our Fibonacci service as a Cloud Foundry app, we need a runtime and an HTTP server to listen for requests. Since we picked Node.js, we choose:

  • the Node.js buildpack
  • and the Express web application framework.

The source code for the Cloud Foundry app includes:

  • package.json – it lists the dependencies and the startup script (node app.js).
  • app.js – the main entry point; it starts the Express framework, implements the service API, and delegates to the Fibonacci class (a rough sketch follows this list).
  • lib/fibonacci.js – the implementation of the Fibonacci sequence.
  • manifest.yml – the app descriptor used by Cloud Foundry.
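As a rough idea of what the Cloud Foundry glue looks like, here is a minimal Express sketch exposing the three endpoints. The route names and query parameters are assumptions on my part; the actual code is in app.js in the GitHub project.

// A minimal sketch of app.js (route names and parameters are assumptions;
// see the GitHub repository for the real code).
const express = require('express');
const fibonacci = require('./lib/fibonacci');

const app = express();

// Fibonacci number at position n (e.g. GET /fibonacci?iteration=1000)
// or compute for a duration (e.g. GET /fibonacci?duration=5000).
app.get('/fibonacci', (req, res) => {
  if (req.query.iteration) {
    res.json({ value: fibonacci.compute(parseInt(req.query.iteration, 10)) });
  } else if (req.query.duration) {
    res.json(fibonacci.computeFor(parseInt(req.query.duration, 10)));
  } else {
    res.status(400).json({ error: 'specify iteration or duration' });
  }
});

// The crash endpoint simply kills the process so we can observe how each
// platform recovers.
app.get('/fibonacci/crash', (req, res) => {
  process.exit(1);
});

// Cloud Foundry injects the port to listen on through the PORT variable;
// default to 8080 when running elsewhere (for example in the Docker image).
app.listen(process.env.PORT || 8080);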

To deploy this app, simply use

cf push

from the service directory.

Deployment option 2: Fibonacci as a Docker container in a Kubernetes cluster

To run in Kubernetes, we need to package our Fibonacci code into a Docker image. The good news is that we can reuse the code we wrote for the Cloud Foundry app. Then our Docker image definition (the Dockerfile) is straightforward:

FROM node:6.9.1

COPY lib/ lib/
COPY app.js .
COPY package.json .

RUN npm install
EXPOSE 8080
CMD node app.js

In plain terms, this means:

  • Use the Node.js 6.9.1 image as the base image;
  • Inject the lib folder containing the Fibonacci implementation;
  • Add the Express application;
  • Add the Node.js app description file, listing the dependencies;
  • Install the dependencies;
  • Expose port 8080 outside the container;
  • Start the app.

This was the first step. The next steps involve creating a Kubernetes cluster, building the Docker image, pushing it to a Docker registry and deploying the service inside the cluster.

Detailed instructions to do all of this are available in the GitHub project.

Deployment option 3: Fibonacci as an OpenWhisk action

For OpenWhisk, we have to wrap our Fibonacci implementation with some glue code. This code receives the action parameters, calls the algorithm and returns the result. Then we deploy this action in OpenWhisk. To expose the action as an HTTP endpoint, we use the recently introduced web actions.

Our action is made of several JavaScript files that we package into a ZIP file. The ZIP file is deployed in OpenWhisk.

The source code for the OpenWhisk action includes:

  • handler.js – the interface between OpenWhisk and our Fibonacci algorithm
  • package.json – specifies the entry point for the action
  • deploy.js – a script to package the action files in a ZIP and deploy it in OpenWhisk
  • and of course lib/fibonacci.js – the implementation of the Fibonacci sequence.
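To illustrate what that glue code might look like, here is a minimal sketch of a web action handler. The parameter and field names are assumptions on my part; the real handler.js is in the GitHub project.

// A minimal sketch of the OpenWhisk glue code (parameter and field names
// are assumptions; see handler.js in the GitHub repository for the real code).
const fibonacci = require('./lib/fibonacci');

function main(params) {
  let result;
  if (params.iteration) {
    result = { value: fibonacci.compute(parseInt(params.iteration, 10)) };
  } else if (params.duration) {
    result = fibonacci.computeFor(parseInt(params.duration, 10));
  } else {
    return { statusCode: 400, body: { error: 'specify iteration or duration' } };
  }
  // Web actions can control the HTTP status, headers and body of the response.
  return {
    statusCode: 200,
    headers: { 'Content-Type': 'application/json' },
    body: result
  };
}

exports.main = main;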

And again you can find detailed instructions in the GitHub project.
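If you are curious what a deploy script along those lines could look like, here is a rough sketch. It assumes the archiver npm package, the wsk CLI on the PATH, and the file names listed above; the real deploy.js in the GitHub project may differ.

// A rough sketch of a deploy script (archiver, the wsk CLI and the file
// names are assumptions; the real deploy.js may differ).
const { execSync } = require('child_process');
const fs = require('fs');
const archiver = require('archiver');

// Package the action files into a ZIP archive.
const output = fs.createWriteStream('fibonacci-action.zip');
const archive = archiver('zip');
archive.pipe(output);
archive.file('package.json', { name: 'package.json' });
archive.file('handler.js', { name: 'handler.js' });
archive.file('lib/fibonacci.js', { name: 'lib/fibonacci.js' });
archive.finalize();

// Once the archive is written, deploy it as a Node.js web action.
output.on('close', () => {
  execSync('wsk action update fibonacci fibonacci-action.zip --kind nodejs:6 --web true',
    { stdio: 'inherit' });
});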

Let’s Review

At this point, the same service is deployed to three different compute options. The main differences we see between the three options here are the packaging and the glue code around the Fibonacci algorithm. But for the outside world, we have three Fibonacci services with the same API and capabilities.

Deploying this service to Cloud Foundry, Kubernetes and OpenWhisk is only a starting point. Now you can start experimenting to understand the differences between the three.

Some suggestions that I might turn into future posts:

  • What if I need to change the code of my service? How do I ensure continuity of service while I deploy the update?
  • What if my service fails? Will the platform automatically recover? Will it be transparent to end-users?
  • How do I scale my service?
  • How much does this service cost?

If you want to find out for yourself how the compute options behave when your service fails, you can get a glimpse of it with the Fibonacci service. Call the crash endpoint and look at the platform logs to see how Cloud Foundry, Kubernetes and OpenWhisk handle the failure of the process. A quick hint: with the default configuration, where only one instance backs our service, Cloud Foundry and Kubernetes will take a short while after the crash to automatically start a new instance, while OpenWhisk will simply be ready to process the next action. Adding more instances will keep your service available even when a single instance crashes.

If you have feedback, suggestions, or questions about this post, please reach out to me on Twitter: @L2FProd.

