
IBM Containers and Bluemix Services – simplifying distributed Docker applications at runtime


As of May 23rd, the IBM Bluemix Container Service provides a native Kubernetes operations experience while removing the burden of maintaining master nodes. Kubernetes itself uses the Docker engine to manage software images and instantiate containers. Get the details.

With the recent announcements for IBM Bluemix and IBM Containers at DockerCon, it’s important to understand what those announcements bring to developers and teams, both large and small. The ability to use any of the 120+ Bluemix services inside a Docker container running on the IBM Container service is truly empowering.

Instead of hard-coding credentials or manually passing them in via --env parameters on the command line (which IBM Containers supports), developers can dynamically and programmatically bind service instances to container instances. This is possible because Bluemix brings together the power of Cloud Foundry and Docker containers in a single platform.

How to bind your Bluemix services to IBM Containers at deployment time

In a recent workshop, I had a scenario running a MobileFirst Platform Foundation server in a Docker container on Bluemix. The server used a Cloudant NoSQL database instance on Bluemix, leveraging its data caching feature for our users’ mobile experience. As we deployed additional instances of our scenario for testing, staging, production, and so on, we needed different combinations of Docker container instances and Cloudant service instances. We could programmatically connect the distinct instances in a pipeline, but that would mean querying Bluemix multiple times and storing credentials somewhere in the pipeline to do so. That is obviously something we want to avoid if possible.

[Figure: Mobile scenario solution architecture]

However, with Bluemix we can bind our container instances to a Bluemix application and automatically expose all of that application’s service credentials inside the container. This dynamic binding is done by injecting the VCAP_SERVICES variable into the container instance as an environment variable when a new container instance is started with docker run --bind [APPLICATION_NAME] .... The VCAP_SERVICES variable contains the credentials for every service bound to the given Bluemix application, allowing a developer to programmatically iterate over, query, and access them in a secure way with minimal touchpoints.

Most of our runtimes, in both Cloud Foundry and Containers, have access to the system’s environment variables at runtime. For example, Node.js applications can read process.env.VCAP_SERVICES to get a string representation of the variable. After converting that string to a JSON object, you can query the arrays of services bound to the application. Similarly, PHP and Java applications have access to the environment variables of the host process at runtime and can use them in similar fashion.
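To make this concrete, here is a minimal Python sketch of the same idea, since Python is what we use later in this post. The cloudantNoSQLDB service label and the credential field names are assumptions based on a typical Bluemix Cloudant binding; adjust them to match whatever services are bound to your own application.

<code># Minimal sketch: read and parse VCAP_SERVICES inside a running container.
# VCAP_SERVICES looks roughly like: {"cloudantNoSQLDB": [{"name": "...",
# "credentials": {"host": "...", "username": "...", ...}}], ...}
import json
import os

raw = os.environ.get("VCAP_SERVICES")
if raw is None:
    raise RuntimeError("VCAP_SERVICES is not set; was the container started with --bind?")

services = json.loads(raw)

# Each key is a service type; each value is a list of bound instances.
for service_type, instances in services.items():
    for instance in instances:
        print(service_type, instance.get("name"))

# Credentials of the first bound Cloudant instance (the label is an assumption).
cloudant = services["cloudantNoSQLDB"][0]["credentials"]
print(cloudant["host"], cloudant["username"])</code>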

Understanding how Docker containers are built and run

The ability to dynamically bind application components and service instances makes it easy to offload that interaction from your build process and instead link your application components dynamically at runtime. This section covers the ins and outs of Docker containers and the impact of docker build on runtime bindings.

But this amazingly helpful capability doesn’t come without limitations. We have to understand how Docker containers are built and run to see where the gaps are in solving the problem described above. If you’re already familiar with Docker in this context, feel free to jump directly to the next section to see our implemented solution.

Docker containers are built from a Dockerfile

Docker container instances are simply copies stamped out at runtime from a built template image. So none of our service instance credentials are available while we are building our Docker image via the docker build command. This is why we need to defer querying for the required service credentials until they actually exist at runtime.

Docker containers are built on an inheritance model

Docker allows users to consume parent Docker images in a Dockerfile and build extensible child images on top of them. Think of it as object-oriented inheritance for infrastructure. This lets us add files and configuration to a child image, for instance taking a base web application server image and putting our own application inside the WEBAPP directory.

Docker containers only run a single process

A Docker container is built to run only one process over the course of its life. There are ways to manage multiple processes with supervisord, but we can’t depend on every image we build from using supervisord.

The single-process restriction matters because we can’t generically add commands to run before the parent image’s ENTRYPOINT or CMD in the base Dockerfile. It is important to understand the differences between the Dockerfile commands that execute during docker build and those that execute during docker run. There are three important commands to touch on here:

  1. RUN – executes the associated command at build time and stores its result as a layer in the Docker image being built
  2. ENTRYPOINT – configures the container to run as an executable
  3. CMD – provides defaults for an executing container, such as parameters passed to the ENTRYPOINT

Each of these commands affects the built Docker image differently. There are a number of intricacies in how these commands and their ordering in the Dockerfile determine what executes at runtime.

Solving the “not-yet-defined service credentials” problem at runtime through a new Dockerfile and Python

Now that we have an understanding of our infrastructure and what we are using, we can find a way to take advantage of this powerful Bluemix capability without reinventing the wheel. Let’s begin by restating our problem:

  1. We need to bind a Cloudant service instance to an IBM Containers container instance.
  2. The Cloudant service credentials aren’t known until runtime.
  3. The base image, the WebSphere Liberty server image, does not start via the supervisord method; it invokes a script directly to start the server.
  4. We cannot use Java’s System.getenv method to retrieve the value of VCAP_SERVICES, because the credentials need to be placed in a configuration file before the WebSphere Liberty Profile (WLP) server starts.
  5. All of this work needs to happen before the server is started, and starting the server is the base ENTRYPOINT of the parent image.

To summarize, we needed to retrieve the value of VCAP_SERVICES, update the configuration XML files, and then start the WLP server. Just getting to that problem statement requires quite a bit of background information.

Good news: the answer is actually a lot simpler than we expected. The Dockerfile for the base WLP server is parameterized so that additional CMD parameters can be passed to the base ENTRYPOINT command.

So to solve our problems, I wrote a script that performs the same server start commands as the base Dockerfile, but only after running a Python script that queries VCAP_SERVICES and updates the necessary WebSphere Liberty server.xml configuration file. The configuration file before the script runs, containing only default placeholder values, is shown below.

<code>root@d7e746159d37:/# cat /opt/ibm/wlp/usr/servers/defaultServer/server.xml | grep Cloudant
    <jndiEntry jndiName="datastore/CloudantProxyDbAccount" value='${env.datastore_CloudantProxyDBAccount}'/>
    <jndiEntry jndiName="datastore/CloudantProtocol" value='${env.datastore_CloudantProtocol}'/>
    <jndiEntry jndiName="datastore/CloudantPort" value='${env.datastore_CloudantPort}'/>
    <jndiEntry jndiName="datastore/CloudantProxyDbAccountUser" value='${env.datastore_CloudantProxyDbAccountUser}'/>
    <jndiEntry jndiName="datastore/CloudantProxyDbAccountPassword" value='${env.datastore_CloudantProxyDbAccountPassword}'/>
root@d7e746159d37:/#</code>

The three files necessary for this delayed configuration are described below and come from a GitHub Gist available here:

  1. Our custom Dockerfile
  2. A new server start script, init-liberty.sh
  3. A new Python script to update the server XML files, update-server-xml.py

These are used in concert, and similar patterns can be applied to other scenarios where custom configuration of the environment is required before starting the original base Dockerfile’s process.

Dockerfile

The core of our solution is in lines 22–25 of the Dockerfile. We copy over our additional scripts and then tell the Docker image to run our init-liberty.sh script as the single process we want to run (or, thinking in terms of inheritance, we pass the init-liberty.sh script into the base container image as the parameter it will execute directly).

init-liberty.sh

This script is pretty straightforward: it runs the Python script below and then starts the WebSphere Liberty server via the same command the parent Docker image used.

update-server-xml.py

This Python code is similar to what our application code would do in a regular application; however, we’re doing it in a container instance before handing off to the server process the user will eventually interact with. Line 19 of the script is the key line: it retrieves the value of VCAP_SERVICES from the environment, and then we cautiously parse the values we expect to find for Cloudant.

Lines 22–27 pull our expected Cloudant connection information out of the parsed VCAP_SERVICES credentials via ordinary JSON (dictionary) accessors. Lines 29–30 load the server.xml that our Dockerfile cloned from a public GitHub project into an XML object. Lines 33–47 then walk the server.xml document and update the value attribute of each desired jndiEntry element.

Finally, the last line in the Python script dumps the XML object out to a file that the server will actually use when it starts up.
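Since the Gist itself isn’t reproduced in this post, here is a minimal sketch of what such a script can look like. It is not the exact Gist contents: the server.xml path, the cloudantNoSQLDB label, and the mapping from credential fields to jndiEntry names are assumptions you would adapt to your own binding.

<code># Hypothetical sketch in the spirit of update-server-xml.py (not the exact Gist):
# read VCAP_SERVICES, pull the Cloudant credentials, and rewrite the value
# attributes of the matching jndiEntry elements in server.xml.
import json
import os
import xml.etree.ElementTree as ET

SERVER_XML = "/opt/ibm/wlp/usr/servers/defaultServer/server.xml"  # assumed path

vcap = json.loads(os.environ["VCAP_SERVICES"])
creds = vcap["cloudantNoSQLDB"][0]["credentials"]  # assumed service label

# Map each jndiName in server.xml to the credential value it should carry.
new_values = {
    "datastore/CloudantProxyDbAccount": creds["host"],
    "datastore/CloudantProtocol": "https",
    "datastore/CloudantPort": str(creds.get("port", 443)),
    "datastore/CloudantProxyDbAccountUser": creds["username"],
    "datastore/CloudantProxyDbAccountPassword": creds["password"],
}

tree = ET.parse(SERVER_XML)
for entry in tree.getroot().iter("jndiEntry"):
    jndi_name = entry.get("jndiName")
    if jndi_name in new_values:
        entry.set("value", new_values[jndi_name])

# Write the updated configuration back out for the Liberty server to read at startup.
tree.write(SERVER_XML)</code>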

The updated WebSphere Liberty server.xml configuration file is shown below. This time the Python script has run and replaced the placeholders with our runtime credentials for Cloudant.

<code>root@1fff1c6186f8:/# cat /opt/ibm/wlp/usr/servers/defaultServer/server.xml | grep Cloudant
    <jndiEntry jndiName="datastore/CloudantProxyDbAccount" value="08809bfe-YYYY-ZZZZ-AAAA-44e3820cdef0-bluemix.cloudant.com"/>
    <jndiEntry jndiName="datastore/CloudantProtocol" value="https"/>
    <jndiEntry jndiName="datastore/CloudantPort" value="443"/>
    <jndiEntry jndiName="datastore/CloudantProxyDbAccountUser" value="08809bfe-YYYY-ZZZZ-AAAA-44e3820cdef0-bluemix"/>
    <jndiEntry jndiName="datastore/CloudantProxyDbAccountPassword" value="a9ad26fd391506efe28e24df92024580623e1b39189e2077130cbdYYYYYYYYYY"/>
root@1fff1c6186f8:/#</code>

Alternate method

As an aside, Liberty servers can use the contents of environment variables directly in their configuration XML files. However, because the VCAP_SERVICES environment variable is a single string-formatted JSON object, we can’t directly or selectively index service instances inside its JSON arrays from the configuration file.

Our Python script could instead have extracted the necessary VCAP_SERVICES credential information and set environment variables for the configuration file to read directly. At this point, it was a coin toss as to which was the better implementation between our approach above and this alternate one. Feel free to try it on your own and see whether you prefer one way or the other.
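As a rough illustration only (this is not code from the article’s Gist), the Python step in that alternate approach could emit export statements for the init script to evaluate before launching Liberty, so that the ${env.*} placeholders shown earlier resolve on their own. The service label, credential field names, and script file name are again assumptions.

<code># Rough illustration of the alternate approach: instead of rewriting server.xml,
# print export statements that the init script could eval before starting Liberty,
# letting the ${env.*} substitutions in the configuration resolve at startup.
# For example: eval "$(python set-cloudant-env.py)"  (hypothetical file name)
import json
import os

creds = json.loads(os.environ["VCAP_SERVICES"])["cloudantNoSQLDB"][0]["credentials"]

exports = {
    "datastore_CloudantProxyDBAccount": creds["host"],
    "datastore_CloudantProtocol": "https",
    "datastore_CloudantPort": str(creds.get("port", 443)),
    "datastore_CloudantProxyDbAccountUser": creds["username"],
    "datastore_CloudantProxyDbAccountPassword": creds["password"],
}

for name, value in exports.items():
    print('export {}="{}"'.format(name, value))</code>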

Wrapping up

As you can see, we want to be as hands-off as possible when standing up our infrastructure in the cloud; the less code needed to do this linkage, the better. Being able to bind Bluemix services to IBM Container instances is a very powerful capability that solves many problems. However, it isn’t a cure-all for every application need. There will be times when you need to think outside the box and get creative about how you dynamically couple your applications and the services they need, even when the information is “right there” in an environment variable.

In the future, some of this service binding capability will probably be more closely integrated into the IBM Container service, with more robust support in both Dockerfiles and the IBM Container service CLI. But for now, you can quickly and easily combine your application code and Bluemix services, regardless of whether you are running Cloud Foundry or IBM Container applications!
