Test Driving Built-in Monitoring and Logging in IBM Containers


As of May 23rd IBM Bluemix Container Service now provides a native Kubernetes operations experience while removing the burden of maintaining master nodes. Kubernetes itself is based on the Docker engine for managing software images and instantiating containers. Get the details.

We introduced seamless monitoring and logging for containers in Bluemix earlier this year, as highlighted in Monitoring and Logging for IBM Containers. No configuration needed. The key technology behind both of these monitoring and logging services is the “Agentless System Crawler” project described—and open sourced—in developerWorks Open. This post describes a few examples of how you can make use of these built-in monitoring and logging capabilities for containers deployed on Bluemix. We will use the Bluemix London region for this test drive.

Demonstration of built-in monitoring

To demonstrate built-in monitoring, we have created a container that runs a workload with sinusoidal CPU and memory demand patterns. This is the same workload we used in our earlier work on agentless monitoring at Sigmetrics 2014. The Dockerfile that generates this container is shown below:

# cat Dockerfile
FROM ubuntu:14.04
MAINTAINER Cloudsight
RUN apt-get update && apt-get install -y gcc wget make
RUN mkdir /ldwave
ADD ./varyCpuAndMem /ldwave
WORKDIR /ldwave
RUN make
CMD ["/ldwave/varyCpuAndMem", "-t", "3600"]

The container simply builds and runs a single application called varyCpuAndMem with the flag -t 3600, which specifies the runtime in seconds. We then build a container image named ldwave from the Dockerfile:

# docker build -t ldwave .

Now docker images shows the new image locally:

# docker images
REPOSITORY   TAG      IMAGE ID       CREATED        VIRTUAL SIZE
ldwave       latest   5fd5be6f5bda   30 hours ago   284.5 MB

We then tag the image with the Containers London registry info and our registry-namespace (“canoreg” for me):

# docker tag 5fd

We can then log in and push the image to the container registry:

# ice login -H -R -a
# ice push --local

Once the push is complete, we can see our remote image from the CLI:

# ice images
Image Id                               Created               Virt Size   Image Name
5fd5be6f5bda1b31fbfa72f827d23cb06c9d   Nov 5 22:39:03 2015   284524022

The image also appears in the Catalog:

We can create a container from the Bluemix console by clicking on the container image, which takes us to the “create container” page. There, we give the container a name; it is then created and starts running our workload, and we are taken to the container view. After waiting for some time, the “Overview” menu shows the expected sinusoidal patterns for CPU and memory, similar to this:

More details of monitoring can be seen in the “Monitoring and Log” tab:

If we click on the “Advanced View” on the right, it takes us to the Grafana environment for the container metrics:

This covers our example test drive of built-in monitoring for containers. Under the covers, crawlers constantly collect metrics and state information for each container and seamlessly make them available to every user.
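Conceptually, an agentless crawler gathers such metrics from outside the container, for example by reading the host's /proc filesystem rather than running an agent inside the container. The exact files and fields crawled below are a simplifying assumption; the real Agentless System Crawler also inspects per-container cgroup state:

```python
# Conceptual sketch of agentless metric collection: read system metrics
# from /proc instead of an in-container agent. Fields chosen here are an
# illustrative subset, not the crawler's actual schema.
def read_meminfo(path="/proc/meminfo"):
    """Parse /proc/meminfo into a {field: kilobytes} dict."""
    metrics = {}
    with open(path) as f:
        for line in f:
            key, _, rest = line.partition(":")
            metrics[key.strip()] = int(rest.split()[0])  # value in kB
    return metrics

def memory_used_fraction():
    """Fraction of physical memory in use, from MemTotal and MemAvailable."""
    info = read_meminfo()
    return 1.0 - info["MemAvailable"] / info["MemTotal"]
```

Because the reads happen from the host side, the monitored container needs no extra software, which is what makes the experience "hands-free" for the user.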

Demonstration of built-in logging

We introduced a seamless logging framework for containers around DockerCon in June. Similar to monitoring, logging works hands-free, without requiring any user intervention or custom software in user environments. Under the covers, we use what we call “log crawlers” to crawl and expose container logs for each user. For every user, certain logs (such as the Docker log and /var/log/messages) are exported by default. For example, our ldwave container above writes to stdout as it changes the workload’s CPU and memory demand. This output is picked up by Docker’s logging, and our log crawler seamlessly exports it. As a result, the associated stdout logs for our container are already visible in Bluemix:
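From the application's side, nothing special is required for this default export: it just writes to stdout, and Docker plus the log crawler do the rest. The tiny emitter below is an illustrative stand-in for ldwave's log output, not its actual code:

```python
# Illustrative stand-in for an app whose stdout is exported by the log
# crawler: plain lines and JSON lines both go to standard output.
import json
import sys

def emit_plain(message):
    """Write a plain-text log line to stdout."""
    sys.stdout.write(message + "\n")

def emit_json(**fields):
    """Write a JSON-formatted log line; its fields become searchable attributes."""
    sys.stdout.write(json.dumps(fields) + "\n")

emit_plain("cpu demand now 0.50")
emit_json(wowkey="yourvalue", cpu=0.5)
```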

To demonstrate additional logs in action, we will use “ibmliberty” which is one of the standard IBM images in Bluemix Containers.

When creating a container from this image, we bind a public IP and add an ssh key to the container. This enables us to later ssh into the container to trigger log messages.

Once the container is started and networked, we can ssh in to trigger logs. First, we simply emit a plain-text log message in the container:

instance-0003909e:~# echo "hello" > /var/log/messages

Similar to the ldwave container, the log message shows up in the logging view:

Again, if we click on the “Advanced View” on the right, it takes us to the Kibana environment for the container logs, but first let us add one more log:

instance-0003909e:~# echo '{"wowkey":"yourvalue"}' >> /var/log/messages

Above is a JSON-formatted log line, which can be interpreted as {"key":"value"} pairs. As one of our post-DockerCon enhancements, we started adding global filters to our log crawlers. The net result is that if your app emits JSON-formatted logs, they will be properly parsed and indexed in Elasticsearch, where they can then be queried.
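The behavior of such a filter can be sketched in a few lines: attempt to parse each line as JSON so its fields become individual attributes, and otherwise index the whole line as a plain message. This is an assumed simplification of what the log-crawler pipeline does before Elasticsearch, not its actual implementation:

```python
# Minimal sketch of a JSON log filter: JSON object lines become attribute
# dicts; everything else is indexed as a plain "message". The fallback
# field name "message" is an assumption for illustration.
import json

def index_log_line(line):
    """Return the attribute dict that would be indexed for one log line."""
    try:
        parsed = json.loads(line)
        if isinstance(parsed, dict):
            return parsed            # e.g. {"wowkey": "yourvalue"}
    except ValueError:
        pass
    return {"message": line}         # plain-text fallback
```

Applied to the two lines we wrote above, "hello" would be indexed as a plain message, while {"wowkey":"yourvalue"} yields a searchable wowkey attribute.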

Now, launching the advanced view, we see both log lines:

Furthermore, when we look at the log attributes, we also see “wowkey” as an attribute, with value “yourvalue”.

This covers our short example demonstrating seamless logging in Bluemix Containers. We hope you find this “effortless” nature of our solutions valuable as we continue to work on our new features.
