The key differentiators of Docker technology


Without a doubt, Docker is emerging as a next-generation image building and management solution. As widely known, one of the largest objections to the “golden image” model is image sprawl: large numbers of deployed, complex images in varying states of versioning. A common concern is that such images also tend to be heavy and unwieldy. Compared to traditional image models, Docker is more lightweight: images are layered, and we can quickly iterate on them, which makes management easier.
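As a minimal sketch of that layering (the base image, package, and file names here are illustrative), each instruction in a Dockerfile produces a cached layer, so changing only the application code reuses all the earlier layers:

```dockerfile
# Base layer: shared by every image built on the same base
FROM ubuntu:14.04

# Dependency layer: cached until the package list changes
RUN apt-get update && apt-get install -y python

# Application layer: only this layer is rebuilt when the code changes
COPY app.py /opt/app/app.py

CMD ["python", "/opt/app/app.py"]
```

Because only the changed layers need to be rebuilt and shipped, iterating on an image is far cheaper than regenerating a monolithic golden image.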

To handle the burgeoning complexity, configuration management tools are indispensable. Docker itself still needs to be installed, managed and deployed on a host, and the host itself also needs to be managed. Docker containers, in turn, may need to be orchestrated, managed and deployed, often in conjunction with external services and tools. Docker also encourages different behaviors for hosts, applications and services: each container is short-lived, disposable and focused on providing a single service.
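For example, a configuration management tool still has to get the Docker engine onto each host. A minimal shell sketch of that provisioning step, assuming a Debian/Ubuntu host where Docker is packaged as `docker.io`, might look like:

```shell
# Install the Docker engine from the distribution's package repository
sudo apt-get update
sudo apt-get install -y docker.io

# Make sure the daemon is running
sudo service docker start

# Verify the installation
docker version
```

In practice a tool such as Chef or Puppet would express these same steps declaratively and keep the host converged over time.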

The most commonly cited use case is testing. Docker containers are becoming a feature of fast, agile and disposable test environments that are wired into CI tools. In these use cases, a Docker container is created and configured to run the required tests and then shut down. Here, the short lifespan of the testing host does not lend itself to running a configuration management tool, and indeed running that tool could well add overhead, complexity and time. However, there are several other situations in which Docker and configuration management tools have to work together.
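A disposable test run of this kind can be sketched in a CI job as follows (the image name and test command are assumptions for illustration):

```shell
# Build an image containing the application and its test dependencies
docker build -t myapp-test .

# Run the test suite in a throwaway container; --rm removes the
# container as soon as the tests finish
docker run --rm myapp-test python -m unittest discover

# Nothing to clean up: the container is already gone
```

The container exists only for the duration of the test run, which is exactly why a long-lived configuration management agent adds little value here.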

It is no exaggeration to say that there is great potential for advanced deployment tools that combine containers, configuration management, continuous integration, continuous delivery and service orchestration in the days ahead.

Docker establishes and sustains the portability that virtualization initially envisioned but failed to fully deliver. VMs crafted in development environments typically do not work in testing, staging or production environments without substantial modifications; Docker containers, by contrast, carry their dependencies with them.

The other pressing issues include the real-time provisioning of IT resources, the safeguarding of higher throughput through automation, orchestration and sharing techniques, and the guarantee of greater performance through the elimination of wastage. The official Docker site says that developers can build their application once and then know that it can run consistently anywhere. Operators can similarly configure their servers once and then know that they can run any application.

Finally, Docker, which is primarily application-centric, facilitates portable deployment across machines. Docker includes a tool for developers to automatically assemble a container from their source code, with full control over application dependencies, build tools, packaging and so on. Docker works with tools such as make, Maven, Chef, Puppet, Salt, Debian packages, RPMs and source tarballs, or any combination of these, regardless of the configuration of the machines. A huge number of tools integrate with Docker to extend its capabilities: PaaS-like deployment (Dokku, Deis and Flynn), multi-node orchestration (Maestro, Salt, Mesos and OpenStack Nova), management dashboards (docker-ui, OpenStack Horizon and Shipyard), configuration management (Chef and Puppet) and continuous integration (Jenkins, Strider and Travis) are among the mainstream integrations.
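For instance, assembling a container from source comes down to a couple of commands (the image and container names here are hypothetical):

```shell
# Assemble an image from the source tree's Dockerfile; all
# dependencies are resolved inside the build, not on the host
docker build -t myapp .

# Run the result; the host needs nothing installed except Docker
docker run -d --name myapp myapp
```

Whatever build tooling the Dockerfile invokes internally, the host machine only ever needs the Docker engine itself.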

In a nutshell, Docker not only defines what a container is and what can be done with it, but also provides the tools to manage containers at the host level. This is what we mean by container management: management at the host level. Docker automates the deployment of applications in the form of lightweight, portable, self-sufficient containers that can run in a variety of environments. Docker’s container-based model is highly flexible, and Docker is rapidly establishing itself as the standard for container-based tooling.
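Host-level management here means the everyday container lifecycle commands that ship with the engine, for example (the container name “web” is illustrative):

```shell
# List running containers on this host
docker ps

# Inspect a container's log output
docker logs web

# Stop, restart, and finally remove the container
docker stop web
docker start web
docker rm -f web

# Remove an image that is no longer needed
docker rmi myapp
```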

Docker and IBM Bluemix

As discussed above, Docker is an open-source engine for building and deploying application containers that can run on any server, from the VM on our laptop to the largest SoftLayer compute instance and everything in between. Docker is a great building block for the hassle-free deployment and execution of a cornucopia of web, smartphone, social, Internet of Things (IoT) and cyber-physical systems (CPS) applications, database management systems and more on cloud platforms.

As we know, Cloud Foundry enables businesses to empower developers to deliver great software by removing barriers such as infrastructure configuration, inconsistent deployments and service-wiring chaos. IBM Bluemix, which is based on the Cloud Foundry platform, is a collaborative PaaS that empowers developers worldwide to deliver next-generation generic as well as purpose-specific applications and services quickly and in a risk-free fashion across different business domains.

IBM Bluemix makes it easy to develop, deploy, deliver and manage applications in the IBM Cloud in association with other IBM tools. With the convergence of Docker and Bluemix, any developer, for example, can build and test a container locally and then upload it to the IBM Cloud for deployment, delivery and scalability. Docker’s automated deployment model ensures that the runtime environment for the application is always properly installed and configured, regardless of where it is going to be hosted and used. Thus, the impact of Docker on cloud PaaS is going to be game-changing in the near future.
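As a sketch of that workflow (the registry hostname, namespace and test script below are placeholders, not IBM’s actual endpoints), a locally built and tested image is retagged for a hosted registry and pushed:

```shell
# Build and test the container locally
docker build -t myapp .
docker run --rm myapp ./run-tests.sh

# Retag the image for a hosted registry (hostname is a placeholder)
docker tag myapp registry.example.com/mynamespace/myapp

# Upload it; the cloud platform can now deploy the same image
docker push registry.example.com/mynamespace/myapp
```

The image that ran on the laptop is byte-for-byte the image the cloud platform deploys, which is the portability guarantee the section above describes.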
