
Envisioning the Docker technology


Docker’s new containerization technology is gaining momentum, mainly because it brings greater automation, remarkable efficiency, and higher optimization to capable, well-resourced IT environments.

Containers are often loosely described as lightweight virtual machines (VMs): they are highly portable, application-aware, and self-sufficient. Creating and cloning application-loaded containers, and deploying them instantly and cleanly on any server (local or remote, physical or virtual), is therefore greatly streamlined by smart use of the emerging and evolving Docker technology.

Docker containers run virtually anywhere, so the perennial problem of “runs here, but not there” is finally put to rest. The Docker concept is being perceived as a tectonic shift in the IT landscape: it is prescribed as a one-stop solution to the well-documented inflexibilities and deficiencies of hypervisor-provisioned VMs, and proclaimed as a foundational mechanism for the state-of-the-art IT environments needed to establish and sustain the smarter-planet vision.

(Related: How Docker technology adds value to the cloud paradigm)


The principal benefits include the much-sought-after portability of applications, real-time provisioning and deployment of multiple application containers, and support for the recent phenomenon of DevOps. The Docker tool ecosystem is also growing steadily, further accelerating application deployment and resource provisioning.

Docker is an open source project that provides a systematic way to automate the fast deployment of Linux applications inside portable containers, and it facilitates the easy movement of Linux applications within a cloud as well as among geographically distributed clouds. This pioneering technology is also being extended to smoothly move non-Linux applications across clouds. Basically, Docker extends the common container format known as Linux Containers (LXC) with kernel-level features and a high-level API that together run processes in isolation: CPU, memory, I/O, network, and so on. Docker also uses kernel namespaces to completely isolate an application’s view of the underlying operating environment, including process trees, network interfaces, user IDs, and file systems.
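
A minimal sketch of this isolation in action, assuming Docker is installed and an ubuntu base image has been pulled (the image tag is illustrative):

    # Only the container's own process tree is visible inside it,
    # thanks to PID namespace isolation.
    docker run --rm ubuntu ps aux

    # The container also gets its own hostname and network stack.
    docker run --rm ubuntu hostname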

The Docker architecture

There are three major ingredients in the Docker domain: Docker containers, images, and Dockerfiles.

Docker containers are created from base images. A Docker image can be basic, with nothing but the OS fundamentals, or it can consist of a sophisticated, prebuilt application stack ready for launch. When building images with Docker, each action taken (that is, each command executed, such as apt-get install) forms a new layer on top of the previous one. In short, containers are created from Docker images, which can be built either by executing commands manually or automatically through Dockerfiles.
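
As an illustration of the manual route, the following sketch (the image and package names are hypothetical) installs a package interactively and commits the result as a new image:

    # Start an interactive container from a base image.
    docker run -i -t ubuntu /bin/bash

    # Inside the container, each command modifies the filesystem:
    #   apt-get update && apt-get install -y nginx
    #   exit

    # Back on the host, commit the stopped container as a new image.
    docker ps -a                       # note the container ID
    docker commit <container-id> my-nginx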

Each Dockerfile is a script composed of commands (instructions) and arguments listed in succession, which automatically perform actions on a base image in order to create a new one. Dockerfiles are used to organize builds, and they greatly help with deployments by simplifying the process from start to finish. In a nutshell, Docker as a project offers a complete set of higher-level tools to carry everything that forms an application across systems and machines, virtual or physical, and it brings great benefits along the way.
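
A minimal sketch of the automated route, written as a shell session so the Dockerfile contents are visible inline (the package, port, and image names are illustrative):

    # Write a small Dockerfile: each instruction adds an image layer.
    cat > Dockerfile <<'EOF'
    FROM ubuntu
    RUN apt-get update && apt-get install -y nginx
    EXPOSE 80
    CMD ["nginx", "-g", "daemon off;"]
    EOF

    # Build an image from the Dockerfile, then launch a container.
    docker build -t my-nginx .
    docker run -d -p 8080:80 my-nginx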

Docker Containers (DCs) vs. Virtual Machines (VMs)

The picture below illustrates the differences between containers and virtual machines (VMs). The open source Docker engine makes it easy to build, modify, publish, search, and run containers. With Docker, a container comprises an application and all of its dependencies. Docker containers can be created either manually or automatically, when a source code repository contains a Dockerfile. A VM, by contrast, generally contains not only the application (which may be only tens of megabytes) and the binaries and libraries needed to run it, but also an entire OS (which may measure in the tens of gigabytes).

[Figure: Docker technology]

Because all containers share the same operating system (and, where appropriate, binaries and libraries), they are significantly smaller than VMs, making it possible to run hundreds of containers on a single physical host (versus a strictly limited number of VMs). And because containers use the host OS, restarting a container does not mean restarting or rebooting an operating system. With a traditional VM, each application, each copy of an application, and each slight modification of an application requires creating an entirely new VM; with Docker containers, the efficiencies are even greater.

If several copies of the same application need to run on a host, there is no need to copy the shared binaries, and if a change is made, only the differences have to be copied.
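
A brief sketch of that layer sharing in practice, reusing the my-nginx image built above (output details vary by Docker version):

    # All three containers share the image's read-only layers;
    # each one adds only a thin writable layer of its own.
    docker run -d --name web1 my-nginx
    docker run -d --name web2 my-nginx
    docker run -d --name web3 my-nginx

    # Show the layers that make up the image; rebuilding after a
    # small change re-creates only the affected layers.
    docker history my-nginx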
