IBM supports many open source projects. One of the newest is Docker, a project for container-based virtualization that allows developers to encapsulate applications and their dependencies and deploy them on Linux hosts.
People are using Docker for its packaging technology and deployment model. With KVM (Kernel-based Virtual Machine), you get a full virtual machine (VM); with Linux containers, you share the host's Linux kernel with everything else on the machine, but you don't see the other users of the kernel unless you want to.
Currently, Docker, which provides an alternative to virtual machines, is used as a system administration tool; eventually it is expected to be adopted by cloud providers that want to offer Linux containers as a service. Containers give you bare-metal performance and lower overhead than virtual machines, although compared with KVM the difference is small.
The key details about Docker are:

- It provides an automated deployment model for applications. You can package an application in a single format and then deploy it automatically, all at once. In addition, the deployment technology is distributed, so you can deploy applications to remote machines.
- The packaging technology is smart: you can take a single package and run it on multiple Linux distributions, whether Red Hat Enterprise Linux (RHEL) or SUSE Linux Enterprise Server (SLES). It is not necessary to package the application separately for RHEL or SLES, which means that Docker solves a big problem for application providers and also makes it easy for administrators to deploy and maintain their applications on Linux hosts.
- An ecosystem has been built around Docker-packaged applications and the packaging technology.
- Because the packaging technology is smart, has an execution language, and is based on source-control principles, you can incrementally add features to or remove them from a packaged application. This means you can modify packages that someone else has created. As a result, many software packages are ready to be deployed by Docker, and they are all public. Docker tools can grab a package from a public repository and deploy it very easily.
- Users can also maintain a private repository of packages.
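The source-control flavor of the packaging comes from the fact that images are built as chains of layers, each identified by its content. The following is a conceptual sketch of that idea, not Docker's actual implementation: each layer's identifier is derived from its parent's identifier plus the instruction that produced it, so two builds that share a common prefix of instructions share those layers.

```python
import hashlib

def layer_id(parent_id: str, instruction: str) -> str:
    # A layer's identity derives from its parent and the step that
    # produced it -- the same idea a source control system uses for commits.
    return hashlib.sha256((parent_id + "\n" + instruction).encode()).hexdigest()[:12]

def build(instructions):
    # Build an image as a chain of layers; return every layer id in order.
    ids, parent = [], ""
    for step in instructions:
        parent = layer_id(parent, step)
        ids.append(parent)
    return ids

base = ["FROM ubuntu", "RUN apt-get install -y python"]
app_a = build(base + ["ADD app_a /srv"])
app_b = build(base + ["ADD app_b /srv"])

# The two images share their first two layers, so the base is stored
# and transferred only once; only the final layer differs.
print(app_a[:2] == app_b[:2])   # True: shared base layers
print(app_a[2] != app_b[2])     # True: diverging application layer
```

This is why modifying someone else's package is cheap: adding or removing a feature only creates or discards the affected layers, while everything beneath them is reused.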
Because every container shares the host's kernel, you cannot run older kernels for older software, and you cannot run other operating systems. With KVM, you can run Windows, or run an old version of RHEL alongside a new version of RHEL, which is useful if you have older software that is no longer maintained for new operating systems, for example.
There are security implications for Linux containers. A hypervisor like KVM provides an additional level of security over containers. With containers, only the Linux kernel provides isolation; if a privilege-escalation attack succeeds or a kernel vulnerability is exploited, there are no further defenses. There are many things you can do to configure the host and the kernel to be safe and isolated, but the one thing you cannot have is a second line of defense. With a hypervisor, you get that additional layer of protection. First there is the operating system's kernel; if an attacker manages to break into the kernel and obtain root access, they are still contained within the virtual machine by a combination of hardware instructions and software, so they have to start over and break out of the virtual machine, which may or may not be more difficult. As a result, some people believe that web services providers, for example, should not use containers for public cloud services and should use virtual machines instead, a view that is widely shared.
The idea with Docker is that developers don't have to worry about application dependencies and can deploy containers to any Linux machine that has Docker.
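As a concrete sketch of declaring dependencies once, an application's requirements can be captured in a Dockerfile; the base image, package, and file names below are hypothetical, but the resulting image runs unchanged on any Linux host with Docker installed:

```dockerfile
# Start from a public base image; the host can run a different distribution.
FROM ubuntu:14.04

# Declare the application's dependencies inside the image,
# so the host needs nothing but Docker itself.
RUN apt-get update && apt-get install -y python

# Copy the application in and define how it starts.
COPY app.py /srv/app.py
CMD ["python", "/srv/app.py"]
```

An image built from a file like this (for example with `docker build`) can then be run the same way on a RHEL, SLES, or Ubuntu host, which is the "package once, deploy anywhere" property described above.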
Docker Open Source Community
IBM is participating in the Docker Foundation Board, and it has approval to submit code into the upstream Docker repositories. Right now, Docker runs only on x86 platforms and can also work as a component of IBM Bluemix. Our intention is to get Docker supported on Power and System z upstream. We think that Docker will be better on Power and System z from a security standpoint. In addition, support for Docker on Power and System z will make it as easy for developers to port their applications to these platforms as it is to port them to x86.
The ecosystem around Docker is vibrant and growing. In the past, the significant features came from employees of the company, which was originally named dotCloud and is now Docker, Inc., while additional contributors worked on bug fixes and trivial patches. That has changed over the last year: the community is becoming broader. This breadth is required when a project such as Docker becomes an essential component of the Linux ecosystem, and it is an indication of the project's increasing maturity.
Michael D. Day
Distinguished Engineer and Chief Virtualization Architect, Open Systems Development, IBM