In the last few years, some large-scale online companies, including (but not limited to) Netflix, eBay, and Amazon, changed their application architectures to a microservices architecture. It was not a revolution but an evolution: they evolved from a monolithic architecture to a microservices architecture.
Everybody is talking about containers, in particular Docker. Why have container technology and Docker been attracting so much attention over the last year? Is microservices architecture one of the reasons behind that interest? To me, yes. Microservices architecture needs lightweight mechanisms, small independently deployable services, scalability, and portability. Those requirements can be met by using containers.
Before talking about containers and Docker, let's first see what microservices architecture is:
“Microservices architectural style is an approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms, often an HTTP resource API. These services are built around business capabilities and independently deployable by fully automated deployment machinery. There is a bare minimum of centralized management of these services, which may be written in different programming languages and use different data storage technologies.” (Martin Fowler and James Lewis, “Microservices”, March 2014)
We can look at some characteristics and requirements of microservices and then we can see if containers (Docker in particular) can meet those requirements:
each microservice is relatively small
each service can be deployed independently of other services
easier to scale development
improved fault isolation
each service can be developed and deployed independently
eliminates any long-term commitment to a technology stack
Containers are usually described as lightweight runtime environments that provide many of the core features of a virtual machine together with operating-system-level isolation, designed to make packaging easy and to run these microservices smoothly. Containers are not a new technology; they have been around in the Linux world for a long time. (Windows Server Containers will be available soon.)
Docker is an open-source project which aims to automate the deployment of applications inside portable containers that are independent of hardware, host operating system, and language. In contrast with virtual machines, Docker containers do not include a guest operating system; they share the operating system kernel with other containers. Docker uses resource isolation features of the Linux kernel such as cgroups and kernel namespaces (*) to allow independent "containers" to run within a single Linux instance, avoiding the overhead of starting virtual machines.
What Docker provides beyond other Linux container technologies is the ability to package an application and all of its dependencies in a virtual container that can run on any Linux server on which Docker runs.
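As a minimal sketch of that packaging model (the base image, file names, and port below are hypothetical), a Dockerfile declares the application together with its dependencies, so the resulting image carries everything the service needs:

```dockerfile
# Hypothetical Dockerfile for a small Python-based microservice.
# Base image, file names, and port are illustrative assumptions.
FROM python:3-slim

WORKDIR /app

# Install the service's dependencies inside the image,
# so the container carries them wherever it runs.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Add the application code itself.
COPY app.py .

# The service listens on this port inside the container.
EXPOSE 8080

# Start the service when the container runs.
CMD ["python", "app.py"]
```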
Why should you consider Docker as a solution? Here are 5 reasons:
Best fit for microservices architecture: Containers support the microservices architecture. These services are built around business capabilities and are independently deployable by fully automated deployment machinery. Each microservice can be deployed without interrupting the other microservices, and containers provide an ideal environment for service deployment in terms of speed, isolation management, and lifecycle. It is easy to deploy new versions of services inside containers.
Application Portability: Docker puts an application and all of its dependencies into a container which is portable across different platforms, Linux distributions, and clouds. You can build, move, and run distributed applications with containers. By automating deployment inside containers, developers and system administrators can run the same application on laptops, virtual machines, bare-metal servers, and the cloud (a command-line sketch follows this list).
Resource Utilization: Containers comprise just the application and its dependencies, neither more nor less. Each container runs as an isolated process in userspace on the host operating system, sharing the kernel with other containers. Thus, it enjoys the resource isolation and allocation benefits of virtual machines but is much more portable and efficient. Containers can run not only inside VMs but also directly on physical servers, and due to their lightweight nature you can run more containers on a physical server than virtual machines. The result is higher resource utilization.
Beyond virtualization: containers: Whenever you need tight resource control and isolation for your application environment, you use virtual machines. But what if your application environment does not require the hardware overhead of full virtualization? Containers can provide user environments whose resource usage can be strictly controlled, with or without virtualization underneath (the sketch after this list also shows run-time resource limits).
Enterprise Partnerships: Container technology is the new emerging technology in the IT industry after the server virtualization revolution, and Docker is leading this trend with new partnerships. Industry-leading companies, including IBM, Google, VMware, Red Hat, and Microsoft, have signed partnership agreements with Docker. Those agreements show that Docker has huge potential in the era of cloud and virtualization.
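To make the portability and resource-control points above concrete, here is a rough sketch using the Docker CLI (the image name, registry, ports, and limits are hypothetical): the same image is built once, pushed to a registry, and run unchanged on a laptop, a VM, a bare-metal server, or a cloud instance, with resource caps applied at run time via cgroups.

```sh
# Build the image once from the Dockerfile sketched earlier (image name is hypothetical).
docker build -t myorg/orders-service:1.0 .

# Push it to a registry so any Docker host can pull the same artifact.
docker push myorg/orders-service:1.0

# Run it unchanged on any host that runs Docker.
docker run -d -p 8080:8080 myorg/orders-service:1.0

# Resource control via cgroups: cap memory and CPU shares for a container.
docker run -d -p 8081:8080 --memory=256m --cpu-shares=512 myorg/orders-service:1.0
```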
* Containers are derived from the idea of OS resource isolation at all levels. That is possible with a Linux kernel that supports all of the resource-isolation use cases without the overhead and complexity of running multiple kernel instances at the same time. The Linux kernel provides resource isolation through the implementation of different types of namespaces, such as network namespaces, process namespaces, user namespaces, and mount namespaces. The purpose of each namespace is to wrap a particular host operating system resource in an abstraction that makes it appear to the processes within the namespace that they have their own isolated instance of that resource. (More info on this can be found in Michael Kerrisk's great article on namespaces.)
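As a quick illustration (assuming a Linux host with the util-linux tools installed), you can enter new PID and mount namespaces directly from the shell with `unshare` and observe that the isolated process tree starts over at PID 1:

```sh
# Create new PID and mount namespaces and start a shell inside them
# (requires root or appropriate privileges).
sudo unshare --pid --mount --fork --mount-proc /bin/bash

# Inside the new namespaces, the shell sees only its own processes,
# starting at PID 1:
ps aux

# In another terminal on the host, list the namespaces currently in use:
lsns
```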