Is Kubernetes or Docker the better choice (or is it really even a choice at all)?
When it comes to container technologies, two names stand out as open-source leaders: Kubernetes and Docker. Although they are fundamentally different technologies that assist users with container management, they complement one another and can be powerful when combined. Choosing between Kubernetes and Docker, then, isn't a matter of deciding which option is better; they aren't competitors and are often used in tandem. So, to the question of whether Kubernetes or Docker is the better choice, the answer is neither.
The fact that Kubernetes and Docker are complementary container technologies clears up another frequent question: Is Kubernetes replacing Docker?
In short, no. Since Kubernetes isn't a competing technology, this question likely stems from the news in late 2020 that Kubernetes would be deprecating Docker as a container runtime option (a container runtime is the component that communicates with the operating system (OS) kernel throughout the containerization process). However, Kubernetes and Docker are still compatible and provide clear benefits when used together, as we'll explore in greater detail later in this post. First, it's important to start with the foundational technology that ties Kubernetes and Docker together: containers.
What is a container?
A container is an executable unit of software that packages application code with its dependencies, enabling it to run on any IT infrastructure. A container stands alone; it is abstracted away from the host OS — usually Linux — which makes it portable across IT environments.
One way to understand the concept of a container is to compare it to a virtual machine (VM). Both are based on virtualization technologies, but while a container virtualizes an OS, a VM leverages a hypervisor — a lightweight software layer between the VM and a computer’s hardware — to virtualize physical hardware.
With traditional virtualization, each VM contains a full copy of a guest operating system (OS), a virtual copy of the hardware needed to run the OS and an application (with its associated libraries and dependencies). A container, on the other hand, includes only an application and its libraries and dependencies. The absence of a guest OS significantly reduces the size of a container, making it lightweight, fast and portable. Additionally, a container automatically uses the DNS settings of the host.
For a full rundown on the differences between containers and VMs, see “Containers vs. VMs: What’s the difference?“
Engineers can use containers to quickly develop applications that run consistently across a large number of distributed systems and cross-platform environments. The portability of containers eliminates many of the conflicts that come from differences in tools and software between functional teams.
This makes them particularly well-suited for DevOps workflows, easing the way for developers and IT operations to work together across environments. Small and lightweight, containers are also ideal for microservices architectures, in which applications are made up of loosely coupled, smaller services. And containerization is often the first step in modernizing on-premises applications and integrating them with cloud services.
What is Docker?
Docker is an open-source containerization platform. Essentially, it's a toolkit that makes it easier, safer and faster for developers to build, deploy and manage containers. Under the hood, Docker relies on a lower-level container runtime called containerd to actually run containers.
Although it began as an open-source project, Docker today also refers to Docker, Inc., the company that produces the commercial Docker product. Currently, it is the most popular tool for creating containers, whether developers use Windows, Linux or MacOS.
In fact, container technologies were available for decades prior to Docker’s release in 2013. In the early days, Linux Containers (or LXC) were the most prevalent of these. Docker was built on LXC, but Docker’s customized technology quickly overtook LXC to become the most popular containerization platform.
Among Docker's key attributes is its portability: Docker containers can run across any desktop, data center or cloud environment. And because each container is typically designed to run a single process, an application can keep running while one part of it is being updated or repaired.
Some of the tools and terminology commonly used with Docker include the following:
Docker Engine: The runtime environment that allows developers to build and run containers.
Dockerfile: A simple text file that defines everything needed to build a Docker container image, such as the base OS, network specifications and file locations. It's essentially a list of commands that Docker Engine runs to assemble the image.
Docker Compose: A tool for defining and running multi-container applications. Developers describe the services that make up the application in a YAML file, and Compose can then deploy and run all of those containers with a single command via the Docker CLI.
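To make these pieces concrete, here is a minimal sketch of a Dockerfile and a matching Compose file for a hypothetical Python web service (the image names, ports and file names are illustrative, not from any specific project):

```dockerfile
# Dockerfile: the recipe Docker Engine follows to assemble an image
FROM python:3.11-slim            # base image (OS plus Python runtime)
WORKDIR /app                     # file location inside the image
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
EXPOSE 8000                      # network specification: port the app listens on
CMD ["python", "app.py"]         # process to run when a container starts
```

```yaml
# docker-compose.yml: defines a hypothetical two-service application
services:
  web:
    build: .            # build the image from the Dockerfile above
    ports:
      - "8000:8000"
  cache:
    image: redis:7      # pull a prebuilt image from a registry
```

With these two files in place, a single `docker compose up` builds the web image, pulls the Redis image and starts both containers together.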
Now let's revisit why Kubernetes stopped supporting Docker as a container runtime. As noted above, Docker is a full containerization platform rather than a container runtime; it sits on top of an underlying runtime to provide users with features and tools via a user interface. To support Docker as a runtime, Kubernetes had to maintain a separate component known as Dockershim, which essentially sat between the two technologies and helped them communicate.
This made sense at a time when there weren't many container runtimes available. Now that there are (CRI-O is one example), Kubernetes can offer users plenty of container runtime options, many of which use the standard Container Runtime Interface (CRI), a way for Kubernetes and the container runtime to communicate reliably without a middle layer acting as the go-between.
However, even though Kubernetes no longer provides special support for Docker as a runtime, it can still run and manage container images that conform to the Open Container Initiative (OCI) image specification, a standard that Docker images follow. That means you can keep using Dockerfiles and building Docker images for Kubernetes. In other words, Docker still has a lot to offer in the Kubernetes ecosystem.
What are the advantages of Docker?
The Docker containerization platform delivers all of the previously mentioned benefits of containers, including the following:
Lightweight portability: Containerized applications can move from one environment to another (wherever Docker is running), and they will operate regardless of the underlying OS.
Agile application development: Containerization makes it easier to adopt CI/CD processes and take advantage of agile methodologies, such as DevOps. For example, containerized apps can be tested in one environment and deployed to another in response to fast-changing business demands.
Scalability: Docker containers can be created quickly and multiple containers can be managed efficiently and simultaneously.
Other Docker features include the ability to automatically track and roll back container images, use existing containers as base images for building new ones and build containers from application source code. Docker is backed by a vibrant developer community that shares thousands of containers across the internet via Docker Hub.
But while Docker does well with smaller applications, large enterprise applications can involve a huge number of containers — sometimes hundreds or even thousands — which becomes overwhelming for IT teams tasked with managing them. That’s where container orchestration comes in. Docker has its own orchestration tool, Docker Swarm, but by far the most popular and robust option is Kubernetes.
See “Docker Swarm vs. Kubernetes: A Comparison“ for a closer look at the Kubernetes vs. Docker Swarm debate.
Docker has several commands used in the creation and running of containers:
docker build: This command builds a new Docker image from the source code (i.e., from a Dockerfile and the necessary files).
docker create: This command creates a new container from an image without starting it, which involves adding a writeable container layer over the image and preparing it to run.
docker run: This command works like the docker create command, except it takes the added step of starting the container after creating it.
docker exec: This command is used to execute a new command inside a container that is already running.
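A typical sequence using these commands might look like the following sketch (the image and container names are illustrative, and the commands assume a local Docker installation with a Dockerfile in the current directory):

```shell
docker build -t myapp:1.0 .                 # build an image from the Dockerfile
docker create --name myapp-ctr myapp:1.0    # create a container without starting it
docker run -d --name myapp-run myapp:1.0    # create and start a container in one step
docker exec myapp-run ls /app               # run an extra command inside the running container
```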
What is Kubernetes?
Kubernetes is an open-source container orchestration platform for scheduling and automating the deployment, management and scaling of containerized applications. Containers run in a multi-container architecture called a "cluster." A Kubernetes cluster includes a set of machines designated as the control plane, which schedules workloads onto the rest of the machines in the cluster, known as worker nodes.
The control plane determines where to host applications (or containers), decides how to put them together and manages their orchestration. By grouping the containers that make up an application into clusters, Kubernetes facilitates service discovery and enables management of high volumes of containers throughout their lifecycles.
Google introduced Kubernetes as an open-source project in 2014. It is now managed by an open-source software foundation called the Cloud Native Computing Foundation (CNCF). Designed for container orchestration in production environments, Kubernetes is popular due in part to its robust functionality, an active open-source community with thousands of contributors, and support and portability across leading public cloud providers (e.g., IBM Cloud, Google, Azure and AWS).
What are the advantages of Kubernetes?
Automated deployment: Kubernetes schedules and automates container deployment across multiple compute nodes, which can be VMs or bare-metal servers.
Service discovery and load balancing: It exposes a container on the internet and employs load balancing when traffic spikes occur to maintain stability.
Auto-scaling features: Automatically starts up new containers to handle heavy loads, whether based on CPU usage, memory thresholds or custom metrics.
Self-healing capabilities: Kubernetes restarts, replaces or reschedules containers when they fail or when nodes die, and it kills containers that don’t respond to user-defined health checks.
Automated rollouts and rollbacks: It rolls out application changes and monitors application health for any issues, rolling back changes if something goes wrong.
Storage orchestration: Automatically mounts a persistent local or cloud storage system of choice as needed, reducing latency and improving the user experience.
Dynamic volume provisioning: Allows cluster administrators to create storage volumes without having to manually make calls to their storage providers or create objects.
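Several of these capabilities are configured declaratively. The sketch below shows a hypothetical Kubernetes Deployment manifest (the names, image and probe settings are illustrative) touching on automated deployment, rolling updates and self-healing:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3                  # automated deployment: keep 3 pods scheduled across nodes
  strategy:
    type: RollingUpdate        # automated rollouts: replace pods gradually
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myapp:1.0       # an OCI image, e.g., built with Docker
        livenessProbe:         # self-healing: restart containers that fail this check
          httpGet:
            path: /healthz
            port: 8000
```

Applying this manifest with `kubectl apply -f deployment.yaml` asks Kubernetes to keep three healthy replicas running, restarting or rescheduling containers as needed.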
For more information, see our video “Kubernetes Explained”:
Kubernetes and Docker: Finding your best container solution
Although Kubernetes and Docker are distinct technologies, they are highly complementary and make a powerful combination. Docker provides the containerization piece, enabling developers to easily package applications into small, isolated containers via the command line. Developers can then run those applications across their IT environment, without having to worry about compatibility issues. If an application runs on a single node during testing, it will run anywhere.
When demand surges, Kubernetes provides orchestration of Docker containers, scheduling and automatically deploying them across IT environments to ensure high availability. In addition to running containers, Kubernetes provides the benefits of load balancing, self-healing and automated rollouts and rollbacks. Plus, it has a graphical user interface for ease of use.
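In practice, that orchestration is driven through the kubectl CLI. The commands below are an illustrative sketch (the deployment name is hypothetical, and the commands assume access to a running cluster):

```shell
kubectl apply -f deployment.yaml                # deploy the containerized application
kubectl scale deployment myapp --replicas=10    # scale out when demand surges
kubectl rollout status deployment myapp         # watch an automated rollout
kubectl rollout undo deployment myapp           # roll back if something goes wrong
```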
For companies that anticipate scaling their infrastructure in the future, it might make sense to use Kubernetes from the very start. And for those already using Docker, Kubernetes makes use of existing containers and workloads while taking on the complex issues involved in moving to scale. For more information, watch “Kubernetes vs. Docker: It’s Not an Either/Or Question”:
Integration to better automate and manage applications
Recent versions of Docker Desktop include built-in integration with Kubernetes. This feature enables development teams to more effectively automate and manage all the containerized applications that Docker helped them build.
In the end, it’s a question of what combination of tools your team needs to accomplish its business goals. Check out how to get started with these Kubernetes tutorials and explore the IBM Cloud Kubernetes Service to learn more.
Earn a badge through free browser-based Kubernetes tutorials with IBM CloudLabs.