Containers are executable units of software in which application code is packaged together with its libraries and dependencies in a standard way, so that the code can run anywhere, whether on a desktop, in traditional IT or in the cloud.
To do this, containers take advantage of a form of operating system (OS) virtualization in which features of the OS kernel (e.g. Linux namespaces and cgroups, Windows silos and job objects) can be leveraged to isolate processes and control the amount of CPU, memory and disk that those processes can access.
Containers are small, fast and portable because, unlike a virtual machine, they do not need to include a guest OS in every instance and can instead simply leverage the features and resources of the host OS.
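Container runtimes expose these kernel controls directly. As a minimal sketch (assuming Docker is installed and using the public alpine image, both illustrative choices), the following command starts a containerized process whose CPU and memory use the host kernel caps through cgroups, while namespaces give it an isolated view of the system:

```sh
# Run a throwaway container limited to 1.5 CPUs and 256 MB of memory;
# the host kernel's cgroups enforce the caps, and namespaces isolate
# the process from the rest of the host.
docker run --rm --cpus="1.5" --memory="256m" alpine echo "hello from an isolated process"
```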
Containers first appeared decades ago with technologies such as FreeBSD Jails and AIX Workload Partitions, but most developers remember 2013, when Docker was introduced, as the start of the modern container era.
One way to better understand a container is to examine how it differs from a traditional virtual machine (VM). In traditional virtualization, whether on-premises or in the cloud, a hypervisor is leveraged to virtualize physical hardware. Each VM then contains a guest OS and a virtual copy of the hardware that the OS requires to run, along with an application and its associated libraries and dependencies.
Instead of virtualizing the underlying hardware, containers virtualize the operating system (typically Linux) so each individual container contains only the application and its libraries and dependencies. The absence of the guest OS is why containers are so lightweight and, thus, fast and portable.
For a deeper look at this comparison, check out "Containers vs. VMs: What's the difference?"
The primary advantage of containers, especially as compared to a VM, is that they provide a level of abstraction that makes them lightweight and portable. Their primary benefits include:
Lightweight: Containers share the machine OS kernel, eliminating the need for a full OS instance per application and making container files small and easy on resources. Their smaller size, especially compared to VMs, means containers can spin up quickly and better support cloud-native applications that scale horizontally.
Portable and platform-independent: Containers carry all their dependencies with them, meaning that software can be written once and then run without needing to be re-configured across laptops, cloud and on-premises computing environments.
Supports modern development and architecture: Due to a combination of their deployment portability/consistency across platforms and their small size, containers are an ideal fit for modern development and application patterns—such as DevOps, serverless and microservices—that are built using regular code deployments in small increments.
Improves utilization: Like VMs before them, containers enable developers and operators to improve CPU and memory utilization of physical machines. Where containers go even further is that because they also enable microservices architecture, application components can be deployed and scaled more granularly—an attractive alternative to having to scale up an entire monolithic application because a single component is struggling with its load.
In a recent IBM survey, "Containers in the enterprise" (PDF, 1.4 MB), developers and IT executives reported many other benefits of using containers.
Containers are becoming increasingly prominent, especially in cloud environments. Many organizations are even considering containers as a replacement for VMs as the general-purpose compute platform for their applications and workloads. But within that very broad scope, there are key use cases where containers are especially relevant.
Software needs to be designed and packaged differently to take advantage of containers, a process commonly referred to as containerization.
Containerizing an application means packaging it with its relevant environment variables, configuration files, libraries and software dependencies. The result is a container image that can then be run on a container platform.
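As a concrete sketch, the following Dockerfile containerizes a hypothetical Node.js application (the base image, file names and port are illustrative assumptions, not details from this article):

```dockerfile
# Illustrative Dockerfile for a hypothetical Node.js application.
# Start from a public base image that supplies the language runtime.
FROM node:20-alpine
WORKDIR /app
# Copy the dependency manifests first so the install layer can be cached.
COPY package*.json ./
RUN npm install
# Copy the application code itself.
COPY . .
# Bake an environment variable into the image.
ENV NODE_ENV=production
EXPOSE 3000
# The command the container runs on start.
CMD ["node", "server.js"]
```

Building this file with `docker build -t my-app .` produces the container image, which can then be run on any host with a compatible container platform.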
Container orchestration with Kubernetes
As companies began embracing containers—often as part of modern, cloud-native architectures—the simplicity of the individual container began colliding with the complexity of managing hundreds (or even thousands) of containers across a distributed system.
To address this challenge, container orchestration emerged as a way of managing large volumes of containers throughout their lifecycle, including provisioning and deployment, scaling and load balancing, health monitoring, and moving containers between hosts.
While many container orchestration platforms (such as Apache Mesos, Nomad and Docker Swarm) were created, Kubernetes, an open-source project introduced by Google in 2014, quickly became the most popular container orchestration platform and the one around which most of the industry has standardized.
Kubernetes enables developers and operators to declare the desired state of their overall container environment through YAML files, and then does all the work of establishing and maintaining that state: deploying a specified number of instances of a given application or workload, restarting that application if it fails, load balancing, auto-scaling, performing zero-downtime deployments and more.
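As an illustration of that declarative model, the following minimal Deployment manifest (the names and image are hypothetical) asks for three replicas of a workload; Kubernetes then creates, restarts or reschedules containers as needed to keep three healthy instances running:

```yaml
# Illustrative Kubernetes Deployment: declares a desired state of three
# replicas; Kubernetes works continuously to maintain that state.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                # hypothetical workload name
spec:
  replicas: 3                 # desired number of instances
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: registry.example.com/my-app:1.0   # hypothetical image
          ports:
            - containerPort: 3000
```

Applying the file with `kubectl apply -f deployment.yaml` hands the desired state to Kubernetes, which reconciles the cluster toward it.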
Kubernetes is now maintained by the Cloud Native Computing Foundation (CNCF), a vendor-agnostic industry group operating under the auspices of the Linux Foundation.
As containers continue to gain momentum as a popular way to package and run applications, the ecosystem of tools and projects designed to accommodate and expand production use cases continues to grow. Beyond Kubernetes, two of the most popular projects in the containers ecosystem are Istio and Knative.
Istio
As developers leverage containers to build and run microservices architectures, management concerns go beyond the lifecycle of individual containers and into the way that large numbers of small services, often referred to as a "service mesh," connect with and relate to one another. Istio was created to make it easier for developers to manage the associated challenges of service discovery, traffic management, monitoring, security and more.
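For a flavor of what that looks like in practice, the following sketch uses Istio's VirtualService resource to split traffic between two versions of a service, a common canary-release pattern (the service name and subsets are hypothetical, and the subsets would be defined in a companion DestinationRule):

```yaml
# Illustrative Istio VirtualService: sends 90% of traffic to v1 of a
# service and 10% to v2, without changing the application code.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: my-app
spec:
  hosts:
    - my-app                  # hypothetical in-mesh service host
  http:
    - route:
        - destination:
            host: my-app
            subset: v1        # subset defined in a DestinationRule
          weight: 90
        - destination:
            host: my-app
            subset: v2
          weight: 10
```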
Knative
Serverless architectures continue to grow in popularity as well, particularly within the cloud-native community. Knative, for example, offers substantial value in its ability to deploy containerized services as serverless functions.
Instead of running all the time and responding when needed (as a server does), a serverless function can “scale to zero,” which means it isn’t running at all unless it’s called upon. This model can save vast amounts of computing power when applied to tens of thousands of containers.
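As a sketch of how little a developer has to declare, the following Knative Service manifest (the name and image are hypothetical) is enough to deploy a containerized service; in a standard Knative installation, the service scales to zero between requests by default and scales back up when traffic arrives:

```yaml
# Illustrative Knative Service: Knative creates the route and revision,
# and scales the container down to zero when it receives no requests.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: my-function           # hypothetical service name
spec:
  template:
    spec:
      containers:
        - image: registry.example.com/my-function:1.0   # hypothetical image
```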
Red Hat OpenShift on IBM Cloud leverages OpenShift in public and hybrid environments for velocity, market responsiveness, scalability and reliability.
With IBM Cloud Satellite, you can launch consistent cloud services anywhere—on premises, at the edge and in public cloud environments.
Run container images, batch jobs or source code as serverless workloads, with no sizing, deploying, networking or scaling required.
IBM Cloud Container Registry gives you a private registry that lets you manage your images and monitor them for safety issues.
Automatically determine the right resource allocation actions—and when to make them—to help ensure your Kubernetes environments and mission-critical apps get exactly what they need to meet your SLOs.
New IBM research documents the surging momentum of container and Kubernetes adoption.
Container orchestration is a key component of an open hybrid cloud strategy that lets you build and manage workloads from anywhere.
Docker is an open-source platform for building, deploying and managing containerized applications.