Containers

By: IBM Cloud Education

This guide looks at the importance of containers in cloud computing, highlights core benefits, and tours the emerging ecosystem of related technologies across Docker, Kubernetes, Istio, and Knative.

What are containers?

Containers are executable units of software in which application code is packaged, along with its libraries and dependencies, in common ways so that it can run anywhere, whether on the desktop, traditional IT, or the cloud.

To do this, containers take advantage of a form of operating system (OS) virtualization in which features of the OS (in the case of the Linux kernel, namely the namespaces and cgroups primitives) are leveraged to both isolate processes and control the amount of CPU, memory, and disk that those processes have access to.
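
As a rough illustration, the snippet below uses the Docker SDK for Python to run a short-lived container with explicit CPU and memory limits; the limits are enforced by the kernel's cgroups, while the process itself runs in its own namespaces. It is a minimal sketch that assumes a local Docker daemon and the public alpine image.

```python
import docker

client = docker.from_env()

# Run a one-off command in an isolated container. The memory and CPU caps
# are applied by the host kernel's cgroups; the process sees only its own
# namespaces rather than the host's process table or filesystem.
output = client.containers.run(
    "alpine:3.19",
    ["echo", "hello from an isolated process"],
    mem_limit="256m",        # cap memory at 256 MB
    nano_cpus=500_000_000,   # cap CPU at half a core
    remove=True,             # clean up the container after it exits
)
print(output.decode())
```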

Containers are small, fast, and portable because, unlike a virtual machine, containers do not need to include a guest OS in every instance and can, instead, simply leverage the features and resources of the host OS.

Containers first appeared decades ago with technologies such as FreeBSD Jails and AIX Workload Partitions, but most modern developers remember 2013 as the start of the modern container era with the introduction of Docker.

Containers vs. VMs

The easiest way to understand a container is to understand how it differs from a traditional virtual machine (VM). In traditional virtualization—whether it be on-premises or in the cloud—a hypervisor is leveraged to virtualize physical hardware. Each VM then contains a guest OS, a virtual copy of the hardware that the OS requires to run, along with an application and its associated libraries and dependencies.

Instead of virtualizing the underlying hardware, containers virtualize the operating system (typically Linux) so each individual container contains only the application and its libraries and dependencies. The absence of the guest OS is why containers are so lightweight and, thus, fast and portable.

Benefits

The primary advantage of containers, especially compared to a VM, is that they provide a level of abstraction that makes them lightweight and portable.

  • Lightweight: Containers share the machine OS kernel, eliminating the need for a full OS instance per application and making container files small and easy on resources. Their smaller size, especially compared to virtual machines, means they can spin up quickly and better support cloud-native applications that scale horizontally.  
  • Portable and platform independent: Containers carry all their dependencies with them, meaning that software can be written once and then run without needing to be re-configured across laptops, cloud, and on-premises computing environments.
  • Supports modern development and architecture: Due to a combination of their deployment portability/consistency across platforms and their small size, containers are an ideal fit for modern development and application patterns—such as DevOps, serverless, and microservices—that are built around regular code deployments in small increments.

Containerization

Software needs to be designed and packaged differently in order to take advantage of containers—a process commonly referred to as containerization.

When containerizing an application, the process includes packaging the application with its relevant environment variables, configuration files, libraries, and software dependencies. The result is a container image that can then be run on a container platform. For more information, check out the video “Containerization Explained.”
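
As a minimal sketch of that workflow, the Docker SDK for Python can build an image from an application directory (here, a hypothetical ./myapp folder containing a Dockerfile) and then run the result. The same image can later be run unchanged on any host with a container runtime.

```python
import docker

client = docker.from_env()

# Package the application, its dependencies, and its configuration into an image.
image, build_logs = client.images.build(path="./myapp", tag="myapp:1.0")

# Run the containerized application from the image that was just built.
logs = client.containers.run("myapp:1.0", remove=True)
print(logs.decode())
```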

Container orchestration

As companies began embracing containers—often as part of modern, cloud-native architectures—the simplicity of the individual container began colliding with the complexity of managing hundreds (even thousands) of containers across a distributed system.

To address this challenge, container orchestration emerged as a way of managing large volumes of containers throughout their lifecycle, covering tasks such as the following (two of which are sketched after the list):

  • Provisioning
  • Redundancy
  • Health monitoring
  • Resource allocation
  • Scaling and load balancing
  • Moving between physical hosts
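
As a small illustration of two of these tasks, the sketch below uses the Kubernetes Python client to report how many replicas of each Deployment are ready (health monitoring) and then to request five replicas of a hypothetical "web" Deployment (scaling); the orchestrator schedules, starts, and load-balances the containers itself. It assumes a reachable cluster and a local kubeconfig.

```python
from kubernetes import client, config

config.load_kube_config()   # use local kubeconfig credentials
apps = client.AppsV1Api()

# Health monitoring: report ready vs. desired replicas for each Deployment.
for dep in apps.list_namespaced_deployment(namespace="default").items:
    print(dep.metadata.name, dep.status.ready_replicas, "of", dep.spec.replicas, "ready")

# Scaling: request five replicas of the (hypothetical) "web" Deployment and let
# the orchestrator place and start the additional containers.
apps.patch_namespaced_deployment_scale(
    name="web",
    namespace="default",
    body={"spec": {"replicas": 5}},
)
```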

While many container orchestration platforms (such as Apache Mesos, Nomad, and Docker Swarm) were created to help address these challenges, Kubernetes quickly became the most popular container orchestration platform, and it is the one the majority of the industry has standardized on.

For an overview of how orchestration works, see "Container Orchestration Explained."

Docker and Kubernetes

There is a common misconception that Docker and Kubernetes compete with one another. In reality, they are complementary technologies that help companies manage the somewhat distinct tasks of containerizing software and then orchestrating the lifecycles of potentially large volumes of individual containers.

Docker was one of the first popular mainstream container software tools to hit the market. Created by Docker, Inc. in 2013, the program manages the containerization and running of container packages and is largely credited with kicking off the modern container era.

Kubernetes, created by Google in 2014, is a container orchestration system that manages the creation, operation, and termination of many containers. It is now operated by the Cloud Native Computing Foundation (CNCF), which is a vendor-agnostic industry group under the auspices of the Linux Foundation.

Docker turns program source code into containers and then executes them, whereas Kubernetes manages the configuration, deployment, and monitoring of many containers at once (including both Docker containers and others).
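
A minimal sketch of that division of labor, using the Kubernetes Python client: after Docker builds and pushes an image (the registry path "registry.example.com/myapp:1.0" below is hypothetical), Kubernetes can be asked to keep several replicas of that container running as a Deployment. The sketch assumes a reachable cluster and a local kubeconfig.

```python
from kubernetes import client, config

config.load_kube_config()

# Describe the desired state: three replicas of the container image.
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="myapp"),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "myapp"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "myapp"}),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(
                        name="myapp",
                        image="registry.example.com/myapp:1.0",
                    )
                ]
            ),
        ),
    ),
)

# Kubernetes takes over from here: scheduling, restarting, and monitoring the containers.
client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```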

This "Kubernetes Explained" video offers a high-level overview of Kubernetes’ architecture, "Kubernetes: An Essential Guide" gives a deep dive into the container orchestration platform.

The service that Kubernetes does compete directly with is Docker Swarm. This is Docker Inc.’s own container orchestration tool, and it is built into Docker as a native service. For a closer look at how the two container orchestration tools relate, see “Docker Swarm vs. Kubernetes: A Comparison.”

For more information about Docker and Kubernetes, read the Kubernetes vs. Docker blog post and watch the "Kubernetes vs. Docker: It's Not an Either/Or Question" video.

Istio, Knative, and the expanding containers ecosystem

As containers continue to gain momentum as a popular way to package and run applications, the ecosystem of tools and projects designed to harden and expand production use cases continues to grow. Beyond Kubernetes, two of the most popular projects in the containers ecosystem are Istio and Knative.

Istio

As developers leverage containers to build and run microservice architectures, management concerns go beyond the lifecycle considerations of individual containers and into the way that large numbers of small services—often referred to as a “service mesh”—connect with and relate to one another. Istio was created to make it easier for developers to manage the associated challenges with discovery, traffic, monitoring, security, and more. For more information on Istio, see "What is Istio?" and watch this Istio explainer video.

Knative

Serverless architectures continue to grow in popularity as well, particularly within the cloud-native community. Knative’s big value is its ability to offer container services as serverless functions (see "Knative Explained" for a comprehensive overview).

Instead of running all the time and responding when needed (as a server does), a serverless function can “scale to zero,” which means it is not running at all unless it is called upon. This model can save vast amounts of computing power when applied to tens of thousands of containers.
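
As a rough sketch of how this looks in practice, a Knative Service is just a Kubernetes custom resource, so it can be created with the generic CustomObjectsApi of the Kubernetes Python client. Knative then scales the underlying containers up when requests arrive and back down to zero when the service sits idle. The cluster is assumed to have Knative Serving installed, and the image reference below is hypothetical.

```python
from kubernetes import client, config

config.load_kube_config()

# A Knative Service: Knative creates the underlying Deployment, routes traffic
# to it, and scales it to zero when no requests are coming in.
knative_service = {
    "apiVersion": "serving.knative.dev/v1",
    "kind": "Service",
    "metadata": {"name": "hello"},
    "spec": {
        "template": {
            "spec": {
                "containers": [{"image": "registry.example.com/hello:1.0"}]
            }
        }
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="serving.knative.dev",
    version="v1",
    namespace="default",
    plural="services",
    body=knative_service,
)
```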

For more information on Knative, watch this video on “Knative Explained.” 

This blog post on unifying containers, apps, and functions provides a useful explanation of how Knative sits in the broader container ecosystem.

Tutorials

To get started with containers, the following tutorials are a useful way to understand how to deploy apps into clusters and then how to bring DevOps practices, such as continuous deployment, to your container environment:

Containers and IBM

For more information on IBM’s container offerings, check out the IBM Cloud Kubernetes Service.

Our video, "Advantages of Managed Kubernetes," gives a great overview of how managed Kubernetes can help you in your cloud journey:

To learn more about best practices to enable and expedite container deployment in production environments, see the report "Best Practices for Running Containers and Kubernetes in Production."

Sign up for the IBM Cloud account type of your choice.
