This guide looks at the importance of containers in cloud computing, highlighting their benefits and showing how containers relate to technologies such as Docker, Kubernetes, Istio, virtual machines (VMs), and Knative.
What are containers?
A container is a lightweight executable package that bundles application code with all the libraries and other dependencies it needs to run. By packaging together application code, libraries, environment variables, other software binaries, and configuration files, a container guarantees that it has everything needed to run the application out of the box, regardless of the operating environment in which it runs. A key characteristic of a container is that it is small and fast because it shares the underlying host operating system's kernel rather than containing a whole OS of its own.
Containers first appeared decades ago with technologies such as FreeBSD Jails and AIX Workload Partitions, but most developers mark 2013, with the introduction of Docker, as the start of the modern container era. Since then, several other container runtimes and platforms have emerged.
Benefits of containers
Containers make life easier for developers and administrators alike. They are lightweight and extremely quick to start, which improves performance while reducing compute and storage load. Administrators can run many of them at once to create a highly scalable environment. Their cloud-friendly nature makes automated deployment easier, and because containers carry the files on which they depend, they can run in many different operating environments.
- Lightweight: Containers share the machine OS kernel, eliminating the need for a full OS instance per application and making container files small and easy on resources.
- Efficient and scalable: Small, light containers are quick to spin up and delete. This creates an agile infrastructure that grows with your business. It isn’t uncommon to see tens of thousands of containers in a single computing environment.
- Portable: Containers carry all their dependencies with them, meaning that they can run on a variety of computing environments without requiring time-consuming, frustrating platform configurations.
- Supports agile development: Containers are built for cloud-first development, supporting continuous integration/continuous delivery (CI/CD) pipelines and DevOps practices for agile development.
Containerization is the act of readying an application for distribution in a container by packaging its various runtime components together. These components include the relevant environment variables, configuration files, libraries, and software dependencies. The result is a container image that can then be run on a container platform. For more information, check out the video “Containerization Explained.”
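As a sketch of what containerization looks like in practice, the following hypothetical Dockerfile packages a small Node.js app; the base image, file names, and port here are illustrative assumptions, not part of this guide:

```dockerfile
# Start from a minimal base image that provides the runtime
FROM node:18-alpine

# Copy the dependency manifest and install libraries, then copy the code
WORKDIR /app
COPY package.json ./
RUN npm install
COPY . .

# Document the port the app listens on and define how to start it
EXPOSE 8080
CMD ["node", "server.js"]
```

Building this file (for example, with `docker build -t my-app .`) produces a container image that bundles the code, libraries, and configuration together, ready to run on any host with a container runtime.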
Containers are small and easy to reproduce consistently, and companies tend to use a lot of them. Organizations can run many identical container images alongside each other to scale an application resiliently.
This requires a new way of managing software. Traditionally, companies would manage a smaller number of physical or virtual servers and diligently look after each one. However, containers can be created and deleted very quickly.
Container orchestration is the process of managing each container throughout its lifecycle and encompasses:
- Health monitoring
- Resource allocation
- Scaling and load balancing
- Moving between physical hosts
Kubernetes is the most popular container orchestration system by far, but there are others, including Apache Mesos, Nomad, and Docker Swarm.
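The orchestration concerns listed above map directly onto a Kubernetes Deployment manifest. The sketch below is illustrative (the image name, registry, and port are assumptions): `replicas` handles scaling, `resources` handles allocation, and the `livenessProbe` handles health monitoring.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3                      # scaling: run three identical containers
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: registry.example.com/my-app:1.0   # hypothetical image
        ports:
        - containerPort: 8080
        resources:                 # resource allocation
          requests:
            cpu: "250m"
            memory: "128Mi"
          limits:
            cpu: "500m"
            memory: "256Mi"
        livenessProbe:             # health monitoring: restart on failure
          httpGet:
            path: /healthz
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 10
```

Kubernetes continuously reconciles this declared state, replacing failed containers and rescheduling them across hosts as needed.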
For more information, check out our video "Container Orchestration Explained."
Docker and Kubernetes
There is a common misconception that Docker and Kubernetes compete with each other and that you must use one rather than the other. This is untrue; the two projects do different things, and they can complement each other.
Docker was one of the first popular mainstream container tools to hit the market. Created by Docker, Inc. in 2013, it builds container images and runs them as containers.
Kubernetes, created by Google in 2014, is a container orchestration system that manages the creation, operation, and termination of many containers. It is now maintained by the Cloud Native Computing Foundation (CNCF), a vendor-agnostic industry group under the auspices of the Linux Foundation.
Docker turns program source code into containers and then executes them, whereas Kubernetes manages the configuration, deployment, and monitoring of many containers at once (including both Docker containers and others).
This "Kubernetes Explained" video offers a high-level overview of Kubernetes’ architecture.
The tool that Kubernetes does compete with directly is Docker Swarm. This is Docker Inc.’s own container orchestration tool, and it is built into Docker as a native service.
While Docker Swarm is naturally focused on managing Docker containers, Kubernetes is focused on openness, supporting a variety of technologies and third-party products. As such, it supports not only the Docker containers of its original implementation but also a range of other container runtimes, such as Kata Containers. Kubernetes engineered this support via its Container Runtime Interface (CRI), which acts as an interface between Kubernetes and a variety of container runtimes.
For more information about Docker and Kubernetes, read the Kubernetes vs. Docker blog post and watch the "Kubernetes vs. Docker: It's Not an Either/Or Question" video.
For more information about Docker Swarm vs Kubernetes, visit the Docker Swarm vs. Kubernetes blog.
Containers vs. Virtual Machines (VMs)
Containers are the latest approach to software virtualization. Virtualization abstracts software from its physical computing environment by putting it into a software wrapper, making it more portable.
One of the first modern virtualization technologies was the virtual machine (VM). A VM packages an entire operating system image in a single file and enables it to run on a physical computer. VMs run on hypervisors, thin software layers that manage the interaction between VMs and the underlying physical hardware. Hypervisors come in two flavors:
- Type 1 hypervisor: This runs directly on the system hardware.
- Type 2 hypervisor: This runs on a host operating system that in turn interfaces with the hardware.
Benefits of VMs
The following are a few of the most relevant benefits of virtual machines (VMs).
Portability: Virtual machines can be backed up and moved between physical machines as needed. Careful consideration must be given to any installed software that has hostname or IP dependencies.
Efficiency: A physical computer running an operating system (OS) directly can run only one OS at a time, and each OS typically runs only one application for reliability reasons. Running a single OS and application on a physical server therefore wastes most of its computing capacity. By contrast, multiple VMs can run on a single physical server, dramatically increasing its resource utilization.
Virtual machines are relatively cumbersome because each one contains all the internal services needed by an operating system. Every time you spawn a VM, you reproduce an entire operating system, which can be wasteful for small applications.
Containers evolved to pare down virtual machines to only the necessary parts. Instead of reproducing an entire operating system, a container shares a single operating system’s kernel with all of the other containers on the host computer. This creates a far more lightweight, easily manageable way to virtualize software.
Containers and Istio
Containers are a great platform for transitioning from monolithic applications to large collections of small, independent services that work together. The infrastructure layer that handles communication among these services is what developers call a service mesh.
Istio is a software tool that enables administrators to communicate with that service mesh securely and reliably. It also monitors the service mesh to give administrators a clear picture of service mesh health via telemetry data.
Istio consists of a few main components:
- Envoy is the sidecar proxy that handles all inbound and outbound traffic for the services in the mesh.
- Mixer enforces policies across the service mesh and collects telemetry from the Envoy proxies.
- Pilot is Istio’s switchboard, routing traffic between different services. It not only enables A/B testing and canary deployments by routing traffic to specific services, but it also enables services to discover and address each other.
- Citadel manages security for the service mesh. It handles functions like access control, encryption, and policies based on a service’s identity.
- Galley manages configuration for Istio.
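To illustrate the traffic routing that Pilot enables, the following hypothetical Istio VirtualService splits traffic between two versions of a service for a canary rollout; the host and subset names are assumptions for the example:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews            # hypothetical service in the mesh
  http:
  - route:
    - destination:
        host: reviews
        subset: v1     # stable version receives most traffic
      weight: 90
    - destination:
        host: reviews
        subset: v2     # canary version receives a small share
      weight: 10
```

The Envoy sidecars apply this configuration, so roughly 10% of requests reach the new version while administrators watch the telemetry for problems.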
Containers and Knative
Once they’re up and running, containers can dramatically streamline your software operations. For many organizations, though, getting software into container-based form and using it to drive business processes is still a daunting prospect. That is where Knative comes in. It is a tool that works on top of Kubernetes to build and run containers.
Knative’s big sell is its ability to offer container services as serverless functions. Instead of running all the time and responding when needed (as a server does), a serverless function can “scale to zero,” which means it is not running at all unless it is called upon. This model can save vast amounts of computing power when applied to tens of thousands of containers.
Knative has three main parts:
- Build is the part of Knative that turns a code repository into a packaged container image and registers it in an image repository, making it ready for use by a container orchestration system.
- Serving enables developers to create services from their container images and make them available as functions. The Serving component uses a configuration file to create new revisions of a function, which can run alongside each other, enabling administrators to run older and newer versions together. A separate routing capability then lets them direct different proportions of traffic to each revision. This is useful when rolling out canary releases of new functions, enabling admins to watch for problems and quickly roll all users back to an older version where necessary.
- Eventing lets administrators set up events that trigger the functions in a Knative implementation. Examples include external triggers, such as a change in a data file, that invoke a function to run.
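As a sketch of how Knative exposes a container as a scale-to-zero service, the hypothetical manifest below defines a minimal Knative Service; the image name and port are assumptions, and scale to zero is the default behavior for such a service:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
spec:
  template:
    spec:
      containers:
      - image: registry.example.com/hello:1.0   # hypothetical image
        ports:
        - containerPort: 8080
```

When no requests arrive, Knative scales the service down to zero running containers; the first incoming request causes it to spin a container back up.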
For more information on Knative, watch the “Knative Explained” video and check out the Knative Learn Page. IBM also offers a managed Knative service as part of the IBM Cloud Kubernetes Service.
This blog post on unifying containers, apps, and functions provides a useful explanation of how Knative sits in the broader container ecosystem.
Linux and containers
Containers originated in the Linux world, and most container frameworks are built to support the Linux operating system. Linux also has its own container frameworks, which ship in most major server-based distributions. There have been two significant native Linux container projects:
- LXC: LXC is the original Linux-based container project. It received security updates and bug fixes until April 2019, and early versions of Docker used LXC to execute containers.
- LXD: Led by Canonical, this container architecture builds on and extends LXC, offering a new user experience. Features include a single command-line tool for container management and management of containers over the network via a REST-based API. It also integrates with cloud platforms like OpenStack.
There are also several flavors of Linux that have been developed specifically with containers in mind. These include CoreOS, which is now owned by Red Hat, and RancherOS, a bare-bones Linux implementation designed just to host Docker containers. Others include Project Atomic and VMware’s Photon OS. A related OS is Snappy Ubuntu Core, a minimal version of Ubuntu designed to run small-footprint apps, serving both the Internet of Things (IoT) and container worlds.
Modernizing Java applications
Developers of Java applications can easily modernize their legacy applications for use in a cloud-native, container-based environment. This provides new administrative features and also creates opportunities for rapid functional upgrades using other cloud-based services.
The backend coding required to containerize Java applications is relatively minimal. Developers use Dockerfiles to describe the requirements for each component of the Java application. This normally reflects the app’s structure, with separate containers for the application logic and for the backend database.
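As a sketch of what a Dockerfile for the application-logic component might look like, the multi-stage build below compiles a Java app and packages only the runtime artifacts; the build tool, jar name, base images, and port are all assumptions for illustration:

```dockerfile
# Build stage: compile and package the application with Maven
FROM maven:3.9-eclipse-temurin-17 AS build
WORKDIR /build
COPY pom.xml .
COPY src ./src
RUN mvn -q package

# Runtime stage: copy only the packaged jar into a slim JRE image
FROM eclipse-temurin:17-jre
WORKDIR /app
COPY --from=build /build/target/app.jar ./app.jar
EXPOSE 9080
CMD ["java", "-jar", "app.jar"]
```

A second, separate Dockerfile (or a stock database image) would typically define the backend database container, mirroring the app structure described above.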
Developers can then use Kubernetes or an equivalent container orchestration system to run those components using IBM WebSphere, creating multiple instances for reliability and even updating container code where necessary.
Having containerized the legacy Java application, developers can now find other services in the IBM Cloud Container Registry to enhance its functionality. Examples include SMS-based messaging and image recognition systems to create new input channels for end users.
See an example of how to modernize a legacy Java web app using Kubernetes in this blog post and video.
There are several tutorials available on using containers in practice.
Containers and IBM
For more information on IBM’s container offerings, check out the IBM Cloud Kubernetes Service.
Our video, "Advantages of Managed Kubernetes," gives a great overview of how managed Kubernetes can help you in your cloud journey.
To learn more about best practices to enable and expedite container deployment in production environments, see the report "Best Practices for Running Containers and Kubernetes in Production."
Sign up for the IBM Cloud account type of your choice here.