What is container orchestration?
Container orchestration automates the provisioning, deployment, networking, scaling, availability, and lifecycle management of containers. Today, Kubernetes is the most popular container orchestration platform, and most leading public cloud providers - including Amazon Web Services (AWS), Google Cloud Platform, IBM Cloud and Microsoft Azure - offer managed Kubernetes services. Other container orchestration tools include Docker Swarm and Apache Mesos.
More on containers, and why they need orchestration
Containers are lightweight, executable application components that combine application source code with all the operating system (OS) libraries and dependencies required to run the code in any environment.
The ability to create containers has existed for decades, but it became widely available in 2008 when Linux included container functionality within its kernel, and widely used with the arrival of the Docker open-source containerization platform in 2013. (Docker is so popular that "Docker containers" and "containers" are often used interchangeably.)
Because they are smaller, more resource-efficient and more portable than virtual machines (VMs), containers - and more specifically, containerized microservices or serverless functions - have become the de facto compute units of modern cloud-native applications.
In small numbers, containers are easy enough to deploy and manage manually. But in most organizations the number of containerized applications is growing rapidly, and managing them at scale - especially as part of a continuous integration/continuous delivery (CI/CD) or DevOps pipeline - is impossible without automation.
Enter container orchestration, which automates the operational tasks of deploying and running containerized applications and services. According to recent IBM research, 70% of developers using containers report using a container orchestration solution, and 70% of those report using a fully managed (cloud-managed) container orchestration service at their organization.
How container orchestration works
While there are differences in methodologies and capabilities across tools, container orchestration is essentially a three-step process (or cycle, when part of an iterative agile or DevOps pipeline).
Most container orchestration tools support a declarative configuration model: a developer writes a configuration file (in YAML or JSON, depending on the tool) that defines a desired configuration state, and the orchestration tool applies that file, using its own intelligence to achieve and maintain that state. The configuration file typically:
- Defines which container images make up the application, and where they are located (in what registry)
- Provisions the containers with storage and other resources
- Defines and secures the network connections between containers
- Specifies versioning (for phased or canary rollouts)
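In Kubernetes, for example, the declarative configuration described above takes the form of a manifest. The following is a hypothetical minimal Deployment; the image path, labels, and resource figures are illustrative, not taken from any real application:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                      # desired number of container replicas
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: registry.example.com/web:1.4   # which image, and in what registry
        resources:
          requests:
            cpu: "250m"            # resources to provision for each container
            memory: "128Mi"
        ports:
        - containerPort: 8080      # network port the container exposes
```

The developer never scripts *how* to reach this state; the orchestration tool continuously compares it against what is actually running and closes the gap.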
The orchestration tool schedules deployment of the containers (and replicas of the containers, for resiliency) to a host, choosing the best host based on available CPU capacity, memory, or other requirements or constraints specified in the configuration file.
Once the containers are deployed, the orchestration tool manages the lifecycle of the containerized application based on the configuration that defines the deployment (in Kubernetes, for example, a Deployment manifest). This includes:
- Managing scalability (up and down), load balancing, and resource allocation among the containers
- Ensuring availability and performance by relocating the containers to another host in the event of an outage or a shortage of system resources
- Collecting and storing log data and other telemetry used to monitor the health and performance of the application
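The lifecycle-management tasks above amount to a reconciliation loop: compare the desired state from the configuration with the observed state, and emit whatever actions close the gap. A toy sketch, with hypothetical names and a string suffix standing in for a failed health check:

```python
def reconcile(desired_replicas: int, running: list[str]) -> list[str]:
    """Return the actions needed to converge on the desired replica count."""
    healthy = [c for c in running if not c.endswith("!failed")]
    actions = []
    # Scale up: replace failed containers and fill any shortfall
    for i in range(desired_replicas - len(healthy)):
        actions.append(f"start web-{len(healthy) + i}")
    # Scale down: stop healthy replicas beyond the desired count
    for c in healthy[desired_replicas:]:
        actions.append(f"stop {c}")
    return actions

# One replica crashed, so the loop schedules a replacement
print(reconcile(3, ["web-0", "web-1", "web-2!failed"]))  # → ['start web-2']
```

A real orchestrator runs this loop continuously, which is why a deleted or crashed container reappears without operator intervention.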
Benefits of container orchestration
It's probably clear that the chief benefit of container orchestration is automation - and not only because it greatly reduces the effort and complexity of managing a large containerized application estate. By automating operations, orchestration supports an agile or DevOps approach that lets teams develop and deploy in rapid, iterative cycles and release new features and capabilities faster.
In addition, an orchestration tool's intelligence can enhance or extend many of the inherent benefits of containerization. For example, automated host selection and resource allocation, based on declarative configuration, maximizes efficient use of computing resources; automated health monitoring and relocation of containers maximizes availability.
As noted above, Kubernetes is the most popular container orchestration platform. Together with other tools in the container ecosystem, Kubernetes enables a company to deliver a highly productive platform-as-a-service (PaaS) that addresses many of the infrastructure- and operations-related tasks and issues around cloud-native application development, so that development teams can focus exclusively on coding and innovation.
Kubernetes’ advantages over other orchestration solutions are largely a result of its more comprehensive and sophisticated functionality in several areas, including:
- Container deployment. Kubernetes deploys a specified number of containers to a specified host and keeps them running in a desired state.
- Rollouts. A rollout is a change to a deployment. Kubernetes lets you initiate, pause, resume, or roll back rollouts.
- Service discovery. Kubernetes can automatically expose a container to the internet or to other containers using a DNS name or IP address.
- Storage provisioning. Developers can set Kubernetes to mount persistent local or cloud storage for their containers as needed.
- Load balancing and scalability. When traffic to a container spikes, Kubernetes can employ load balancing and scaling to distribute the traffic across the container replicas and maintain stability and performance. (It also saves developers the work of setting up a load balancer.)
- Self-healing for high availability. When a container fails, Kubernetes can restart or replace it automatically. It can also take down containers that don’t meet your health-check requirements.
- Support and portability across multiple cloud providers. As noted earlier, Kubernetes enjoys broad support across all leading cloud providers. This is especially important for organizations deploying applications to a hybrid cloud or hybrid multicloud environment.
- Growing ecosystem of open-source tools. Kubernetes also has an ever-expanding stable of usability and networking tools to enhance its capabilities via the Kubernetes API. These include Knative, which enables containers to run as serverless workloads; and Istio, an open source service mesh.
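To illustrate two of the capabilities above, service discovery and load balancing in Kubernetes are typically configured with a Service object. This hypothetical minimal example (names are illustrative) gives the pods labeled `app: web` a stable DNS name and spreads incoming traffic across them:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web              # reachable in-cluster by the DNS name "web"
spec:
  selector:
    app: web             # routes to any healthy pod carrying this label
  ports:
  - port: 80             # port clients connect to
    targetPort: 8080     # port the container listens on
  type: LoadBalancer     # on supported clouds, also provisions an external load balancer
```

Because the Service resolves its targets by label rather than by address, containers can be restarted, replaced, or rescheduled to other hosts without clients noticing.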
Container orchestration and IBM Cloud
Containers are ideal for modernizing your applications and optimizing your IT infrastructure. Container services from IBM Cloud, built on open source technologies like Kubernetes, can facilitate and accelerate your path to cloud-native application development, and to an open hybrid cloud approach that integrates the best features and functions from private cloud, public cloud and on-premises IT infrastructure.
Take the next step:
- Learn how you can deploy highly available, fully managed clusters for your containerized applications with a single click using Red Hat OpenShift on IBM Cloud.
- Deploy and manage containerized applications consistently across on-premises, edge computing and public cloud environments from any vendor with IBM Cloud Satellite.
- Run container images, batch jobs or source code as a serverless workload - no sizing, deploying, networking or scaling required - with IBM Cloud Code Engine.
To get started right away, sign up for an IBM Cloud account.