Kubernetes is an open source solution that automates the deployment, scaling and management of containerized applications.
A container packages software together with everything it needs to run, decoupling it from the underlying infrastructure. Containers virtualize at the operating system level, so an application functions the same way no matter where it runs.
Many of the containers deployed today run on a Kubernetes platform. According to the organization: “Kubernetes provides a container-centric management environment. It orchestrates computing, networking and storage infrastructure on behalf of user workloads…and enables portability across infrastructure providers.” ⁽¹⁾
Kubernetes had its beginnings at Google, where a team of engineers developed a cluster management system named Borg. In 2014, the company introduced an open source version of Borg called Kubernetes.⁽²⁾
The first version of Kubernetes was released in 2015 to provide container orchestration for distributed applications. Along with the release, Google partnered with the Linux Foundation to form the Cloud Native Computing Foundation and offered Kubernetes as a seed technology.⁽³⁾
Since its debut, Kubernetes has become increasingly popular because it helps development teams achieve portability across systems based on container technology. With more than 1,400 contributors, the Kubernetes open source community is now one of the largest in the world.
Docker is the most commonly used container technology, but Kubernetes supports others as well. Docker defines templates, called images, for packaging software into standardized units that include all the elements needed to run an application.
Kubernetes orchestrates the container environment while optimizing server usage and space. It manages where and how containers are deployed using features such as intelligent scheduling, load balancing, scalability, storage management and batch execution.
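Load balancing, for example, is expressed declaratively. As a hedged sketch (the name `web`, the label and the ports are illustrative placeholders, not from the source), a Service object spreads incoming traffic across all matching pods:

```yaml
# Sketch of a Kubernetes Service; the name, selector label
# and ports are illustrative placeholders.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer     # expose via the provider's load balancer
  selector:
    app: web             # route traffic to pods carrying this label
  ports:
  - port: 80             # port clients connect to
    targetPort: 8080     # port the container listens on
```

Because the Service selects pods by label rather than by address, Kubernetes can reschedule or scale the pods behind it without any client-side changes.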
Some key concepts:
- The basic Kubernetes management unit is the pod, a group of one or more containers. Containers in a pod share the same storage volumes, network resources and IP address.
- A Kubernetes cluster consists of master and worker nodes. (A node is the host or server that a container runs on; it can be a virtual or physical machine.)
- A master node manages the container workload and directs communications across the system. It includes a scheduler that assigns pods to nodes based on capacity and availability.
- Worker nodes run the pods under the direction of the master node.
- Configuration files allow teams to specify operational parameters, such as the number of pod replicas to run at one time. Using Kubernetes, it’s possible to manage resources in an entire data center as if it were a single system.
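A minimal sketch of such a configuration file, assuming a hypothetical application (the names `web` and `web-app:1.0`, and the replica count, are illustrative placeholders): a Deployment manifest that asks Kubernetes to keep three copies of a pod running.

```yaml
# Sketch of a Kubernetes Deployment manifest; the app name,
# image tag and replica count are illustrative placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3            # number of pod replicas to run at one time
  selector:
    matchLabels:
      app: web
  template:              # pod template stamped out for each replica
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: web-app:1.0
        ports:
        - containerPort: 8080
```

Applying a file like this with `kubectl apply -f deployment.yaml` hands the desired state to the master node, whose scheduler places the pods on available worker nodes.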
Why Kubernetes is important
Large enterprise applications may include a massive number of containers. This kind of architecture can quickly become complex.
Organizations need to be able to orchestrate all the moving parts in a container environment, preferably from a single vantage point. Many choose Kubernetes as the solution.
Kubernetes manages the ecosystem and adjusts computing and storage to ensure containers are available and deployed efficiently. All the while, development teams can see where everything is running at any given time.
Kubernetes can help organizations simplify deployment of new applications, streamline container and resource management, reduce upgrade risks and avoid downtime. It can scale application components, either individually or as a group, and support portable cloud-native applications.
In his IBM blog, Matt Johnsen outlines some of the advantages:
- Cost savings: Kubernetes clusters are known for being low maintenance. Teams don’t have to write their own container automation scripts. They can leverage a shared infrastructure. They can reduce hardware costs by making more effective use of current hardware.
- Faster time to market: Kubernetes is well suited to DevOps. With good container management, as long as the software runs, deployment will almost always be painless.
- IT flexibility: In the modern enterprise, software runs on any number of private and shared infrastructures. Using a container management solution means teams don’t have to sacrifice performance or make major adjustments to move applications. They can run software wherever the business needs it.
Another benefit of Kubernetes is horizontal scaling, which can help address shifting performance demands.
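One way horizontal scaling is commonly configured, sketched here with illustrative values (the target name `web` and the thresholds are assumptions, not from the source): a HorizontalPodAutoscaler adds or removes pod replicas as load shifts.

```yaml
# Sketch of a HorizontalPodAutoscaler; the target name,
# replica bounds and CPU threshold are illustrative placeholders.
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:        # the Deployment whose replica count is adjusted
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 80   # scale out past 80% average CPU
```

Rather than resizing individual machines (vertical scaling), Kubernetes responds to demand by running more or fewer identical pods across the cluster.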
“If you’re already taking advantage of Docker and containers with your applications, moving them onto Kubernetes can really help you tackle some of the operations overhead that almost every application is going to run into when moving to scale,” says Sai Vennam, IBM Developer Advocate.
Kubernetes as a service
Organizations may use an in-house Kubernetes system to orchestrate their container deployments. Alternatively, a service provider can offer a Kubernetes-based platform or infrastructure as a service.
Customers benefit from the same capabilities, but with less complexity and overhead. According to Jason McGee, VP and IBM Fellow, IBM Cloud:
“The mechanics of installing, connecting and configuring a collection of resources into a functioning container cluster is not easy. It takes work and it takes knowledge. What if you need to add or remove capacity from the container environment? How do you recover when failures happen? Containers and Kubernetes are also changing at a blistering rate.
“It’s hard to keep up if you do it yourself. One of the advantages of a managed service is that all of this is done for you, and you can just focus on your applications.” ⁽⁴⁾
A managed service provider like IBM handles the compute, network and storage resources in each node cluster. The IBM service offers intelligent scheduling, simplified cluster management, container security and isolation policies, and infrastructure upgrades.
IBM customers can also use the Kubernetes service on a bare metal cloud infrastructure. This brings agility and speed to applications that require very high computing performance such as machine learning or AI workloads.
“Developers can now choose bare metal machine configurations that meet their needs – whether it’s isolation, increased processing capability or large local disk storage – while taking advantage of the benefits of containers such as moving data easily across systems or allowing multiple team members to work on multiple parts of an application simultaneously,” says McGee.
What are containers and why do you need them?
Kubernetes adoption and the importance of application performance management
Kubernetes vs. Docker: Why not both?
IBM brings the ease of containers to complex workloads with managed Kubernetes on bare metal
Learn from Watson: How containers scale AI workloads
Monitoring IBM Cloud Service with Outlyer