What is Kubernetes?
Kubernetes, also known as “k8s” or “kube,” is a container orchestration platform for scheduling and automating the deployment, management and scaling of containerized applications. Kubernetes was first developed by engineers at Google before being released as open source software in 2014. It’s a descendant of “Borg,” a container orchestration platform used internally at Google. “Kubernetes” is Greek for helmsman or pilot, hence the helm in the Kubernetes logo.
Today, Kubernetes and the broader container ecosystem are maturing into a general-purpose computing platform and ecosystem that rival — if not surpass — virtual machines (VMs) as the basic building blocks of modern cloud infrastructure and applications. This ecosystem enables organizations to deliver a platform as a service that addresses multiple infrastructure- and operations-related tasks and issues surrounding cloud-native development. This means development teams can focus solely on coding and innovation.
What are containers?
A container is an executable unit of software in which application code is packaged, together with its libraries and dependencies, in common ways so that it can run anywhere: on the desktop, in traditional IT or in the cloud. Containers take advantage of a form of OS virtualization that lets multiple applications share a single OS instance by isolating processes and controlling the amount of CPU, memory and storage those processes can access.
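As a sketch of this packaging, a container image is typically defined in a Dockerfile that bundles the application code with its runtime and dependencies. The file names and base image below are illustrative, not a specific product recommendation:

```dockerfile
# Base image provides the OS userland and a Python runtime
FROM python:3.12-slim

WORKDIR /app

# Install the application's library dependencies
COPY requirements.txt .
RUN pip install -r requirements.txt

# Copy the application code into the image
COPY app.py .

# Command to run when a container is started from this image
CMD ["python", "app.py"]
```

The resulting image carries everything the application needs, which is why the same container can run unchanged on a laptop, an on-premises server or a cloud host.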
How did container orchestration with Kubernetes evolve?
As containers proliferated, operations teams needed to schedule and automate container deployment, networking, scalability and availability. While other container orchestration options — most notably Docker Swarm and Apache Mesos — were popular, Kubernetes became the most widely adopted. Developers choose Kubernetes for its breadth of functionality, its ecosystem of supporting open source tools, and its portability across the leading cloud providers, some of which now offer fully managed Kubernetes services.
What are the chief components of Kubernetes architecture?
Clusters and nodes (compute):
Clusters are the building blocks of Kubernetes architecture. Clusters are made up of nodes, each of which represents a single compute host, either a physical or a virtual machine. Each cluster consists of multiple worker nodes that deploy, run and manage containerized applications, and one master node that controls and monitors the worker nodes. The master node runs a scheduler service that automates when and where containers are deployed based on developer-set deployment requirements and available computing capacity. Each worker node includes the tool being used to manage the containers, such as Docker, and a software agent called a kubelet that receives and executes orders from the master node.
Pods and deployments (software):
Pods are groups of containers that share the same compute resources and the same network. They’re also the unit of scalability in Kubernetes: if a container in a pod is getting more traffic than it can handle, Kubernetes replicates the pod to other nodes in the cluster. For this reason, it’s a good practice to keep pods compact so that they contain only containers that must share resources. A deployment controls the creation and state of the containerized application and keeps it running. It specifies how many replicas of a pod should run on the cluster; if a pod fails, the deployment creates a new one.
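A minimal deployment manifest shows these pieces together: the replica count, the pod template and the container inside the pod. The names and image below are illustrative placeholders:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web            # illustrative name
spec:
  replicas: 3          # how many copies of the pod should run
  selector:
    matchLabels:
      app: web
  template:            # the pod template the deployment keeps running
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25
          ports:
            - containerPort: 80
```

If any of the three pods fails, the deployment's controller notices the gap between desired and actual state and starts a replacement.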
What is the Istio service mesh?
Kubernetes can deploy and scale pods, but it can’t manage or automate routing between them and doesn’t provide any tools to monitor, secure or debug these connections. As the number of containers in a cluster grows, the number of possible connection paths between them escalates quadratically: with n pods there are n × (n − 1) possible directed connections. For example, two containers have two potential connections, but 10 pods have 90. This creates a potential configuration and management nightmare. Enter Istio on IBM Cloud, an open source service mesh layer for Kubernetes clusters.
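The connection-count arithmetic above can be sketched in a few lines; the function name is illustrative:

```python
def directed_connections(n: int) -> int:
    """Count the possible one-way connections among n pods:
    each pod can open a connection to each of the other n - 1 pods."""
    return n * (n - 1)

print(directed_connections(2))   # 2 connections between 2 pods
print(directed_connections(10))  # 90 connections between 10 pods
```

At 100 pods that figure is already 9,900 possible paths, which is why configuring each connection by hand does not scale.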
To each pod in a Kubernetes cluster, Istio adds a sidecar container, essentially invisible to the programmer and the administrator, that configures, monitors and manages interactions between the other containers. With Istio, you set a single policy that configures connections between containers so that you don’t have to configure each connection individually. This makes connections between containers easier to debug. Istio also provides a dashboard that DevOps teams and administrators can use to monitor latency, time-in-service errors and other characteristics of the connections between containers. And it builds in security, specifically identity management that keeps unauthorized users from “spoofing” a service call between containers. It also offers authentication, authorization and auditing (AAA) capabilities that security professionals can use to monitor the cluster.
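As a minimal sketch of such a single policy, the Istio resource below requires mutual TLS for every workload in one namespace, so no individual connection has to be secured by hand. The namespace name is a placeholder:

```yaml
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: production   # placeholder namespace
spec:
  mtls:
    mode: STRICT          # all pod-to-pod traffic must use mutual TLS
```

One resource like this covers every present and future service in the namespace, which is the point of policy-driven configuration.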
What is the difference between Knative and serverless computing?
Knative is an open source platform that sits on top of Kubernetes and provides two important classes of benefits for cloud-native development.

First, Knative provides an easy onramp to serverless computing, a relatively new way of deploying code that makes cloud-native applications more efficient and cost-effective. Instead of running an ongoing instance of code, serverless computing brings the code up as needed, scales it up or down as demand fluctuates, and takes it down when not in use. Knative also enables developers to build a container once and run it either as a software service or as a serverless function. It’s all transparent to the developer: Knative handles the details in the background, and the developer can focus on code.

Second, Knative simplifies container development and orchestration. For developers, containerizing code requires lots of repetitive steps, and orchestrating containers requires lots of configuration and scripting: generating configuration files, installing dependencies, managing logging and tracing, and writing continuous integration/continuous deployment (CI/CD) scripts.
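A minimal Knative Service manifest gives a sense of how little the developer has to specify; the service name is illustrative and the image is the hello-world sample from the Knative project:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello              # illustrative name
spec:
  template:
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go
          env:
            - name: TARGET
              value: "World"
```

From this one resource, Knative creates the route, revision and autoscaling configuration itself, scaling the service down to zero when no requests are arriving.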
Is Kubernetes just a trend or is it here to stay?
Kubernetes is one of the fastest-growing open source projects in history, and growth is accelerating. Adoption continues to soar among developers and the companies that employ them. A few data points worth noting:
- At this writing, more than 90,000 commits have been made to the Kubernetes repository on GitHub, and there are over 2,300 active contributors to the project. According to the Cloud Native Computing Foundation, there have been more than 148,000 commits across all Kubernetes-related repositories, including Kubernetes Dashboard, Minikube and more.
- More than 1,500 companies use Kubernetes in their production software stacks. These include enterprises such as Airbnb, Bose, Capital One, Intuit, Nordstrom, Philips, Reddit, Slack, Spotify, Tinder and, of course, IBM. Read these and other adoption case studies.
- A July 2019 survey cited in Container Journal found a 51% increase in Kubernetes adoption during the previous six months.
- More than 12,000 people attended the KubeCon + CloudNativeCon North America 2019 conference — up more than 3,000 from the previous year’s record-setting attendance.
- According to ZipRecruiter, the average annual salary (in North America) for a Kubernetes-related job is USD 144,628. At this writing, more than 21,000 Kubernetes-related positions are listed on LinkedIn.
Are Kubernetes tutorials available?
Ready to start working with Kubernetes on IBM Cloud or looking to build your skills with Kubernetes and Kubernetes ecosystem tools? You can complete this interactive, browser-based training for deploying and operating one free cluster on IBM Cloud Kubernetes Service for three hours. No downloads or configuration are required and upon mastery of the labs, you can apply for an IBM Cloud Kubernetes certification badge. Master the skills you need to run an enterprise cloud today.