Overview
Run Kubernetes at enterprise scale
IBM Cloud® Kubernetes Service is a managed offering for creating Kubernetes clusters of compute hosts to deploy and manage containerized apps on IBM Cloud. A certified Kubernetes solution, it provides intelligent scheduling, self-healing, horizontal scaling and more.
Learn to deploy and operate a Kubernetes cluster on IBM Cloud — no cost, no credit card.
Use cases
Deploy a scalable web app
Analyze logs, monitor apps
Deploy apps continuously
Features
Kubernetes made easy
Native Kubernetes
Secure clusters
Leverage IBM Watson
Hands-on labs with certification
TrustRadius report
Quick start for developers
Kubernetes explained
Get answers to common questions and links to learn more.
What are containers?
A container is an executable unit of software in which application code is packaged together with its libraries and dependencies so it can run anywhere, whether on a desktop, in traditional IT or in the cloud. Containers use a form of OS virtualization that lets multiple applications share the operating system by isolating processes and controlling the CPU, memory and disk those processes can access.
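To make the idea concrete, here is a minimal sketch that runs a small workload in a container using the Docker SDK for Python. It assumes the `docker` Python package is installed and a local Docker engine is available; the image and command are illustrative placeholders only.

```python
# Minimal sketch: run a small workload in a container with the Docker SDK for Python.
# Assumes `pip install docker` and a running local Docker engine.
import docker

client = docker.from_env()

# The image bundles the runtime and its dependencies; the host only supplies the kernel.
logs = client.containers.run(
    "python:3.12-slim",                                            # packaged runtime + libraries
    ["python", "-c", "print('hello from an isolated process')"],   # app code to execute
    remove=True,                                                   # clean up the container afterwards
)
print(logs.decode())
```

The same image runs unchanged on a laptop, an on-premises server or a cloud worker node, which is the portability point made above.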
How did container orchestration with Kubernetes evolve?
As containers proliferated, operations teams needed to schedule and automate container deployment, networking, scalability and availability. Kubernetes became the most widely adopted container orchestration platform thanks to its functionality, its broad ecosystem of supporting open source tools and its portability across leading cloud providers, some of which now offer fully managed Kubernetes services.
What are the chief components of Kubernetes architecture?
Clusters are made up of nodes, each of which represents a single compute host. Worker nodes in a cluster deploy, run and manage containerized apps. Pods group one or more containers that share compute resources and a network, and they are the unit of scaling: if a container in a pod receives more traffic than it can handle, Kubernetes replicates the pod onto other nodes in the cluster. Deployments control the creation and state of the containerized app and keep it running.
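The sketch below shows these objects through the official Kubernetes Python client (`pip install kubernetes`): it lists the cluster's nodes, then creates a Deployment that keeps three replicated pods running. The namespace, Deployment name and nginx image are illustrative placeholders, not part of the product description above.

```python
# Sketch of the core objects: nodes, pods (via a pod template) and a Deployment.
# Uses the official Kubernetes Python client; names and images are placeholders.
from kubernetes import client, config

config.load_kube_config()   # read cluster credentials from ~/.kube/config

# Nodes: each one is a compute host in the cluster.
for node in client.CoreV1Api().list_node().items:
    print("node:", node.metadata.name)

# A Deployment keeps a set of replicated pods running and recreates them if they fail.
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1DeploymentSpec(
        replicas=3,                                              # desired number of pods
        selector=client.V1LabelSelector(match_labels={"app": "web"}),
        template=client.V1PodTemplateSpec(                       # pod template: the containers to run
            metadata=client.V1ObjectMeta(labels={"app": "web"}),
            spec=client.V1PodSpec(
                containers=[client.V1Container(name="web", image="nginx:1.27")]
            ),
        ),
    ),
)
client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```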
What is Istio service mesh?
As the number of containers in a cluster grows, the number of possible connection paths between them grows exponentially, making configuration and management complex. Istio on IBM Cloud, an open source service mesh layer for Kubernetes clusters, adds a sidecar container to each pod in the cluster. The sidecar configures, monitors and manages interactions between the other containers.
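As a rough illustration of how sidecars get into pods, Istio's automatic injection is commonly switched on by labeling a namespace with `istio-injection=enabled`; new pods created in that namespace then receive the proxy sidecar. The sketch below applies that label with the Kubernetes Python client; the namespace name is a placeholder, and it assumes Istio is already installed in the cluster.

```python
# Sketch: enable automatic Istio sidecar injection for one namespace.
# Assumes Istio is installed; the namespace name "my-app" is a placeholder.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

# Label the namespace; pods created in it afterwards get the Istio sidecar container.
core.patch_namespace(
    name="my-app",
    body={"metadata": {"labels": {"istio-injection": "enabled"}}},
)
```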
How does Knative relate to serverless computing?
Knative, an open source platform, sits on top of Kubernetes and provides two vital benefits for cloud-native development: it offers an easy on-ramp to serverless computing, and it lets you build a container once and run it either as a software service or as a serverless function. Knative transparently handles routine work such as generating configuration files and writing CI/CD scripts.
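A minimal sketch of the "build once, run as a service" idea: a Knative Service object created through the Kubernetes CustomObjectsApi. It assumes Knative Serving is installed in the cluster; the service name and the sample container image are placeholders for your own build.

```python
# Sketch: deploy a container image as a Knative Service that scales on demand.
# Assumes Knative Serving is installed; name and image are placeholders.
from kubernetes import client, config

config.load_kube_config()

knative_service = {
    "apiVersion": "serving.knative.dev/v1",
    "kind": "Service",
    "metadata": {"name": "hello"},
    "spec": {
        "template": {
            "spec": {
                # One container image, served on request and scaled down when idle.
                "containers": [{"image": "gcr.io/knative-samples/helloworld-go"}]
            }
        }
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="serving.knative.dev",
    version="v1",
    namespace="default",
    plural="services",
    body=knative_service,
)
```

Knative then generates the underlying Deployment, revisioning and routing configuration itself, which is the configuration work the paragraph above says it handles transparently.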