Run Kubernetes at enterprise scale

IBM Cloud® Kubernetes Service is a certified, managed Kubernetes solution, built for creating a cluster of compute hosts to deploy and manage containerized apps on IBM Cloud®. It provides intelligent scheduling, self-healing, horizontal scaling and securely manages the resources that you need to quickly deploy, update and scale applications. IBM Cloud Kubernetes Service manages the master, freeing you from having to manage the host OS, container runtime and Kubernetes version-update process.

Learn to deploy and operate a cluster in a live Kubernetes environment on IBM Cloud at no cost. It's not a demo, but you won't need to configure or download anything.


Use cases

Create Kubernetes clusters


See how a fictional public relations firm uses Kubernetes capabilities to deploy a containerized app on IBM Cloud. With IBM Watson® Tone Analyzer Service, the firm gets feedback on its press releases.

Deploy a scalable web app


Learn how to scaffold a web app, run it locally in a container, then deploy it to an IBM Cloud Kubernetes cluster. Also learn how to bind a custom domain, monitor the health of the environment and scale.

Analyze logs, monitor apps


Learn how to create a cluster and configure the log analysis and monitoring services. Then, deploy an app to the cluster, view and analyze logs with Kibana and view health and metrics with Grafana.

Deploy apps continuously


Learn how to set up a CI/CD pipeline for containerized apps running in Kubernetes. This use case covers setting up source control, build, test and deploy stages, and integrating security scanners, analytics and more.



Our clients

Epiphany, Ethidad, Exxon, GIA, Irene, PBM, Harry Rosen and CRM.COM

Kubernetes resources

From hands-on labs to documentation, get all of the help you need.

Hands-on labs with certification

Take our hands-on Kubernetes labs at no charge and earn your certification.

Quick start for developers

Follow this curated learning path to deploy highly available containerized apps in Kubernetes clusters.

Related products

IBM Cloud Code Engine

Run your application, job or container on a managed serverless platform.

Red Hat OpenShift on IBM Cloud

Deploy and secure enterprise workloads on native Red Hat® OpenShift® with developer-focused tools to run highly available apps.

IBM Cloud Foundry

Create and deploy applications on a managed multitenant Cloud Foundry environment.

Kubernetes explained

Get answers to common questions and links to learn more.

What is Kubernetes?

Kubernetes, Greek for helmsman and also known as "k8s" or "kube," is a container orchestration platform used to schedule and automate the deployment, management and scaling of containerized applications. As an alternative to VM-based platforms, it delivers a platform-as-a-service foundation that addresses many infrastructure- and operations-related tasks and issues around cloud-native development.

What are containers?

A container is an executable software unit in which application code is packaged, together with its libraries and dependencies, so it can run anywhere: on a desktop, in traditional IT or in the cloud. Containers exploit a form of OS virtualization that lets multiple applications share the OS by isolating processes and controlling their access to CPU, memory and disk.

How did container orchestration with Kubernetes evolve?

As containers proliferated, operations teams needed to schedule and automate container deployment, networking, scalability and availability. Kubernetes became the most widely adopted container orchestration platform thanks to its functionality, its ecosystem of supporting open source tools and its portability across leading cloud providers, some of which offer fully managed Kubernetes services.

What are the chief components of Kubernetes architecture?

Clusters comprise nodes, each of which represents a single compute host. Worker nodes in a cluster deploy, run and manage containerized apps. Pods group one or more containers that share compute resources and a network, and they are the unit of scaling: if a container in a pod gets more traffic than it can handle, Kubernetes replicates the pod. Deployments control the creation and state of the containerized app and keep it running.
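As a minimal sketch of these pieces fitting together, the Deployment manifest below keeps three pod replicas of a containerized app running; the names, image and port are illustrative, not taken from any IBM Cloud example:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app                # illustrative name
spec:
  replicas: 3                  # Kubernetes keeps three pod replicas running
  selector:
    matchLabels:
      app: web-app
  template:                    # pod template used to create each replica
    metadata:
      labels:
        app: web-app
    spec:
      containers:
        - name: web
          image: registry.example.com/web-app:1.0  # illustrative image reference
          ports:
            - containerPort: 8080
```

Applying this with `kubectl apply -f deployment.yaml` creates the pods; if a pod fails or is deleted, the Deployment controller replaces it to restore the desired replica count.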

What is Istio service mesh?

As the number of containers in a cluster grows, the possible connection paths between them grow exponentially, making configuration and management complex. Istio on IBM Cloud, an open source service mesh layer for Kubernetes clusters, adds a sidecar container to each pod in the cluster. A sidecar configures, monitors and manages interactions between the other containers.
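For example, Istio's automatic sidecar injection is commonly switched on per namespace with a label; the namespace name below is illustrative:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: demo                   # illustrative namespace
  labels:
    istio-injection: enabled   # Istio injects an Envoy sidecar into new pods here
```

Pods created in this namespace afterward get the sidecar automatically; existing pods pick it up when they are restarted.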

What is the difference between Knative and serverless computing?

Knative, an open source platform, sits on top of Kubernetes and provides two vital benefits for cloud-native development. It offers an easy route into serverless computing and a way to build a container once and run it as a software service or as a serverless function. Knative transparently handles chores like generating configuration files and writing CI/CD scripts.
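As a sketch of how little configuration Knative asks for, the Service below (using the public Knative hello-world sample image; the service name is illustrative) is enough to get a scale-to-zero serverless workload:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello                  # illustrative service name
spec:
  template:
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go  # public Knative sample image
          env:
            - name: TARGET
              value: "World"
```

From this single object, Knative generates the underlying revision, route and autoscaling configuration, and scales the workload down to zero pods when it is idle.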

What is the difference between Kubernetes and Docker?

Docker provides the containerization piece, enabling developers to easily package applications into small, isolated containers using the command line. When demand surges, Kubernetes provides orchestration of Docker containers, scheduling and automatically deploying them across IT environments to ensure high availability.

What is Ingress in Kubernetes?

Ingress in Kubernetes is an API object that provides routing rules to manage external users' access to the services in a Kubernetes cluster. With Ingress, you can set up routing rules without creating a separate load balancer for each service or exposing each service on the node, which makes it a common choice for production environments.
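A minimal Ingress sketch, assuming a Service named web-app already exists in the cluster; the hostname and names are illustrative:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress            # illustrative name
spec:
  rules:
    - host: app.example.com    # illustrative external hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-app  # assumed existing Service
                port:
                  number: 80
```

An Ingress controller running in the cluster reads this object and routes external traffic for app.example.com to the web-app Service, so many services can share one entry point.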

What is the difference between Docker Swarm and Kubernetes?

Docker Swarm is deployed using Docker Engine and is readily available in your environment. Swarm is easier to start with and may be a better fit for smaller workloads.

Kubernetes is more powerful, customizable and flexible, but has a steeper learning curve. Running Kubernetes with a managed service simplifies open source management responsibilities and allows you to focus on building applications.

What are Kubernetes Operators?

Kubernetes Operators are quickly gaining traction in the developer community as a way of managing complex, often stateful, applications on Kubernetes by encoding human operational knowledge into software that runs in the cluster.