What is Kubernetes networking?

Published: 21 November 2023
Contributors: Stephanie Susnjara, Ian Smalley


Kubernetes networking provides the network infrastructure for enabling communication, scalability, security and external access for containerized applications.

Kubernetes networking is complex: it involves communication among all the major components inside a cluster, such as pods, nodes, containers and services, as well as external traffic entering and leaving the cluster.

These components rely on four distinct networking methods to communicate:

1. Container-to-container networking.

2. Pod-to-pod networking.

3. Pod-to-service networking.

4. External-to-service networking.


What is Kubernetes?

The name Kubernetes originates from Greek, meaning helmsman or pilot. Based on Borg, Google’s internal container orchestration platform, Kubernetes was introduced to the public as an open source tool in 2014.

In 2015, Google donated Kubernetes to the newly formed Cloud Native Computing Foundation (link resides outside ibm.com), the open source, vendor-neutral hub of cloud-native computing. Since then, Kubernetes has become the most widely used container orchestration tool for running container-based workloads worldwide.

Kubernetes—also referred to as k8s or kube—was explicitly designed to automate the management of containers—the standard unit of software that packages up code and all its dependencies. The orchestration tool is highly valued for running quickly and reliably in any infrastructure environment, whether on-premises, private cloud, public cloud or hybrid cloud.

Unlike virtual machines (VMs) that virtualize physical hardware, containers virtualize the operating system, such as Linux® or Windows. Each container holds just the application’s libraries and dependencies. Because containers share the same operating system kernel as the host, they are considered lightweight, fast and portable.

Kubernetes and its ecosystem of services, support and tools have become the foundation for modern cloud infrastructure and application modernization. All major cloud providers, including Amazon Web Services (AWS), Google, Microsoft, IBM® and Red Hat®, integrate Kubernetes within their cloud platforms to enhance Platform-as-a-Service (PaaS) and Infrastructure-as-a-Service (IaaS) capabilities.

Kubernetes architecture

The following fundamental components comprise the Kubernetes architecture: 


Clusters

A Kubernetes cluster is a set of physical or virtual machines (nodes) that work together to run containerized applications. Clusters form the foundation of Kubernetes architecture.

Master nodes

A master node is a single compute host, either a virtual or physical machine. Master nodes host the Kubernetes control plane components and are responsible for scheduling and scaling applications.

By managing all of the compute, network and storage resources in a Kubernetes cluster, the master node helps ensure that containerized applications and services are evenly distributed across the worker nodes in the cluster.

Worker nodes

Worker nodes are responsible for running the containers and performing any work assigned by the master node. They also host application containers, which are grouped as pods.


Pods

Pods are groups of one or more containers (for example, Docker containers) that share the same compute resources and network. They are the cluster's deployment units and also function as units of scalability.

For instance, if a container in a pod experiences heavy traffic volume, Kubernetes can replicate that pod to other nodes in the cluster. Kubernetes can also shut down pods if traffic volume decreases.

Other Kubernetes components include the following:

Deployment:
A deployment in Kubernetes manages a set of pods to run an application workload. A deployment specifies how many replicas of a pod should run on the cluster. If a pod fails, the deployment creates a new one.

This critical feature helps scale the number of replica pods, roll out code updates and maintain availability. Deployments are carried out using kubectl, the Kubernetes command-line tool.
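As an illustration of the deployment concept described above, a minimal Deployment manifest might look like the following sketch (the name, labels and container image are hypothetical):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment        # hypothetical name
spec:
  replicas: 3                 # number of pod replicas to keep running
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25     # illustrative container image
        ports:
        - containerPort: 80
```

Applying the manifest with kubectl apply -f deployment.yaml creates the deployment; if any of the three pods fails, the deployment controller starts a replacement.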

Service:
A Kubernetes service is an abstraction layer that defines a logical set of pods and how to access them. A service exposes a network application running on one or more pods in a cluster and provides an abstract way to load balance traffic across those pods.

Application programming interface (API) server:
The API server in Kubernetes exposes the Kubernetes API (the interface used to manage, create and configure Kubernetes clusters) and serves as the entry point for all commands and queries.

Networking terms and concepts

Basic computer networking involves connecting two or more computing devices to share data and exchange resources, either by cables (wired) or by Wi-Fi (wireless).

In physical networking, physical servers connect to the internet through physical network equipment such as switches, routers and Ethernet cables.

In virtual networking, software-defined networking (SDN) components such as virtual Ethernet devices and virtual interfaces are installed on bare metal servers or virtual machines to provide connectivity. Kubernetes deployments rely on SDN to configure and manage network communication across clusters.

Before delving deeper into Kubernetes networking, it’s worth reviewing basic networking terms:

Network host:
A network host is any computer connected to a network that provides information, applications or services to other hosts or nodes on the network.

Internet protocol (IP) address: 
An IP address is a unique number assigned to every device connected to a network that uses the IP for communication. It identifies the device’s host network and the device's location on the host network.

Localhost:
Localhost is a default hostname that acts as a private IP address, pointing directly to the computer or device in use.

Port:
A port identifies a specific connection between network devices. Each port is identified by a number. Computers use port numbers to determine which application, service or process should receive particular messages.

Network address translation (NAT):
NAT changes internal or private IP addresses to public or globally routable IP addresses, allowing for safe internet access. NAT enables a single, unique IP address to represent an entire group of computing devices.

Node agents: 
Node agents are administrative agents that monitor application servers on a host system and route administrative requests to other servers.

Network namespace:
 A network namespace is a collection of network interfaces and routing table instructions that provides isolation between network devices.

Proxy or proxy server: 
A proxy provides a gateway between users and the internet. 

How does Kubernetes networking work?

Kubernetes was created to run distributed systems with a network plane spread across a cluster of machines. In addition to providing interconnectivity between components, Kubernetes cluster networking creates a seamless environment where data can move freely and efficiently through software-defined networking.

Another distinct feature of Kubernetes networking is its flat network structure, in which all components can connect directly without relying on additional network hardware. In Kubernetes, every pod in a cluster can communicate with every other pod, no matter what node it is running on. The flat network provides an efficient way to share resources and eliminates the need for dynamic port allocation.

Overall, Kubernetes networking abstracts complexity, allowing developers and operators to focus on building and maintaining applications rather than dealing with intricate network configurations.

Kubernetes networking model

Kubernetes provides a networking model to help address the challenges of orchestrating containerized applications across a distributed environment. The container runtime on each node implements the network model and adheres to the following rules:

Each pod has its own IP address, which can be routed within the cluster. This feature eliminates the need to create explicit links between pods or to map container ports to host ports.

Because each pod has its own IP address, NAT is not required. All pods can communicate with all other pods in the cluster without NAT.

Agents on a node, such as the kubelet (the primary node agent that runs on each node), can communicate with all pods on that specific node.

The Kubernetes networking model applies to four basic types of Kubernetes communication:

1. Container-to-container networking

Containers are the smallest unit in a Kubernetes network. In basic networking configurations, containers communicate within a single pod through localhost.

This communication is possible because containers in the same pod share the same network namespace, which includes the pod's IP address and port space.
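The shared network namespace can be sketched with a hypothetical two-container pod (the names and images below are illustrative): the sidecar container can reach the web server over localhost because both containers share the pod's network namespace.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: two-container-pod           # hypothetical name
spec:
  containers:
  - name: web
    image: nginx:1.25               # listens on port 80 inside the pod
  - name: sidecar
    image: curlimages/curl:8.5.0    # illustrative utility image
    command: ["sleep", "infinity"]  # keep the container running
```

From inside the sidecar container, curl http://localhost:80 reaches the web container without any cluster routing, because container-to-container traffic inside a pod never leaves the shared namespace.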

2. Pod-to-pod networking

Pod-to-pod communication includes communication between pods on the same node and communication between pods on different nodes. Each pod in a Kubernetes cluster has its own unique IP address, allowing for direct communication between pods regardless of the node on which they exist.

Moreover, each Kubernetes cluster automatically provides a domain name system (DNS) service in addition to pod IP addresses. The DNS service assigns human-readable names to pods and services (for example, my-service.my-namespace.svc.cluster.local), providing administrators with a lightweight mechanism for service discovery.

3. Pod-to-service networking

A service in Kubernetes is an abstraction that defines a logical set of pods and enables external traffic exposure, load balancing and service discovery to those pods. Services facilitate both pod-to-service and external-to-service communication.

According to the Kubernetes networking model, pod IP addresses are ephemeral. Therefore, if a pod crashes or is deleted and a new pod is created in its place, the new pod will most likely receive a new IP address.

In pod-to-service communication, a ClusterIP is a type of service that provides a stable virtual IP address to a set of pods. This internal IP is reachable only within the cluster and can be used for internal communications between pods and services.
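As a sketch of the ClusterIP concept described above, a hypothetical service that load balances traffic across all pods labeled app=web might look like this (names and labels are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-service           # hypothetical name
spec:
  type: ClusterIP             # the default service type
  selector:
    app: web                  # routes traffic to pods labeled app=web
  ports:
  - port: 80                  # port the service listens on
    targetPort: 80            # port on the selected pods
```

Other pods in the cluster can then reach the backing pods through the stable address web-service (or its cluster DNS name), even as individual pod IP addresses come and go.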

The kube-proxy, installed on every node in a cluster, maintains network rules on the host and monitors changes in services and pods. As pods get created or destroyed, the kube-proxy updates iptables (a utility program used to create rules on the Linux kernel firewall for routing traffic) to reflect that change so traffic sent to the service IP is routed correctly.

4. External-to-service networking

External-to-service networking refers to exposing services to, and accessing them from, outside the Kubernetes cluster, including connections to external systems such as databases.

Kubernetes provides several services to facilitate external traffic into a cluster:

While ClusterIP is the default Kubernetes service type for internal communication, external traffic can reach it through the kube-proxy. This approach can be helpful when accessing a service from a laptop or when debugging a service.

The NodePort exposes the service on a static port on each node’s IP, making the service accessible outside of the cluster. The NodePort is the most basic way to perform external-to-service networking and is often used for testing purposes, such as testing for public access to an app.
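A hedged sketch of a NodePort service follows (names and port values are hypothetical); it exposes the same set of pods on a static port on every node in the cluster:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-nodeport          # hypothetical name
spec:
  type: NodePort
  selector:
    app: web
  ports:
  - port: 80                  # service port inside the cluster
    targetPort: 80            # container port on the pods
    nodePort: 30080           # static port in the default 30000-32767 range
```

With this configuration, the service is reachable from outside the cluster at any node's IP address on port 30080.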

The standard for external-to-service networking, the LoadBalancer exposes the service externally by using a cloud provider's load balancer and assigns the service a public IP address. Traffic from the external load balancer is then directed to the backend pods.
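A LoadBalancer service differs from the earlier examples only in its type field; this illustrative manifest asks the cloud provider to provision an external load balancer for the pods (name and labels are hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-lb                # hypothetical name
spec:
  type: LoadBalancer          # requests an external load balancer from the cloud provider
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 80
```

Once the provider provisions the load balancer, its public IP address appears in the service's status, and external traffic sent to that address is routed to the backend pods.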

An ExternalName service allows access to an external service by DNS name without exposing cluster IP addresses. This type of service helps provide a stable DNS name for external resources, such as messaging services, that are not hosted within the cluster.
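An ExternalName service is purely a DNS alias; this hypothetical example (service name and external hostname are illustrative) gives in-cluster workloads a stable local name for an external message queue:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: external-queue           # hypothetical name
spec:
  type: ExternalName
  externalName: mq.example.com   # illustrative external DNS name
```

Pods that look up external-queue receive a CNAME record pointing at mq.example.com, so the external endpoint can change without updating application configuration.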

Kubernetes ingress is a collection of routing rules that govern external access to services within the cluster. An ingress controller is a specialized load balancer that acts as a network bridge between Kubernetes services and external traffic.
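The routing rules just described can be sketched in an Ingress resource; in this hypothetical example (hostname and service name are illustrative), all HTTP traffic for app.example.com is routed to a backend service on port 80:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress           # hypothetical name
spec:
  rules:
  - host: app.example.com     # illustrative hostname
    http:
      paths:
      - path: /
        pathType: Prefix      # match every path under /
        backend:
          service:
            name: web-service # hypothetical backend service
            port:
              number: 80
```

An ingress controller (such as one based on NGINX) watches for resources like this and configures itself to route matching external requests to the named service.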

Kubernetes network policies

Kubernetes network policies are an application construct that plays a vital role in Kubernetes networking. These policies allow administrators and developers to define rules specifying how pods can communicate with each other and other network endpoints.

Network policies are applied by using the Kubernetes NetworkPolicy API and consist of the following basic components:

Pod selector:
The pod selector specifies which pods the policy applies to based on labels and selectors.

Ingress:
Ingress defines rules for incoming traffic to pods.

Egress:
Egress defines rules for outgoing traffic from pods.
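The three components above come together in a single NetworkPolicy resource; this hedged example (all names and the port are hypothetical) allows only pods labeled app=frontend to reach pods labeled app=backend on TCP port 8080:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend        # hypothetical name
spec:
  podSelector:
    matchLabels:
      app: backend            # pod selector: the policy applies to backend pods
  policyTypes:
  - Ingress                   # this policy restricts incoming traffic only
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend       # allow traffic only from frontend pods
    ports:
    - protocol: TCP
      port: 8080              # and only on this port
```

Once a pod is selected by any policy, traffic not explicitly allowed by some policy is denied, which is how policies like this one enforce isolation.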

Kubernetes network policies help define and manage security policies by controlling which pods can communicate with each other, thus preventing unauthorized access and helping block malicious attacks.

Network policies also enforce isolation between pods and services, so that each pod or service can communicate only with an allowed set of peers. For example, isolation is critical in multi-tenancy situations when DevOps or other teams share the same Kubernetes cluster yet work on different projects.

For companies with specific compliance requirements, network policies help specify and enforce network access controls. This helps meet regulatory standards and helps ensure that the cluster adheres to organization policies.

Container Network Interface (CNI) and network plug-ins

Container Network Interface (CNI) is another essential feature tied to Kubernetes networking. A Cloud Native Computing Foundation project used by Kubernetes and other container orchestration platforms, including Red Hat OpenShift® and Apache Mesos, CNI is a standardized specification and set of APIs that define how network plug-ins should enable container networking.

CNI plug-ins can assign IP addresses, create network namespaces, set up network routes, and so on, to enable pod-to-pod communication, both within the same node and across nodes.

While Kubernetes provides a basic default network plug-in, numerous third-party CNI plug-ins, including Calico, Flannel and Weave, are designed to handle configuration and security in container-based networking environments.

While each might have different features and approaches to networking, such as overlay networks or direct routing, they all adhere to CNI specifications that are compatible with Kubernetes. 

Kubernetes tutorials

If you want to start working with Kubernetes or want to ramp up your existing skills with Kubernetes and Kubernetes ecosystem tools, try one of these tutorials.

Kubernetes tutorials: Hands-on labs with certification

Interactive browser-based training for deploying and operating a cluster on IBM Cloud® Kubernetes Service. No downloads or configuration required.

8 Kubernetes tips and tricks

Unlock Kubernetes efficiency with expert tips in this blog. Learn to maximize productivity using kubectl, the command-line tool for Kubernetes clusters.

Kubernetes tutorials on IBM Developer

Explore concise Kubernetes tutorials offering developers step-by-step guidance for mastering essential tasks, enabling hands-on learning for your projects.

Deploy a microservices app on IBM Cloud by using Kubernetes

Microservices, a cloud-native approach, break down applications into small, independent components for enhanced flexibility and scalability in modern software development.

Related solutions
IBM Cloud® Kubernetes Service

Deploy secure, highly available clusters in a native Kubernetes experience.

Explore IBM Cloud Kubernetes Service

Red Hat® OpenShift® on IBM Cloud®

With Red Hat OpenShift on IBM Cloud, OpenShift developers have a fast and secure way to containerize and deploy enterprise workloads in Kubernetes clusters.

Explore Red Hat OpenShift on IBM Cloud

IBM Cloud® Code Engine

A fully managed serverless platform, IBM Cloud Code Engine lets you run your container, application code or batch job on a fully managed container runtime.

Explore IBM Cloud Code Engine
Resources

IBM Cloud training for developers

Build Kubernetes skills through courses contained within the IBM Cloud Professional Developer certification.

The show must go on

Students showcase their art with Red Hat OpenShift on IBM Cloud.

Containers in the enterprise

Containers are part of a hybrid cloud strategy that lets you build and manage workloads from anywhere.

What is Kubernetes?

Kubernetes is a container orchestration platform for scheduling and automating the deployment, management, and scaling of containerized applications.

What are containers?

Containers are executable units of software in which application code is packaged along with its libraries and dependencies, in common ways so that the code can be run anywhere.

What is container orchestration?

Container orchestration automates and simplifies the provisioning, deployment and management of containerized applications.

Take the next step

Red Hat OpenShift on IBM Cloud offers developers a fast and secure way to containerize and deploy enterprise workloads in Kubernetes clusters. Offload tedious and repetitive tasks involving security management, compliance management, deployment management and ongoing lifecycle management. 

Explore Red Hat OpenShift on IBM Cloud

Start for free