What is Kubernetes Ingress?
30 April 2020
5 min read
Kubernetes Ingress is an API object that provides routing rules to manage external users’ access to the services in a Kubernetes cluster.

In this article, we’ll look at how and why you may need to expose an application to the outside of your Kubernetes cluster, the different options available, and the situations in which Kubernetes Ingress is most useful. 

This blog assumes you have a basic understanding of Kubernetes, including what pods, deployments, and services are.

Options for exposing applications deployed in Kubernetes

There are several ways to expose your application to the outside of your Kubernetes cluster, and you’ll want to select the appropriate one based on your specific use case. 

The four main options we’ll be comparing in this post are: ClusterIP, NodePort, LoadBalancer, and Ingress. Each provides a way to expose services and is useful in different situations. A service is an abstract way of exposing an application running on a set of pods as a network service; it acts as a frontend for your application and automatically distributes traffic evenly across the available pods. Pods are ephemeral, which means that when they die, they are not resurrected. Instead, the Kubernetes cluster creates new pods on the same node or on a new node once a pod dies.

Similar to pods and deployments, services are resources in Kubernetes. A service provides a single point of access from outside the Kubernetes cluster and allows you to dynamically access a group of replica pods. 

For internal application access within a Kubernetes cluster, ClusterIP is the preferred method. It is the default service type in Kubernetes and uses an internal IP address to access the service.
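
As a minimal sketch, a Service manifest could look like the following; the names my-service and my-app are hypothetical, and because no type is specified, Kubernetes defaults to ClusterIP:

apiVersion: v1
kind: Service
metadata:
  name: my-service           # hypothetical service name
spec:
  selector:
    app: my-app              # traffic is routed to pods carrying this label
  ports:
    - port: 80               # port the service exposes inside the cluster
      targetPort: 8080       # port the application listens on in the pod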

To expose a service to external network requests, NodePort, LoadBalancer, and Ingress are possible options. We’ll look at Ingress first and compare the services later in the article.

What is Kubernetes Ingress and why is it useful?

Kubernetes Ingress is an API object that provides routing rules to manage external users’ access to the services in a Kubernetes cluster, typically via HTTPS/HTTP. With Ingress, you can easily set up rules for routing traffic without creating a bunch of Load Balancers or exposing each service on the node. This makes it the best option to use in production environments. 

In production environments, you typically need content-based routing, support for multiple protocols, and authentication. Ingress allows you to configure and manage these capabilities inside the cluster.

Ingress is made up of an Ingress API object and the Ingress Controller. As we have discussed, Kubernetes Ingress is an API object that describes the desired state for exposing services to the outside of the Kubernetes cluster. An Ingress Controller is essential because it is the actual implementation of the Ingress API. An Ingress Controller reads and processes the Ingress Resource information and usually runs as pods within the Kubernetes cluster.

An Ingress provides the following:

  • Externally reachable URLs for applications deployed in Kubernetes clusters
  • Name-based virtual host and URI-based routing support
  • Load balancing of traffic according to rules, as well as SSL termination

What is the Ingress Controller?

If Kubernetes Ingress is the API object that provides routing rules to manage external access to services, Ingress Controller is the actual implementation of the Ingress API. The Ingress Controller is usually a load balancer for routing external traffic to your Kubernetes cluster and is responsible for L4-L7 Network Services. 

Layer 4 (L4) refers to the transport level of the OSI network stack: external connections are load balanced across pods at the TCP/UDP connection level, typically in a round-robin manner. Layer 7 (L7) refers to the application level of the OSI stack: external connections are load balanced across pods based on the content of each request, such as the HTTP host or URL path. Layer 7 is often preferred, but you should select an Ingress Controller that meets your load balancing and routing requirements.

Ingress Controller is responsible for reading the Ingress Resource information and processing that data accordingly. The following is a sample Ingress Resource:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: example-ingress            # example name for the Ingress object
spec:
  backend:
    serviceName: example-service   # the Service that receives all traffic by default
    servicePort: 80                # example port exposed by that Service

As an analogy, if Kubernetes Ingress is a computer, then Ingress Controller is a programmer using the computer and taking action. Furthermore, Ingress Rules act as the manager who directs the programmer to do the work using the computer. Ingress Rules are a set of rules for processing inbound HTTP traffic. An Ingress with no rules sends all traffic to a single default backend service. 
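
To make the rules more concrete, the following is a rough sketch of an Ingress Resource that combines a name-based virtual host with URI-based routing; the host app.example.com and the services api-service and web-service are hypothetical and would need to exist in your cluster:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: example-rules-ingress      # hypothetical name
spec:
  rules:
    - host: app.example.com        # name-based virtual host
      http:
        paths:
          - path: /api             # URI-based routing
            backend:
              serviceName: api-service
              servicePort: 80
          - path: /                # everything else for this host
            backend:
              serviceName: web-service
              servicePort: 80

With this resource, requests for app.example.com/api are sent to api-service, while all other requests for that host go to web-service.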

Looking deeper, the Ingress Controller is an application that runs in a Kubernetes cluster and configures an HTTP load balancer according to Ingress Resources. The load balancer can be a software load balancer running in the cluster or a hardware or cloud load balancer running externally. Different load balancers require different Ingress Controller implementations.

Various Ingress Controllers are available in the market, and it’s important to choose the right one for managing the traffic and load coming into your Kubernetes cluster.

Ingress vs. ClusterIP vs. NodePort vs. LoadBalancer

Ingress, ClusterIP, NodePort, and LoadBalancer each provide a way to expose services to traffic, and they each do it differently. Let’s take a look at how each works and where you would use them.

ClusterIP

ClusterIP is the preferred option for internal service access and uses an internal IP address to access the service. Some examples of where ClusterIP might be the best option include service debugging during development and testing, internal traffic, and dashboards.

NodePort

A NodePort is a service type that exposes a service on a static port on every node in the cluster. It’s primarily used for exposing services in a non-production environment (in fact, production use is not recommended). As an example, a NodePort would be used to expose a single service (with no load-balancing requirements for multiple services).
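
As a rough sketch, a NodePort service could look like the following; the names are hypothetical, and the nodePort value of 30080 must fall within the cluster’s node port range (30000-32767 by default):

apiVersion: v1
kind: Service
metadata:
  name: my-nodeport-service    # hypothetical name
spec:
  type: NodePort
  selector:
    app: my-app                # pods that receive the traffic
  ports:
    - port: 80                 # port exposed inside the cluster
      targetPort: 8080         # port the pod listens on
      nodePort: 30080          # static port opened on every node

The service is then reachable from outside the cluster at any node’s IP address on port 30080.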

LoadBalancer

This method uses an external load balancer, typically provisioned by a cloud provider, to expose services to the Internet. You can use LoadBalancer in a production environment, but Ingress is often preferred.
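
A minimal sketch of a LoadBalancer service, assuming the cluster runs on a cloud provider that can provision external load balancers (the names are hypothetical):

apiVersion: v1
kind: Service
metadata:
  name: my-loadbalancer-service   # hypothetical name
spec:
  type: LoadBalancer              # asks the cloud provider for an external load balancer
  selector:
    app: my-app
  ports:
    - port: 80                    # port exposed by the external load balancer
      targetPort: 8080            # port the pod listens on

Note that each LoadBalancer service typically provisions its own external load balancer.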

Ingress

Ingress enables you to consolidate the traffic-routing rules into a single resource and runs as part of a Kubernetes cluster. Some reasons Kubernetes Ingress is the preferred option for exposing a service in a production environment include the following:

  • Traffic routing is controlled by rules defined on the Ingress Resource.
  • Ingress is part of the Kubernetes cluster and runs as pods.
  • An external load balancer is expensive, and you need to manage it outside the Kubernetes cluster, whereas Kubernetes Ingress is managed from inside the cluster.

In production environments, you typically use Ingress to expose applications to the Internet. An application is accessed from the Internet via Port 80 (HTTP) or Port 443 (HTTPS), and Ingress is an object that allows access to your Kubernetes services from outside the Kubernetes cluster. 
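
As an illustration of SSL termination, an Ingress can reference a Kubernetes Secret that holds a TLS certificate; in the hedged sketch below, example.com, web-service, and example-tls-secret are hypothetical and would need to exist in your cluster:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: tls-ingress                    # hypothetical name
spec:
  tls:
    - hosts:
        - example.com                  # host covered by the certificate
      secretName: example-tls-secret   # Secret containing tls.crt and tls.key
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: web-service
              servicePort: 80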

Summary

The Kubernetes Ingress API lets you expose your applications deployed in a Kubernetes cluster to the Internet with routing rules consolidated into a single resource. To implement Ingress, you need to configure an Ingress Controller in your cluster; it is responsible for processing Ingress Resource information and allowing traffic based on the Ingress Rules. It’s important to choose the right service type with the appropriate configuration to expose your application to the Internet based on the guidelines listed above.

Author
Ravi Saraswathi IBM Chief Architect, IBM Blog