An ingress is a collection of rules that allow inbound connections to reach Kubernetes cluster services. It can be configured to give Kubernetes services externally reachable URLs, terminate TLS connections, offer name-based virtual hosting, and more.
Ingress controller for layer 7 traffic
Ingress resources in Kubernetes are used to proxy layer 7 traffic to containers in the cluster.
Ingress resources require an ingress controller component that runs as a layer 7 proxy service inside the cluster. In IBM® Cloud Private, an nginx-based ingress controller is provided by default and is deployed on the proxy nodes, or on the master nodes when a master node also acts as a proxy.
The default ingress controller watches Kubernetes ingress objects in all namespaces through the Kubernetes API and dynamically programs the nginx proxy rules for upstream services based on the ingress resources. By default, the ingress controller is bootstrapped with load balancing policies, such as load balancing algorithms and backend weight schemes.
More than one ingress controller can be deployed if isolation between namespaces is required. The ingress controller itself is a container deployment that can be scaled out. It is exposed on a host port on the proxy nodes and can proxy traffic to all of the pod and service IPs that are running in the cluster.
To deny all ingress traffic to applications that are under your namespace, see Denying ingress traffic.
Single service ingress
You can expose a single service through ingress. In the following example, a Node.js server was created with the service name mynode-ibm-nodejs-sample on port 3000. In this case, all traffic on the ingress controller's address and port (80 or 443) is forwarded to this service.
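A minimal single-service ingress might look like the following sketch. The service name and port come from the example above; the resource name is illustrative, and the apiVersion may differ depending on your Kubernetes version (newer clusters use networking.k8s.io/v1).

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: mynode-ingress          # illustrative name
spec:
  # With no rules, all traffic that reaches the ingress controller
  # is forwarded to this default backend.
  backend:
    serviceName: mynode-ibm-nodejs-sample
    servicePort: 3000
```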
Simple fanout
With a simple fanout, you can define multiple HTTP services at different paths and provide a single proxy that routes to the correct endpoints in the backend. When a highly available load balancer manages your traffic, this type of ingress resource helps reduce the number of load balancers to a minimum.
In the following example, / is the rewrite target for two services: customer-api on port 4191 and orders-api on port 9090. Both services' context roots are at /; the ingress rewrites the request path to / when it proxies the requests to the backends.
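A fanout ingress for these two services might be sketched as follows. The /customers and /orders paths are assumptions for illustration, and the rewrite annotation prefix varies by controller version (newer nginx ingress controllers use nginx.ingress.kubernetes.io/rewrite-target).

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: api-fanout-ingress      # illustrative name
  annotations:
    # Rewrite the matched path to / before proxying to the backend.
    ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: api.example.com
    http:
      paths:
      - path: /customers        # assumed path; adjust to your routing scheme
        backend:
          serviceName: customer-api
          servicePort: 4191
      - path: /orders           # assumed path
        backend:
          serviceName: orders-api
          servicePort: 9090
```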
Name-based virtual hosting
Name-based virtual hosting provides the capability to host multiple applications that use the same ingress controller address. This kind of ingress routes HTTP requests to different services based on the Host header. In the following example, two Node.js servers are deployed. The console for the first service can be accessed by using host name mynode1.example.com, and the second by using mynode2.example.com. In DNS, each host name can be either an A record for the proxy node virtual IP 10.0.0.1 or a CNAME for the load balancer that forwards traffic to where the ingress controller is listening.
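A name-based virtual hosting ingress for the two Node.js deployments might look like this sketch. The service names and port follow the naming pattern of the earlier example but are assumptions; substitute your actual service names.

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nodejs-vhost-ingress           # illustrative name
spec:
  rules:
  # Requests with Host: mynode1.example.com go to the first service.
  - host: mynode1.example.com
    http:
      paths:
      - backend:
          serviceName: mynode1-ibm-nodejs-sample   # assumed service name
          servicePort: 3000
  # Requests with Host: mynode2.example.com go to the second service.
  - host: mynode2.example.com
    http:
      paths:
      - backend:
          serviceName: mynode2-ibm-nodejs-sample   # assumed service name
          servicePort: 3000
```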
It is usually a good practice to provide a value for host, as the default is *, which forwards all requests to the backend.
TLS
An ingress service can be secured by using a TLS private key and certificate. The TLS private key and certificate must be defined in a secret with the key names tls.key and tls.crt. The ingress presumes TLS termination, and TLS traffic is proxied only on port 443.
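A TLS-terminating ingress for api.example.com might be sketched as follows. The backend service is assumed to be the customer-api service from the fanout example; the secret name matches the api-tls-secret described below.

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: api-tls-ingress          # illustrative name
spec:
  tls:
  - hosts:
    - api.example.com
    # Secret in the same namespace holding tls.key and tls.crt.
    secretName: api-tls-secret
  rules:
  - host: api.example.com
    http:
      paths:
      - backend:
          serviceName: customer-api   # assumed backend service
          servicePort: 4191
```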
In the previous example, TLS termination is added to the api.example.com ingress resource. The certificate's subject name or subject alternative names (SANs) must match the host value in the ingress resource, the certificate must be valid (not expired), and the full certificate chain (including any intermediate and root certificates) must be trusted by the client; otherwise, the application shows a security warning during the TLS handshake. In the example, the tls.crt subject name either contains api.example.com or is a wildcard certificate for *.example.com. The DNS entry for api.example.com is an A record for the proxy nodes' virtual IP address.
api-tls-secret is created in the same namespace as the ingress resource by using the following command:
kubectl create secret tls api-tls-secret --key=/path/to/tls.key --cert=/path/to/tls.crt
The secret can also be created in YAML if the TLS key and certificate payloads are base64-encoded:
apiVersion: v1
kind: Secret
metadata:
  name: api-tls-secret
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded cert>
  tls.key: <base64-encoded key>
WebSockets
Support for WebSockets is provided by the nginx ingress controller out of the box. However, because the default proxy timeout is 60 seconds, the timeout needs to be increased for long-lived WebSocket connections. To expose a service as a WebSocket service, annotate it in the ingress resource definition.
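A WebSocket ingress might be annotated as in the following sketch. The host, service name, and port are assumptions, and the annotation prefix varies by controller version (newer nginx ingress controllers use nginx.ingress.kubernetes.io/).

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: websocket-ingress        # illustrative name
  annotations:
    # Raise the proxy timeouts (in seconds) so that long-lived
    # WebSocket connections are not closed after the 60-second default.
    ingress.kubernetes.io/proxy-read-timeout: "3600"
    ingress.kubernetes.io/proxy-send-timeout: "3600"
spec:
  rules:
  - host: ws.example.com              # assumed host name
    http:
      paths:
      - backend:
          serviceName: ws-service     # assumed WebSocket service
          servicePort: 8080
```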
Shared ingress controller
In IBM Cloud Private, a global ingress controller is installed by default and is deployed on all proxy nodes. This provides the capability to define ingress resources for your applications across all namespaces. The global ingress controller runs in the kube-system namespace. If a NetworkPolicy is used to isolate namespace traffic, an additional policy must be created to allow traffic from the ingress controller to the proxied backend services in other namespaces.
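Such an allow policy might be sketched as follows. The application namespace name and the kube-system namespace label are assumptions; namespaceSelector matches on labels, so the kube-system namespace must carry the label that the selector expects.

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-ingress-controller
  namespace: myapp            # assumed application namespace
spec:
  podSelector: {}             # apply to all pods in the namespace
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          # assumed label; label the namespace if needed, e.g.
          # kubectl label namespace kube-system name=kube-system
          name: kube-system
```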
Advantages:
- A common ingress controller reduces the compute resources that are required to host applications.
- A common ingress controller is available for immediate use.

Disadvantages:
- All client traffic passes through the shared ingress controller, so one service's client traffic can affect another's.
- Limited ability to isolate upstream ingress resource traffic from other ingress traffic. For example, a public-facing API and an operations dashboard that run in the same cluster share the same ingress controller.
- If an attacker were to gain access to the ingress controller, they would be able to observe decrypted traffic for all proxied services.
- You need to maintain different ingress resource documents for different stages; that is, multiple copies of the same ingress resource YAML file with different namespace fields.
- The ingress controller needs Kubernetes API access to read ingress, service, and pod resources in every namespace to implement the ingress rules.
Isolated ingress controllers per namespace
An ingress controller can be installed as a Helm chart into an isolated namespace to perform ingress for services in that namespace. In this deployment type, the ingress controller is given a role that can access only the ingress, service, and pod resources in its own namespace.
Advantages:
- Delineation of ingress resources for various stages of development, production, and so on.
- Performance for each namespace can be scaled individually.
- Traffic is isolated; when combined with isolated worker nodes on separate VLANs, true layer 2 isolation can be achieved because the upstream traffic does not leave the VLAN.
- CI/CD can use the same ingress resource document to deploy across various stages (assuming that the development namespace is different from the production namespace).
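A namespace-scoped controller installation might look like the following sketch, which uses the community stable/nginx-ingress chart and Helm 2 syntax; the release and namespace names are assumptions, and chart values may differ by chart version.

```shell
# Install an ingress controller that watches only the team-a namespace.
# controller.scope.enabled/namespace restrict the controller to one namespace.
helm install stable/nginx-ingress \
  --name team-a-ingress \
  --namespace team-a \
  --set controller.scope.enabled=true \
  --set controller.scope.namespace=team-a
```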