Ingress resources
An ingress is a collection of rules to allow inbound connections to the Kubernetes cluster services. It can be configured to give Kubernetes services externally reachable URLs, terminate TLS connections, offer name-based virtual hosting, and more.
Ingress controller for layer 7 traffic
Ingress resources in Kubernetes are used to proxy layer 7 traffic to containers in the cluster.
Ingress resources require an ingress controller component that runs as a layer 7 proxy service inside the cluster. In IBM® Cloud Private, an nginx-based ingress controller is provided by default and is deployed on the proxy nodes, or on the master nodes when a master also acts as a proxy.
The default ingress controller watches Kubernetes ingress objects in all namespaces through the Kubernetes API and dynamically programs the nginx proxy rules for upstream services based on the ingress resources. By default, the ingress controller is bootstrapped with load-balancing policies, such as the load-balancing algorithm and the backend weight scheme.
More than one ingress controller can be deployed if isolation between namespaces is required. The ingress controller itself is a container deployment that can be scaled out. It is exposed on a host port on the proxy nodes and can proxy traffic to any of the pod and service IPs that are running in the cluster.
To deny all ingress traffic to applications that are under your namespace, see Denying ingress traffic.
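As a sketch of what such a policy looks like, a deny-all NetworkPolicy selects every pod in a namespace and permits no ingress traffic. The policy name here is illustrative:

```yaml
# Illustrative deny-all policy: the empty podSelector matches every pod in
# the namespace, and listing Ingress with no rules allows no inbound traffic.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-ingress
spec:
  podSelector: {}
  policyTypes:
  - Ingress
```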
Single service ingress
You can expose a single service through ingress. In the following example, a Node.js server was created with the service name mynode-ibm-nodejs-sample on port 3000. In this case, all traffic on the ingress controller's address and port (80 or 443) is forwarded to this service.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: mynode1ing
spec:
  backend:
    serviceName: mynode-ibm-nodejs-sample
    servicePort: 3000
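The ingress above assumes that a service named mynode-ibm-nodejs-sample already exists on port 3000. A minimal sketch of such a service follows; the pod selector label is an assumption and must match the labels on your deployment's pods:

```yaml
# Sketch of the backend service assumed by the ingress. The app label is
# hypothetical; use the labels that your Node.js pods actually carry.
apiVersion: v1
kind: Service
metadata:
  name: mynode-ibm-nodejs-sample
spec:
  selector:
    app: mynode-ibm-nodejs-sample
  ports:
  - port: 3000
    targetPort: 3000
```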
Simple fanout
With a simple fanout, you can define multiple HTTP services at different paths and provide a single proxy that routes to the correct endpoints in the backend. When you have a highly available load balancer that is managing your traffic, this type of ingress resource is helpful in reducing the number of load balancers to a minimum.
In the following example, / is the rewrite target for two services: customer-api on port 4191 and orders-api on port 9090. Both services' context roots are at /; the ingress rewrites the paths /api/customer/* and /api/orders/* to / when it proxies the requests to the backends.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    ingress.kubernetes.io/rewrite-target: /
  name: api
spec:
  rules:
  - host: api.example.com
    http:
      paths:
      - backend:
          serviceName: customer-api
          servicePort: 4191
        path: /api/customer/*
      - backend:
          serviceName: orders-api
          servicePort: 9090
        path: /api/orders/*
Name-based virtual hosting
Name-based virtual hosting provides the capability to host multiple applications by using the same ingress controller address. This kind of ingress routes HTTP requests to different services based on the Host header. In the following example, two Node.js servers are deployed. The console for the first service can be accessed by using the host name mynode1.example.com, and the second by using mynode2.example.com. In DNS, mynode1.example.com and mynode2.example.com can each be either an A record for the proxy node virtual IP 10.0.0.1 or a CNAME for the load balancer that forwards traffic to where the ingress controller is listening.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: mynode1ing
spec:
  rules:
  - host: mynode1.example.com
    http:
      paths:
      - backend:
          serviceName: mynode1-ibm-nodejs-sample
          servicePort: 3000
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: mynode2ing
spec:
  rules:
  - host: mynode2.example.com
    http:
      paths:
      - backend:
          serviceName: mynode2-ibm-nodejs-sample
          servicePort: 3000
It is usually a good practice to provide a value for host; the default is *, which forwards all requests to the backend.
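Continuing the example, the DNS records described above might look like the following zone-file sketch; the load balancer name lb.example.com is an assumption:

```text
; Either an A record for the proxy node virtual IP ...
mynode1.example.com.   IN  A      10.0.0.1
; ... or a CNAME for a load balancer in front of the ingress controller
; (lb.example.com is a hypothetical name).
mynode2.example.com.   IN  CNAME  lb.example.com.
```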
TLS
An ingress service can be secured by using a TLS private key and certificate. The TLS private key and certificate must be defined in a secret with the key names tls.key and tls.crt. The ingress assumes TLS termination, and TLS traffic is proxied only on port 443.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    ingress.kubernetes.io/rewrite-target: /
  name: api
spec:
  rules:
  - host: api.example.com
    http:
      paths:
      - backend:
          serviceName: customer-api
          servicePort: 4191
        path: /api/customer/*
      - backend:
          serviceName: orders-api
          servicePort: 9090
        path: /api/orders/*
  tls:
  - hosts:
    - api.example.com
    secretName: api-tls-secret
In the previous example, TLS termination is added to the api.example.com ingress resource. The certificate's subject name or subject alternative names (SANs) must match the host value in the ingress resource, the certificate must be valid (not expired), and the full certificate chain (including any intermediate and root certificates) must be trusted by the client; otherwise, the application shows a security warning during the TLS handshake. In this example, the tls.crt subject name either contains api.example.com or is a wildcard certificate for *.example.com. The DNS entry for api.example.com is an A record for the proxy nodes' virtual IP address.
The secret api-tls-secret is created in the same namespace as the ingress resource by using the following command:
kubectl create secret tls api-tls-secret --key=/path/to/tls.key --cert=/path/to/tls.crt
The secret can also be created in yaml if the TLS key and certificate payloads are base-64 encoded.
apiVersion: v1
kind: Secret
type: kubernetes.io/tls
metadata:
  name: api-tls-secret
data:
  tls.crt: <base64-encoded cert>
  tls.key: <base64-encoded key>
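The base64 payloads for the data fields can be produced with the base64 utility. The following sketch uses stand-in files; substitute your real certificate and key paths:

```shell
# Sketch: produce the base64 payloads for the Secret's data fields.
# Stand-in PEM files are used here; substitute your real tls.crt/tls.key.
printf 'FAKE-CERT' > /tmp/tls.crt
printf 'FAKE-KEY' > /tmp/tls.key

# -w0 (GNU coreutils) disables line wrapping so each value is a single line.
base64 -w0 /tmp/tls.crt; echo    # value for data.tls.crt
base64 -w0 /tmp/tls.key; echo    # value for data.tls.key
```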
WebSockets
Support for WebSockets is provided by the nginx ingress controller out of the box. Because the default proxy timeout is 60 seconds, the timeout needs to be increased for long-lived WebSocket connections. To expose a service as a WebSocket service, annotate it in the ingress resource definition, for example nginx.org/websocket-services: "service1[,service2,...]".
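A sketch of such an ingress follows. The host, service name ws-service, and port are assumptions, and the proxy-read-timeout annotation shown is one common way to raise the 60-second timeout on nginx-based controllers; verify the exact annotation name that your controller version supports:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ws-ing
  annotations:
    # Mark the backend as a WebSocket service (annotation from the text above).
    nginx.org/websocket-services: "ws-service"
    # Assumed annotation for raising the 60-second proxy timeout; check your
    # controller's documentation for the supported annotation name.
    ingress.kubernetes.io/proxy-read-timeout: "3600"
spec:
  rules:
  - host: ws.example.com
    http:
      paths:
      - backend:
          serviceName: ws-service
          servicePort: 8080
        path: /
```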
Shared ingress controller
In IBM Cloud Private, a global ingress controller is installed by default and is deployed on all proxy nodes. It provides the capability to define ingress resources for your applications across all namespaces. The global ingress controller runs in the kube-system namespace. If a NetworkPolicy is used to isolate namespace traffic, an additional policy must be created to allow traffic from the ingress controller to the proxied backend services in other namespaces.
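A sketch of such an allow policy follows. The namespace label shown is an assumption; use whatever label identifies the kube-system namespace in your cluster:

```yaml
# Sketch: allow traffic from the ingress controller's namespace (kube-system)
# into every pod of the application namespace. The matchLabels value is
# hypothetical and must match the labels on your kube-system namespace.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-ingress-controller
spec:
  podSelector: {}
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: kube-system
```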
Advantages:
- A common ingress controller reduces compute resources that are required to host applications.
- A common ingress controller is available for immediate use.
Disadvantages:
- All client traffic passes through a shared ingress controller, so one service's client traffic can affect another's.
- Limited ability to isolate one ingress resource's traffic from another's. For example, a public-facing API and an operations dashboard that run in the same cluster share the same ingress controller.
- If an attacker were to gain access to the ingress controller, they would be able to observe decrypted traffic for all proxied services.
- You need to maintain different ingress resource documents for different stages; that is, maintain multiple copies of the same ingress resource yaml file with different namespace fields.
- The ingress controller needs access to read ingress, service, and pod resources in every namespace in the Kubernetes API to implement the ingress rules.
Isolated ingress controllers per namespace
An ingress controller can be installed as a Helm chart in an isolated namespace to perform ingress for the services in that namespace. In this deployment type, the ingress controller is given a role that can access only the ingress and related resources in its own namespace.
Advantages:
- Delineation of ingress resources for the various stages, such as development, production, and so on.
- Performance for each namespace can be scaled individually.
- Traffic is isolated; when combined with isolated worker nodes on separate VLANs, true layer 2 isolation can be achieved as the upstream traffic does not leave the VLAN.
- CI/CD can use the same ingress resource document to deploy (assuming that development namespace is different from production namespace) across various stages.
Disadvantages:
- Additional ingress controllers must be deployed, using additional resources.
- Ingress controllers in separate namespaces might require either a dedicated node or a dedicated external load balancer.