Setting up load balancers and DNS

You must set up load balancers and DNS for the Acceptor and Gateway components so that these components are exposed to the public internet and you can access the Instana UI. The steps differ between an Instana backend on Kubernetes and an Instana backend on Red Hat OpenShift.

For Kubernetes, you must either define Ingresses or create Services of type LoadBalancer. For Red Hat OpenShift, you must either define Routes or create Services of type LoadBalancer.

Domain configuration

For both an Instana backend on Kubernetes and an Instana backend on Red Hat OpenShift, you must create A records in your DNS for the base_domain, for the Acceptor subdomain (usually ingress), for the OTLP Acceptor subdomains (otlp-http and otlp-grpc), and for all tenant unit subdomains:

  • <base_domain>
  • ingress.<base_domain>
  • otlp-http.<base_domain>
  • otlp-grpc.<base_domain>
  • <unit-name>-<tenant-name>.<base_domain>
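
Before you continue, you can optionally verify that the records resolve, for example with dig. The queries use the placeholder names from the preceding list; substitute your own base domain and tenant unit names.

  dig +short <base_domain>
  dig +short ingress.<base_domain>
  dig +short otlp-http.<base_domain>
  dig +short otlp-grpc.<base_domain>
  dig +short <unit-name>-<tenant-name>.<base_domain>

After the load balancers or routes that are described later in this section are in place, each query must return the public address that serves that host name.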

Then, configure the domains in the CoreSpec as shown in the following code. For more information about CoreSpec, see Creating a Core.

spec:
  agentAcceptorConfig:
    host: ingress.<base_domain>
    port: 443
  baseDomain: <base_domain>
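
If your Core resource is maintained in a YAML file, you can apply the updated spec with kubectl. The file name core.yaml and the namespace instana-core are assumptions; use the file and namespace from your own installation.

  # core.yaml is a placeholder for the file that contains your Core resource
  kubectl apply -f core.yaml -n instana-core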

Instana backend on Kubernetes

To set up load balancers for your Instana backend on Kubernetes, use Services of type LoadBalancer as follows:

Acceptor

  1. Create a YAML file such as service.yaml with one of the following configurations:

    • For Azure Kubernetes Service (AKS):

      apiVersion: v1
      kind: Service
      metadata:
        namespace: instana-core
        annotations:
          # For more load balancer annotations, see https://cloud-provider-azure.sigs.k8s.io/topics/loadbalancer/#loadbalancer-annotations
          service.beta.kubernetes.io/azure-load-balancer-resource-group: <your-resource-group>
          service.beta.kubernetes.io/azure-load-balancer-internal: "false" # "false" creates an internet-facing load balancer
          service.beta.kubernetes.io/azure-dns-label-name: <dns-label-name>
        name: loadbalancer-acceptor
      spec:
        type: LoadBalancer
        externalTrafficPolicy: Local
        ports:
          - name: http-service
            port: 443
            protocol: TCP
            targetPort: http-service
        selector:
          app.kubernetes.io/name: instana
          app.kubernetes.io/component: acceptor
          instana.io/group: service
      
    • For Amazon Elastic Kubernetes Service (Amazon EKS):

      apiVersion: v1
      kind: Service
      metadata:
        namespace: instana-core
        annotations:
          # For more service annotations, see https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.8/guide/service/annotations/
          service.beta.kubernetes.io/aws-load-balancer-name: <your-load-balancer-name>
          service.beta.kubernetes.io/aws-load-balancer-subnets: <subnet1-name>,<subnet2-name>,<subnet3-name>
          service.beta.kubernetes.io/aws-load-balancer-ip-address-type: ipv4
          service.beta.kubernetes.io/aws-load-balancer-scheme: "internet-facing"
        name: loadbalancer-acceptor
      spec:
        type: LoadBalancer
        externalTrafficPolicy: Local
        ports:
          - name: http-service
            port: 443
            protocol: TCP
            targetPort: http-service
        selector:
          app.kubernetes.io/name: instana
          app.kubernetes.io/component: acceptor
          instana.io/group: service
      
    • For Google Kubernetes Engine (GKE):

      apiVersion: v1
      kind: Service
      metadata:
        namespace: instana-core
        annotations:
          # For more service annotations, see https://cloud.google.com/kubernetes-engine/docs/concepts/service-load-balancer
          cloud.google.com/l4-rbs: "enabled"
        name: loadbalancer-acceptor
      spec:
        type: LoadBalancer
        loadBalancerIP: <your_loadbalancer_IP>
        externalTrafficPolicy: Local
        ports:
          - name: http-service
            port: 443
            protocol: TCP
            targetPort: http-service
        selector:
          app.kubernetes.io/name: instana
          app.kubernetes.io/component: acceptor
          instana.io/group: service
      

      Replace <your_loadbalancer_IP> with the IP address of your load balancer.

  2. Apply the YAML file by running the following command:

    kubectl apply -f service.yaml -n <CORE_NAMESPACE>
    

    Replace <CORE_NAMESPACE> with the namespace of the Core object.
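
    After the Service is created, the cloud provider provisions the load balancer and assigns an external address, which is shown in the EXTERNAL-IP column of the following command:

    kubectl get service loadbalancer-acceptor -n <CORE_NAMESPACE>

    On AKS and GKE the external address is an IP address, so point the A record for ingress.<base_domain> to it. On Amazon EKS the external address is typically a DNS name, in which case a CNAME or alias record is the usual choice.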

Gateway

  1. Create a YAML file such as service.yaml with one of the following configurations:

    • For Azure Kubernetes Service (AKS):

      apiVersion: v1
      kind: Service
      metadata:
        namespace: instana-core
        name: loadbalancer-gateway
        annotations:
          # For more load balancer annotations, see https://cloud-provider-azure.sigs.k8s.io/topics/loadbalancer/#loadbalancer-annotations
          service.beta.kubernetes.io/azure-load-balancer-resource-group: <your-resource-group>
          service.beta.kubernetes.io/azure-load-balancer-internal: "false" # "false" creates an internet-facing load balancer
          service.beta.kubernetes.io/azure-dns-label-name: <dns-label-name>
      spec:
        type: LoadBalancer
        externalTrafficPolicy: Local
        ports:
          - name: https
            port: 443
            protocol: TCP
            targetPort: https
          - name: http
            port: 80
            protocol: TCP
            targetPort: http
        selector:
          app.kubernetes.io/name: instana
          app.kubernetes.io/component: gateway
          instana.io/group: service
      
    • For Amazon Elastic Kubernetes Service (Amazon EKS):

      apiVersion: v1
      kind: Service
      metadata:
        namespace: instana-core
        name: loadbalancer-gateway
        annotations:
          # For more service annotations, see https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.8/guide/service/annotations/
          service.beta.kubernetes.io/aws-load-balancer-name: <your-gateway-name>
          service.beta.kubernetes.io/aws-load-balancer-ip-address-type: ipv4
          service.beta.kubernetes.io/aws-load-balancer-scheme: "internet-facing"
          service.beta.kubernetes.io/aws-load-balancer-subnets: <subnet1-name>,<subnet2-name>,<subnet3-name>
      spec:
        type: LoadBalancer
        externalTrafficPolicy: Local
        ports:
          - name: https
            port: 443
            protocol: TCP
            targetPort: https
          - name: http
            port: 80
            protocol: TCP
            targetPort: http
        selector:
          app.kubernetes.io/name: instana
          app.kubernetes.io/component: gateway
          instana.io/group: service
      
    • For Google Kubernetes Engine (GKE):

      apiVersion: v1
      kind: Service
      metadata:
        namespace: instana-core
        name: loadbalancer-gateway
        annotations:
          # For more service annotations, see https://cloud.google.com/kubernetes-engine/docs/concepts/service-load-balancer
          cloud.google.com/l4-rbs: "enabled"
      spec:
        type: LoadBalancer
        loadBalancerIP: <your_loadbalancer_IP>
        externalTrafficPolicy: Local
        ports:
          - name: https
            port: 443
            protocol: TCP
            targetPort: https
          - name: http
            port: 80
            protocol: TCP
            targetPort: http
        selector:
          app.kubernetes.io/name: instana
          app.kubernetes.io/component: gateway
          instana.io/group: service
      

    Replace <your_loadbalancer_IP> with the IP address of your load balancer.

  2. Apply the YAML file by running the following command:

    kubectl apply -f service.yaml -n <CORE_NAMESPACE>
    

    Replace <CORE_NAMESPACE> with the namespace of the Core object.
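
    As with the Acceptor, wait until the gateway load balancer has an external address, and then point the remaining DNS records (<base_domain> and each <unit-name>-<tenant-name>.<base_domain>) to that address:

    kubectl get service loadbalancer-gateway -n <CORE_NAMESPACE>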

Instana backend on Red Hat OpenShift

To expose the Acceptor and Gateway of your Instana backend on Red Hat OpenShift, create Routes by running the following commands:

Acceptor

oc create route passthrough acceptor --hostname=<acceptor_subdomain> --service=acceptor --port=8600 -n instana-core

Replace <acceptor_subdomain> with the Acceptor host that is configured in the CoreSpec, for example ingress.<base_domain>.

OTLP Acceptor

oc create route passthrough otlp-http-acceptor --hostname=otlp-http.<base_domain> --service=gateway --port=https -n instana-core
oc create route passthrough otlp-grpc-acceptor --hostname=otlp-grpc.<base_domain> --service=gateway --port=https -n instana-core

Gateway

oc create route passthrough base-domain --hostname=<base_domain> --service=gateway --port=https -n instana-core
oc create route passthrough <unitName>-<tenantName>-ui --hostname=<unitName>-<tenantName>.<base_domain> --service=gateway --port=https -n instana-core
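
To confirm that the routes are admitted and carry the expected host names, you can list them:

oc get route -n instana-core

The DNS records for these host names must resolve to the address of the Red Hat OpenShift router (the default Ingress Controller) so that external traffic reaches the routes.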

Website and Mobile App Monitoring (EUM)

If you want to collect only beacons for websites and mobile applications without using client IP addresses or geolocation services, the gateway configuration suffices. The client IP address is likely to appear as an internal address within the Red Hat OpenShift cluster.

To collect the actual client IP address or enable geolocation services, you must take extra steps. These steps make sure that the client’s IP address is preserved as the beacon travels through the network topology to the Instana backend.

The EUM acceptor in the backend relies on the x-forwarded-for header in incoming requests to determine the client’s IP address. Although the Instana gateway component can add this header, the client’s IP address is often incorrect by the time the request reaches the gateway in a typical Red Hat OpenShift configuration. The issue occurs because the request passes through a component that handles network address translation (NAT), such as a load balancer or reverse proxy. The most common causes of this issue are:

  • A load balancer or reverse proxy, such as HAProxy or NGINX, that converts a public IP address to a private IP address at the edge of the company’s network
  • A load balancer or reverse proxy, such as HAProxy or NGINX, that forwards requests to the Red Hat OpenShift master nodes
  • The Red Hat OpenShift Ingress Controller

In a network topology with NAT components, it is essential to set the x-forwarded-for header while the client IP address is still accurate. The solution is to configure the reverse proxy or load balancer to set this header.

A reverse proxy or load balancer can modify a request only if it is not encrypted. To add the x-forwarded-for header, the request must be decrypted, modified, and then re-encrypted. Most reverse proxies and load balancers support this process, but it requires an extra TLS certificate in the proxy or load balancer. The Red Hat OpenShift Ingress Controller can also handle this using a route of type reencrypt.

Sample configuration 1

Consider a scenario where a load balancer or reverse proxy is positioned between the client web browser and the Instana backend on Red Hat OpenShift, and the Instana backend uses the Red Hat OpenShift Ingress Controller to handle requests. The load balancer or reverse proxy can be configured to preserve the client IP address by adding the x-forwarded-for header. This configuration involves terminating the TLS connection, inserting the header, and then reencrypting the connection. After reencryption, the load balancer or reverse proxy forwards the request to the Instana gateway through the Red Hat OpenShift Ingress Controller.

Figure: Sample configuration 1

In this configuration, a passthrough route can be used to direct traffic to the Instana gateway, as the x-forwarded-for header is already added to the request with the correct IP address. As a result, the Ingress Controller does not need to decrypt the request.

Sample configuration 2

When no load balancer or reverse proxy exists between the client browser and the Instana backend on Red Hat OpenShift, the Red Hat OpenShift Ingress Controller is the sole component that performs NAT. In this scenario, a reencrypt route can add the x-forwarded-for header to the request before it reaches the Instana gateway. By default, the Red Hat OpenShift Ingress Controller either adds the x-forwarded-for header if it is missing or appends the client IP address to the header if it already exists.

Figure: Sample configuration 2
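
The exact command depends on your certificates, but a reencrypt route for the Instana gateway might look like the following sketch. The route name eum-reencrypt and the CA file gateway-ca.crt are assumptions; --dest-ca-cert must point to the CA that signed the gateway's serving certificate.

# eum-reencrypt and gateway-ca.crt are example placeholders; adjust them to your environment
oc create route reencrypt eum-reencrypt --hostname=<base_domain> --service=gateway --port=https --dest-ca-cert=gateway-ca.crt -n instana-core

If a passthrough route already exists for the same host name, remove or replace it so that the reencrypt route handles the traffic. The Ingress Controller then sets or appends the x-forwarded-for header as described earlier, and no additional proxy configuration is required in this topology.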