
Setting up load balancers and DNS

To enable public access to the Acceptor, Gateway, and Instana UI endpoints, you need to configure both load balancers and DNS.

The setup process varies depending on whether you're using Kubernetes or Red Hat OpenShift for your Instana backend:

  • For Kubernetes, you must either define Ingresses or create Services of type LoadBalancer, and update your DNS A records.
  • For Red Hat OpenShift, you must either define Routes or create Services of type LoadBalancer, and update your DNS A records. Additional configurations might be required for website and mobile app monitoring, see Website and mobile app monitoring (Red Hat OpenShift).

Configuring endpoints on the Instana backend on Kubernetes

In Kubernetes, you can either define Ingresses or create LoadBalancer-type Services to expose your endpoints.

To expose your Acceptor and Gateway endpoints with a LoadBalancer-type Service, refer to the platform-specific YAML samples for Azure Kubernetes Service (AKS), Amazon Elastic Kubernetes Service (EKS), or Google Kubernetes Engine (GKE). These samples provide the annotations and configuration for platform-specific parameters such as resource groups, DNS labels, subnets, and IP addresses.

Domain configuration

For both the Instana backend on Kubernetes and the Instana backend on Red Hat OpenShift, you must set up A records in your DNS for the base domain, the Acceptor subdomain (usually ingress), the OTLP Acceptor subdomains (otlp-http and otlp-grpc), and all tenant unit subdomains:

  • <base_domain>
  • ingress.<base_domain>
  • otlp-http.<base_domain>
  • otlp-grpc.<base_domain>
  • <unit-name>-<tenant-name>.<base_domain>

If you want to use only <base_domain> for all your Ingress traffic, see Enabling support for Single Ingress Domain.
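For illustration, the full set of required A-record hostnames can be enumerated with a small shell loop. The values of BASE_DOMAIN and TENANT_UNITS here are placeholders; substitute your own base domain and <unit-name>-<tenant-name> pairs:

```shell
# Placeholders: replace with your own base domain and unit-tenant pairs
BASE_DOMAIN="example.com"
TENANT_UNITS="unit0-tenant0 unit1-tenant0"

# Fixed subdomains required by the Instana backend
for sub in "" "ingress." "otlp-http." "otlp-grpc."; do
  echo "${sub}${BASE_DOMAIN}"
done

# One record per tenant unit
for ut in ${TENANT_UNITS}; do
  echo "${ut}.${BASE_DOMAIN}"
done
```

Each printed hostname needs an A record that points to the public IP of the corresponding service.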

Then, configure the domains in the CoreSpec as shown in the following code. For more information about CoreSpec, see Creating a Core.

spec:
  agentAcceptorConfig:
    host: ingress.<base_domain>
    port: 443
  baseDomain: <base_domain>

Instana backend on Kubernetes

If you'd like the Instana Enterprise Operator to create the LoadBalancer services for you, see Creating LoadBalancer services automatically.

To set up load balancers for your Instana backend on Kubernetes, use Services of type LoadBalancer as follows:

Acceptor

  1. Create a YAML file such as service.yaml.

  2. To set up the Acceptor in the service.yaml file, complete one of the following steps depending on your environment:

    • For Azure Kubernetes Service (AKS):

      apiVersion: v1
      kind: Service
      metadata:
        namespace: instana-core
        annotations:
          # For more load balancer annotations, see https://cloud-provider-azure.sigs.k8s.io/topics/loadbalancer/#loadbalancer-annotations
          service.beta.kubernetes.io/azure-load-balancer-resource-group: <your-resource-group>
          service.beta.kubernetes.io/azure-load-balancer-internal: "false" # "false" for an internet-facing load balancer
          service.beta.kubernetes.io/azure-dns-label-name: <dns-label-name>
        name: loadbalancer-acceptor
      spec:
        type: LoadBalancer
        externalTrafficPolicy: Local
        ports:
          - name: http-service
            port: 443
            protocol: TCP
            targetPort: http-service
        selector:
          app.kubernetes.io/name: instana
          app.kubernetes.io/component: acceptor
          instana.io/group: service
      
    • For Amazon Elastic Kubernetes Service (EKS):

      apiVersion: v1
      kind: Service
      metadata:
        namespace: instana-core
        annotations:
          # For more service annotations, see https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.8/guide/service/annotations/
          service.beta.kubernetes.io/aws-load-balancer-name: <your-load-balancer-name>
          service.beta.kubernetes.io/aws-load-balancer-subnets: <subnet1-name>,<subnet2-name>,<subnet3-name>
          service.beta.kubernetes.io/aws-load-balancer-ip-address-type: ipv4
          service.beta.kubernetes.io/aws-load-balancer-scheme: "internet-facing"
        name: loadbalancer-acceptor
      spec:
        type: LoadBalancer
        externalTrafficPolicy: Local
        ports:
          - name: http-service
            port: 443
            protocol: TCP
            targetPort: http-service
        selector:
          app.kubernetes.io/name: instana
          app.kubernetes.io/component: acceptor
          instana.io/group: service
      
    • For Google Kubernetes Engine (GKE):

      apiVersion: v1
      kind: Service
      metadata:
        namespace: instana-core
        annotations:
          # For more service annotations, see https://cloud.google.com/kubernetes-engine/docs/concepts/service-load-balancer
          cloud.google.com/l4-rbs: "enabled"
        name: loadbalancer-acceptor
      spec:
        type: LoadBalancer
        loadBalancerIP: <your_loadbalancer_IP>
        externalTrafficPolicy: Local
        ports:
          - name: http-service
            port: 443
            protocol: TCP
            targetPort: http-service
        selector:
          app.kubernetes.io/name: instana
          app.kubernetes.io/component: acceptor
          instana.io/group: service
      

      Replace <your_loadbalancer_IP> with the IP address of your load balancer.

  3. To set up the Gateway in the service.yaml file, complete one of the following steps depending on your environment:

    • For Azure Kubernetes Service (AKS):

      apiVersion: v1
      kind: Service
      metadata:
        namespace: instana-core
        name: loadbalancer-gateway
        annotations:
          # For more load balancer annotations, see https://cloud-provider-azure.sigs.k8s.io/topics/loadbalancer/#loadbalancer-annotations
          service.beta.kubernetes.io/azure-load-balancer-resource-group: <your-resource-group>
          service.beta.kubernetes.io/azure-load-balancer-internal: "false" # "false" for an internet-facing load balancer
          service.beta.kubernetes.io/azure-dns-label-name: <dns-label-name>
      spec:
        type: LoadBalancer
        externalTrafficPolicy: Local
        ports:
          - name: https
            port: 443
            protocol: TCP
            targetPort: https
          - name: http
            port: 80
            protocol: TCP
            targetPort: http
        selector:
          app.kubernetes.io/name: instana
          app.kubernetes.io/component: gateway
          instana.io/group: service
      
    • For Amazon Elastic Kubernetes Service (EKS):

      apiVersion: v1
      kind: Service
      metadata:
        namespace: instana-core
        name: loadbalancer-gateway
        annotations:
          # For more service annotations, see https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.8/guide/service/annotations/
          service.beta.kubernetes.io/aws-load-balancer-name: <your-gateway-name>
          service.beta.kubernetes.io/aws-load-balancer-ip-address-type: ipv4
          service.beta.kubernetes.io/aws-load-balancer-scheme: "internet-facing"
          service.beta.kubernetes.io/aws-load-balancer-subnets: <subnet1-name>,<subnet2-name>,<subnet3-name>
      spec:
        type: LoadBalancer
        externalTrafficPolicy: Local
        ports:
          - name: https
            port: 443
            protocol: TCP
            targetPort: https
          - name: http
            port: 80
            protocol: TCP
            targetPort: http
        selector:
          app.kubernetes.io/name: instana
          app.kubernetes.io/component: gateway
          instana.io/group: service

    • For Google Kubernetes Engine (GKE):

      apiVersion: v1
      kind: Service
      metadata:
        namespace: instana-core
        name: loadbalancer-gateway
        annotations:
          # For more service annotations, see https://cloud.google.com/kubernetes-engine/docs/concepts/service-load-balancer
          cloud.google.com/l4-rbs: "enabled"
      spec:
        type: LoadBalancer
        loadBalancerIP: <your_loadbalancer_IP>
        externalTrafficPolicy: Local
        ports:
          - name: https
            port: 443
            protocol: TCP
            targetPort: https
          - name: http
            port: 80
            protocol: TCP
            targetPort: http
        selector:
          app.kubernetes.io/name: instana
          app.kubernetes.io/component: gateway
          instana.io/group: service

    Replace <your_loadbalancer_IP> with the IP address of your load balancer.

    If Gateway Controller is enabled, update the selector of the loadbalancer-gateway Service as shown in the following example so that ingress traffic is forwarded to the new gateway-v2 pods.

    apiVersion: v1
    kind: Service
    metadata:
      namespace: instana-core
      name: loadbalancer-gateway
    spec:
      selector:
        ...
        app.kubernetes.io/component: gateway-v2 # changed from gateway
        ...
    
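    Instead of reapplying the full manifest, the selector change can also be applied in place with a strategic merge patch. This is a sketch; the Service name and namespace follow the example above:

```shell
# Point the existing load balancer Service at the gateway-v2 pods
# (namespace and Service name as in the example above)
kubectl patch service loadbalancer-gateway -n instana-core \
  --type merge \
  -p '{"spec":{"selector":{"app.kubernetes.io/component":"gateway-v2"}}}'
```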

    If Gateway Controller is enabled and you configured ports other than 443 for any of the acceptors, you must expose those ports so that Kubernetes can route ingress traffic into your Custom Edition environment. Add the following configuration block to the Gateway load balancer:

    apiVersion: v1
    kind: Service
    metadata:
      namespace: instana-core
      name: loadbalancer-gateway
    spec:
      type: LoadBalancer
      ports:
        ...
        - name: <ACCEPTOR_NAME>
          port: <ACCEPTOR_PORT>
          protocol: TCP
          targetPort: <ACCEPTOR_PORT>
        ...
    

    Add a port configuration block for each acceptor port that is configured to a value other than 443.

  4. Apply the YAML file by running the following command:

    kubectl apply -f service.yaml -n <CORE_NAMESPACE>
    

    Replace <CORE_NAMESPACE> with the namespace of the Core object.
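    After the Services are applied, you can confirm that each load balancer was provisioned by checking for an assigned external IP or hostname. Provisioning can take a few minutes; <CORE_NAMESPACE> is the same namespace as in the previous step:

```shell
# List the load balancer Services; EXTERNAL-IP stays <pending>
# until the cloud provider finishes provisioning
kubectl get services -n <CORE_NAMESPACE> loadbalancer-acceptor loadbalancer-gateway -o wide
```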

Configuring endpoints on the Instana backend on Red Hat OpenShift

In Red Hat OpenShift, you can either define Routes or create LoadBalancer-type Services to expose your endpoints.

  1. Expose the Acceptor endpoint with a passthrough Route:

    oc create route passthrough acceptor --hostname=<acceptor_subdomain> --service=acceptor --port=8600 -n instana-core
    
  2. Expose the OpenTelemetry Acceptor endpoints with passthrough Routes:

    oc create route passthrough otlp-http-acceptor --hostname=otlp-http.<base_domain> --service=gateway --port=https -n instana-core
    oc create route passthrough otlp-grpc-acceptor --hostname=otlp-grpc.<base_domain> --service=gateway --port=https -n instana-core
    
  3. Expose the Gateway endpoints with passthrough Routes:

    oc create route passthrough base-domain --hostname=<base_domain> --service=gateway --port=https -n instana-core
    
    oc create route passthrough <unitName>-<tenantName>-ui --hostname=<unitName>-<tenantName>.<base_domain> --service=gateway --port=https -n instana-core
    
  4. (Optional) To set up website and mobile app monitoring on Red Hat OpenShift, see Website and mobile app monitoring (Red Hat OpenShift).
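After creating the Routes, you can list them to confirm their hostnames and TLS termination types:

```shell
# Each Route should show its hostname and "passthrough" termination
oc get routes -n instana-core
```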

DNS configuration

For both the Instana backend on Kubernetes and Red Hat OpenShift, you need to create A records in your DNS that point to the public IPs of your services for the following subdomains:

  • <base_domain>: The parent or root domain for all related subdomains.
  • ingress.<base_domain>: Typically used for the Agent Acceptor or Ingress service.
  • otlp-http.<base_domain>: Handles telemetry data over OTLP HTTP. It is used for sending traces and metrics data through HTTP requests.
  • otlp-grpc.<base_domain>: Handles telemetry data over OTLP gRPC. It is used for sending traces and metrics data through gRPC, which is generally more efficient than HTTP.
  • <unit-name>-<tenant-name>.<base_domain>: The tenant unit subdomain is used to access the Instana UI of a specific unit within a tenant.

  1. Configure the domains in the CoreSpec as shown in the following code, and apply it. For more information about CoreSpec, see Creating a Core.

    spec:
      agentAcceptorConfig:
        host: ingress.<base_domain>
        port: 443
      baseDomain: <base_domain>
    
  2. After the installation is complete, obtain the public IP of the LoadBalancer or Ingress service.

  3. Create A records in your DNS provider to point your subdomains to the public IP addresses.
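Once the A records have propagated, you can spot-check the resolution of each hostname, for example with dig. The hostnames are placeholders for your own domain:

```shell
# Each lookup should return the public IP of the corresponding load balancer
for host in <base_domain> ingress.<base_domain> otlp-http.<base_domain> otlp-grpc.<base_domain>; do
  echo "${host}: $(dig +short "${host}")"
done
```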

Website and mobile app monitoring (Red Hat OpenShift)

If you want to collect only beacons for websites and mobile applications without using client IP addresses or geolocation services, the gateway configuration suffices. The client IP address is likely to appear as an internal address within the Red Hat OpenShift cluster.

To collect the actual client IP address or enable geolocation services, you must take a few more steps. These steps make sure that the client’s IP address is preserved as the beacon travels through the network topology to the Instana backend.

The EUM acceptor in the backend relies on the x-forwarded-for header in incoming requests to determine the client’s IP address. Although the Instana gateway component can add this header, the client’s IP address is often incorrect by the time the request reaches the gateway in a typical Red Hat OpenShift configuration. The issue occurs because the request passes through a component that handles network address translation (NAT), such as a load balancer or reverse proxy. The most common causes of this issue are:

  • A load balancer or reverse proxy that converts a public IP address to a private IP address at the edge of the company’s network (for example, HAProxy or NGINX)
  • A load balancer or reverse proxy that sends requests to the Red Hat OpenShift master nodes (for example, HAProxy or NGINX)
  • The Red Hat OpenShift ingress controller

In a network topology with NAT components, it is essential to set the x-forwarded-for header while the client IP address is still accurate. The solution is to configure the reverse proxy or load balancer to set this header.

A reverse proxy or load balancer can modify a request only if it is not encrypted. To add the x-forwarded-for header, the request must be decrypted, modified, and then reencrypted. Most reverse proxies and load balancers support this process, but it requires an extra TLS certificate in the proxy or load balancer. The Red Hat OpenShift Ingress Controller can also handle this using a route of type reencrypt.
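As an illustration, a minimal HAProxy configuration along these lines terminates TLS, adds the header, and re-encrypts toward the Red Hat OpenShift Ingress Controller. All file paths and hostnames are placeholders, and the snippet is a sketch rather than a complete configuration:

```
frontend eum_frontend
    mode http
    # Terminate TLS at the edge with the proxy's own certificate
    bind *:443 ssl crt /etc/haproxy/certs/edge.pem
    # Add the x-forwarded-for header while the client IP is still accurate
    option forwardfor
    default_backend instana_gateway

backend instana_gateway
    mode http
    # Re-encrypt and forward to the Red Hat OpenShift Ingress Controller
    server ingress router.apps.example.internal:443 ssl verify required ca-file /etc/haproxy/certs/ingress-ca.pem
```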

Sample configuration: Load balancer or reverse proxy between the client web browser and the Instana backend

Consider a scenario where a load balancer or reverse proxy is positioned between the client web browser and the Instana backend on Red Hat OpenShift. In this scenario, the Instana backend uses the Red Hat OpenShift Ingress Controller to handle requests. The load balancer or reverse proxy can be configured to preserve the client IP address by adding the x-forwarded-for header. This configuration involves terminating the TLS connection, inserting the header, and then reencrypting the connection. After reencryption, the load balancer or reverse proxy forwards the request to the Instana gateway through the Red Hat OpenShift Ingress Controller.

(Figure: sample-1)

In this configuration, a passthrough route can be used to direct traffic to the Instana gateway, as the x-forwarded-for header is already added to the request with the correct IP address. As a result, the Ingress Controller does not need to decrypt the request.

Sample configuration: No load balancer or reverse proxy between the client browser and the Instana backend

When no load balancer or reverse proxy exists between the client browser and the Instana backend on Red Hat OpenShift, the Red Hat OpenShift Ingress Controller is the sole component that performs NAT. In this scenario, a reencrypt route can add the x-forwarded-for header to the request before it reaches the Instana gateway. By default, the Red Hat OpenShift Ingress Controller either adds the x-forwarded-for header if it is missing or appends the client IP address to the header if it exists.
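In this scenario, the Gateway can be exposed with a reencrypt Route instead of the passthrough Route shown earlier. This is a sketch; the destination CA certificate file is a placeholder for the CA that signed your gateway certificate:

```shell
# The Ingress Controller decrypts the request, sets x-forwarded-for,
# then re-encrypts it toward the gateway using the destination CA
oc create route reencrypt base-domain --hostname=<base_domain> --service=gateway --port=https \
  --dest-ca-cert=<gateway-ca.pem> -n instana-core
```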

(Figure: sample-2)