Integrating OpenTelemetry with Instana for NGINX
You can use OpenTelemetry with NGINX and IBM Instana Observability to enable distributed tracing and observability for your NGINX web server.
Supported operating systems
OpenTelemetry integration is supported and tested on Linux operating systems only.
Prerequisites
To integrate OpenTelemetry with Instana, you need the following components:
- NGINX web server (version 1.25.3 or later is recommended; earlier versions require building the NGINX OpenTelemetry module from source)
- OpenTelemetry NGINX module or instrumentation
- An active IBM Instana Observability account
Installing the OpenTelemetry NGINX module
The official NGINX OpenTelemetry module (ngx_otel_module) is available as a dynamic module. For more information about the module, see the NGINX OpenTelemetry module documentation.
- Install the NGINX OpenTelemetry module.

  For most Linux distributions, you can install the module from the NGINX repository:

      # For Ubuntu or Debian
      sudo apt-get install nginx-module-otel

      # For RHEL or CentOS
      sudo yum install nginx-module-otel

  Alternatively, you can build NGINX with the OpenTelemetry module from source. For build instructions, see the NGINX documentation.
- Configure NGINX to load the OpenTelemetry module by adding the following line to the top of your nginx.conf file:
load_module modules/ngx_otel_module.so;
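The load_module directive must appear in the main (top-level) context of nginx.conf, before the http block. The following sketch checks that ordering on a sample file; the path and file contents are illustrative, not your real configuration:

```shell
# Create a minimal sample nginx.conf (illustrative content only).
cat > /tmp/nginx-sample.conf <<'EOF'
load_module modules/ngx_otel_module.so;
events {}
http {}
EOF

# Confirm that load_module appears before the http block.
awk '/^load_module/{m=NR} /^http/{h=NR} END{ if (m && (!h || m<h)) print "ok: module loaded before http block" }' /tmp/nginx-sample.conf
```

On a live system, `nginx -t` after editing nginx.conf is the authoritative check that the module loads cleanly.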
Configuring NGINX with OpenTelemetry
After you install the OpenTelemetry NGINX module, configure your NGINX server to send OpenTelemetry data. The basic configuration structure includes the following directives:
| Configuration directive | Description |
|---|---|
| otel_exporter | Defines the OTLP endpoint where traces are sent |
| otel_service_name | Logical service name for infrastructure correlation (for more information, see Infrastructure correlation) |
| otel_trace | Enables or disables OpenTelemetry tracing |
| otel_trace_context | Configures trace context propagation |
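Taken together, these directives form a minimal tracing configuration. The following sketch uses placeholder values for the endpoint and service name; adjust both for your environment:

```nginx
http {
    otel_exporter {
        endpoint localhost:4317;   # placeholder OTLP endpoint
    }
    otel_service_name my_service;  # placeholder service name
    otel_trace on;

    server {
        location / {
            otel_trace_context propagate;
        }
    }
}
```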
Trace context propagation
The native NGINX OpenTelemetry module uses the W3C Trace Context standard for distributed tracing. The module propagates trace context by using the following standard HTTP headers:
- traceparent - Contains the trace ID, parent span ID, and trace flags
- tracestate - Carries vendor-specific trace information
The module does not support Instana-specific headers for trace propagation:
- X-Instana-T (Trace ID)
- X-Instana-S (Span ID)
- X-Instana-L (Sampling level or decision)
When NGINX receives requests with W3C Trace Context headers, it automatically continues the trace. When NGINX makes upstream requests, it propagates the trace context by using the same W3C standard headers.
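A traceparent header has four dash-separated fields: version, a 32-hex-digit trace ID, a 16-hex-digit parent span ID, and 2-hex-digit trace flags. The following sketch validates that shape; the ID values are made-up examples:

```shell
# Illustrative W3C Trace Context header (IDs are example values, not real traces).
TRACEPARENT="00-0af7651916cd43dd8448eb211c80319c-b7ad6b7169203331-01"

# Validate the format: version-traceid-spanid-flags.
if echo "$TRACEPARENT" | grep -Eq '^[0-9a-f]{2}-[0-9a-f]{32}-[0-9a-f]{16}-[0-9a-f]{2}$'; then
  echo "well-formed traceparent"
fi

# A client continues an existing trace through NGINX by sending the header:
# curl -H "traceparent: $TRACEPARENT" http://localhost/
```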
Setting up OpenTelemetry integration
You can use either of the following options to integrate OpenTelemetry with Instana:
- Instana Distribution of OpenTelemetry Collector (IDOT): Use the Instana-managed OpenTelemetry Collector to receive and process OpenTelemetry data.
- Instana OTLP endpoints: Configure the OpenTelemetry OTLP exporter to send OpenTelemetry data directly to the Instana agent or Instana backend OTLP endpoints.
Option 1: Using Instana Distribution of OpenTelemetry Collector
The Instana Distribution of OpenTelemetry Collector (IDOT) is a fully managed and preconfigured version of the OpenTelemetry Collector that seamlessly integrates with the Instana observability platform.
To collect telemetry data from your NGINX server, complete the following steps to set up and configure IDOT:
- Install the collector. Deploy the Instana Distribution of OpenTelemetry Collector as a sidecar, daemon, or gateway, depending on your infrastructure needs. For detailed instructions about configuring the IDOT collector, see the Instana Distribution of OpenTelemetry Collector documentation.
- Configure your NGINX server to send OpenTelemetry data to the IDOT collector endpoint (default port 24317):

      http {
          otel_exporter {
              endpoint localhost:24317;
          }
          otel_service_name nginx_service;
          otel_trace on;

          server {
              listen 80;
              server_name example.com;

              location / {
                  otel_trace_context propagate;
                  # Backend refers to the upstream application server(s) that NGINX forwards requests to
                  proxy_pass http://backend;
              }
          }
      }
Option 2: Using Instana OTLP endpoints
You can integrate OpenTelemetry with NGINX by configuring the OpenTelemetry Protocol (OTLP) exporter to send the OpenTelemetry traces directly to the Instana agent or the Instana backend.
Sending data to agent OTLP endpoint
The Instana agent provides OTLP endpoints that can receive OpenTelemetry data directly from your NGINX server.
To send data to the Instana agent OTLP endpoint, configure your NGINX server as follows:
http {
otel_exporter {
endpoint localhost:4317;
}
otel_service_name your_service_name;
otel_trace on;
server {
listen 80;
location / {
otel_trace_context propagate;
proxy_pass http://backend;
}
}
}
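Before relying on this configuration, you can check that something is listening on the agent's OTLP gRPC port. This connectivity probe is a sketch that assumes a local Instana agent on the default port 4317:

```shell
# Probe TCP port 4317 on localhost (uses the bash-specific /dev/tcp device,
# with a 1-second timeout so the check never hangs).
if timeout 1 bash -c 'exec 3<>/dev/tcp/localhost/4317' 2>/dev/null; then
  echo "OTLP port 4317 reachable"
else
  echo "OTLP port 4317 not reachable - is the Instana agent running?"
fi
```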
Sending data to the backend OTLP endpoint
For environments where direct communication with the Instana agent is not possible, you can configure your NGINX server to send OpenTelemetry data directly to the Instana backend.
For more information, see Sending OpenTelemetry data to Instana.
To send data to the Instana backend OTLP endpoint, configure your NGINX server as follows:
http {
otel_exporter {
endpoint https://{instana-backend-otlp-acceptor-endpoint}:4317;
headers x-instana-key={agent-key};
}
otel_service_name your_service_name;
otel_trace on;
server {
listen 80;
location / {
otel_trace_context propagate;
proxy_pass http://backend;
}
}
}
Replace {instana-backend-otlp-acceptor-endpoint} with your Instana backend endpoint and {agent-key} with your Instana agent key.
Deploying NGINX Ingress Controller with OpenTelemetry on Kubernetes
For Kubernetes environments, you can deploy the NGINX Ingress Controller with OpenTelemetry support by using Helm. This deployment allows you to automatically instrument all ingress traffic with distributed tracing.
Note: To avoid duplicate tracing of ingress traffic by the Instana agent's automatic instrumentation, specify --set autotrace.ingress_nginx.enabled=false when you install or update the Instana agent.

Prerequisites for Kubernetes deployment
Ensure that you have the following prerequisites:
- Kubernetes cluster (for example, Minikube, EKS, GKE, or AKS)
- Helm 3.x installed
- kubectl configured to access your cluster
- Instana backend OTLP endpoint and API key
Installing ingress-nginx with OpenTelemetry
- Add the ingress-nginx Helm repository:

      helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
      helm repo update

- Create a Helm values file (for example, ingress-nginx-values.yaml) with OpenTelemetry configuration:
      controller:
        config:
          http-snippet: |
            otel_exporter {
                endpoint https://{instana-backend-otlp-endpoint}:4317;
                header x-instana-key "{agent-key}";
            }
            otel_service_name "{your-service-name}";
            otel_resource_attr "service.namespace" "ingress-nginx";
            otel_resource_attr "k8s.cluster.name" "{your-cluster-name}";
          main-snippet: |
            load_module /etc/nginx/modules/ngx_otel_module.so;
          otel-span-attr: |
            http.request_id $request_id
            k8s.namespace $namespace
            k8s.ingress.name $ingress_name
            k8s.service.name $service_name
          otel-trace: "on"
          otel-trace-context: propagate
        image:
          digest: ""
          image: ingress-nginx-otel
          pullPolicy: Never
          registry: docker.io/library
          tag: "{version}"

  Replace the following values in the configuration:
- {instana-backend-otlp-endpoint} - Your Instana backend OTLP endpoint (for example, otlp-red-saas.instana.io)
- {agent-key} - Your Instana agent key
- {your-cluster-name} - Your Kubernetes cluster name
- {your-service-name} - Your service name
- {version} - The ingress-nginx image version with OpenTelemetry support
- Image settings - Configure according to your registry and image requirements
- Install the ingress-nginx controller with the custom values:

      helm install ingress-nginx ingress-nginx/ingress-nginx \
        --namespace ingress-nginx \
        --create-namespace \
        --values ingress-nginx-values.yaml

- Verify the installation:

      kubectl get pods -n ingress-nginx
      kubectl logs -n ingress-nginx -l app.kubernetes.io/name=ingress-nginx
Configuration options
The Helm values file includes several important configuration sections:
- http-snippet: Contains the main OpenTelemetry exporter configuration, including the Instana endpoint and authentication
- main-snippet: Loads the OpenTelemetry NGINX module
- otel-span-attr: Defines custom span attributes that are added to each trace, including Kubernetes-specific metadata
- otel-trace: Enables OpenTelemetry tracing
- otel-trace-context: Configures trace context propagation to downstream services
Custom span attributes
The configuration includes Kubernetes-specific attributes that provide valuable context in Instana:
| Attribute | Description |
|---|---|
| http.request_id | Unique identifier for each HTTP request |
| k8s.namespace | Kubernetes namespace of the service |
| k8s.ingress.name | Name of the ingress resource |
| k8s.service.name | Name of the Kubernetes service |
These attributes help correlate traces with specific Kubernetes resources in the Instana UI.
Infrastructure correlation
The otel_service_name directive is used for infrastructure correlation, which links your NGINX application traces with the underlying infrastructure entities (hosts, containers, Kubernetes pods, and processes) that are monitored by Instana.
When you set otel_service_name, Instana uses this service name to complete the following tasks:
- Correlate NGINX traces with infrastructure metrics (CPU, memory, and network)
- Map service dependencies and call relationships
- Enable bidirectional navigation between application and infrastructure views
- Provide complete context for root cause analysis
For more information about infrastructure correlation and OpenTelemetry integration, see the related topics in the Instana documentation.