Monitoring NGINX


Instana can help you to collect both metrics and distributed traces of requests that pass through NGINX.

After you install the Instana host agent, the NGINX sensor is automatically installed. You can view NGINX-related metrics in the Instana UI after you configure the NGINX sensor as outlined in the Configuring section.

To use the Distributed Tracing feature, you need to complete the configuration steps in the Distributed Tracing section.

Supported information

Supported operating systems

NGINX sensor and NGINX tracing have different version and platform requirements.

For the NGINX sensor, the supported operating systems are consistent with the host agent's requirements, which you can check in the "Supported operating systems" section for each host agent, such as Supported operating systems for Linux.

For NGINX tracing, the following operating systems are supported:

Operating system Architecture Bitness
Alpine Linux: edge, 3.19, 3.18, 3.17, 3.16, 3.15, 3.14, 3.13, 3.12, 3.11, 3.10 x86_64 64
Amazon Linux: 2, 2023, 2022 x86_64 64
CentOS: 7, Stream 9, Stream 8 x86_64 64
Debian: 12, 11, 10, 9 x86_64 64
Ubuntu: LTS x86_64 64

Supported NGINX versions and platforms

NGINX sensor and NGINX tracing have different version and platform requirements. For more information, see Supported NGINX versions and platforms.

Other supported information

The following Docker container images are supported for NGINX tracing:

Container image Architecture Bitness
3scale openresty x86_64 64
ingress-nginx (us.gcr.io/k8s-artifacts-prod/ingress-nginx/controller): v0.34.0..v1.9.6 x86_64 64
nginx: alpine, stable-alpine x86_64 64
openresty/openresty: Debian-based x86_64 64
openresty/openresty: CentOS-based x86_64 64

Configuring

Enabling metrics collection

To make Instana automatically collect and monitor your NGINX processes, you need to enable metric collection as follows:

Metrics for NGINX

For NGINX metrics collection, Instana uses the ngx_http_stub_status_module. To enable this collection, make sure that the module is enabled or available, and add the following snippet to a server block in your NGINX configuration:

location /nginx_status {
  stub_status  on;
  access_log   off;
  allow 127.0.0.1; # Or the Remote IP of the Instana Host Agent
  deny  all;
}
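
After you reload NGINX, you can check that the endpoint responds; for example, assuming the server block that contains this location listens on port 80 on localhost:

curl -s http://127.0.0.1/nginx_status
# Typical stub_status output shows the active connections, the
# accepts/handled/requests counters, and the Reading/Writing/Waiting states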

By default, the Instana agent looks for the location of the configuration file in the available process arguments; otherwise, it falls back to /etc/nginx/nginx.conf.

When the Instana agent runs in a Kubernetes cluster, make sure to allow the IP address of the host on which the agent is running by using the allow <host-ip-address> configuration.

Metrics for NGINX Plus

To enable NGINX Plus metric monitoring, make sure that ngx_http_api_module (*) is installed or available, and add the following block to enable the module:

location /api {
    api write=off;
    allow 127.0.0.1; # Or the Remote IP of the Instana Host Agent
    deny all;
}
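
As with the stub_status endpoint, you can check that the API responds; for example, assuming the server block that contains this location listens on port 80 on localhost, the response should be a JSON array of the supported API versions:

curl -s http://127.0.0.1/api/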

Metrics for Kubernetes Ingress NGINX

In Kubernetes Ingress NGINX version 0.23.0 and later, the server that listened on port 18080 is disabled. For Instana to monitor this NGINX instance, restore the server by adding the following snippet to the ConfigMap:

http-snippet: |
  server {
    listen 18080;

    location /nginx_status {
      stub_status on;
      access_log  off;
      allow 127.0.0.1; # Or the Remote IP of the Instana Host Agent
      deny  all;
    }

    location / {
      return 404;
    }
  }

For more information, see the NGINX Ingress Release Notes.
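
For example, you might add the http-snippet shown above by editing the controller ConfigMap directly; the ConfigMap name and namespace in this sketch are assumptions that depend on how Ingress NGINX was installed:

kubectl -n ingress-nginx edit configmap ingress-nginx-controller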

Viewing metrics

After you complete the configuration steps in the Configuring section, you can view metrics that are related to NGINX in the Instana UI.

To view the metrics, complete the following steps:

  1. In the sidebar of the Instana UI, select Infrastructure.
  2. Click a specific monitored host.

Then, you can see a host dashboard with all the collected metrics and monitored processes.

Configuration data

  • PID
  • Number of Worker Processes
  • Number of Worker Connections
  • Started at
  • Version
  • Build (*)
  • Address (*)
  • Generation (*)
  • PPID (*)

Performance metrics

  • Request
  • Connections
  • Processes (*)
  • SSL (*)
  • Caches (*)
  • Server zones (*)
  • Upstreams (*)

Health signatures

Each sensor has a curated knowledge base of health signatures that are evaluated continuously against the incoming metrics and are used to raise issues or incidents depending on user impact.

Built-in events trigger issues or incidents based on failing health signatures on entities, and custom events trigger issues or incidents based on the thresholds of an individual metric of any particular entity.

For more information about built-in events for the NGINX sensor, see the Built-in events reference.

NGINX tracing (Distributed Tracing for NGINX)

Configuring NGINX Ingress for Instana agent

To use NGINX tracing, you must specify the following configuration values:

  • Add the following items in the ConfigMap for NGINX Ingress:

      data:
        enable-opentracing: "true"
        zipkin-collector-host: $HOST_IP
        zipkin-collector-port: "42699"
    
  • Add the following environment variable in the NGINX Pod spec (the spec should already contain POD_NAME and POD_NAMESPACE):

    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP
    

Notes:

  • Ingress NGINX sets the POD_NAME and POD_NAMESPACE environment variables automatically, so you don't need to add them to the NGINX Pod spec.

  • This configuration uses the Kubernetes Downward API to expose the host IP as an environment variable (HOST_IP), which the ConfigMap then references.

  • The port can be fixed to 42699, which is the Instana agent port.

  • The service is named nginx by default; to use a different name, set the zipkin-service-name parameter in the ConfigMap.

For more information about NGINX Ingress and OpenTracing, see the Kubernetes Ingress NGINX documentation.

Distributed Tracing for NGINX, NGINX Plus, and OpenResty

To install the NGINX tracing in your setup, complete the following steps:

  1. Get the right binary files for your NGINX version.
  2. Copy the binary files where your NGINX server can access them.
  3. Edit the NGINX configurations.
  4. Restart the NGINX process or trigger a configuration reload by sending a reload command.

1. Download the binary files

The NGINX HTTP tracing modules are based on the nginx-opentracing v0.22.1 module, with customizations that enable more functions and easier usage.

The download links for the Instana binary files for the supported distributions of NGINX are available on the NGINX Distributed Tracing Binaries page.

2. Copy the binary files

The two binary files that you downloaded and extracted in the previous step must be placed on a file system that the NGINX process can access, both in terms of location and file permissions.

If NGINX runs directly on the operating system, as opposed to running in a container, it's usually a good choice to copy the two Instana binaries into the folder that contains the other NGINX modules. You can find where NGINX expects the modules to be located by running the nginx -V command and looking for the --modules-path configure option; see, for example, this response on Stack Overflow.
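
The following command prints the configured modules path (assuming nginx is on the PATH; note that nginx -V writes its output to standard error):

nginx -V 2>&1 | tr ' ' '\n' | grep -- '--modules-path'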

In a containerized environment, this might mean adding the files to the container image, or mounting them as volumes into the container; for example, see the bind mounts documentation of Docker or how to mount volumes to pods in Kubernetes.
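
As a sketch, a Docker bind-mount setup might look like the following; the local file names, target paths, and image tag are assumptions for illustration:

docker run -d \
  -v "$PWD/ngx_http_opentracing_module.so:/etc/nginx/modules/ngx_http_opentracing_module.so:ro" \
  -v "$PWD/libinstana_sensor.so:/usr/local/lib/libinstana_sensor.so:ro" \
  -v "$PWD/nginx.conf:/etc/nginx/nginx.conf:ro" \
  -v "$PWD/instana-config.json:/etc/instana-config.json:ro" \
  nginx:stable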

3. Edit the NGINX configurations

Every supported version of NGINX has two separate zipped archives, one for glibc and one for musl. The glibc build is for all Linux distributions except Alpine, which requires the musl build.

Before you configure NGINX as follows, you need to either rename the downloaded module to modules/ngx_http_opentracing_module.so or change the load_module modules/ngx_http_opentracing_module.so; line in the nginx.conf file to match the downloaded module name. For example, change the line load_module modules/ngx_http_opentracing_module.so; to load_module modules/musl-nginx-1.23.3-ngx_http_ot_module.so;.
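
For example, assuming the musl build for NGINX 1.23.3 mentioned above and the default modules directory, the rename might look like this:

mv musl-nginx-1.23.3-ngx_http_ot_module.so /etc/nginx/modules/ngx_http_opentracing_module.so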

# The following line adds the basic module Instana uses to get tracing data.
# It is required that you use the version of this module built by Instana,
# rather than the one shipped in many NGINX distros, as there are some
# modifications in the Instana version that are required for tracing to work
load_module modules/ngx_http_opentracing_module.so;

# Whitelists the environment variables used for tracer configuration so
# that NGINX does not wipe them. This is only needed if instana-config.json
# contains an empty configuration ("{}") and the configuration is done
# via these environment variables instead.
env INSTANA_SERVICE_NAME;
env INSTANA_AGENT_HOST;
env INSTANA_AGENT_PORT;
env INSTANA_MAX_BUFFERED_SPANS;
env INSTANA_DEV;

events {}

error_log /dev/stdout info;

http {
  error_log /dev/stdout info;

  # The following line loads the Instana libinstana_sensor library, which
  # gets the tracing data from ngx_http_opentracing_module.so and converts
  # it to Instana AutoTrace tracing data.
  # The content of instana-config.json is discussed below.
  opentracing_load_tracer /usr/local/lib/libinstana_sensor.so /etc/instana-config.json;

  # Propagates the active span context for upstream requests.
  # Without this configuration, the Instana trace will end at
  # NGINX, and the systems downstream (those to which NGINX
  # routes the requests) monitored by Instana will generate
  # new, unrelated traces
  opentracing_propagate_context;

  # Optional: This logs subrequests, such as those created by the
  # `auth_request` directive, so that authorization requests can be traced.
  log_subrequest on;

  # If you use upstreams, Instana will automatically use them as endpoints,
  # and it is really cool :-)
  upstream backend {
    server server-app:8080;
  }

  server {
    error_log /dev/stdout info;
    listen 8080;
    server_name localhost;

    location /static {
      root /www/html;
    }

    location ^~ /api {
      proxy_pass http://backend;
    }

    location ^~ /other_api {
      proxy_set_header X-AWESOME-HEADER "truly_is_awesome";

      # Using the `proxy_set_header` directive voids, for this location,
      # the `opentracing_propagate_context` set at the `http` level, so it
      # needs to be set again here. It needs to be set in every block where
      # `proxy_set_header` appears; this can also happen at the `server` level.
      opentracing_propagate_context;

      proxy_pass http://backend;
    }
  }
}

Special case opentracing_propagate_context:

Besides the main (http) level, the opentracing_propagate_context directive must be added to every block (server or location) that also sets a proxy_set_header directive. The reason is that OpenTracing context propagation is internally based on proxy_set_header, and an explicit proxy_set_header directive otherwise voids it. This is a limitation of the NGINX module API.

The following is an example of instana-config.json:

{
  "service": "nginxtracing_nginx",
  "agent_host": "<host_agent_address>",
  "agent_port": 42699,
  "max_buffered_spans": 1000
}

The configurations in the snippet above mean the following:

  • service: the name that this NGINX process is associated with in the Instana backend. If no service name is specified, the service name is generated automatically. For more information, see Services.
  • agent_host: the IP address or DNS name of the local host agent. You must change this value to match the network name of the Instana agent on the same host as the NGINX process.
  • agent_port: the port on which the NGINX tracing extension tries to contact the host agent. This port is not configurable on the agent side, so keep the default of 42699 unless your setup requires port forwarding or port mapping.
  • max_buffered_spans: the maximum number of spans, one per request, that the NGINX tracing extension keeps locally before flushing them to the agent; the default is 1000. The extension always flushes the locally buffered spans every second. This setting allows you to reduce the amount of local buffering, and hence the memory footprint of your NGINX server, when NGINX serves more than 1000 requests per second.

Alternatively, you can configure the tracer through environment variables. These take precedence, but the instana-config.json file is still required. To use this method, do the following:

  • Put an empty configuration {} into instana-config.json.
  • Whitelist the environment variables in the NGINX configuration as shown above.
  • Set the environment variables before starting NGINX.

This method is especially useful to set the Instana agent host to the host IP in a Kubernetes cluster.

The following example Kubernetes deployment YAML part shows this method:

        env:
        - name: INSTANA_SERVICE_NAME
          value: "nginxtracing_nginx"
        - name: INSTANA_AGENT_HOST
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP

For more information, see the Environment variable reference.

4. Restart or reload

Restart the NGINX process or trigger a configuration reload by sending a reload command.
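
For example, on a systemd-based host you can use either of the following commands:

# Reload the configuration without dropping connections
sudo nginx -s reload
# Or restart the whole service
sudo systemctl restart nginx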

Support for W3C Trace context

Support for the propagation of W3C Trace Context headers is available since NGINX Tracer 1.8.0.

Support for other NGINX OpenTracing module builds

Builds of the NGINX OpenTracing module from third parties, including those supported by NGINX itself, are not supported. The reasons for requiring the Instana build of the module are technical. Self-compilation (that is, you building your own version) is not supported because debugging what goes wrong in the compilation process across entirely different and unpredictable setups would unduly strain Instana support. Similarly, the modules provided by F5 are not supported, because they lack functionality that Instana tracing needs, and they link dynamically against the standard C++ library, which in many cases would lead to segmentation faults. To avoid such segmentation faults, the Instana NGINX OpenTracing module is built with a statically linked standard C++ library, which unifies testing and provides the benefits of modern C++ code even on older distributions.

Distributed Tracing for NGINX on Kubernetes and Red Hat OpenShift

The Instana AutoTrace WebHook can automatically configure distributed tracing for NGINX on Kubernetes and Red Hat OpenShift, with the following limitations:

  • Instana AutoTrace WebHook supports only NGINX 1.19 or later.
  • Instana AutoTrace WebHook does not support OpenResty yet.

Distributed Tracing for Kubernetes Ingress NGINX

The Instana AutoTrace WebHook can automatically configure distributed tracing for Ingress NGINX and NGINX on Kubernetes and Red Hat OpenShift. The WebHook automatically injects the previously mentioned configuration snippets and tracing binaries into the configuration of NGINX and Ingress NGINX containers when autotracing for NGINX is enabled.

Distributed Tracing for Kubernetes Ingress NGINX with Zipkin Tracer

The Kubernetes Ingress NGINX allows for distributed tracing via the OpenTracing project. Because the Instana agent can also ingest Jaeger and Zipkin traces, you can configure the NGINX Ingress in such a way that traces are forwarded to Instana.

While this setup is supported, Instana cannot take over the trace context from OpenTracing traces, so insight is limited to NGINX spans presented in isolation. Only when all services are traced via OpenTracing is the context retained and Instana shows the full distributed trace.

This setup requires nginx-ingress version 0.23.0 or later; earlier versions do not support variable expansion.

All limitations of the support for Jaeger or Zipkin apply.

NGINX tracing example

Instana provides a public repository to preview the tracing function of the NGINX sensor. For more information, see nginx-tracing.

Troubleshooting

Finding logs

The Instana tracer writes its log lines to standard error of NGINX. Those lines have the prefix [lis].

In the case of auto-instrumentation with the autotrace-mutating-webhook, use the logs of the binary files that are involved in auto-tracing of NGINX to troubleshoot an issue. You can also open a support case with IBM Support and attach these logs for finer granularity and better context. Two binary files are involved in auto-instrumentation: a library, libinstana_init, and an executable, watcher_nginx.

  • The log file for libinstana_init is at /tmp/instana/lii_logs/$PID.log.
  • The log file for watcher_nginx is at /tmp/instana/iwn_logs/$PID.log.

where $PID stands for the process ID of the process that is involved.

The log level is set to error by default. To deploy NGINX with a different logging level, change the log level by setting the environment variables.

For the libinstana_init library, set the following environment variable:

INSTANA_LII_LOG_LEVEL=debug

For the watcher_nginx executable, set the following environment variable:

INSTANA_IWN_LOG_LEVEL=debug
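
In a Kubernetes deployment that is instrumented by the webhook, the variables might be set on the NGINX container like the following sketch:

        env:
        - name: INSTANA_LII_LOG_LEVEL
          value: "debug"
        - name: INSTANA_IWN_LOG_LEVEL
          value: "debug"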

You can find warnings and errors of the NGINX sensor in the logs of the Instana agent service. To view the latest log entries, run the following command:

systemctl status instana-agent

To view the complete log of the Instana agent service, run the following command:

sudo journalctl -xeu instana-agent.service

NGINX API is not accessible

Monitoring issue type: nginx_api_not_accessible

To resolve this issue, refer to the steps on how to configure the Instana agent to collect all NGINX metrics as described in the Enabling metrics collection section.

NGINX status endpoint is not accessible

Monitoring issue type: nginx_status_not_accessible

To resolve this issue, refer to the steps on how to configure the Instana agent to collect all NGINX metrics as described in the Enabling metrics collection section.

NGINX API is not found

Monitoring issue type: nginx_api_not_found

To resolve this issue, refer to the steps on how to configure the Instana agent to collect all NGINX metrics as described in the Enabling metrics collection section.

NGINX status is not found

Monitoring issue type: nginx_status_not_found

To resolve this issue, refer to the steps on how to configure the Instana agent to collect all NGINX metrics as described in the Enabling metrics collection section.

NGINX config location not discovered

Monitoring issue type: nginx_config_location_not_discovered

To resolve this issue, refer to the steps on how to configure the Instana agent to collect all NGINX metrics as described in the Enabling metrics collection section.

Trace Continuity is broken for OpenResty Lua Code

Problem: Custom HTTP requests that are issued from Lua code cannot be traced automatically.

Solution: The outbound Instana headers need to be propagated from their respective NGINX OpenTracing variables. Add the headers X-INSTANA-T, X-INSTANA-S, and X-INSTANA-L from the NGINX variables opentracing_context_x_instana_t, opentracing_context_x_instana_s, and opentracing_context_x_instana_l to your outbound request headers Lua variable before you send the HTTP request (for example, by using the lua-resty-http function request_uri()). For more information, see the lua-nginx-module readme and lua-resty-http readme.

See the following example Lua code:

local http = require "resty.http"  -- lua-resty-http
local http_c = http.new()
...
local req_headers = {}
req_headers["X-INSTANA-T"] = ngx.var.opentracing_context_x_instana_t
req_headers["X-INSTANA-S"] = ngx.var.opentracing_context_x_instana_s
req_headers["X-INSTANA-L"] = ngx.var.opentracing_context_x_instana_l

local response, err = http_c:request_uri(request_url, {
  method = "POST",
  headers = req_headers,
  body = req_body
})

SELinux prevents the NGINX process from loading the OpenTracing module

Problem: Calls to NGINX are not traced, and the NGINX error file shows the following error:

/etc/nginx/modules/ngx_http_opentracing_module.so: failed to map segment from shared object: Permission denied

Solution: SELinux prevents the NGINX process from reading and mapping the memory of the OpenTracing module, which is a shared object. To verify that SELinux is responsible for the error, you can do a smoke test as follows:

  1. Disable SELinux momentarily.
  2. Restart NGINX.

If SELinux is the cause, the error message disappears from the NGINX error log and Instana can trace the calls.

Disabling SELinux is not a long-term solution. The right and safe approach is to create an SELinux policy that allows the NGINX process to read and map the memory of the OpenTracing module. You can locate the OpenTracing module by checking the NGINX configuration directory: if the NGINX configuration directory is /etc/nginx, the module is located in the /etc/nginx/modules directory. Your DevOps or IT department must configure SELinux; see the online resources that document the SELinux configuration for the Linux distributions that are supported by Instana.

For more information about the NGINX and SELinux integration, see Using NGINX and NGINX Plus with SELinux.
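
The following is a sketch of the smoke test and one possible policy workflow, assuming a RHEL-like distribution with the policycoreutils and audit tools installed:

# Smoke test only: temporarily switch SELinux to permissive mode and restart NGINX
sudo setenforce 0
sudo systemctl restart nginx
# Re-enable enforcement afterwards
sudo setenforce 1

# One possible long-term approach (verify with your security team): generate and
# install a local policy module from the audit records of the denied module load
sudo ausearch -c 'nginx' --raw | sudo audit2allow -M nginx_opentracing
sudo semodule -i nginx_opentracing.pp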

OpenTracing context propagation not working for FastCGI

Problem: OpenTracing context propagation is not working for FastCGI configured in NGINX.

Solution: Replace the opentracing_propagate_context directive with the opentracing_fastcgi_propagate_context directive in every FastCGI configuration block.
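
For example, a FastCGI location block might look like the following sketch; the PHP location pattern and the fastcgi_pass address are assumptions for illustration:

location ~ \.php$ {
  # FastCGI upstreams need the FastCGI-specific propagation directive
  opentracing_fastcgi_propagate_context;

  include fastcgi_params;
  fastcgi_pass 127.0.0.1:9000;
}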

Known limitations

  • The tracing data that is collected from the NGINX tracer does not include stack traces in the span details. The reason is that the NGINX tracer is a C/C++ sensor, and currently no initiative exists to include stack traces for C/C++ tools. For meaningful data, this kind of tracing requires the debugging packages of all the libraries that are involved in a C/C++ application, but these packages are usually not installed in a production environment.