Prometheus

Collecting metrics with Prometheus is becoming more popular. With Instana, it is easy to capture Prometheus metrics and correlate them by using Instana's extensive knowledge graph. A typical example is custom business metrics.

After you install the Instana host agent, the Instana Prometheus sensor is automatically installed, but you need to configure the sensor as outlined in the Configuring section. Then, you can view metrics that are related to Prometheus in the Instana UI.

Instana provides a Prometheus Alertmanager Webhook alert channel that sends HTTP POST requests with the payload format of the Prometheus Alertmanager webhook, as described in the Prometheus Alertmanager Webhook Receiver configuration.

To send alert notifications from Instana to a Prometheus Alertmanager Webhook receiver in real time, create a Prometheus Alertmanager Webhook alert channel in the Instana UI. Instana then sends the HTTP POST requests to the configured receiver.

Introduction

The Instana Prometheus sensor doesn't require a Prometheus server. The sensor captures metrics directly from the endpoints that are exposed by the monitored systems.

For each Instana host agent, specify which Prometheus endpoints you want to poll and which metrics need to be collected from them by using regular expressions. For more information, see the Configuring section.

Note: If your metrics are exposed by a Java library (such as Micrometer, the Prometheus Java client library, or jmx_exporter), you don't need to expose them through HTTP, because the Instana Java plug-in can collect metrics from these libraries directly.
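For context, a Prometheus metrics endpoint serves plain text in the Prometheus exposition format. The following sample (metric and label names are illustrative only) shows the kind of data that the sensor scrapes and parses:

```text
# HELP sample_app_request_total Total number of handled requests.
# TYPE sample_app_request_total counter
sample_app_request_total{method="GET"} 1027
# HELP sample_app_request_duration_seconds Request duration.
# TYPE sample_app_request_duration_seconds histogram
sample_app_request_duration_seconds_bucket{le="0.5"} 987
sample_app_request_duration_seconds_sum 53.4
sample_app_request_duration_seconds_count 1027
```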

Configuring

Configuration for Kubernetes environments

You need to define metric sources in the host agent configuration file <agent_install_dir>/etc/instana/configuration.yaml as a list of endpoints. When the Instana host agent runs in a Kubernetes environment, the agent automatically recognizes and gathers IPs and container ports from running pods.

See the following configuration example:

com.instana.plugin.prometheus:
  # Global polling interval in seconds (optional) 
  poll_rate: 15                              # Default is 1 second
  # Global (all) endpoints username/password configuration (optional)
  username: ''
  password: ''
  customMetricSources:
  - url: '/prometheus/endpoint/1'            # metrics endpoint, the IP and port are auto-discovered
    metricNameIncludeRegex: '.*'             # regular expression to filter metrics
    username: ''                             # endpoint specific username/password configuration
    password: ''
  - url: '/prometheus/endpoint/2'
    metricNameIncludeRegex: '.*'

Notes:

  • For Kubernetes environments, don't add the host and port information to the url field; specify only the metrics endpoint path, such as /prometheus/endpoint/metrics. The Prometheus sensor does not monitor endpoints that are specified as full URLs, such as https://prod-myapp.server.com/prometheus/metrics.

  • By using a regular expression in the metricNameIncludeRegex field, you can define which metrics are captured for a specific metrics endpoint.

  • The Prometheus sensor supports basic authentication, which can be configured globally (for all endpoints) or per endpoint.
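As a minimal sketch of these notes, the following configuration (the /metrics path and the ^http_ pattern are examples only) polls a path that is combined with the auto-discovered pod IP and container port, and keeps only metric names that start with http_:

```yaml
com.instana.plugin.prometheus:
  customMetricSources:
  - url: '/metrics'                    # path only; pod IP and container port are auto-discovered
    metricNameIncludeRegex: '^http_'   # capture only metrics whose names start with http_
```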

Static configuration for non-Kubernetes environments

The static configuration is used for non-Kubernetes environments. If you want to capture Prometheus metrics from local or remote endpoints in a non-Kubernetes environment, configure the customMetricSources section in the <agent_install_dir>/etc/instana/configuration.yaml file as follows:

com.instana.plugin.prometheus:
  # Global polling interval in seconds (optional) 
  poll_rate: 15                              # Default is 1 second
  # Global (all) endpoints username/password configuration (optional)
  username: ''
  password: ''
  customMetricSources:
  - url: 'http://localhost:8080/metrics'
    username: ''      # endpoint specific username/password configuration
    password: ''
    metricNameIncludeRegex: '^sample_app_request'
  - url: 'http://223.58.1.10:9100/prometheus'
    metricNameIncludeRegex: '^sample_app_request'

If metricNameIncludeRegex is not defined, the Prometheus sensor collects all metric types, up to the defined limit of 600 metrics per metric type.
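The regular expression is matched against the metric name. As a quick local sanity check (the metric names are illustrative), you can simulate which names a pattern such as ^sample_app_request keeps:

```shell
# Simulate a scraped payload and filter metric names the way
# metricNameIncludeRegex: '^sample_app_request' would.
printf '%s\n' \
  'sample_app_request_total 42' \
  'sample_app_request_duration_seconds 0.12' \
  'jvm_memory_used_bytes 1048576' \
| grep -E '^sample_app_request'
# keeps the two sample_app_request lines; drops jvm_memory_used_bytes
```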

Remote write

Starting with Instana host agent bundle 1.1.587, the host agent supports the remote_write endpoint: the agent can ingest metrics, which are displayed either as a Prometheus entity or as part of the process custom metrics.

To enable the remote_write endpoint, configure the <agent_install_dir>/etc/instana/configuration.yaml file as follows:

com.instana.plugin.prometheus:
  remote_write:
    enabled: true

Notes:

  • Set up the sender (the component that sends the metrics) as described in the documentation for Prometheus remote_write.

  • The remote_write endpoint is available on port 42699 at the /prometheus/v1/receive path. Therefore, the URL that you need to configure in the Prometheus configuration is http://<agent_ip>:42699/prometheus/v1/receive.

  • To make Instana parse metrics correctly, the sender must send metadata. Sending metadata is enabled by default in Prometheus, so make sure that it is not turned off.

  • The optional Instana agent service, which the Instana agent Helm chart provides on Kubernetes, is useful in combination with the remote_write API. With the Instana agent service, data is pushed to the Instana agent that runs on the same Kubernetes node, so the agent can correctly fill in the infrastructure correlation data.

  • Currently, authentication is not supported for the remote_write endpoint, so do not configure the basic_auth and bearer_token (including bearer_token_file) options of Prometheus in the sender.
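Putting these notes together, a minimal sender-side prometheus.yml sketch looks as follows (the agent IP address is a placeholder):

```yaml
remote_write:
  # remote_write endpoint of the Instana host agent:
  # port 42699, path /prometheus/v1/receive
  - url: "http://10.0.0.5:42699/prometheus/v1/receive"
    # Do not configure basic_auth or bearer_token here:
    # the endpoint does not support authentication.
```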

TLS encryption for remote write

You can enable TLS encryption on the host agent. Then, all data that is sent to the remote_write endpoint is TLS-encrypted.

For more information about how to set up TLS encryption, see Enabling TLS Encryption.

Infrastructure correlation

For infrastructure correlation on Linux hosts, see the agent's HTTP API endpoint.

Viewing metrics

To view the metrics, complete the following steps:

  1. In the sidebar of the Instana UI, select Infrastructure.
  2. Click a specific monitored host.

Then, you can see a host dashboard with all the collected metrics and monitored processes.

Prometheus metrics appear as "Prometheus Apps" that are associated with the host or, when the remote_write endpoint is used, with the process from which they are collected. You can query Prometheus custom metrics by using "Dynamic Focus", "Events and Alerts", and the "Grafana plug-in" with entity.type:prometheus.

The Prometheus sensor collects all core metric types, up to 600 metrics per type:

  • Counters
  • Gauges
  • Histograms
  • Summaries
  • Untyped

Alerting

Creating a Prometheus Alertmanager Webhook alert channel

To create a Prometheus Alertmanager Webhook alert channel, click Settings > Team Settings > Events & Alerts > Alert Channels > Add Alert Channel in the Instana UI, and then click Prometheus Alertmanager Webhook.

Screenshot: Prometheus Alertmanager Webhook

Instana sends alerts through this alert channel as HTTP POST requests to the configured Prometheus Alertmanager Webhook Receiver, for example, Alert Snitch or SNMP Notifier.

SNMP notifier

The SNMP Notifier project relays Prometheus alerts as SNMP traps to any configured SNMP receiver.

Instana does not provide an image for the SNMP notifier in the Instana image registry. To install the SNMP notifier, follow the instructions in SNMP Notifier. For an example of installing it in Kubernetes, see the Run the SNMP notifier in Kubernetes section.

Alert channel configuration

You need to configure the Prometheus Alertmanager Webhook alert channel that you created and set the Prometheus Alertmanager Webhook Receiver URL field to http://{SNMP-Notifier-Host}:9464/alerts. If the SNMP notifier is installed in the same cluster as the Instana backend, in a namespace called snmp-notifier, the Prometheus Alertmanager Webhook Receiver URL looks like http://snmp-notifier-alertmanager-snmp-notifier.snmp-notifier.svc:9464/alerts.

Example: Run the SNMP notifier in Kubernetes

  1. Create a namespace for the SNMP Notifier and a secret for the public Docker registry with your Docker hub credentials:

    kubectl create namespace snmp-notifier
    
    kubectl -n snmp-notifier create secret docker-registry image-pull-secret \
     --docker-server=docker.io \
     --docker-username=${YOUR_USERNAME} \
     --docker-password=${YOUR_PASSWORD}
    
  2. Install the SNMP notifier by using the IP address of a server that receives alerts as SNMP traps:

    helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
    
    helm install snmp-notifier prometheus-community/alertmanager-snmp-notifier \
       -n snmp-notifier \
       --set 'imagePullSecrets={image-pull-secret}' \
       --set 'snmpNotifier.snmpDestinations={IP_ADDRESS_OF_SNMP_TRAP_RECEIVER_SERVER:162}'
    

For more information about the SNMP notifier configuration, see SNMP Notifier. For information about all supported chart parameters and default values, see values.yaml.

Troubleshooting

Remote write high metric delay

Monitoring issue type: prometheus_remote_write_high_delay

Prometheus metrics that are ingested through the remote_write endpoint are received with a high delay. This delay can result in late alerts and makes it more difficult to correlate these metrics with metrics from other sources.

To resolve this issue, tune the Prometheus remote_write configuration. Specifically, add the batch_send_deadline parameter as follows to limit the delay to 1 second:

remote_write:
  - url: "http://xxx.xxx.xxx.xxx:42699/prometheus/v1/receive"
    queue_config:
      batch_send_deadline: 1s

For more configuration options, see the Prometheus manual.