Configuring Prometheus integrations

To monitor the OpenShift Container Platform, install the Prometheus integration.

Gathering data

This integration is a remote-only sensor that connects to an OpenShift Container Platform instance and collects the following information:
  • Events
  • Host metrics: CPU, memory, disk, network

Verifying prerequisites

Installing

  1. Verify the public GA image path of the integration for Prometheus (for example, cp.icr.io/cp/cp4waiops/ibm-mm-cdc-conn:4.3-latest). To list the images that are already pulled on the host, run the podman images command. An example of pulling and listing the image is shown after these steps.
  2. Log in as a root user on a Linux® host machine that has network access to the OpenShift Container Platform instance. The Prometheus integration pulls information from the OpenShift Container Platform instance by using a remote TCP connection.
  3. Before you download the public image of the integration for Prometheus, log in to the registry by running the podman login <cdc-mm ga-image-path> command.
    podman login cp.icr.io/cp/cp4waiops/ibm-mm-cdc-conn:4.3-latest
    For more information about the username and password to use, see step 5 in the Preparing your cluster topic.
  4. Create a directory to store the integration-related configuration file and bash script.
    mkdir -p /root/cdc
    cd /root/cdc
  5. To define the connection information for the Metric Manager API, create a Metric Manager backend configuration file that is named com.instana.cdc.metricmanager.sender.MetricManagerBackend-1.cfg.
    # Metric Manager configuration file
    # Metric Manager's URL
    host=http://<metricManagerHost>.ibm.com
    
    # Metric Manager's port
    port=18080
    
    # Metric Manager's username for REST API
    username=system
    
    # Metric Manager's password for REST API 
    # password has been masked with ****
    password=**********
    
    # Metric Manager's tenant id
    tenant_id=APM
  6. Create the configuration-prometheus.yaml sensor configuration file. Define the Prometheus endpoint, API key, and metric entity information as in the following example configuration-prometheus.yaml file for a Prometheus sensor. A reachability check for the Metric Manager API and these metric endpoints is shown after these steps.
    com.instana.plugin.prometheus:
      poll_rate: 600
      customMetricSources:
      - url: 'https://worker0.better.cp.fyre.ibm.com:31523/metrics'
      - url: 'http://9.30.231.105:9100/metrics'
  7. If you want to use vault, complete the following steps:
    1. Add the app secret information to the vault server.
    2. Mount the vault PEM file into the container image (an example volume option is shown after these steps).
    3. Run the bootstrap script to start the container.
    4. Run the docker ps command to get the container ID, and then access the container by running the docker exec -ti <container_id> bash command.
    5. In the container, add the vault IP address into the /etc/hosts file.
      9.x.x.159 Vault
    6. Check the connection to the vault server.
      ping vault
      Note: If ping isn't available, run the dnf install iputils -y command.
    7. Go to the path where the Prometheus configuration YAML file is located.
    8. Edit the configuration.yaml to add the vault configuration.
      com.instana.configuration.integration.vault:
        connection_url: 'https://Vault:8200' # Mapping through hosts file since PEM ca cert does not contain hostname
        token: '<vault_token>'
        path_to_pem_file: '/root/agentdev/agent-installer/instana-agent/etc/instana/vault-ca.pem'
        secret_refresh_rate: 24
        kv_version: 2
    9. Modify the sensor configuration to use the vault type in the configuration-prometheus.yaml file.
    10. Restart the integration and check whether the Prometheus sensor can connect and receive metrics.
  8. Create a bash script that has execute permission, as in the following example script for a Prometheus sensor.
    podman run \
      -itd \
      --name instana-agent-metric-manager-ga \
      --volume /var/run:/var/run \
      --volume /run:/run \
      --volume /dev:/dev:ro \
      --volume /sys:/sys:ro \
      --volume /var/log:/var/log \
      --volume <cdc-root-path>/configuration-prometheus.yaml:/opt/instana/agent/etc/instana/configuration-prometheus.yaml \
      --mount type=bind,source=<cdc-root-path>/com.instana.cdc.metricmanager.sender.MetricManagerBackend-1.cfg,target=/opt/instana/agent/etc/instana/com.instana.cdc.metricmanager.sender.MetricManagerBackend-1.cfg \
      --privileged \
      --net=host \
      --pid=host \
      --env INSTANA_PRODUCT_NAME="metric-manager" \
      --env AGENT_MAX_MEM=6G \
      <IBM-CDC-Public-GA-Image-Path>/ibm-mm-cdc-conn:4.5-latest
  9. Run the bash script to set up and configure the instance for the integration. Examples for several of the preceding steps follow this list.
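For example, after the podman login command succeeds, you can pull the image and confirm that it is available locally. The registry path and tag are the ones that are used in the examples in this topic.
    podman pull cp.icr.io/cp/cp4waiops/ibm-mm-cdc-conn:4.3-latest   # pull the public GA image (steps 1 and 3)
    podman images | grep ibm-mm-cdc-conn                            # confirm that the image is listed locally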
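Before you start the container, you can optionally check that the host can reach the Metric Manager REST API and the Prometheus metric sources from steps 5 and 6. This is only a reachability sketch: the Metric Manager URL pattern is taken from the example logs in the Verifying the installation section, and the metric endpoints are the ones from the example configuration-prometheus.yaml file. Substitute your own host names, port, and credentials.
    # Print only the HTTP status code that the Metric Manager metrics API endpoint returns.
    curl -sk -o /dev/null -w '%{http_code}\n' "http://<metricManagerHost>.ibm.com:18080/metrics/api/1.0/metrics"
    # Show the first few lines that each Prometheus metric source returns.
    curl -sk "https://worker0.better.cp.fyre.ibm.com:31523/metrics" | head -n 5
    curl -s "http://9.30.231.105:9100/metrics" | head -n 5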
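One possible way to complete step 7.2 is to bind mount the PEM file into the container when you run the image. This sketch assumes that the certificate is stored as vault-ca.pem in your <cdc-root-path> directory (a hypothetical file name) and that the path_to_pem_file value in the vault configuration refers to a path inside the container; adjust both paths to match your environment.
    # Add this volume option to the podman run command from step 8.
    --volume <cdc-root-path>/vault-ca.pem:/root/agentdev/agent-installer/instana-agent/etc/instana/vault-ca.pem \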
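A minimal sketch of steps 8 and 9, assuming that the bash script from step 8 is saved as start-mm-prometheus.sh (a hypothetical name) in the /root/cdc directory from step 4:
    cd /root/cdc
    chmod +x start-mm-prometheus.sh    # give the script execute permission
    ./start-mm-prometheus.sh           # runs the podman command and starts the integration container in the background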
Note: If you don't want to monitor everything in your Prometheus integration, or if you have many management zones, you can limit monitoring to specific zones by adding values to the zone field of your configuration file. If the integration reports on every one of your Prometheus zones and you have many of them, you might encounter an Out of Memory error. For example, if you monitor approximately 200 hosts, you might not need to specify zones in your configuration. Conversely, if you monitor 5000 hosts that are grouped into hundreds of management zones, it is likely worthwhile to narrow them down. For more information about zones, or if you want to make other changes to the default configuration, see the Configuring section.
The Prometheus integration is installed and set up on the Linux host.

Verifying the installation

  1. Verify whether the integration instance is up and running.
    $ podman ps
    CONTAINER ID   IMAGE                                               COMMAND                  CREATED       STATUS           PORTS     NAMES
    3c75a6d23ca8   cp.icr.io/cp/cp4waiops/ibm-mm-cdc-conn:4.3-latest   "/usr/local/bin/tini…"   2 weeks ago   Up 2 weeks ago             instana-agent-metric-manager-ga
  2. Check the logs to confirm that Prometheus metrics are forwarded to Metric Manager. A quick way to scan for the success message is shown after this list.
    $ podman logs -f <container_id>
    Example logs, which show that the metrics are forwarded:
    2023-10-05T12:12:09.543+00:00 | INFO  | tana-agent-scheduler-thread-13-2 | icManagerBackend | cdc-metricmanager-sender - 1.0.0 | MetricManager : MetricManagerConfig{Host=http://test.ibm.com, Port=18080, Username=system 
    2023-10-05T12:12:09.544+00:00 | INFO  | tana-agent-scheduler-thread-13-2 | icManagerBackend | cdc-metricmanager-sender - 1.0.0 | MetricManager : metricManagerURL : http://test.ibm.com:18080/metrics/api/1.0/metrics
    2023-10-05T12:12:10.026+00:00 | INFO  | tana-agent-scheduler-thread-13-2 | icManagerBackend | cdc-metricmanager-sender - 1.0.0 | Successfully sent payload to Metric Manager
    2023-10-05T12:12:10.026+00:00 | WARN  | tana-agent-scheduler-thread-13-2 | SensorTicker     | com.instana.agent - 1.1.697 | Sending metrics with 1260411 chars took 255815 ms
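To confirm the forwarding without reading the full log, you can filter the most recent log lines for the success message that is shown in the example logs. The container name is the one that is used in the example run script.
    podman logs --tail 200 instana-agent-metric-manager-ga | grep "Successfully sent payload to Metric Manager"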

Configuring

You can edit the configuration-prometheus.yaml file to further configure your Prometheus integration.
  1. Go to your configuration-prometheus.yaml file on the Linux host machine where you installed your Prometheus integration.
  2. Open the file with your preferred text editor and find the Prometheus section. By default, it looks like the following example, but the optional fields are empty.
    com.instana.plugin.prometheus:
      poll_rate: 600                 # Required
      customMetricSources:
      - url: 'https://<source url>'  # Required
  3. Edit the values that you want to change, and save the file. The following table lists the variables that can be configured for Prometheus. An example of applying a configuration change is shown after the table.
    Variable: poll_rate
    Description: The number of seconds between queries. The poll rate might need to be adjusted to account for any rate limits that are imposed by your endpoint.
    Type: Number
    Default value: 30
    Required or optional: Optional

    Variable: customMetricSources: url
    Description: The URL of the source system to be monitored.
    Type: String
    Default value: N/A
    Required or optional: Required
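After you change the poll_rate value or the list of URLs, the edited file is visible inside the container through the bind mount that is defined in the run script. If the sensor does not pick up the change automatically, a simple way to apply it is to restart the integration container; this sketch assumes the container name from the example run script in the Installing section.
    podman restart instana-agent-metric-manager-ga         # restart the integration so that the sensor rereads the configuration
    podman logs --tail 50 instana-agent-metric-manager-ga  # confirm that metrics are still forwarded after the restart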