Configuring Dynatrace integrations

To collect CPU, memory, disk, network metrics, and events from Dynatrace, install the Dynatrace integration.

Gathering data

This integration is a remote-only sensor that connects to a Dynatrace instance and collects the following information.
  • Entities: hosts and processes
  • Metrics:
    • Host: cpu, memory, disk, network
    • Process: cpu, memory

Verifying prerequisites

You must have the access token for your Dynatrace endpoint handy to complete the integration. Your token must also have the required scopes. For more information about access tokens, including how to generate one, see Dynatrace API - Tokens and authentication.


  1. Verify the public GA image path of the integration for Dynatrace. To list the images that are pulled locally, run the podman images command.
  2. Log in as a root user on a Linux® host machine that has network access to Dynatrace. The Dynatrace integration pulls information from Dynatrace by using a remote TCP connection.
  3. To log in before you download the public image of the integration for Dynatrace, run the podman login command.
    podman login <cdc-mm ga-image-path>
    For more information about the username and password to use, see step 5 in the Preparing your cluster topic.
  4. Create a directory to store the integration-related configuration file and bash script.
    mkdir -p /root/cdc
    cd /root/cdc
  5. To define connection information to the Metric Manager API, create a Metric Manager backend configuration file with the name: com.instana.cdc.metricmanager.sender.MetricManagerBackend-1.cfg.
    # Metric Manager configuration file
    # Metric Manager's URL
    # Metric Manager's port
    # Metric Manager's username for REST API
    # Metric Manager's password for REST API 
    # password is masked with ****
    # Metric Manager's tenant id
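    The comments above only outline the file's contents. The following hedged sketch shows what a populated file might look like; the property names are illustrative rather than confirmed, and the port and username values are taken from the example logs later in this topic.

```
# Metric Manager configuration file (property names are illustrative)
# Metric Manager's URL
host=<metric-manager-host>
# Metric Manager's port
port=18080
# Metric Manager's username for REST API
username=system
# Metric Manager's password for REST API (masked with ****)
password=****
# Metric Manager's tenant id
tenant_id=<tenant-id>
```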
  6. Create the configuration-dynatrace.yaml sensor configuration file. Define the Dynatrace endpoint, API key, and the metric entities information as in the following example configuration-dynatrace.yaml file for a Dynatrace sensor.
    # Dynatrace sensor configuration
      enabled: true
      endpoint: '<dynatrace_endpoint>'
      key: <dynatrace_api_tokenkey>
      poll_rate: 60
      zone: Dynatrace_MZone
      tags: 'test'
      metrics:
        enabled: true
        entities:
          - builtin:kubernetes.container.oom_kills
          - builtin:host.cpu.load15m
          - builtin:host.mem.avail.bytes
          - builtin:host.mem.used
          - builtin:kubernetes.cluster.readyz
          - builtin:kubernetes.node.requests_cpu
          - builtin:kubernetes.node.requests_memory
  7. If you want to use vault, complete the following steps:
    1. Add the app secret information to the vault server.
    2. Mount the vault PEM file in the image.
    3. Run the bootstrap script to start the container image.
    4. Run the podman ps command to check the container ID, and then access the container by running the podman exec -ti <container_id> bash command.
    5. In the container, add the vault IP address into the /etc/hosts file.
      9.x.x.159 Vault
    6. Check the connection to the vault server.
      ping vault
      Note: If ping isn't available, run the dnf install iputils -y command.
    7. Go to the path where the Dynatrace configuration YAML file is located.
    8. Edit the configuration.yaml to add the vault configuration.
        connection_url: 'https://Vault:8200' # Mapping through hosts file since PEM ca cert does not contain hostname
        token: '<vault_token>'
        path_to_pem_file: '/root/agentdev/agent-installer/instana-agent/etc/instana/vault-ca.pem'
        secret_refresh_rate: 24
        kv_version: 2
    9. Modify the sensor configuration to use the vault type in the configuration-dynatrace.yaml file.
            type: vault
              path: cem/dynatrace
              key: key
        enabled: true
    10. Restart the integration and check whether the Dynatrace sensor can connect and receive metrics.
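Taken together, steps 8 and 9 add two related fragments to the configuration. The following hedged sketch shows how they might fit together; the name of the vault section and the exact nesting are assumptions based on the fragments above, not confirmed syntax.

```yaml
# Assumed layout; the vault section name and nesting are not confirmed
vault:
  connection_url: 'https://Vault:8200'   # mapped through /etc/hosts
  token: '<vault_token>'
  path_to_pem_file: '/root/agentdev/agent-installer/instana-agent/etc/instana/vault-ca.pem'
  secret_refresh_rate: 24
  kv_version: 2

# In configuration-dynatrace.yaml, the key field switches to the vault type
key:
  type: vault
  vault:
    path: cem/dynatrace
    key: key
```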
  8. Create a bash script with execution permission, as in the following example bash script for a Dynatrace sensor.
    podman run \
      -itd \
      --name instana-agent-metric-manager-ga \
      --volume /var/run:/var/run \
      --volume /run:/run \
      --volume /dev:/dev:ro \
      --volume /sys:/sys:ro \
      --volume /var/log:/var/log \
      --volume <cdc-root-path>/configuration-dynatrace.yaml:/opt/instana/agent/etc/instana/configuration-dynatrace.yaml \
      --mount type=bind,source=<cdc-root-path>/com.instana.cdc.metricmanager.sender.MetricManagerBackend-1.cfg,target=/opt/instana/agent/etc/instana/com.instana.cdc.metricmanager.sender.MetricManagerBackend-1.cfg \
      --privileged \
      --net=host \
      --pid=host \
      --env INSTANA_PRODUCT_NAME="metric-manager" \
      --env AGENT_MAX_MEM=6G \
      <cdc-mm ga-image-path>
  9. Run the bash script to set up and configure the instance for the integration.
Note: If you don't want to monitor everything in your Dynatrace integration, or if you have many management zones, you can limit monitoring to specific zones by adding values to the zone field of your configuration file. If the integration reports on every Dynatrace zone, you might encounter an Out of Memory error. As a rule of thumb, if you monitor approximately 200 hosts, you might not need to specify zones; if you monitor 5000 hosts that are grouped into hundreds of management zones, it's likely worthwhile to narrow them down. For more information about zones, or to make other changes to the default configuration, see the Configuring section.
The Dynatrace integration is installed and set up on the Linux host.
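Before you run the bash script, a quick pre-flight check can save a failed container start. The following sketch (the function name is illustrative, not part of the product) verifies that both configuration files from the earlier steps exist:

```shell
# check_cdc_configs: verify that the sensor and backend configuration
# files exist in the given directory before the container is started.
check_cdc_configs() {
  dir="$1"
  for f in configuration-dynatrace.yaml \
           com.instana.cdc.metricmanager.sender.MetricManagerBackend-1.cfg; do
    if [ ! -f "$dir/$f" ]; then
      echo "missing: $dir/$f"
      return 1
    fi
  done
  echo "configuration files present"
}
```

For example, run check_cdc_configs /root/cdc before you start the container.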

Verifying the installation

  1. Verify whether the integration instance is up and running.
    $ podman ps
    CONTAINER ID   IMAGE                    COMMAND                  CREATED       STATUS       PORTS     NAMES
    3c75a6d23ca8                            "/usr/local/bin/tini…"   2 weeks ago   Up 2 weeks             instana-agent-metric-manager-ga
  2. Check the logs to confirm that Dynatrace metrics are forwarded to Metric Manager.
    $ podman logs -f <container_id>
    Example logs, which show that the metrics are forwarded:
    2023-10-05T12:12:09.543+00:00 | INFO  | tana-agent-scheduler-thread-13-2 | icManagerBackend | cdc-metricmanager-sender - 1.0.0 | MetricManager : MetricManagerConfig{Host=, Port=18080, Username=system 
    2023-10-05T12:12:09.544+00:00 | INFO  | tana-agent-scheduler-thread-13-2 | icManagerBackend | cdc-metricmanager-sender - 1.0.0 | MetricManager : metricManagerURL :
    2023-10-05T12:12:10.026+00:00 | INFO  | tana-agent-scheduler-thread-13-2 | icManagerBackend | cdc-metricmanager-sender - 1.0.0 | Successfully sent payload to Metric Manager
    2023-10-05T12:12:10.026+00:00 | WARN  | tana-agent-scheduler-thread-13-2 | SensorTicker     | com.instana.agent - 1.1.697 | Sending metrics with 1260411 chars took 255815 ms
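To confirm forwarding without scanning the logs by eye, you can capture them to a file and count the success messages. A minimal sketch follows; the function name is illustrative, and the matched text is taken from the example logs above.

```shell
# count_successful_sends: count how many payloads were successfully
# forwarded to Metric Manager in a captured log file.
count_successful_sends() {
  grep -c 'Successfully sent payload to Metric Manager' "$1"
}
```

For example: podman logs <container_id> > sensor.log, then count_successful_sends sensor.log.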


Configuring

You can edit the configuration-dynatrace.yaml file to further configure your Dynatrace integration.
  1. Go to the configuration-dynatrace.yaml file on the Linux host machine where you installed your Dynatrace integration.
  2. Open the file with your preferred text editor and find the Dynatrace section. By default, it looks like the following example but the optional fields are empty.
    # Dynatrace
      enabled: true                                    # Required
      endpoint: ''    # Required
      key: 'mykey'                                     # Required
      poll_rate: 30                                    # Optional
      zone: 'string'                                   # Optional
      tags: 'tag1,tag2'                                # Optional
      metrics:                                         # Required
        enabled: true
  3. Edit the values that you want to change, and save the file. The following table lists the variables that can be configured for Dynatrace.
    Variable          | Description                                                                                                                           | Type    | Default value | Required or optional
    enabled           | Set to true or false to enable or disable the integration.                                                                            | Boolean | true          | Required
    endpoint          | The URL for the Dynatrace HTTPS API endpoint.                                                                                         | String  | N/A           | Required
    key               | The access token to use for connecting to Dynatrace. The token must have the required scopes.                                         | String  | N/A           | Required
    poll_rate         | The frequency (in seconds) at which the Dynatrace API endpoint is polled for metric data.                                             | Integer | 30            | Optional
    zone              | The name of the management zone in Dynatrace that you want to monitor. If this value is left empty, all management zones are monitored. | String  | N/A           | Optional
    tags              | A comma-delimited list of tags that is used to filter entities from Dynatrace, for example tags: 'tag1,tag2'.                         | String  | N/A           | Optional
    metrics: enabled  | Set to true or false to enable or disable the metrics integration.                                                                    | Boolean | true          | Optional
    metrics: entities | A list of entities for metric integration.                                                                                            | String  | N/A           | Required
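As an example of the fields in the table in use, the following hedged fragment narrows monitoring to one management zone and two tags. The zone and tag values are illustrative, and the endpoint placeholder must be replaced with your environment's URL.

```yaml
# Illustrative values only; replace with your environment's details
  enabled: true
  endpoint: '<dynatrace_endpoint>'   # your Dynatrace HTTPS API endpoint
  key: '<access_token>'
  poll_rate: 60
  zone: 'Production'                 # monitor only this management zone
  tags: 'team-a,critical'            # filter entities by these tags
  metrics:
    enabled: true
```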