Configuring Datadog Observer jobs
Using the Datadog Observer functionality, you can load monitored hosts and their associated processes, and then visualize this data as a topology view in the IBM Cloud Pak for Watson AIOps UI.
Before you start
Important: The Datadog Observer supports the cloud (SaaS) version of Datadog.
Ensure that you have the Datadog server details on hand, such as the data tenant, API server URL, Auth API key, and App key.
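Before configuring the job, you can confirm that your API key is valid against the Datadog cloud API. This is a minimal sketch assuming the US1 Datadog site (`api.datadoghq.com`); other Datadog sites use different base URLs, and `DD_API_KEY` is a placeholder environment variable, not a value defined by this product:

```shell
# Sketch: check that the Datadog API key is accepted before creating the job.
# Assumes the US1 site base URL; adjust it for your Datadog site.
# DD_API_KEY is a placeholder environment variable holding your Auth API key.
curl -s -H "DD-API-KEY: ${DD_API_KEY}" \
  "https://api.datadoghq.com/api/v1/validate"
# A valid key returns: {"valid":true}
```

If the key is rejected, correct the Auth API key before defining the observer job.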
About this task
A Datadog Observer job extracts hosts and their associated processes from Datadog through its REST API. The Observer then loads and updates these resources and their relationships in the IBM Cloud Pak for Watson AIOps core topology service.
You define and start the following job.
Full Topology Upload job
By default, Load jobs are one-off, transient jobs that perform a full upload of all requested topology data as soon as they are triggered.
You can also run these jobs again from the Observer UI, or schedule them to run at set times when configuring them.
Define or edit the following parameters, then click Run job to save and run the job.
Encryption requirement: See the Configuring observer jobs security topic for more information.
| Parameter | Description | Required |
| --- | --- | --- |
| Unique ID | Enter a unique name for the job. | Required |
| Data Tenant | Enter the data tenant to track the Datadog topology. | Required |
| API server URL | Enter the Datadog API base URL. | Required |
| Auth API key | Specify the API key used to authenticate with Datadog. | Required |
| App key | Specify the application key used to access the Datadog resources. | Required |
| Configure Host API query | Specify the host API query. Separate each query parameter with ";" and enclose its value in "()". For parameters that support lists, separate the values with ","; the API treats them as an AND operation. | Optional |
| Configure Process API query | Specify the process API query. Separate each query parameter with ";" and enclose its value in "()". For parameters that support lists, separate the values with ","; the API treats them as an AND operation. | Optional |
| Datadog certificate | Specify the Datadog certificate to enable a secure connection to the target system. | Optional. For more information, see Configuring observer jobs security. |
| Certificate validation | Choose whether to use SSL validation ('true' or 'false'). Turning SSL validation off uses HTTPS without host verification. | Optional |
| Connection and read timeout (milliseconds) | Set the connection and read timeout value. | Optional |
| Trust all certificates | Trust all certificates without validating their contents. Choose 'true' or 'false'. | Optional |
| Number of connection retries | Set the number of times a connection is retried. The default value is 3. | Optional |
| Delay before a connection retry (milliseconds) | Specify the time in milliseconds to wait before a connection is retried. | Optional |
| Access Scope | Optional CSV string listing values that provide a scope for the resources. Scopes help map alerts to resources when resources in different scopes share the same matchTokens. Examples of scopes include locations, project names, and namespaces. | Optional |
| Generate debug support file | Set this parameter to 'True' to capture the output of the next scheduled job run as a file. The file is stored with the observer's log files and can be used to debug observer issues, for example at the request of your designated Support team, or while using a test environment. For one-off jobs (that is, Load jobs), this parameter reverts to 'False' after the next completed run. To examine the output produced, load the generated debug file using the File Observer. | Optional |
| Job schedule | Specify when the job runs. | Optional. Load jobs only. |
| Observer job description | Enter additional information to describe the job. | Optional |
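The Host and Process API query fields follow the format described above: parameters separated by ";", each value enclosed in "()", and comma-separated list values treated as an AND operation. The sketch below composes such a string; the parameter names `filter` and `count` are illustrative assumptions only, since this topic defines the separators but not the full parameter set:

```shell
# Sketch: build a query string in the format the observer expects.
# "filter" and "count" are assumed, illustrative parameter names.
build_query() {
  # $1 = parameter name, $2 = comma-separated value list
  printf '%s(%s)' "$1" "$2"
}

# Two tag values in one parameter are combined as an AND operation;
# parameters are joined with ";".
host_query="$(build_query filter env:prod,role:db);$(build_query count 100)"
echo "$host_query"
# filter(env:prod,role:db);count(100)
```

Enter the resulting string in the Configure Host API query (or Configure Process API query) field when you define the job.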