Datadog integration probe
An IBM Tivoli Netcool/OMNIbus probe for integrating with Datadog is available. This integration sends HTTP GET requests to Datadog to retrieve events based on the time frame set by the mandatory start and end query parameters in the request URI.
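For illustration only, the resynchronization request the probe issues is similar to the following Datadog Events API query. The epoch timestamps shown are placeholders, and the API and application keys are passed as request headers.
# Query the Datadog v1 Events API for events between two epoch timestamps (seconds)
curl -s -G "https://api.datadoghq.com/api/v1/events" \
  --data-urlencode "start=1700000000" \
  --data-urlencode "end=1700003600" \
  -H "Accept: application/json" \
  -H "DD-API-KEY: <datadog-api-key>" \
  -H "DD-APPLICATION-KEY: <datadog-application-key>"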
Prerequisites
The following systems are prerequisites for this integration.
- IBM Cloud Pak for AIOps version 4.8.0
- Datadog: Cloud Monitoring as a Service
Note: This probe supports only the v1 API (https://api.datadoghq.com/api/v1/events) to get events from Datadog.
Note: The IBM Netcool Operations Insight Event Integrations Operator must be deployed before deploying this integration. See Installing the IBM Netcool Operations Insight Event Integrations Operator.
To deploy a new instance of the Generic Webhook Probe, review the custom resource YAML template supplied with the operator package, update the parameters if required, and save the configurable parameters in a file, for example probes_v1_webhookprobe_datadog_integration.yaml.
Then deploy the custom resource using the updated file:
kubectl apply -f deploy/crs/probes_v1_webhookprobe_datadog_integration.yaml --namespace <NAMESPACE>
Note: If you are installing using the CLI, you can get the custom resource YAML file from the ibm-netcool-integrations/inventory/netcoolIntegrationsOperatorSetup/files/op-cli/deploy/crs directory in the CASE archive. If you are installing using the Operator Lifecycle Manager web console, the custom resource is provided in the YAML view.
Pre-installation tasks
The Probe for SevOne, Probe for Turbonomic, and Probe for Datadog have the same pre-installation requirements. See Probe Common Tasks before proceeding with the following sections.
Configuring and installing the Probe for Datadog integration
Complete the following steps to create the required secrets, configure the WebhookProbe custom resource, and install the probe in the namespace.
- Create a secret with the ObjectServer credentials for the probe to authenticate with the ObjectServer.
PROBE_OMNI_SECRET=noi-probe-secret
kubectl create secret generic $PROBE_OMNI_SECRET --from-literal=AuthUserName=$NOI_OMNI_USERNAME --from-literal=AuthPassword=$NOI_OMNI_PASSWORD --from-file=tls.crt=tls.crt
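Optionally, you can confirm that the secret was created with the expected keys (AuthUserName, AuthPassword, and tls.crt):
# Show the data keys and sizes stored in the ObjectServer credentials secret
kubectl describe secret $PROBE_OMNI_SECRET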
- Create a secret for the Datadog certificate.
echo -n | openssl s_client -connect api.datadoghq.com:443 | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > datadog.pem
PROBE_CLIENT_TLS_SECRET=datadog-probe-client-tls-cert
kubectl create secret generic $PROBE_CLIENT_TLS_SECRET --from-file=server.crt=datadog.pem
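Optionally, before creating the secret you can inspect the captured certificate to confirm that it belongs to api.datadoghq.com and has not expired:
# Print the subject, issuer, and expiry date of the captured certificate
openssl x509 -in datadog.pem -noout -subject -issuer -enddate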
- Create a secret for the transport properties.
TRANSPORT_PROPS=restMultiChannelHttpTransport.json
cat << EOF | tee > $TRANSPORT_PROPS
{
  "GLOBAL": {
    "httpVersion": "1.1",
    "httpHeaders": "",
    "responseTimeout": "60",
    "securityProtocol": "TLSv1.2",
    "keepTokens": "",
    "autoReconnect": "OFF",
    "gatherSubsTopicInfo": "false"
  },
  "RESYNC": {
    "RESYNC_CHILD_BY_FDN": {
      "uri": "https://api.datadoghq.com:443/api/v1/events?start=++date_happened:<initial-start-epoch-time>++&end=++now++&page=0",
      "method": "GET",
      "headers": "Accept=application/json,DD-API-KEY=<datadog-api-key>,DD-APPLICATION-KEY=<datadog-application-key>",
      "content": "{}",
      "contentFile": "",
      "interval": "60",
      "attempts": "0",
      "requireSSL": "true",
      "asEventStream": "false",
      "enablePagination": "true"
    }
  }
}
EOF
PROBE_HTTP_REQUEST=http-requests
kubectl create secret generic $PROBE_HTTP_REQUEST --from-file=httpRequests=$TRANSPORT_PROPS
Where:
- ++date_happened:<initial-start-epoch-time>++ allows the probe to assign the value of RecordData.date_happened in the probe data backup file to the start query parameter, so that it continues from the previous resynchronization if RecordData.date_happened is available. Otherwise, the probe assigns <initial-start-epoch-time> to the start query parameter.
- <datadog-api-key> is the Datadog API key.
- <datadog-application-key> is the Datadog application key.
Note: The end query parameter can be set to an epoch time or to ++now++, which represents the current epoch time.
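The <initial-start-epoch-time> placeholder is an epoch timestamp in seconds. As a hypothetical example, a start time covering the last 24 hours could be generated as follows:
# GNU date: epoch seconds for 24 hours ago (on macOS/BSD use: date -v-24H +%s)
date -d '24 hours ago' +%s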
- Create a Network Policy in the IBM Tivoli Netcool/OMNIbus namespace to allow the probe to access the ObjectServer TLS Proxy service and port. Note: Review any other Network Policy that might be denying access to the ObjectServer TLS Proxy pod and update that policy to allow ingress connections to the TLS Proxy pod.
cat << EOF | tee >(kubectl apply -f -) | cat
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: noi-allow-proxy
  namespace: ${NOI_NAMESPACE}
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/instance: ${NOI_INSTANCE}
      app.kubernetes.io/name: proxy
  ingress:
    - ports:
        - protocol: TCP
          port: ${NOI_PROXY_PRIMARY_PORT}
        - protocol: TCP
          port: ${NOI_PROXY_BACKUP_PORT}
  policyTypes:
    - Ingress
EOF
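Optionally, you can confirm that the policy was created and selects the proxy pods:
# Show the applied network policy, including its pod selector and ingress ports
kubectl get networkpolicy noi-allow-proxy -n ${NOI_NAMESPACE} -o yaml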
- Create a Probe for Datadog integration with the WebhookProbe custom resource by running the following commands.
PROBE_DATADOG_INSTANCE=datadog-probe
cat << EOF | tee >(kubectl apply -f -) | cat
kind: WebhookProbe
apiVersion: probes.integrations.noi.ibm.com/v1
metadata:
  name: webhookprobe
  labels:
    app.kubernetes.io/name: ${PROBE_DATADOG_INSTANCE}
    app.kubernetes.io/managed-by: netcool-integrations-operator
    app.kubernetes.io/instance: ${PROBE_DATADOG_INSTANCE}
  namespace: ${NAMESPACE}
spec:
  license:
    accept: true
  version: 3.3.0
  helmValues:
    global:
      image:
        secretName: ''
      serviceAccountName: ''
      enableNetworkPolicy: true
    image:
      useDefaultOperandImages: true
      probe: >-
        cp.icr.io/cp/noi-int/netcool-probe-messagebus@sha256:745a7dae41f1a0c8f1517a9dc2368d227e39be4798b74175f903bc5ac940b95b
      utility: >-
        cp.icr.io/cp/noi-int/netcool-integration-util@sha256:6e0b9883a34810c468b5386b4b6d9ed216f68df015f6622b83656d400d01dcd2
      pullPolicy: IfNotPresent
    netcool:
      connectionMode: SSLAndAuth
      primaryServer: 'AGGP'
      primaryHost: '${NOI_PROXY_SVC}.${NOI_NAMESPACE}.svc'
      primaryPort: 4100
      backupServer: 'AGGB'
      backupHost: '${NOI_PROXY_SVC}.${NOI_NAMESPACE}.svc'
      backupPort: 4100
      secretName: '${PROBE_OMNI_SECRET}'
    probe:
      integration: datadog
      messageLevel: warn
      setUIDandGID: false
      sslServerCommonName: ''
      locale: en_US.utf8
      enableTransportDebugLog: false
      heartbeatInterval: 10
      recordData: 'date_happened'
      configPVC:
        name: ''
        rulesFile: ''
        parserConfigFile: ''
        dataBackupFile: ''
      rulesConfigmap: ''
      jsonParserConfig:
        notification:
          messagePayload: json
          messageHeader: ''
          jsonNestedPayload: ''
          jsonNestedHeader: ''
          messageDepth: 3
        resync:
          messagePayload: 'json.events'
          messageHeader: ''
          jsonNestedPayload: ''
          jsonNestedHeader: ''
          messageDepth: 3
      httpClientCredentialsSecretName: ''
      initialResync: true
      resyncInterval: 120
    poddisruptionbudget:
      enabled: false
      minAvailable: 1
    selfMonitoring:
      discardHeartbeatEvent: false
      populateMasterProbeStat: false
      logProbeStat: false
      generateThresholdEvents: false
      configs: 'ProbeWatchHeartbeatInterval:0 EventLoadProfiling:''true'''
    ingress:
      enabled: false
      host: ''
    webhook:
      uri: /probe
      httpVersion: '1.1'
      respondWithContent: 'OFF'
      validateBodySyntax: 'ON'
      validateRequestURI: 'ON'
      idleTimeout: 180
      tls:
        enabled: true
        secretName: ''
      serverBasicAuthenticationCredentialsSecretName: ''
    replicaCount: 1
    autoscaling:
      enabled: false
      minReplicas: 1
      maxReplicas: 3
      cpuUtil: 60
    httpClient:
      enabled: true
      host: ''
      port: 80
      sslSecretName: '$PROBE_CLIENT_TLS_SECRET'
      httpRequestsSecretName: '$PROBE_HTTP_REQUEST'
    resources:
      requests:
        cpu: 100m
        memory: 128Mi
        ephemeral-storage: 100Mi
      limits:
        cpu: 200m
        memory: 256Mi
        ephemeral-storage: 200Mi
    arch: amd64
EOF
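Optionally, after applying the custom resource you can check that it was created and is being reconciled by the operator; the exact status output depends on the operator version:
# List the WebhookProbe custom resource in the target namespace
kubectl get webhookprobe webhookprobe -n ${NAMESPACE}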
- Verify that the probe pod is running.
kubectl get pods -l app.kubernetes.io/instance=$PROBE_DATADOG_INSTANCE
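If the pod does not reach the Running state, or to confirm that the probe is polling Datadog and forwarding events to the ObjectServer, you can review the probe logs, for example:
# Tail the logs of the probe pod created for this instance
kubectl logs -l app.kubernetes.io/instance=$PROBE_DATADOG_INSTANCE --tail=100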
- If you want to store the probe data backup file in a Persistent Volume Claim (PVC), follow the instructions in Providing custom Message Bus Probe configuration files in a persistent volume. Then set configPVC.name to a pre-created PVC name and configPVC.dataBackupFile to the data backup filename, as sketched below.
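For example, assuming a pre-created PVC named datadog-probe-config and a backup file named datadog-backup.json (both hypothetical names), the relevant excerpt of the WebhookProbe custom resource would look like this sketch:
# Excerpt of spec.helmValues.probe in the WebhookProbe custom resource
probe:
  configPVC:
    name: 'datadog-probe-config'           # hypothetical pre-created PVC name
    dataBackupFile: 'datadog-backup.json'  # hypothetical data backup filename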