Prerequisites for loading data from domain-specific log data sources
If you configure a log integration to load data from domain-specific log data sources, such as IBM MQ or WebSphere, you must meet prerequisites that differ from those for other log data sources.
Domain-specific logs must be sourced from a log management system. You must create an incoming log data integration to collect this data. The type of integration to create depends on the following factors:
- Whether you prefer to provide the data in pull or push mode
- Which log management system stores the domain-specific log data
Modes
Consider the following trade-offs when you are deciding whether to provide the data in pull or push mode.
- Pull
  - Positives:
    - If your log management system has a REST API, all you need to do is create the data integration; Cloud Pak for AIOps does the rest.
    - You can also use the built-in Cloud Pak for AIOps integrations, which cover commonly used log management systems.
  - Negatives: you have no control over the data load. When Cloud Pak for AIOps initiates the pull operation, the operation might create excessive load on your log management system.
- Push
  - Positives: you have full control over the data load.
  - Negatives: you must write code to push your log data to the Kafka topic.
Pull mode
For log data from systems other than Falcon LogScale, Mezmo, or Splunk, you can use either the Custom or the ELK data integration to ingest your log data.
| Domain-specific log management system | Data integration | Link |
|---|---|---|
| Falcon LogScale | Falcon LogScale | Falcon LogScale integrations |
| Mezmo | Mezmo | Mezmo integrations |
| Splunk | Splunk | Splunk integrations |
| Log management system that uses an ELK stack | ELK | Custom integrations |
| Any other log management system | Custom | Custom integrations |
Push mode
To push log data from any log management system, use the Kafka data integration to ingest your log data.
| Domain-specific log management system | Data integration | Link |
|---|---|---|
| Any system | Kafka | Custom integrations |
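In push mode you write the producer code yourself. The following is a minimal sketch of serializing a log entry and sending it to the integration's Kafka topic. The broker address, topic name, and message ID are placeholder assumptions; use the bootstrap servers and topic shown in your Kafka data integration details.

```python
import json

# Placeholder connection details (assumptions): substitute the values
# displayed by your Kafka data integration in Cloud Pak for AIOps.
BOOTSTRAP_SERVERS = ["kafka.example.com:9092"]
TOPIC = "cp4aiops-logs"

def to_kafka_value(entry: dict) -> bytes:
    """Serialize one log entry as UTF-8 JSON bytes for the Kafka topic."""
    return json.dumps(entry).encode("utf-8")

# Illustrative domain-specific log entry with the required top-level fields.
entry = {
    "ibm_messageId": "AMQ9002I",  # illustrative IBM MQ message ID
    "loglevel": "INFO",
    "message": "Channel program started.",
}
value = to_kafka_value(entry)

# With the kafka-python package installed, the record could be sent as:
# from kafka import KafkaProducer
# producer = KafkaProducer(bootstrap_servers=BOOTSTRAP_SERVERS)
# producer.send(TOPIC, value)
# producer.flush()
print(value.decode("utf-8"))
```

The actual send is shown in comments because it requires a reachable broker and a Kafka client library; any client that can produce UTF-8 JSON bytes to the topic works the same way.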
Other requirements
Log entries must be in JSON format, not plain text.
When you are preparing domain-specific log data for loading, ensure that the ibm_messageId and loglevel fields are at the top level of each JSON log entry. The module field is optional. These fields are used by the statistical baseline log anomaly detection algorithm to perform statistical analysis and detect log anomalies.
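For example, a log entry that satisfies these field requirements might look like the following sketch (the field values are illustrative, not from a real system):

```python
import json

# Illustrative log entry: ibm_messageId and loglevel must be top-level
# fields; module is optional. All values here are examples only.
entry = {
    "ibm_messageId": "CWWKF0011I",
    "loglevel": "INFO",
    "module": "com.ibm.ws.kernel.feature",
    "message": "The server is ready.",
    "timestamp": "2024-05-01T12:00:00Z",
}

payload = json.dumps(entry)

# Confirm the required fields sit at the top level of the JSON object,
# not nested inside another structure.
parsed = json.loads(payload)
assert "ibm_messageId" in parsed and "loglevel" in parsed
print(payload)
```

A check like the final assertion is a cheap way to validate entries before loading, because fields nested under another key are not seen by the anomaly detection algorithm.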