Overview of IBM Operations Analytics - Log Analysis extension options
This section describes how data is ingested by IBM® Operations Analytics - Log Analysis, the processes that ingest that data, and the aspects of those processes that you can customize to create an Insight Pack.
Data collection
- Data collector
- Use the Data Collector to ingest data in batch mode. This is the easiest method to ingest a small number of log files or to test your IBM Operations Analytics - Log Analysis configuration.
- IBM Tivoli® Monitoring Log File Agent
- Use the IBM Tivoli Monitoring Log File Agent to batch load larger numbers of log files and for scenarios where you want to stream log data from your production environment.
- logstash
- Use logstash, which is an open source tool for managing events and logs, to collect logs, parse them, and send them to IBM Operations Analytics - Log Analysis. For more information about configuring data collection, see the logstash Integration Toolkit topic in the Configuring IBM Operations Analytics - Log Analysis section of the Information Center.
Annotation
- Split/Annotate
- There are two steps in the annotation process: split and annotate. During the split stage, logic composed of rules or custom code is invoked to determine the logical beginning and end of an input data record. For example, if the logic splits log records by timestamp, all physical records without a timestamp that follow the first physical record with a timestamp are considered part of the current logical record until the next timestamp is detected. After a complete logical record is established, it is forwarded to the annotate stage, where additional logic annotates, or extracts, the key pieces of information to be indexed. The fields that are annotated and indexed are those that provide the most insight for searches and other higher-level operations that are performed on the indexed data. A minimal sketch of this split-and-annotate flow follows this list.
- AQL
- Annotation Query Language (AQL) rules can be used both to split input data records on a known boundary and to annotate data from each record so that the records can be indexed. AQL rules that are included in an Insight Pack are installed on the IBM Operations Analytics - Log Analysis server when the Insight Pack is installed. Tools are provided to assist you with developing AQL rules.
- Custom
- You can write custom logic, in Java or Python, to perform the split and annotate functions. This approach is useful when you do not want to use or write AQL rules. You can include custom logic in an Insight Pack; see the annotator sketch after this list.
- none
- You can choose to exclude split and annotation logic from your Insight Pack. If you choose this option, any data records that are processed by Collections defined in the Insight Pack are indexed based on the index configuration only. In this case, only free-form searches can be performed on the indexed data records.
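The following Python sketch illustrates the split stage described above: physical log lines are grouped into logical records on a timestamp boundary. The timestamp pattern and the record layout are illustrative assumptions, not part of the product interface for custom split logic.

```python
import re

# Illustrative assumption: a logical record begins with a line that starts
# with an ISO-like timestamp, for example "2014-06-30 12:34:56,789 ERROR ...".
TIMESTAMP = re.compile(r"^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}")

def split_records(lines):
    """Yield logical records: a timestamped line plus any continuation lines
    (such as stack traces) that follow it without a timestamp of their own."""
    record = []
    for line in lines:
        if TIMESTAMP.match(line) and record:
            # The next timestamp marks the start of a new logical record.
            yield "".join(record)
            record = []
        record.append(line)
    if record:
        yield "".join(record)

# Example: each yielded value is one logical record that would be passed on
# to the annotate stage.
# with open("app.log") as f:
#     for logical_record in split_records(f):
#         ...
```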
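Continuing the sketch, custom annotate logic might extract the fields to be indexed from each logical record. The field names (timestamp, severity, message) and the regular expression are assumptions chosen for illustration; an actual Insight Pack must follow the interface that IBM Operations Analytics - Log Analysis defines for custom split and annotate logic.

```python
import re

# Illustrative record layout: "2014-06-30 12:34:56,789 ERROR Something went wrong"
RECORD = re.compile(
    r"^(?P<timestamp>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}(?:,\d{3})?)\s+"
    r"(?P<severity>[A-Z]+)\s+"
    r"(?P<message>.*)",
    re.DOTALL,
)

def annotate(logical_record):
    """Return a dict of annotated fields for one logical record, or an empty
    dict if the record does not match the expected layout."""
    match = RECORD.match(logical_record)
    return match.groupdict() if match else {}
```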
Index configuration
To allow the fields that are extracted by the annotation logic to be indexed by IBM Operations Analytics - Log Analysis, you must supply an index configuration. The index configuration determines which fields are indexed and how the indexed data can be used in subsequent retrievals. After the data is indexed, you can perform searches and other higher-level operations to gain greater insight into the data for better problem determination. Tools are provided to help you develop an index configuration.
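As an illustration only, an index configuration can be thought of as a mapping from each annotated field to the attributes that control how it is indexed and retrieved. The field names below mirror the annotator sketch above, and the attribute names (dataType, retrievable, searchable, filterable, sortable) are assumptions for illustration; consult the product documentation and tooling for the actual index configuration schema.

```python
# Hypothetical index configuration, expressed as a Python dict for readability.
# The attribute names are illustrative assumptions, not the authoritative schema.
index_config = {
    "fields": {
        "timestamp": {"dataType": "DATE", "retrievable": True,
                      "searchable": True, "filterable": True, "sortable": True},
        "severity":  {"dataType": "TEXT", "retrievable": True,
                      "searchable": True, "filterable": True, "sortable": False},
        "message":   {"dataType": "TEXT", "retrievable": True,
                      "searchable": True, "filterable": False, "sortable": False},
    }
}
```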
Administrative configuration
IBM Operations Analytics - Log Analysis provides a REST API that you can use to create configuration artifacts. As an Insight Pack developer, you can include definitions for configuration artifacts such as Source Types, Collections, and Rule Sets; these artifacts are created when the Insight Pack is installed. Tools are available to assist you with creating the configuration artifacts.
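The following Python sketch shows the general shape of creating a configuration artifact through a REST API. The host name, port, endpoint path, payload fields, and authentication details are assumptions for illustration; the actual resource names, URIs, and schemas are defined by the IBM Operations Analytics - Log Analysis REST API documentation.

```python
import json
import urllib.request

# Hypothetical endpoint and payload; substitute the actual Log Analysis
# REST API URI, credentials, and artifact schema from the product documentation.
url = "https://loganalysis.example.com:9987/Unity/SourceTypes"
source_type = {
    "name": "MyAppLog",
    "description": "Source Type for the custom annotator sketched above",
}

request = urllib.request.Request(
    url,
    data=json.dumps(source_type).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
with urllib.request.urlopen(request) as response:
    print(response.status, response.read().decode("utf-8"))
```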