Log forwarding

You can run Transaction Analysis Workbench in batch jobs to forward logs in CSV or JSON format to analytics platforms off z/OS®. The jobs can either stream the data over a network to a TCP socket, or write it to files on z/OS and then transfer those files.

Figure 1. Forwarding logs to analytics platforms
Figure that shows Transaction Analysis Workbench converting a log to CSV or JSON format with corresponding metadata and configuration files for use in analytics platforms.

Data format: CSV, DSV, JSON, or JSON Lines

You can forward logs in common formats supported by many analytics platforms: comma-separated values (CSV), other delimiter-separated values (DSV) formats, JavaScript Object Notation (JSON), or JSON Lines.

Note: For brevity, in this Transaction Analysis Workbench documentation, the term CSV includes DSV, and JSON includes JSON Lines, except where these different terms are required to distinguish between these related formats.
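To make the difference between these formats concrete, here is a small Python sketch (purely illustrative; the field names are invented and are not produced by the product) that renders the same hypothetical log record in CSV and in JSON Lines form:

```python
import csv
import io
import json

# One hypothetical log record; field names are illustrative only.
record = {"subsystem": "IMSA", "userid": "USR01", "cpu_ms": 1.25}

# CSV: a header line naming the fields, then one delimited data line.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=record.keys())
writer.writeheader()
writer.writerow(record)
print(buf.getvalue(), end="")

# JSON Lines: one self-describing JSON object per record, one per line.
print(json.dumps(record))
# → {"subsystem": "IMSA", "userid": "USR01", "cpu_ms": 1.25}
```

In CSV, the field names appear once, in the header; in JSON Lines, every record carries its own field names, which makes each line self-describing at the cost of more bytes per record.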

Batch forwarding

To convert logs to CSV or JSON, you submit a z/OS batch job that runs the CSV or JSON command of the Transaction Analysis Workbench report and extract utility.

Streaming versus file transfer

The batch job that converts logs to CSV or JSON can either use the STREAM command of the report and extract utility to stream the output to a TCP socket on a remote system, or it can write staging files on z/OS and then, in a subsequent job step, transfer those files to a remote system.

JCL: Write your own or use the ISPF dialog

You can either write your own JCL to forward logs, or use the Transaction Analysis Workbench ISPF dialog (option 5 Analytics) to create JCL tailored for destinations such as Elastic, Splunk, DB2®, or Hadoop.

Simple, self-contained JCL

You can extract logs in CSV or JSON format in less than a dozen lines of self-contained JCL. All you need to know is:

  • Where Transaction Analysis Workbench is installed on your z/OS system.
  • The location of a log that contains the records you want to extract.
  • The record types that you want to extract.

You can specify the CSV or JSON command in an in-stream SYSIN data set in the JCL. Streaming to a TCP socket instead of writing to a file simply involves adding a STREAM command to the SYSIN data set.
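As a rough illustration of the shape of such a job — the program name, data set names, and DD names below are placeholders, not the product's actual names; consult the Transaction Analysis Workbench JCL reference for those — the JCL might look like:

```
//FORWARD  JOB  ...
//EXTRACT  EXEC PGM=...            Report and extract utility program
//STEPLIB  DD   DISP=SHR,DSN=...   Transaction Analysis Workbench load library
//*                                Input log containing the records to extract
//LOGIN    DD   DISP=SHR,DSN=...
//*                                Output staging file for the CSV data
//CSVOUT   DD   DSN=...,DISP=(NEW,CATLG),...
//SYSIN    DD   *
  CSV ...                          Record types and fields to extract
/*
```

To stream to a TCP socket instead of writing a staging file, you would add a STREAM command to the same SYSIN data set.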

Forward the fields you want, from the records you want, when you want

Transaction Analysis Workbench puts you in control of what you forward and when you forward it.

Transaction Analysis Workbench does not limit you to forwarding a fixed subset of fields from each record type. You can select as many or as few fields as you want.

You can filter records for forwarding based on combinations of field values. For example, you might want to forward only records from particular subsystem IDs and user IDs, perhaps limited further to records with abnormal metrics, such as high CPU usage or long response times.

Transaction Analysis Workbench log forwarding is not real-time. To forward logs with Transaction Analysis Workbench, you run batch jobs on z/OS. You decide when to run those jobs, and how often.

Platform-specific metadata or configuration files

In addition to converting logs to CSV or JSON, the report and extract utility can also create the following files for getting data onto analytics platforms:

Logstash configuration file for Elasticsearch
Forwards JSON or CSV data to Elasticsearch.

Transaction Analysis Workbench creates Logstash configuration files for output to Elasticsearch. You can edit these files to specify any of the other outputs that Logstash supports.
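A minimal Logstash pipeline of this kind — the port, host, and index name here are placeholder values, not what the utility generates — might look like:

```
input {
  tcp {
    port  => 5044          # socket that the z/OS batch job streams to
    codec => json_lines    # one JSON object per record
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "zos-logs"    # edit the output section for other destinations
  }
}
```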

HCatalog table schema for Hadoop
Contains a Hive DDL CREATE TABLE statement that creates a catalog table; specifically, an external table that refers to an HDFS directory that contains one or more corresponding CSV files. You can open the catalog table in various Hadoop-based applications, such as IBM® BigSheets.

Many applications can use CSV data directly. However, HCatalog table schemas offer several benefits: for example, they explicitly specify the data type of each column. Without a schema, applications must infer data types from the CSV data.
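For example — the table name, columns, and HDFS path below are illustrative, not what the utility generates — an external table over a directory of CSV files looks like this in Hive DDL:

```sql
-- Illustrative only: names, columns, and the HDFS path are placeholders.
CREATE EXTERNAL TABLE zos_log (
  record_time TIMESTAMP,
  subsystem   STRING,
  userid      STRING,
  cpu_ms      DOUBLE
)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
STORED AS TEXTFILE
LOCATION '/data/zos_log';  -- HDFS directory that contains the CSV files
```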

DB2 table schema
Contains a DB2 DDL CREATE TABLE statement.
DB2 load utility control statement
A LOAD control statement for the DB2 load utility that loads CSV data into DB2.
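To illustrate how these two files relate — the table name, column names, and data types here are invented for the example, and the LOAD statement is only a minimal sketch of the delimited-format syntax; see the DB2 load utility reference for the full syntax:

```sql
-- Illustrative table schema: names and types are placeholders.
CREATE TABLE ZOSLOG
  (RECORD_TIME TIMESTAMP,
   SUBSYSTEM   CHAR(4),
   USERID      CHAR(8),
   CPU_MS      DOUBLE);

-- Matching load utility control statement (sketch):
-- reads comma-delimited records and loads them into the table.
LOAD DATA FORMAT DELIMITED COLDEL ','
  INTO TABLE ZOSLOG
```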

Pulling logs

Instead of forwarding, or pushing, logs to a remote system, you can use various techniques to pull them from z/OS. For example, if you write CSV or JSON files to a z/OS UNIX directory, you can use NFS or SFTP to pull the files to a remote system.
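For example — the host name, user ID, and paths below are placeholders — a single SFTP command on the remote system can fetch such files:

```shell
# Illustrative only: host, user, and paths are placeholders.
# Fetch CSV files that a batch job wrote to a z/OS UNIX directory.
sftp user@zoshost:/u/logs/extract/*.csv /local/staging/
```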