Output formats for message flow accounting and statistics data

When you collect message flow statistics, you can choose the output destination for the data.

You can change the output format and destination of statistics data (snapshot, archive, or both) by setting the outputFormat property in the configuration file for your integration node (node.conf.yaml) or integration server (server.conf.yaml). If no format is specified, accounting and statistics data is sent to the user trace log by default.
You can set the output format of snapshot statistics data to one or more of the following values (separated by commas):
  • csv
  • json
  • xml
  • usertrace
You can set the output format of archive statistics data to one or more of the following values (separated by commas):
  • csv
  • xml
  • usertrace
For more information about configuring the collection and publishing of message flow accounting and statistics data, see Configuring the collection of message flow accounting and statistics data.
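For example, the outputFormat property might be set as follows in server.conf.yaml. The nesting shown here is a sketch based on the Statistics section of the supplied configuration file; verify the exact property names against the sample configuration file that is shipped with your installation.

```yaml
Statistics:
  Snapshot:
    publicationOn: 'active'        # turn on snapshot statistics collection
    outputFormat: 'csv,json'       # one or more of: csv, json, xml, usertrace
  Archive:
    archivalOn: 'active'           # turn on archive statistics collection
    outputFormat: 'xml,usertrace'  # one or more of: csv, xml, usertrace
```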

Before message flow accounting and statistics can be collected, you must ensure that the publication of events has been enabled and a pub/sub broker has been configured. For more information, see Configuring the publication of event messages and Configuring the built-in MQTT pub/sub broker.

If you start the collection of message flow statistics data by using the web user interface, the statistics are emitted in JSON format in addition to any other formats that are already being emitted. If the output format was previously not specified and therefore defaulted to the user trace, the newly specified format replaces the default, and the data is no longer emitted to the user trace. However, if user trace has been explicitly specified, any additional formats that are selected subsequently are emitted in addition to the user trace.

If you use the mqsichangeflowstats command to explicitly specify the required output formats, the formats specified by the command replace the formats that are currently being emitted for the message flow (they are not added to them).
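For example, an invocation of the following shape would leave only the formats named on the command in effect for the flow, regardless of what was being emitted before. The parameter letters shown here are illustrative assumptions; check the mqsichangeflowstats entry in the command reference for the exact syntax supported by your version.

```
mqsichangeflowstats INODE -s -e default -f Flow1 -c active -o xml,usertrace
```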

If you stop statistics collection from the web user interface, all output formats are turned off. If statistics collection is subsequently restarted by using the mqsichangeflowstats command, the output format is reset to the default value of user trace, unless other formats are specified on the command. However, if statistics collection is restarted by using the web user interface, data is collected in JSON format.

Statistics data is written to the specified output location in the following circumstances:

  • When the archive data interval expires.
  • When the snapshot interval expires.
  • When the integration node shuts down. Any data that has been collected by the integration node, but has not yet been written to the specified output destination, is written during shutdown. It might therefore represent data for an incomplete interval.
  • When any part of the integration node configuration is redeployed. Redeployed configuration data might contain an updated configuration that is not consistent with the existing record structure (for example, a message flow might include an additional node, or an integration server might include a new message flow). Therefore the current data, which might represent an incomplete interval, is written to the output destination. Data collection continues for the redeployed configuration until you change data collection parameters or stop data collection.
  • When data collection parameters are modified. If you update the parameters that you have set for data collection, all data that is collected for the message flow (or message flows) is written to the output destination to retain data integrity. Statistics collection is restarted according to the new parameters.
  • When an error occurs that terminates data collection. You must restart data collection yourself in this case.

User trace entries

You can specify that the data that is collected is written to the user trace log. The data is written even when trace is switched off.

If no output destination is specified for accounting and statistics, the default is the user trace log. If one or more output formats are subsequently specified, the specified formats replace the default, and the data is no longer emitted to the user trace. However, if user trace has been explicitly specified, any additional formats that are selected subsequently are emitted in addition to the user trace.

The data is written to one of the following locations:

Windows
If you set the work path by using the -w parameter of the mqsicreatebroker command, the location is workpath\Common\log.
If you have not specified the integration node work path, the default location is C:\ProgramData\IBM\MQSI\Common\log.
Linux® and UNIX
/var/mqsi/common/log

For information about the user trace entries, see User trace entries for message flow accounting and statistics data.

JSON publication

You can specify that the data that is collected is published in JSON format, which is available for viewing in the web user interface. If statistics collection is started through the web user interface, statistics data is emitted in JSON format in addition to any other formats that are already being emitted.

The topic on which the data is published has the following structure:
  • For publications on an MQ pub/sub broker:
    $SYS/Broker/integrationNodeName/Statistics/JSON/SnapShot/integrationServerName/applications/application_name
    /libraries/library_name/messageflows/message_flow_name
  • For publications on an MQTT pub/sub broker:
    IBM/IntegrationBus/integrationNodeName/Statistics/JSON/SnapShot/integrationServerName/applications/application_name
    /libraries/library_name/messageflows/message_flow_name
The variables correspond to the following values:
integrationNodeName
The name of the integration node for which statistics are collected. For an integration server that is managed by an integration node, integrationNodeName is the name of the integration node that manages the integration server. For an independent integration server, specify the literal string integration_server in place of an integration node name.
integrationServerName
The name of the integration server for which statistics are collected.
application_name
The name of the application for which statistics are collected.
library_name
The name of the library for which statistics are collected.
message_flow_name
The name of the message flow for which statistics are collected.
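To illustrate how the variables combine into a full topic string, the snapshot topic for an MQ pub/sub broker could be assembled as follows. The helper name make_json_snapshot_topic is our own illustration, not part of the product.

```python
def make_json_snapshot_topic(node, server, application, library, flow):
    """Build the JSON snapshot statistics topic for an MQ pub/sub broker.

    For an independent integration server, pass the literal string
    'integration_server' as the node name, as described above.
    """
    return ("$SYS/Broker/{node}/Statistics/JSON/SnapShot/{server}"
            "/applications/{app}/libraries/{lib}"
            "/messageflows/{flow}").format(
        node=node, server=server, app=application, lib=library, flow=flow)

topic = make_json_snapshot_topic("INODE", "default", "MyApp", "MyLib", "Flow1")
print(topic)
# $SYS/Broker/INODE/Statistics/JSON/SnapShot/default/applications/MyApp/libraries/MyLib/messageflows/Flow1
```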

For information about the JSON publication, see JSON publication for message flow accounting and statistics data.

XML publication

You can specify that the data that is collected is published in XML format, and is then available to subscribers in the integration node network that are registered on the correct topic.

The topic on which the data is published has the following structure:
  • For publications on an MQ pub/sub broker:
    $SYS/Broker/integrationNodeName/StatisticsAccounting/record_type/integrationServerName/message_flow_label
  • For publications on an MQTT pub/sub broker:
    IBM/IntegrationBus/integrationNodeName/StatisticsAccounting/record_type/integrationServerName/message_flow_label
The variables correspond to the following values:
integrationNodeName
The name of the integration node for which statistics are collected. For an integration server that is managed by an integration node, integrationNodeName is the name of the integration node that manages the integration server. For an independent integration server, specify the literal string integration_server in place of an integration node name.
record_type
Set to SnapShot or Archive, depending on the type of data to which you are subscribing. Alternatively, use + to register for both snapshot and archive data if both are being produced. These values are case sensitive; in particular, snapshot data must be entered as SnapShot.
integrationServerName
The name of the integration server for which statistics are collected.
message_flow_label
The label on the message flow for which statistics are collected.

Subscribers can include filter expressions to limit the publications that they receive. For example, they can choose to see only snapshot data, or to see data that is collected for a single integration node. Subscribers can specify wild cards (+ and #) to receive publications that refer to multiple resources. Use + to receive resources on one topic level, and # to receive resources across multiple topic levels.
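The + and # wildcard semantics described above can be sketched as a small matcher. This is our own illustration of the matching rules, not product code:

```python
def topic_matches(pattern, topic):
    """Return True if an MQ/MQTT-style topic filter matches a topic.

    '+' matches exactly one topic level; '#' matches all remaining levels.
    """
    p_parts = pattern.split('/')
    t_parts = topic.split('/')
    for i, p in enumerate(p_parts):
        if p == '#':
            return True                      # matches all remaining levels
        if i >= len(t_parts):
            return False                     # topic has fewer levels
        if p != '+' and p != t_parts[i]:
            return False                     # literal level must match exactly
    return len(p_parts) == len(t_parts)      # no trailing unmatched levels

# '#' receives everything under StatisticsAccounting for node INODE:
print(topic_matches(
    "$SYS/Broker/INODE/StatisticsAccounting/#",
    "$SYS/Broker/INODE/StatisticsAccounting/Archive/default/Flow1"))  # True
# '+' matches exactly one level (SnapShot or Archive):
print(topic_matches(
    "$SYS/Broker/INODE/StatisticsAccounting/+/default/Flow1",
    "$SYS/Broker/INODE/StatisticsAccounting/SnapShot/default/Flow1"))  # True
```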

The following examples show the topic with which a subscriber registers to receive different sorts of data:
  • Register the following topic for the subscriber to receive data for all message flows running on an integration node named INODE:
    $SYS/Broker/INODE/StatisticsAccounting/#
    or
    IBM/IntegrationBus/INODE/StatisticsAccounting/#
  • Register the following topic for the subscriber to receive only archive statistics that relate to a message flow Flow1 running on integration server default on integration node INODE:
    $SYS/Broker/INODE/StatisticsAccounting/Archive/default/Flow1
    or
    IBM/IntegrationBus/INODE/StatisticsAccounting/Archive/default/Flow1
  • Register the following topic for the subscriber to receive both snapshot and archive data for message flow Flow1 running on integration server default on integration node INODE:
    $SYS/Broker/INODE/StatisticsAccounting/+/default/Flow1
    or
    IBM/IntegrationBus/INODE/StatisticsAccounting/+/default/Flow1

For help with registering your subscriber, see Message display, test and performance utilities SupportPac (IH03).

For information about the XML publication, see XML publication for message flow accounting and statistics data.

CSV records

You can specify that the data that is collected is published in comma-separated value (.csv) format. Snapshot and archive data records are written to output files, which include a header with the field name. The fields for averages are optional, and are written only if the averages property of the statistics file writer is set to true.

One line is written for each message flow that is producing data for the time period that you choose. For example, if MessageFlowA and MessageFlowB are both producing archive data over a period of 60 minutes, both message flows produce a line of statistics data every 60 minutes.
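Records of this shape can be consumed with standard CSV tooling, as in the following sketch. The column names used here are assumptions for illustration only; the actual header fields are defined by the product.

```python
import csv
import io

# Illustrative only: a real file carries the product-defined header row.
# Each data line corresponds to one message flow for one interval.
sample = """MessageFlowName,IntegrationServer,TotalElapsedTime,TotalInputMessages
MessageFlowA,default,1532,40
MessageFlowB,default,987,25
"""

for row in csv.DictReader(io.StringIO(sample)):
    print(row["MessageFlowName"], row["TotalInputMessages"])
```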

For more information about the CSV records, see CSV file format for message flow accounting and statistics data.