Message flow statistics and accounting data

Message flow statistics and accounting data can be collected to record performance and operating details of one or more message flows.

Message flow statistics and accounting data captures dynamic information about the runtime behavior of a message flow. For example, it indicates how many messages are processed and how large those messages are, as well as processor usage and elapsed processing times. The statistical data is collected and recorded in a specified location when an event occurs, such as when a snapshot interval expires or when the integration server that you are collecting information about stops.

You can use the statistics generated for the following purposes:
  • You can use snapshot data to assess the execution of a message flow to determine why it, or a node within it, is not performing as you expect.
  • You can determine the route that messages are taking through a message flow. For example, you might find that an error path is taken more frequently than you expect, and you can use the statistics to understand when messages are routed to this error path.

    Check the information provided by snapshot data for routing information; if this is insufficient for your needs, use archive data.

  • You can record the load that applications, trading partners, or other users put on the integration node. This allows you to record the relative use that different users make of the integration node, and perhaps to charge them accordingly. For example, you could levy a nominal charge on every message that is processed by an integration node, or by a specific message flow.

    You can use archive data to carry out an assessment of this kind.

The integration node takes information about statistics and accounting from the operating system. On some operating systems, such as Windows, Linux®, and UNIX, rounding can occur because the system calls that are used to determine the processor times are not sufficiently granular. This rounding might affect the accuracy of the data.

Data relating to the size of messages is not collected for WebSphere® Adapters nodes (for example, the SAPInput node), the FileInput node, the JMSInput node, or any user-defined input node that does not create a message tree from a bit stream.

Collecting message flow accounting and statistics data is optional; by default it is switched off. To use this facility, enable it for individual message flows or for an entire integration server. The settings for accounting and statistics data collection are reset to the defaults when an integration server is redeployed; previous settings for message flows in an integration server are not passed on to the new message flows that are deployed to that integration server.

Before you start data collection, ensure that the publication of events has been enabled and that the pub/sub broker has been configured. For more information, see Configuring the publication of event messages and Configuring the built-in MQTT pub/sub broker.

You can start and stop data collection by using the mqsichangeflowstats command or the web user interface; you do not need to modify the integration node or the message flow, or redeploy the message flow, to request statistics collection.
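
For example, the following command sketch shows how you might activate snapshot statistics collection for all message flows in an integration server; the integration node name (IBNODE) and integration server name (default) are illustrative, and you should check the mqsichangeflowstats command reference for the options that apply to your version:

  mqsichangeflowstats IBNODE -s -e default -j -c active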

You can activate data collection on both your production and test systems. If you collect the default level of statistics (message flow), the effect on integration node performance is minimal. However, collecting more detailed statistics than the default message flow level can generate high volumes of report data, which might have a small effect on performance.

When you plan data collection, consider the following points:

The following topic contains reference information that you might find helpful when analyzing and tuning the performance of your message flows: Message flow accounting and statistics records

The statistics and accounting data is published to the pub/sub broker, and the topic for each message has the following structure:
  • For XML format:
    • For an MQ pub/sub broker:
      $SYS/Broker/integrationNodeName/StatisticsAccounting/integrationServerName
    • For an MQTT pub/sub broker:
      IBM/IntegrationBus/integrationNodeName/StatisticsAccounting/integrationServerName
  • For JSON format:
    • For an MQ pub/sub broker:
      $SYS/Broker/integrationNodeName/Statistics/JSON/integrationServerName
    • For an MQTT pub/sub broker:
      IBM/IntegrationBus/integrationNodeName/Statistics/JSON/integrationServerName
You can set up subscriptions for a specific integration server on a specific integration node. For example:
  • For XML format:
    • For an MQ pub/sub broker:
      $SYS/Broker/IBNODE/StatisticsAccounting/default
    • For an MQTT pub/sub broker:
      IBM/IntegrationBus/IBNODE/StatisticsAccounting/default
  • For JSON format:
    • For an MQ pub/sub broker:
      $SYS/Broker/IBNODE/Statistics/JSON/default
    • For an MQTT pub/sub broker:
      IBM/IntegrationBus/IBNODE/Statistics/JSON/default
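As a simple way to check the reports that are published over MQTT, the following Python sketch subscribes to the JSON statistics for the default integration server on IBNODE by using the paho-mqtt client (version 1.x API). The host name and port are assumptions; replace them with the values for your MQTT pub/sub broker:

  import json
  import paho.mqtt.client as mqtt

  # JSON statistics reports for the 'default' integration server on IBNODE
  TOPIC = "IBM/IntegrationBus/IBNODE/Statistics/JSON/default"

  def on_message(client, userdata, msg):
      # Each publication carries one statistics report; parse it and
      # print the topic plus the start of the report as a quick check
      report = json.loads(msg.payload)
      print(msg.topic, json.dumps(report, indent=2)[:200])

  client = mqtt.Client()
  client.on_message = on_message
  client.connect("localhost", 11883)  # assumed host and port of the MQTT pub/sub broker
  client.subscribe(TOPIC)
  client.loop_forever()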
You can also use wildcards in the subscriptions to broaden the scope of the reports that are returned. For example, to subscribe to reports for all integration servers on all integration nodes, use the following topic strings:
  • For XML format:
    • For an MQ pub/sub broker:
      $SYS/Broker/+/StatisticsAccounting/#
    • For an MQTT pub/sub broker:
      IBM/IntegrationBus/+/StatisticsAccounting/#
  • For JSON format:
    • For an MQ pub/sub broker:
      $SYS/Broker/+/Statistics/JSON/#
    • For an MQTT pub/sub broker:
      IBM/IntegrationBus/+/Statistics/JSON/#
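The subscriber sketch shown earlier works unchanged with these wildcard topic strings; for example, with paho-mqtt you would change the subscription to client.subscribe("IBM/IntegrationBus/+/Statistics/JSON/#") to receive the JSON reports for every integration server on every integration node that publishes to that broker. In both MQ and MQTT topic strings, + matches a single topic level and # matches any number of levels.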