Event monitor named pipe management

With some event monitors, you can have event data written to named pipes. What follows are some guidelines on how to use named pipe event monitors more effectively.

A pipe event monitor enables the processing of the event monitor data stream through a named pipe. Using a pipe event monitor is desirable if you need to process event records in real time. Another important advantage is that your application can discard unwanted data as it reads it from the pipe, which can considerably reduce storage requirements.

On AIX®, you can create named pipes by using the mkfifo command. On Linux® and other UNIX-like systems, use the mkfifo command or the mkfifo() function. (The pipe() routine creates an anonymous pipe, not a named pipe.) On Windows, you can create named pipes by using the CreateNamedPipe() routine.

When you direct data to a pipe, the I/O is blocking, and the only buffering is that performed by the pipe. It is the responsibility of the monitoring application to read the data from the pipe promptly as the event monitor writes it. If the event monitor is unable to write the data to the pipe (for example, because the pipe is full), monitor data is lost.

In addition, there must be enough space in the named pipe to handle incoming event records. If the application does not read the data from the named pipe fast enough, the pipe will fill up and overflow. The smaller the pipe buffer, the greater the chance of an overflow.

When a pipe overflow occurs, the monitor creates overflow event records to indicate that an overflow has occurred; these records are written to the pipe when space becomes available. The event monitor is not turned off, but monitor data is lost. If there are outstanding overflow event records when the monitor is deactivated, a diagnostic message is logged.

The amount of data that can be written to a pipe at any one time is determined by the underlying operating system. If your operating system allows you to define the size of the pipe buffer, use a pipe buffer of at least 32K. For high-volume event monitors, you should set the monitoring application's process priority equal to or higher than the agent process priority.

It is possible for the data stream coming from a single write operation of an activities or statistics event monitor to contain more data than can be written to the named pipe at once. In these situations, the data stream is split into blocks that fit into the pipe buffer, and each block is identified by a logical header: the first block by the element ID SQLM_ELM_EVENT_STARTPIPEBLOCK, the last block by SQLM_ELM_EVENT_ENDPIPEBLOCK, and all blocks in between by SQLM_ELM_EVENT_MIDPIPEBLOCK. The monitoring application that reads the pipe must recognize these headers, strip them off, and reassemble the blocks into a complete, valid data stream. The db2evmon tool provides this capability; it produces formatted output for all events generated by an event monitor that writes to a named pipe, reassembling blocks as needed. If you want to process only selected events or monitor elements, you can write your own application to do so.