Monitoring in DB2 9.7, Part 1
Emulating data reset with the new DB2 9.7 monitoring table functions
Background and motivation
DB2 for Linux, UNIX, and Windows Version 9.7 introduced a number of new
monitoring SQL table functions that start with the prefix MON_
(for example, MON_GET_CONNECTION and MON_GET_TABLE,
which report monitoring data for connections and tables respectively).
These new table functions are a more efficient and lightweight alternative to the snapshot monitor table functions that
existed prior to Version 9.7.
In addition, they report a large set of new monitoring elements not available through the system
monitor interfaces, including an entire category of time-spent elements that break down where the database server is
spending time processing requests.
These new monitoring interfaces report data for the system as a whole (aggregated to
transactions, connections, service classes, and workloads) as well as for specific objects
(for example, indexes and tables).
Monitoring using SQL table functions provides a number of significant advantages over other DB2 monitoring interfaces
such as the snapshot commands (for example,
GET SNAPSHOT) or the snapshot APIs (for example,
db2GetSnapshot). An SQL interface can be used in any programming language that supports SQL, and
because the monitoring data is returned as rows in a table, the
interface supports the rich query capabilities of the SQL language.
For example, you can easily find the applications
consuming the highest amount of CPU by simply adding an
ORDER BY clause to a query that lists application data from a monitoring table function.
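For instance, a query along the following lines ranks connections by CPU consumption. This is an illustrative sketch using the MON_GET_CONNECTION table function and its TOTAL_CPU_TIME column; passing a NULL application handle and -1 for the member requests data for all connections on the current member.

```sql
-- Top five connections by CPU time consumed, highest first
SELECT APPLICATION_HANDLE, TOTAL_CPU_TIME
  FROM TABLE(MON_GET_CONNECTION(CAST(NULL AS BIGINT), -1)) AS T
  ORDER BY TOTAL_CPU_TIME DESC
  FETCH FIRST 5 ROWS ONLY
```

Achieving the same result with the snapshot interfaces would require parsing the text report or post-processing the snapshot output in application code.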
One disadvantage of the new monitoring SQL table functions, which they share with the existing system monitor snapshot SQL interfaces, is that neither has a per-session reset capability. The monitoring data they report is always relative to the activation of the database: counters start at 0 on database activation at each database partition and increase strictly until deactivation.

Both the command line and C API interfaces to snapshot data can perform a per-session reset of monitoring data. Per-session means that the reset is local to the application performing it: after the reset, that application sees monitoring data relative to the reset time (counters restart at 0), while other monitoring applications are unaffected and continue to see values relative to the activation of the database. Per-session reset thus allows different monitoring applications to hold different views of the monitoring data, because not all monitoring applications would necessarily want to reset data at the same time.
The ability to reset monitoring data can be useful in a number of scenarios. For example, suppose you want to see how many table scans are done on the TEST.EMPLOYEE table between the hours of 1:00 p.m. and 2:00 p.m. Using the snapshot monitor command line interface, you could do something similar to the following:
- At 1:00 p.m. attach to the database server and reset the monitoring data:
RESET MONITOR FOR DATABASE <dbalias>
- At 2:00 p.m., on the same attachment, issue the following command to query monitoring data for all tables.
Because the same attachment is used and a reset was performed at 1:00 p.m., the
monitoring data covers the time period from 1:00 p.m. to 2:00 p.m.
GET SNAPSHOT FOR TABLES ON <dbalias>
- Search the returned monitoring data (the data is returned as a text report) and find the data for the TEST.EMPLOYEE table. The number of table scans reported represents the scans on the table between 1:00 p.m. and 2:00 p.m.
Per-session reset of monitoring data returned from SQL table functions
You can easily implement per-session reset of data returned from the monitoring SQL table functions with a few additional user-defined functions (UDFs) and procedures that extend the basic functionality shipped with DB2 Version 9.7. All that is required to implement per-session reset is:
- Per-session storage of baseline data for each function of interest. Baseline data consists of the values of monitoring counters captured at the time a reset occurs.
- An interface to capture baseline data (that is, perform the "reset").
- An interface to take deltas against the baseline.
Following is a suggested method you can use to satisfy these requirements:
- Use a created global temporary table (CGTT) for per-session storage of the baseline data (use one CGTT per table function). The definition of a CGTT is stored in the catalogs like a regular table, and the CGTT is instantiated on the first use by a connection. Each connection using the CGTT has its own copy of the CGTT and only sees the data in its copy.
- Use a stored procedure to capture baseline data. The procedure simply selects from the monitoring table function and inserts the data returned into the CGTT in order to take a baseline.
- Create a table function that wraps the underlying monitoring table function and takes deltas against the baseline data stored in the CGTT.
The wrapper function takes as input the same values passed as input to the underlying monitoring table function, so it has the same
capability to filter the data that is returned.
The output of the wrapper table function matches exactly the output from the monitoring table
function with one additional column. The additional column contains a timestamp that
indicates when the baseline
data was taken (that is, the monitoring interval reported by the wrapper table function is from the baseline timestamp until the current timestamp).
Any output columns that do not report counter values (for example, state based output)
are returned directly from the underlying monitoring table function with no modification by the wrapper.
Note: If baseline data does not exist in the CGTT, the wrapper function simply returns the data directly from the underlying monitor table function.
The following scripts illustrate the method described above
for performing a per-session reset of monitor data returned by the
MON_GET_TABLE function, which reports monitoring information for individual tables in the database.
Listing 1 shows a simple UDF for handling cases where baseline data for an object does not exist, or where counter rollover has occurred (an unlikely scenario, but it is handled).
Listing 1. UDF for handling no baseline cases
CREATE FUNCTION MONDELTA.METRIC_DELTA( CURR BIGINT, BASELINE BIGINT )
  RETURNS BIGINT
  LANGUAGE SQL
  CONTAINS SQL
  DETERMINISTIC
  RETURN CASE
           WHEN BASELINE IS NULL THEN CURR
           WHEN BASELINE <= CURR THEN CURR - BASELINE
           ELSE CURR + (9223372036854775807 - BASELINE)
         END
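If you want to sanity-check the delta logic without live monitor data, you can drive the UDF with a VALUES clause. This is illustrative only; any BIGINT pairs work:

```sql
-- Expected behavior of MONDELTA.METRIC_DELTA:
--   (150, 100)  -> 50   (normal case: current minus baseline)
--   (150, NULL) -> 150  (no baseline: raw value passes through)
SELECT CURR, BASELINE, MONDELTA.METRIC_DELTA(CURR, BASELINE) AS DELTA
  FROM (VALUES (BIGINT(150), BIGINT(100)),
               (BIGINT(150), CAST(NULL AS BIGINT)))
       AS T(CURR, BASELINE)
```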
Listing 2 defines a CGTT for storing the baseline monitoring data for tables.
Listing 2. CGTT for baseline monitoring data
CREATE GLOBAL TEMPORARY TABLE MONDELTA.TABLE_BASELINE AS
  (SELECT CURRENT TIMESTAMP AS BASELINE_TIMESTAMP,
          TABSCHEMA, TABNAME, MEMBER, DATA_PARTITION_ID,
          TABLE_SCANS, ROWS_READ, ROWS_INSERTED, ROWS_UPDATED, ROWS_DELETED,
          OVERFLOW_ACCESSES, OVERFLOW_CREATES, PAGE_REORGS
     FROM TABLE(MON_GET_TABLE('','',-1)) AS T)
  DEFINITION ONLY
  ON COMMIT PRESERVE ROWS
  NOT LOGGED
  ON ROLLBACK PRESERVE ROWS
Listing 3 provides a procedure to capture baseline monitoring data for tables.
Listing 3. Procedure for capturing baseline monitoring data
CREATE PROCEDURE MONDELTA.GET_TABLE_BASELINE()
LANGUAGE SQL
BEGIN
  DELETE FROM MONDELTA.TABLE_BASELINE;

  INSERT INTO MONDELTA.TABLE_BASELINE
    SELECT CURRENT TIMESTAMP,
           TABSCHEMA, TABNAME, MEMBER, DATA_PARTITION_ID,
           TABLE_SCANS, ROWS_READ, ROWS_INSERTED, ROWS_UPDATED, ROWS_DELETED,
           OVERFLOW_ACCESSES, OVERFLOW_CREATES, PAGE_REORGS
      FROM TABLE(MON_GET_TABLE('','',-2)) AS T;
END
Listing 4 defines a table function that wraps the
MON_GET_TABLE table function.
As mentioned above, the wrapper function takes the same inputs and returns the same output
(plus a baseline timestamp column) as the
MON_GET_TABLE function.
All counter values returned are relative to the baseline, if one exists.
If no baseline exists, this table function is equivalent to MON_GET_TABLE.
Listing 4. Wrapper function for MON_GET_TABLE, which returns monitor elements relative to the baseline monitoring data
CREATE FUNCTION MONDELTA.GET_TABLE_DELTA( TABSCHEMA VARCHAR(128),
                                          TABNAME   VARCHAR(128),
                                          MEMBER    INTEGER)
  RETURNS TABLE ( BASELINE_TIMESTAMP TIMESTAMP,
                  TABSCHEMA          VARCHAR(128),
                  TABNAME            VARCHAR(128),
                  MEMBER             SMALLINT,
                  TAB_TYPE           VARCHAR(14),
                  TAB_FILE_ID        BIGINT,
                  DATA_PARTITION_ID  INTEGER,
                  TBSP_ID            BIGINT,
                  INDEX_TBSP_ID      BIGINT,
                  LONG_TBSP_ID       BIGINT,
                  TABLE_SCANS        BIGINT,
                  ROWS_READ          BIGINT,
                  ROWS_INSERTED      BIGINT,
                  ROWS_UPDATED       BIGINT,
                  ROWS_DELETED       BIGINT,
                  OVERFLOW_ACCESSES  BIGINT,
                  OVERFLOW_CREATES   BIGINT,
                  PAGE_REORGS        BIGINT)
  LANGUAGE SQL
  NOT DETERMINISTIC
  NO EXTERNAL ACTION
  READS SQL DATA
  RETURN
    SELECT B.BASELINE_TIMESTAMP,
           A.TABSCHEMA, A.TABNAME, A.MEMBER, A.TAB_TYPE, A.TAB_FILE_ID,
           A.DATA_PARTITION_ID, A.TBSP_ID, A.INDEX_TBSP_ID, A.LONG_TBSP_ID,
           MONDELTA.METRIC_DELTA(A.TABLE_SCANS,       B.TABLE_SCANS),
           MONDELTA.METRIC_DELTA(A.ROWS_READ,         B.ROWS_READ),
           MONDELTA.METRIC_DELTA(A.ROWS_INSERTED,     B.ROWS_INSERTED),
           MONDELTA.METRIC_DELTA(A.ROWS_UPDATED,      B.ROWS_UPDATED),
           MONDELTA.METRIC_DELTA(A.ROWS_DELETED,      B.ROWS_DELETED),
           MONDELTA.METRIC_DELTA(A.OVERFLOW_ACCESSES, B.OVERFLOW_ACCESSES),
           MONDELTA.METRIC_DELTA(A.OVERFLOW_CREATES,  B.OVERFLOW_CREATES),
           MONDELTA.METRIC_DELTA(A.PAGE_REORGS,       B.PAGE_REORGS)
      FROM TABLE(MON_GET_TABLE(TABSCHEMA,TABNAME,MEMBER)) A
      LEFT OUTER JOIN MONDELTA.TABLE_BASELINE B
        ON (A.TABSCHEMA = B.TABSCHEMA AND
            A.TABNAME = B.TABNAME AND
            ((A.DATA_PARTITION_ID = B.DATA_PARTITION_ID) OR
             (A.DATA_PARTITION_ID IS NULL AND B.DATA_PARTITION_ID IS NULL)) AND
            A.MEMBER = B.MEMBER)
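Because the wrapper is itself an SQL table function, the full query language applies to the delta values as well. As a sketch (assuming a baseline has already been captured in this session), the following query ranks tables by the number of rows read since the baseline was taken:

```sql
-- Tables with the most rows read since the baseline was captured
SELECT TABSCHEMA, TABNAME, ROWS_READ, TABLE_SCANS
  FROM TABLE(MONDELTA.GET_TABLE_DELTA('','',-2)) AS T
  ORDER BY ROWS_READ DESC
  FETCH FIRST 10 ROWS ONLY
```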
Listing 5 is a procedure to reset/clear the baseline monitoring data for tables. You can use this procedure to clear the baseline data for a table function, so that the data returned by the corresponding delta function is once again relative to database activation time (that is, delta from 0).
Listing 5. Procedure to reset/clear baseline monitoring data
CREATE PROCEDURE MONDELTA.RESET_TABLE_BASELINE()
LANGUAGE SQL
BEGIN
  DELETE FROM MONDELTA.TABLE_BASELINE;
END
Now, let's revisit the sample scenario from the previous section where you want to view table scans for the TEST.EMPLOYEE table between 1:00 p.m. and 2:00 p.m. Using the samples above, you can now do the following:
- At 1:00 p.m. connect to the database and take a baseline of table data:
CONNECT TO <dbalias>
CALL MONDELTA.GET_TABLE_BASELINE()
- At 2:00 p.m., on the same connection, query the monitoring data for the
TEST.EMPLOYEE table using the
GET_TABLE_DELTA table function. The monitoring data returned by this table function is relative to the baseline taken at 1:00 p.m.
SELECT TABLE_SCANS
  FROM TABLE(MONDELTA.GET_TABLE_DELTA( 'TEST', 'EMPLOYEE', -2)) AS T
Notice in the above example how much simpler it is to extract just the data for the table of interest using the table function rather than having to search for the data in the snapshot output.
The Download section of this article contains a
link to a zip file that contains two DB2 scripts.
The mondelta.db2 script applies the wrapping method described above to a number of the
monitor table functions, creating the following objects in the database:
- Helper functions
- CGTTs for storing baseline monitoring data
- Procedures to capture baseline monitoring data
- Wrapper table functions that report deltas relative to the baseline
Note: Refer to the DB2 Version 9.7 Information Center (link contained in Related topics section) for a description of all input parameters and output columns of the corresponding monitor interfaces (for example,
MON_GET_TABLE for MONDELTA.GET_TABLE_DELTA). Each wrapper table function adds a single new output column named
BASELINE_TIMESTAMP to the output; otherwise the output columns are identical.
- Procedures to reset baseline monitoring data
The mondeltadrop.db2 script drops the objects created by the
mondelta.db2 script.
Use the following commands to execute the scripts:
db2 -td@ -f mondelta.db2
db2 -td@ -f mondeltadrop.db2
You must define a user temporary table space in order to successfully run the
mondelta.db2 script. If you do not, the script will be unable to define the CGTTs used by the delta table functions.
- The sample delta table functions and CGTTs are based on the DB2 9.7 Fix Pack 1 versions of the monitor table functions. New monitor elements may be added to future Fix Packs or releases. To add support for reporting deltas of any new monitor elements that are added after DB2 9.7 Fix Pack 1, you will need to manually update the sample delta table functions and CGTTs.
You use the objects created by these DB2 scripts in the same way as described in the example in the previous section:
- Capture baseline data by invoking the
GET_<object>_BASELINE procedure. Invoking the baseline procedure after a baseline has already been captured replaces the old baseline with a new one.
- Select from the corresponding
GET_<object>_DELTA function to retrieve monitoring counters relative to when the last baseline was captured. If no baseline exists, counters are relative to database activation.
- Reset baseline data by using the corresponding RESET_<object>_BASELINE procedure.
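Putting the three steps together, a complete monitoring session using the table objects from the earlier listings might look like the following sketch (the other <object> variants follow the same pattern):

```sql
CALL MONDELTA.GET_TABLE_BASELINE();     -- step 1: capture a baseline

-- ... workload runs for the interval of interest ...

-- step 2: counters relative to the baseline
SELECT TABSCHEMA, TABNAME, TABLE_SCANS
  FROM TABLE(MONDELTA.GET_TABLE_DELTA('','',-2)) AS T;

-- step 3: clear the baseline; data is again activation-relative
CALL MONDELTA.RESET_TABLE_BASELINE();
```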
Global reset of monitoring data returned from SQL table functions
You can easily modify the method described in this article to support a global reset capability rather than per-session resets. Global reset simply means that the baseline monitoring data is shared across all sessions. So when you take a baseline, monitoring data is reset for all sessions rather than just the current session. To adapt the method to support global reset, simply change the table creation statements to create regular tables rather than CGTTs.
Listing 6 continues with the example described previously. It shows how to modify the table creation statement for a global reset.
Listing 6. Table creation example for global reset
CREATE TABLE MONDELTA.TABLE_BASELINE AS
  (SELECT CURRENT TIMESTAMP AS BASELINE_TIMESTAMP,
          TABSCHEMA, TABNAME, MEMBER, DATA_PARTITION_ID,
          TABLE_SCANS, ROWS_READ, ROWS_INSERTED, ROWS_UPDATED, ROWS_DELETED,
          OVERFLOW_ACCESSES, OVERFLOW_CREATES, PAGE_REORGS
     FROM TABLE(MON_GET_TABLE('','',-1)) AS T)
  DEFINITION ONLY -- required: the fullselect supplies only the table definition
Global reset capability is advantageous when the monitoring application does not maintain a persistent connection to the database server.
For example, there is no persistent connection when you use
cron to run a script every hour that captures and resets monitoring data
(in other words, the monitoring application is building an hour-by-hour history of monitoring data for the system).
Per-session reset does not help in such a scenario because the monitoring application does not remain connected to the database server;
the baseline data must have a lifetime greater than any single connection.
The primary disadvantage of global reset is the unintended side effects it can cause when there are multiple monitoring applications connected to the database server. It is possible that different applications (from potentially different vendors) may have different requirements. For example, one application may be monitoring data hour-by-hour, while another is gathering monitoring data day-by-day. If the application performing hourly monitoring were to perform a global reset, the application querying monitoring data once every day would only see the data from the last hour.
This article has described a simple method for performing a per-session reset of monitoring data returned from various monitor table functions. You can download scripts that implement this method for a number of the new monitoring table functions introduced in DB2 Version 9.7. This per-session reset is functionally equivalent to the reset capability available in the DB2 system monitor snapshot APIs. The article also describes a modification you can make to the scripts to support a global reset of monitoring data rather than a per-session reset.
The author would like to acknowledge and thank Paul Bird and David Kalmuk for reviewing this article.
Related topics
- IBM DB2 Database for Linux, UNIX, and Windows Information Center (DB2 9.7): Find all the information you need to use the DB2 family of products.
- In the DB2 for Linux, UNIX, and Windows area on developerWorks, get the resources you need to advance your DB2 skills.