IBM Support

Weekly Tips from DB2 Experts: Maintaining 'mon_get_pkg_cache_stmt' result into a table

Technical Blog Post


Abstract

Weekly Tips from DB2 Experts: Maintaining 'mon_get_pkg_cache_stmt' result into a table

Body

How do you keep a history of overall SQL performance data in your DB2 system?

Many DB2 colleagues have long relied on our old friend, the dynamic SQL snapshot, to check performance statistics for SQL statements.

We also have another friend, the 'mon_get_pkg_cache_stmt' table function, which exposes more performance metrics, especially from an elapsed-time perspective.

Some smart DBAs regularly gather this result and keep it in a table to observe changes and trends.
It is also a good idea to load the data into another database if you have one; some call this a Performance Data Warehouse (PDW).

There are some points to consider in this work, including recording a timestamp for each iteration, calculating delta values, and so on.

In this blog, I would like to share some SQL scripts that gather one iteration of the data and load it into a table.
The next step after this would be running the collection at regular intervals using crontab, ftp, or whatever tooling you prefer.
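For scheduling, a crontab entry under the DB2 instance owner can run the collection at a fixed interval. A sketch, assuming the export script from step 1 below is saved as collect_pkg_cache.ksh (the path and the 30-minute interval are illustrative, not from this article):

```shell
# Run the package cache collection every 30 minutes.
# Add with `crontab -e` as the DB2 instance owner.
0,30 * * * * /home/db2inst1/scripts/collect_pkg_cache.ksh >> /tmp/collect_pkg_cache.log 2>&1
```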

1. The script to export the 'mon_get_pkg_cache_stmt' function output to a file.
   - It does not use 'select *'; it gathers most columns, excluding COMP_ENV_DESC and ADDITIONAL_DETAILS, which are rarely used.
   - CURRENT TIMESTAMP AS INPUT_TIMESTAMP is added to record the time the snapshot was taken.
   - 'DBNAME' is added to mark the database name in case you have multiple databases.

 

########################################################
#!/bin/ksh

db2 connect to DBNAME
DATE=`date +%H%M%S`
db2 "export to MON_GET_PKG_CACHE_STMT_$DATE.ixf of ixf messages MON_GET_PKG_CACHE_STMT_$DATE.export.msg
SELECT CURRENT TIMESTAMP AS INPUT_TIMESTAMP
      ,EXECUTABLE_ID
      ,MEMBER
      ,SECTION_TYPE
      ,INSERT_TIMESTAMP
      ,PACKAGE_SCHEMA
      ,PACKAGE_NAME
      ,PACKAGE_VERSION_ID
      ,SECTION_NUMBER
      ,EFFECTIVE_ISOLATION
      ,NUM_EXECUTIONS
      ,NUM_EXEC_WITH_METRICS
      ,PREP_TIME
      ,TOTAL_ACT_TIME
      ,TOTAL_ACT_WAIT_TIME
      ,TOTAL_CPU_TIME
      ,POOL_READ_TIME
      ,POOL_WRITE_TIME
      ,DIRECT_READ_TIME
      ,DIRECT_WRITE_TIME
      ,LOCK_WAIT_TIME
      ,TOTAL_SECTION_SORT_TIME
      ,TOTAL_SECTION_SORT_PROC_TIME
      ,TOTAL_SECTION_SORTS
      ,LOCK_ESCALS
      ,LOCK_WAITS
      ,ROWS_MODIFIED
      ,ROWS_READ
      ,ROWS_RETURNED
      ,DIRECT_READS
      ,DIRECT_READ_REQS
      ,DIRECT_WRITES
      ,DIRECT_WRITE_REQS
      ,POOL_DATA_L_READS
      ,POOL_TEMP_DATA_L_READS
      ,POOL_XDA_L_READS
      ,POOL_TEMP_XDA_L_READS
      ,POOL_INDEX_L_READS
      ,POOL_TEMP_INDEX_L_READS
      ,POOL_DATA_P_READS
      ,POOL_TEMP_DATA_P_READS
      ,POOL_XDA_P_READS
      ,POOL_TEMP_XDA_P_READS
      ,POOL_INDEX_P_READS
      ,POOL_TEMP_INDEX_P_READS
      ,POOL_DATA_WRITES
      ,POOL_XDA_WRITES
      ,POOL_INDEX_WRITES
      ,TOTAL_SORTS
      ,POST_THRESHOLD_SORTS
      ,POST_SHRTHRESHOLD_SORTS
      ,SORT_OVERFLOWS
      ,WLM_QUEUE_TIME_TOTAL
      ,WLM_QUEUE_ASSIGNMENTS_TOTAL
      ,DEADLOCKS
      ,FCM_RECV_VOLUME
      ,FCM_RECVS_TOTAL
      ,FCM_SEND_VOLUME
      ,FCM_SENDS_TOTAL
      ,FCM_RECV_WAIT_TIME
      ,FCM_SEND_WAIT_TIME
      ,LOCK_TIMEOUTS
      ,LOG_BUFFER_WAIT_TIME
      ,NUM_LOG_BUFFER_FULL
      ,LOG_DISK_WAIT_TIME
      ,LOG_DISK_WAITS_TOTAL
      ,LAST_METRICS_UPDATE
      ,NUM_COORD_EXEC
      ,NUM_COORD_EXEC_WITH_METRICS
      ,VALID
      ,TOTAL_ROUTINE_TIME
      ,TOTAL_ROUTINE_INVOCATIONS
      ,ROUTINE_ID
      ,STMT_TYPE_ID
      ,QUERY_COST_ESTIMATE
      ,STMT_PKG_CACHE_ID
      ,COORD_STMT_EXEC_TIME
      ,STMT_EXEC_TIME
      ,TOTAL_SECTION_TIME
      ,TOTAL_SECTION_PROC_TIME
      ,TOTAL_ROUTINE_NON_SECT_TIME
      ,TOTAL_ROUTINE_NON_SECT_PROC_TIME
      ,VARCHAR(STMT_TEXT,30000) AS STMT_TEXT
      ,'MYDBNAME' AS DBNAME
FROM TABLE(MON_GET_PKG_CACHE_STMT(NULL,NULL,NULL,-1))"

db2 terminate

 

2. The table that will hold this export file needs a tablespace with a 32 KB page size.

    Therefore you may need to create a 32 KB buffer pool and tablespace, or use existing ones if you already have them.
    The following are examples.

 $ db2 "create bufferpool buff1_32k immediate size 10000 pagesize 32k"

 $ db2 "create tablespace ts32k pagesize 32 k managed by database using (file '/Users/db2inst1/JSDB/dbpath1/ts32k' 1000) autoresize yes  bufferpool buff1_32k"

 

3. Create the table to load the mon_get_pkg_cache_stmt export file using the following DDL.
    You can copy and edit it for your environment.

$ cat mon_get_pkg_cache_stmt.ddl 
CONNECT TO JSDB;

DROP TABLE JUNSU.MON_GET_PKG_CACHE_STMT ;

CREATE TABLE JUNSU.MON_GET_PKG_CACHE_STMT  (
                  "INPUT_TIMESTAMP" TIMESTAMP NOT NULL,
                  "EXECUTABLE_ID" VARCHAR(32) FOR BIT DATA NOT NULL, 
                  "MEMBER" SMALLINT , 
                  "SECTION_TYPE" CHAR(1) , 
                  "INSERT_TIMESTAMP" TIMESTAMP , 
                  "PACKAGE_SCHEMA" VARCHAR(128) , 
                  "PACKAGE_NAME" VARCHAR(128) , 
                  "PACKAGE_VERSION_ID" VARCHAR(64) , 
                  "SECTION_NUMBER" BIGINT , 
                  "EFFECTIVE_ISOLATION" CHAR(2) , 
                  "NUM_EXECUTIONS" BIGINT , 
                  "NUM_EXEC_WITH_METRICS" BIGINT , 
                  "PREP_TIME" BIGINT , 
                  "TOTAL_ACT_TIME" BIGINT , 
                  "TOTAL_ACT_WAIT_TIME" BIGINT , 
                  "TOTAL_CPU_TIME" BIGINT , 
                  "POOL_READ_TIME" BIGINT , 
                  "POOL_WRITE_TIME" BIGINT , 
                  "DIRECT_READ_TIME" BIGINT , 
                  "DIRECT_WRITE_TIME" BIGINT , 
                  "LOCK_WAIT_TIME" BIGINT , 
                  "TOTAL_SECTION_SORT_TIME" BIGINT , 
                  "TOTAL_SECTION_SORT_PROC_TIME" BIGINT , 
                  "TOTAL_SECTION_SORTS" BIGINT , 
                  "LOCK_ESCALS" BIGINT , 
                  "LOCK_WAITS" BIGINT , 
                  "ROWS_MODIFIED" BIGINT , 
                  "ROWS_READ" BIGINT , 
                  "ROWS_RETURNED" BIGINT , 
                  "DIRECT_READS" BIGINT , 
                  "DIRECT_READ_REQS" BIGINT , 
                  "DIRECT_WRITES" BIGINT , 
                  "DIRECT_WRITE_REQS" BIGINT , 
                  "POOL_DATA_L_READS" BIGINT , 
                  "POOL_TEMP_DATA_L_READS" BIGINT , 
                  "POOL_XDA_L_READS" BIGINT , 
                  "POOL_TEMP_XDA_L_READS" BIGINT , 
                  "POOL_INDEX_L_READS" BIGINT , 
                  "POOL_TEMP_INDEX_L_READS" BIGINT , 
                  "POOL_DATA_P_READS" BIGINT , 
                  "POOL_TEMP_DATA_P_READS" BIGINT , 
                  "POOL_XDA_P_READS" BIGINT , 
                  "POOL_TEMP_XDA_P_READS" BIGINT , 
                  "POOL_INDEX_P_READS" BIGINT , 
                  "POOL_TEMP_INDEX_P_READS" BIGINT , 
                  "POOL_DATA_WRITES" BIGINT , 
                  "POOL_XDA_WRITES" BIGINT , 
                  "POOL_INDEX_WRITES" BIGINT , 
                  "TOTAL_SORTS" BIGINT , 
                  "POST_THRESHOLD_SORTS" BIGINT , 
                  "POST_SHRTHRESHOLD_SORTS" BIGINT , 
                  "SORT_OVERFLOWS" BIGINT , 
                  "WLM_QUEUE_TIME_TOTAL" BIGINT , 
                  "WLM_QUEUE_ASSIGNMENTS_TOTAL" BIGINT , 
                  "DEADLOCKS" BIGINT , 
                  "FCM_RECV_VOLUME" BIGINT , 
                  "FCM_RECVS_TOTAL" BIGINT , 
                  "FCM_SEND_VOLUME" BIGINT , 
                  "FCM_SENDS_TOTAL" BIGINT , 
                  "FCM_RECV_WAIT_TIME" BIGINT , 
                  "FCM_SEND_WAIT_TIME" BIGINT , 
                  "LOCK_TIMEOUTS" BIGINT , 
                  "LOG_BUFFER_WAIT_TIME" BIGINT , 
                  "NUM_LOG_BUFFER_FULL" BIGINT , 
                  "LOG_DISK_WAIT_TIME" BIGINT , 
                  "LOG_DISK_WAITS_TOTAL" BIGINT , 
                  "LAST_METRICS_UPDATE" TIMESTAMP , 
                  "NUM_COORD_EXEC" BIGINT , 
                  "NUM_COORD_EXEC_WITH_METRICS" BIGINT , 
                  "VALID" CHAR(1) , 
                  "TOTAL_ROUTINE_TIME" BIGINT , 
                  "TOTAL_ROUTINE_INVOCATIONS" BIGINT , 
                  "ROUTINE_ID" BIGINT , 
                  "STMT_TYPE_ID" VARCHAR(32) , 
                  "QUERY_COST_ESTIMATE" BIGINT , 
                  "STMT_PKG_CACHE_ID" BIGINT , 
                  "COORD_STMT_EXEC_TIME" BIGINT , 
                  "STMT_EXEC_TIME" BIGINT , 
                  "TOTAL_SECTION_TIME" BIGINT , 
                  "TOTAL_SECTION_PROC_TIME" BIGINT , 
                  "TOTAL_ROUTINE_NON_SECT_TIME" BIGINT , 
                  "TOTAL_ROUTINE_NON_SECT_PROC_TIME" BIGINT , 
                  "STMT_TEXT" VARCHAR(30000),
                  "DBNAME" VARCHAR(25))
         IN "TS32K" ; 

CREATE UNIQUE INDEX JUNSU.MON_GET_PKG_CACHE_STMT_I1 ON JUNSU.MON_GET_PKG_CACHE_STMT
(INPUT_TIMESTAMP, EXECUTABLE_ID)
ALLOW REVERSE SCANS
PCTFREE 10
CLUSTER ;

ALTER TABLE JUNSU.MON_GET_PKG_CACHE_STMT ADD CONSTRAINT MON_GET_PKG_CACHE_STMT_PK PRIMARY KEY 
(INPUT_TIMESTAMP, EXECUTABLE_ID) ;

TERMINATE;

 

4. In case you run analysis SQL against this table,
    I recommend creating a 32 KB system temporary tablespace so that sorting SQL has room to spill.

$ db2 "create system temporary tablespace jssystemp32 pagesize 32 k managed by database using (file '/Users/db2inst1/JSDB/tbspath1/jssystemp32' 10000) bufferpool buff1_32k"

 

5. Load the export file for each iteration. Note that 'insert' (rather than 'replace') appends each iteration so the history accumulates, which is what the unique (INPUT_TIMESTAMP, EXECUTABLE_ID) key is for.

$ db2 "load from MON_GET_PKG_CACHE_STMT_103000.ixf of ixf insert into JUNSU.MON_GET_PKG_CACHE_STMT nonrecoverable"
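On the delta-value consideration mentioned earlier: the counters in mon_get_pkg_cache_stmt are cumulative from the moment a statement enters the package cache, so the activity within one collection interval is the difference between consecutive INPUT_TIMESTAMP rows for the same EXECUTABLE_ID. A minimal Python sketch of that subtraction (the helper name and sample values are mine, not from the article):

```python
# Sketch of the delta calculation between two collection iterations.
# Rows are keyed by EXECUTABLE_ID; since the counters are cumulative,
# subtracting the previous iteration's value gives the activity
# that happened within the interval.

def compute_deltas(prev_rows, curr_rows, metrics):
    """prev_rows / curr_rows: dict mapping EXECUTABLE_ID -> row dict."""
    deltas = {}
    for exec_id, curr in curr_rows.items():
        prev = prev_rows.get(exec_id)
        if prev is None:
            # Statement is new to the cache: the whole counter is the delta.
            deltas[exec_id] = {m: curr[m] for m in metrics}
        else:
            # Counters restart if a statement is flushed and re-inserted
            # between iterations; guard against negative deltas.
            deltas[exec_id] = {m: max(curr[m] - prev[m], 0) for m in metrics}
    return deltas

# Two toy iterations for one known and one new statement.
prev = {"x01": {"NUM_EXECUTIONS": 100, "TOTAL_ACT_TIME": 5000}}
curr = {"x01": {"NUM_EXECUTIONS": 130, "TOTAL_ACT_TIME": 6200},
        "x02": {"NUM_EXECUTIONS": 10, "TOTAL_ACT_TIME": 800}}

d = compute_deltas(prev, curr, ["NUM_EXECUTIONS", "TOTAL_ACT_TIME"])
```

The same subtraction can of course be done in SQL against the loaded table, joining each INPUT_TIMESTAMP to the previous one per EXECUTABLE_ID.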

 


Then all that remains is to enjoy the analysis with whatever SQL you want to use.

The following is an example from a DB2 system with an I/O subsystem bottleneck.
In this example we can see that most SQL statements spend most of their time waiting on physical I/O, as shown by 'TOTAL_ACT_WAIT_TIME' and 'POOL_READ_TIME'.

select NUM_EXECUTIONS
      ,PREP_TIME
      ,TOTAL_ACT_TIME
      ,TOTAL_ACT_WAIT_TIME
      ,POOL_READ_TIME
      ,POOL_DATA_L_READS
      ,POOL_DATA_P_READS
      ,(CASE WHEN POOL_DATA_L_READS <> 0
             THEN int(decimal(POOL_DATA_L_READS - POOL_DATA_P_READS)/decimal(POOL_DATA_L_READS)*100)
        END) this_sql_data_buffhitf
      ,POOL_INDEX_L_READS
      ,POOL_INDEX_P_READS
      ,(CASE WHEN POOL_INDEX_L_READS <> 0
             THEN int(decimal(POOL_INDEX_L_READS - POOL_INDEX_P_READS)/decimal(POOL_INDEX_L_READS)*100)
        END) this_sql_index_buffhitf
      ,LOCK_WAITS
      ,STMT_TEXT
from junsu.mon_get_pkg_cache_stmt
order by num_executions desc;
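The buffer pool hit ratios computed in the CASE expressions above are simply (logical reads - physical reads) / logical reads * 100, with NULL when there were no logical reads. The same formula in a small Python helper (the function name is mine; it mirrors the SQL CASE, returning None where the SQL yields NULL):

```python
# Buffer pool hit ratio: percentage of logical reads satisfied from the
# buffer pool without a physical read. Truncated to an integer, as the
# int(decimal(...)) expression in the query does.

def bufferpool_hit_ratio(l_reads, p_reads):
    if l_reads == 0:
        return None  # no logical reads: ratio undefined (NULL in SQL)
    return int((l_reads - p_reads) / l_reads * 100)

# 1000 logical reads, 250 of which went to disk -> 75% hit ratio.
ratio = bufferpool_hit_ratio(1000, 250)
```

A low hit ratio on frequently executed statements, combined with high POOL_READ_TIME, points at the kind of I/O bottleneck shown in the example output.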

[Image: sample query output showing statements dominated by wait time]

 

 

These steps are not complex, just basic SQL;

however, I hope they give you a useful hint on a way to maintain SQL performance data for your system.

Thank you for reading this article.

 

 

[{"Business Unit":{"code":"BU029","label":"Data and AI"}, "Product":{"code":"SSEPGG","label":"DB2 for Linux- UNIX and Windows"},"Component":"","Platform":[{"code":"PF025","label":"Platform Independent"}],"Version":"","Edition":""}]

UID

ibm11140040