Users may monitor their database applications and transactions and store SQL statements detected during certain monitoring intervals into a private or proprietary SQL repository. Several non-IBM® monitoring tools in the market let users capture monitored data and store it in a repository for further analysis. This customized SQL repository can then be archived to the subsystem or data sharing group it was captured on. The format of this SQL repository might not fit into a fixed and predefined data source type that InfoSphere® Optim Query Workload Tuner recognizes. IBM Optim Query Workload Tuner for z/OS (IOQWT) offers a generic user-defined data source type that lets users capture and extract the SQL statements from their customized SQL repository for tuning.
This article describes how IOQWT provides a user interface that helps users map the attributes of their customized SQL repository to a template so that the SQL statements, runtime metrics, and accessed objects can be fetched. Therefore, users are not restricted to a predefined data source format or criteria when capturing query workloads from their SQL repositories. SQL statements can be freely transferred from a monitored system to a test environment for tuning.
Using this generic user-defined data source type, you can:
- Capture both dynamic and static SQL statements from your private SQL repositories with customized filters.
- Select a single query from the captured statements for tuning.
- Save the captured statements to a workload for tuning.
First, you will learn how to prepare a user-defined SQL repository. A detailed example describes both the set-up and the query capture processes. Then, you'll look at several advanced examples to understand the flexibility and capability of this feature in Query Workload Tuner.
Preparing a user-defined SQL repository
Your SQL repository might have multiple data collection points such as SQL statements, SQL metrics, and database objects. To pull the appropriate information from your SQL repository, there are a few things that you must consider, such as:
- The SQL repository must reside in one or more DB2 tables.
- If the information is stored in multiple tables, it is recommended that you create a view to join the necessary tables and to return the columns needed to extract the SQL statements and the associated runtime attributes.
- You can collect the following runtime metrics:
- Statement execution count
- Total CPU time of the statement
- Total elapsed time of the statement
- For each statement with unqualified objects, you must provide the default schema value so that the unqualified objects can be resolved during tuning analysis.
- If you want to filter the SQL statements by database objects, you must store the referenced object information in the repository or create a view that gets the referenced object information from the DB2 catalog. You can filter by the following objects:
- Table name
- Index name
- Database name
- Table space names
- If you want to capture both dynamic and static statements, it is best to create DB2 views for dynamic and static statements. The capture of static statements may need to access the relevant DB2 catalog tables, while the capture of dynamic statements does not.
- When you define the capture filter criteria, you can choose if the static statements should be extracted from your SQL repository or from the DB2 catalog tables. The latter is recommended, and you do not need to include the static SQL statement in your SQL repository.
- If you choose to extract the static statements from the DB2 catalog tables, your SQL repository must reside in the same DB2 subsystem as the DB2 catalog tables so that IOQWT can look up the corresponding static statement text from the DB2 catalog tables.
- If you want to aggregate statement runtime metrics or consolidate statements, you can customize your DB2 view to perform the aggregation and consolidation using DB2 built-in functions like SUM, AVG, GROUP BY, and so on.
- You can manage the access to these views using DB2 authorities and privileges.
- Note: Only the following statement types are captured for this source type:
- INSERT INTO
- SET INTEGRITY
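As an example of the last few points, the following sketch shows a view that exposes referenced-object information for object filtering, plus a GRANT that controls access to it with normal DB2 privileges. The table WU.STMT_OBJECT_TABLE and the authorization ID TUNERGRP are hypothetical names used only for illustration; adjust them to your own repository layout.

```sql
-- Sketch: expose referenced-object information for capture filtering.
-- WU.STMT_OBJECT_TABLE (METRICS_ID, OBJ_SCHEMA, OBJ_NAME) is a
-- hypothetical repository table; SYSIBM.SYSTABLES supplies the
-- database and table space names for each referenced table.
CREATE VIEW WU.MY_APP_OBJ_VIEW
  ( METRICS_ID, TABLE_SCHEMA, TABLE_NAME, DATABASE_NAME, TABLESPACE_NAME ) AS
SELECT O.METRICS_ID, T.CREATOR, T.NAME, T.DBNAME, T.TSNAME
  FROM WU.STMT_OBJECT_TABLE O,
       SYSIBM.SYSTABLES T
 WHERE T.CREATOR = O.OBJ_SCHEMA
   AND T.NAME    = O.OBJ_NAME;

-- Control who can capture from this view with DB2 authorities:
GRANT SELECT ON WU.MY_APP_OBJ_VIEW TO TUNERGRP;
```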
Example of setting up an SQL repository and capturing SQL statements
Say you have monitored and collected the SQL statements in a table called STMT_TEXT_TABLE and the corresponding runtime metrics in a table called STMT_METRICS_TABLE.
For storage optimization, you can have long SQL statements stored in fragments joined by sequence numbers or you can store unique SQL statements in a separate table. These unique SQL statements may be referenced by multiple monitored intervals.
The metrics table may contain individual or summarized SQL metrics for the monitored time periods.
You may have also collected the database objects being accessed for each time interval and referenced SQL statements.
The SQL statement table (STMT_TEXT_TABLE) might contain columns such as:
- SQL statement text (SQLTEXT)
- Unique SQL statement ID (SQLTEXT_ID)
- Sequence number of the SQL statement fragment if the SQL statement is stored in fragments (SEQ_NUM)
The SQL metrics table (STMT_METRICS_TABLE) might contain columns such as:
- The start timestamp of the monitoring interval of the runtime metrics for the SQL statement (START_TIME)
- The end timestamp of the monitoring interval of the runtime metrics for the SQL statement (END_TIME)
- SQL authorization ID that runs the SQL statement (AUTHID)
- Default schema to be used to qualify any unqualified objects referenced in the statement (DEFAULT_SCHEMA)
- DB2 subsystem ID where the SQL statement was run (DB2_SUBSYSTEM)
- Unique ID of the SQL statement (SQLTEXT_ID)
- Type of the SQL statement, such as SELECT, INSERT, and so on (STMT_TYPE)
- Runtime metrics values collected for the SQL statement during the monitoring interval:
  - Unique runtime metrics ID (METRICS_ID)
  - The number of times that the SQL statement was run (EXEC_COUNT)
  - Total CPU time that was required to run the SQL statement for the number of times recorded (TOTAL_CPU)
  - Total elapsed time that was required to run the SQL statement for the number of times recorded (TOTAL_ELAP_TIME)
  - Total number of GETPAGE requests issued (TOTAL_GETPAGES)
- DB2 plan and package information for the static SQL statement type:
  - Plan name (PLAN_NAME)
  - Package name (PACKAGE_NAME)
  - Collection ID for the plan or package name (COLLECTION_ID)
  - Package consistency token for the DBRM or DB2 package (CONSISTENCY_TOKEN)
  - Statement number within the DB2 package (STMT_NUM)
  - Section number in the DB2 package where the static SQL statement resides (SECT_NUM)
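If your repository stores long statements in fragments, you can inspect how a statement reassembles by concatenating its fragments in sequence order. The following sketch assumes a DB2 version that supports the LISTAGG built-in function (DB2 12 for z/OS or later) and the hypothetical WU.STMT_TEXT_TABLE layout described above:

```sql
-- Reassemble fragmented statement text for inspection.
-- Assumes DB2 12 for z/OS or later (LISTAGG support) and the
-- hypothetical WU.STMT_TEXT_TABLE layout described in this article.
SELECT SQLTEXT_ID,
       LISTAGG( SQLTEXT, '' )
         WITHIN GROUP ( ORDER BY SEQ_NUM ) AS FULL_SQLTEXT
  FROM WU.STMT_TEXT_TABLE
 GROUP BY SQLTEXT_ID;
```

Note that IOQWT itself reassembles fragments through the SEQ_NUM column mapping described later; this query is only a way to verify your repository contents by hand.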
Steps to capture SQL statements from a view
The following steps explain how to capture SQL statements from a view.
- Create a view
Listing 1 gives an example of how to define a simple view to join the SQL statements and SQL metrics in your SQL repository.
Listing 1. Define a simple view to join the SQL statements and SQL metrics in your SQL repository
CREATE VIEW WU.MY_APP_STMT_VIEW
  ( DB2_SUBSYSTEM, PLAN_NAME, COLLECTION_ID, PACKAGE_NAME,
    CONSISTENCY_TOKEN, AUTHID, DEFAULT_SCHEMA, METRICS_TOKEN,
    START_TIME, END_TIME, EXEC_COUNT, TOTAL_CPU, TOTAL_ELAP_TIME,
    TOTAL_GETPAGES, SQLTEXT_ID, SQLTEXT, SEQ_NUM, SECT_NUM, STMT_NUM ) AS
SELECT M.DB2_SUBSYSTEM, M.PLAN_NAME, M.COLLECTION_ID, M.PACKAGE_NAME,
       M.CONSISTENCY_TOKEN, M.AUTHID, M.DEFAULT_SCHEMA, M.METRICS_TOKEN,
       M.START_TIME, M.END_TIME, M.EXEC_COUNT, M.TOTAL_CPU,
       M.TOTAL_ELAP_TIME, M.TOTAL_GETPAGES, T.SQLTEXT_ID, T.SQLTEXT,
       T.SEQ_NUM, M.SECT_NUM, M.STMT_NUM
  FROM WU.STMT_TEXT_TABLE T,
       WU.STMT_METRICS_TABLE M
 WHERE T.SQLTEXT_ID = M.SQLTEXT_ID;
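Before moving on to the tool, it can help to run a quick query against the new view to confirm that the join returns what you expect. This is only a sanity check, not a step that IOQWT requires:

```sql
-- Sanity check: confirm the view joins statements to metrics
-- as expected, showing the most CPU-expensive statements first.
SELECT SQLTEXT_ID, EXEC_COUNT, TOTAL_CPU, TOTAL_ELAP_TIME
  FROM WU.MY_APP_STMT_VIEW
 ORDER BY TOTAL_CPU DESC
 FETCH FIRST 10 ROWS ONLY;
```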
- Launching Query Tuner workflow assistant
After configuring the database where the SQL repository resides for tuning, you can click the Start Tuning menu from the Data Source Explorer's connection profile to launch the Query Tuner workflow assistant.
Figure 1. Launch Query Tuner workflow assistant
- Selecting source type
The Capture section opens. Select the User-defined SQL Repository source type in the action palette on the left. The Capture SQL from the User-defined SQL Repository view page on the right is displayed. You can click New… in the Filter section to create a new filter before capturing the SQL statements.
Figure 2. Select user-defined SQL repository source
- Create a capture filter and specify the SQL repository table or view
The capture filter wizard is displayed. Specify a name for the filter so that it can be saved and edited later. You can also specify an upper limit on the number of statements returned by each capture invocation.
You must specify the table or view name of your SQL repository where the SQL statements and runtime metrics are stored. If your SQL repository contains the information needed to locate the corresponding DB2 plan and package but not the actual statement text, or if you want to get the stored static SQL statements from the DB2 catalog tables, you must select the Captured statements are static SQL residing in DB2 catalog checkbox, and the mapped statement text column must be cast to a HEX string.
If you want to filter SQL statements by database objects, specify the table or view where the database object information is stored.
Figure 3. Specify source table or view
After you have entered the proper values, click Next to proceed.
- Using the capture filter wizard to map columns and filter
The next window guides you through mapping your private SQL repository to the information that IOQWT needs to capture statements. Click the drop-down box in each input field to select one of the available columns in the table or view that you specified in the previous window. The IOQWT user-defined SQL repository has a template with the information required to capture the SQL statements. The mapped columns in the template are used to construct a SELECT statement at run time that captures the SQL statements from the specified source table or view.
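For illustration only, the constructed statement is conceptually similar to the following sketch. The exact statement that IOQWT generates is internal and may differ; the filter condition shown assumes the example filter on DB2_SUBSYSTEM used later in this article:

```sql
-- Conceptual sketch of the kind of SELECT built from the column
-- mapping and filter conditions; the actual statement IOQWT
-- generates is internal and may differ.
SELECT DB2_SUBSYSTEM, AUTHID, DEFAULT_SCHEMA, START_TIME, END_TIME,
       SQLTEXT_ID, SQLTEXT, SEQ_NUM,
       EXEC_COUNT, TOTAL_CPU, TOTAL_ELAP_TIME, TOTAL_GETPAGES
  FROM WU.MY_APP_STMT_VIEW
 WHERE DB2_SUBSYSTEM = 'V91A'
 FETCH FIRST 100 ROWS ONLY;  -- the upper limit set in the filter
```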
If you want to get a short description for each input field, move your mouse over the field label to view a tooltip.
Figure 4. Column mapping
Note: The field Unique metric ID is required if you need to join the statement runtime metrics table or view with the referenced object table or view for database object filtering.
After you finish mapping the columns, click Next to specify the filter conditions for the SQL statements to be captured.
For each column that you want to include in the filter, select the appropriate operator by clicking the drop-down box and then enter the value on which you want to filter. For example, for the column DB2_SUBSYSTEM in my METRICS table, I left the operator as = and set the value to V91A. Only SQL statements that ran on subsystem V91A will be captured.
The mapped column names in your SQL repository are displayed in the Description column. You can change the filter operator and value after you save the capture filter.
Figure 5. Specify filter conditions
The start and end interval timestamp fields have a validator to guide you for the correct format.
Figure 6. Input value validator
Note: All the conditions specified for the Filter by runtime metrics will be combined with AND; a statement must satisfy every condition to be captured.
Note: If you return to the previous page of the wizard and change the name of a table or view, you must redo the column mapping on this page.
- Saving the capture filter
Click on Finish to save the capture filter. The Capture SQL from User-defined SQL Repository view page is displayed with the name of the saved filter showing in the Filter name field.
- Capturing statements
Click on Capture to capture the SQL statements that match your filter conditions.
If you have problems capturing statements, turn on the Query Tuner trace in the Query Tuner Eclipse global preferences to inspect the capture filter conditions and the SQL statement constructed to perform the capture. Ensure that the correct table and view names are used and that the filter conditions are as specified.
- Reviewing captured statements
The captured statements are displayed in the Captured Statements section. You can start analyzing the statements with the collected runtime metrics. You can click on a table column to sort by the selected column.
For example, you may want to view the statement with the highest total CPU cost. Scroll to the right to find the column STAT_CPU and then click on the column title.
Figure 7. Captured statement
Figure 8. Evaluate captured statements and take actions
You can click on Save All to Workload to save all the captured statements as a query workload so that you can tune them together as a whole.
Figure 9. Save all queries to a query workload
The Manage section is displayed with the saved workload showing on the top of the Workload list. You can start tuning the workload containing the SQL statements captured from your private SQL repository.
Figure 10. Review saved workload
More examples of defining views for SQL capture
If your SQL repository contains static SQL residing in a DB2 catalog, you can use a view similar to the one in Listing 2 to get the SQL statement text.
If you want to consolidate SQL metrics, you can use a view similar to the one in Listing 3 to aggregate the runtime metrics.
Listing 4 describes how to collect SQL statements within the specified collection intervals.
Listing 2. Define a view to include static SQL statements from DB2 catalog using statement metrics and SQL statement info in your SQL repository
CREATE VIEW WU.MY_APP_STATIC_STMT_VIEW
  ( DB2_SUBSYSTEM, PLAN_NAME, COLLECTION_ID, PACKAGE_NAME,
    CONSISTENCY_TOKEN, AUTHID, DEFAULT_SCHEMA, METRICS_TOKEN,
    START_TIME, END_TIME, EXEC_COUNT, TOTAL_CPU, TOTAL_ELAP_TIME,
    TOTAL_GETPAGES, SQLTEXT_ID, SQLTEXT, SEQ_NUM, SECT_NUM, STMT_NUM ) AS
SELECT M.DB2_SUBSYSTEM, M.PLAN_NAME, M.COLLECTION_ID, M.PACKAGE_NAME,
       M.CONSISTENCY_TOKEN, M.AUTHID, M.DEFAULT_SCHEMA, M.METRICS_TOKEN,
       M.START_TIME, M.END_TIME, M.EXEC_COUNT, M.TOTAL_CPU,
       M.TOTAL_ELAP_TIME, M.TOTAL_GETPAGES, M.SQLTEXT_ID,
       HEX( ST.STMT ), ST.SEQNO, M.SECT_NUM, M.STMT_NUM
  FROM WU.STMT_METRICS_TABLE M,
       SYSIBM.SYSPACKAGE PK,
       SYSIBM.SYSPACKSTMT ST
 WHERE PK.LOCATION = ' '
   AND ST.LOCATION = PK.LOCATION
   AND PK.COLLID = M.COLLECTION_ID
   AND ST.COLLID = M.COLLECTION_ID
   AND PK.NAME = M.PACKAGE_NAME
   AND ST.NAME = M.PACKAGE_NAME
   AND HEX( PK.CONTOKEN ) = M.CONSISTENCY_TOKEN
   AND HEX( ST.CONTOKEN ) = M.CONSISTENCY_TOKEN
   AND PK.VALID IN ( 'A', 'H', 'Y' )
   AND PK.OPERATIVE = 'Y'
   AND ST.EXPLAINABLE IN ( 'Y', ' ' )
   AND ST.VERSION = PK.VERSION
   AND ST.QUERYNO > -1
   AND ST.SECTNO = M.SECT_NUM;
Listing 3. Define a nested view to aggregate runtime metrics for certain statement types
CREATE VIEW WU.MY_APP_STMT_VIEW_BASE
  ( DB2_SUBSYSTEM, PLAN_NAME, COLLECTION_ID, PACKAGE_NAME,
    CONSISTENCY_TOKEN, AUTHID, DEFAULT_SCHEMA, METRICS_TOKEN,
    START_TIME, END_TIME, EXEC_COUNT, TOTAL_CPU, TOTAL_ELAP_TIME,
    TOTAL_GETPAGES, SQLTEXT_ID, SQLTEXT, SEQ_NUM, SECT_NUM, STMT_NUM ) AS
SELECT M.DB2_SUBSYSTEM, M.PLAN_NAME, M.COLLECTION_ID, M.PACKAGE_NAME,
       M.CONSISTENCY_TOKEN, M.AUTHID, M.DEFAULT_SCHEMA, M.METRICS_TOKEN,
       M.START_TIME, M.END_TIME,
       CASE WHEN M.STMT_TYPE IN ( 'INSERT', 'SELECT', 'UPDATE' )
            THEN M.EXEC_COUNT
            ELSE 0
       END,
       M.TOTAL_CPU, M.TOTAL_ELAP_TIME, M.TOTAL_GETPAGES,
       T.SQLTEXT_ID, T.SQLTEXT, T.SEQ_NUM, M.SECT_NUM, M.STMT_NUM
  FROM WU.STMT_TEXT_TABLE T,
       WU.STMT_METRICS_TABLE M
 WHERE T.SQLTEXT_ID = M.SQLTEXT_ID;

CREATE VIEW WU.MY_APP_STMT_VIEW_AGGREGATED
  ( DB2_SUBSYSTEM, PLAN_NAME, COLLECTION_ID, PACKAGE_NAME,
    CONSISTENCY_TOKEN, AUTHID, DEFAULT_SCHEMA, METRICS_TOKEN,
    START_TIME, END_TIME, SQLTEXT_ID, SQLTEXT, SEQ_NUM, SECT_NUM,
    STMT_NUM, EXEC_COUNT, TOTAL_CPU, TOTAL_ELAP_TIME, TOTAL_GETPAGES ) AS
SELECT DB2_SUBSYSTEM, PLAN_NAME, COLLECTION_ID, PACKAGE_NAME,
       CONSISTENCY_TOKEN, AUTHID, DEFAULT_SCHEMA, METRICS_TOKEN,
       START_TIME, END_TIME, SQLTEXT_ID, SQLTEXT, SEQ_NUM, SECT_NUM,
       STMT_NUM,
       SUM( EXEC_COUNT ) AS AGG_EXEC_COUNT,
       SUM( TOTAL_CPU ) AS AGG_CPU,
       SUM( TOTAL_ELAP_TIME ) AS AGG_ELAP_TIME,
       SUM( TOTAL_GETPAGES ) AS AGG_GETPAGE
  FROM WU.MY_APP_STMT_VIEW_BASE
 GROUP BY DB2_SUBSYSTEM, PLAN_NAME, COLLECTION_ID, PACKAGE_NAME,
       CONSISTENCY_TOKEN, AUTHID, DEFAULT_SCHEMA, METRICS_TOKEN,
       START_TIME, END_TIME, SQLTEXT_ID, SQLTEXT, SEQ_NUM, SECT_NUM,
       STMT_NUM;
Listing 4. Define a view to capture statements from Query Monitor offload tables: SQL text, metrics summary, and monitoring intervals tables
CREATE VIEW QM_STMT_VIEW
  ( DB2_SUBSYSTEM, PLAN, COLLECTION, PROGRAM, CONSISTENCY_TOKEN,
    AUTHID, QUALIFIER, METRICS_TOKEN, TEXT_TOKEN, SQLTEXT, SEQNO,
    INTERVAL_START, INTERVAL_END, SECTION, STMT, SQL_CALLS, DB2_CPU,
    DB2_ELAP, GETPAGES ) AS
SELECT A.DB2_SUBSYSTEM, A.PLAN, A.COLLECTION, A.PROGRAM,
       A.CONSISTENCY_TOKEN, A.AUTHID, A.QUALIFIER, A.METRICS_TOKEN,
       A.TEXT_TOKEN, B.SQLTEXT, 0,
       A.INTERVAL_START, CURRENT TIMESTAMP AS INTERVAL_END_TS,
       A.SECTION, A.STMT, A.SQL_CALLS, A.DB2_CPU, A.DB2_ELAPSED,
       A.GETPAGES
  FROM MY_SQL_REPOSITORY.CQM31_SUMM_METRICS A,
       MY_SQL_REPOSITORY.CQM31_SUMM_TEXT B
 WHERE A.TEXT_TOKEN = B.TEXT_TOKEN
   AND A.SMFID = B.SMFID
   AND A.CQM_SUBSYSTEM = B.CQM_SUBSYSTEM
   AND A.INTERVAL_NUMBER = B.INTERVAL_NUMBER
   AND A.INTERVAL_START = B.INTERVAL_START
   AND A.INTERVAL_NUMBER IN
       ( SELECT DISTINCT I.INTERVAL_NUMBER
           FROM MY_SQL_REPOSITORY.CQM31_INTERVALS I,
                MY_SQL_REPOSITORY.CQM31_SUMM_METRICS M
          WHERE I.SMFID = M.SMFID
            AND I.CQM_SUBSYSTEM = M.CQM_SUBSYSTEM
            AND I.INTERVAL_NUMBER = M.INTERVAL_NUMBER
            AND I.INTERVAL_START = M.INTERVAL_START );
Listing 5. Define a view to capture statements from a table containing only SQL statement text and statement execution count
CREATE VIEW WU.myStmtView1
  ( DB2_SUBSYSTEM, START_TIME, END_TIME, PLAN_NAME, COLLECTION_ID,
    PACKAGE_NAME, CONSISTENCY_TOKEN, STMT_NUM, SECT_NUM, SEQ_NUM,
    SQLTEXT, SQL_TEXT_LENGTH, VERSION, EXEC_COUNT, TOTAL_ELAP_TIME,
    TOTAL_CPU, TOTAL_GETPAGES, PRIMARY_AUTHID, STMT_TOKEN ) AS
SELECT ' ', CURRENT TIMESTAMP, CURRENT TIMESTAMP, ' ', ' ', ' ', ' ',
       0, 0, 0,
       SQL_STMT_TEXT, LENGTH( SQL_STMT_TEXT ), 0,
       EXEC_COUNT, 0, 0, 0, ' ', ' '
  FROM WU.MY_STMTS;
Listing 6. Define a view to capture static SQL statements from the DB2 catalog with aggregated runtime metrics
CREATE VIEW RAYW.STANCAT_VIEW2
  ( DB2_SUBSYSTEM, START_TIME, END_TIME, PLAN, COLLID, PACKAGE_NAME,
    CONTOKEN, STMTNO, SECTNO, SEQNO, STMTTEXT, VERSION,
    EXECUTION_COUNT, ELAP_TIME, CPU_TIME, GETPAGE, PRIMAUTH,
    STMT_TOKN, QUAL ) AS
SELECT S.DB2_SUBSYSTEM,
       TIMESTAMP( CHAR( DATE( S.INTERVAL_START ) ) || ' 00:00:00.000000' )
         AS INTERVAL_START,
       TIMESTAMP( CHAR( DATE( S.INTERVAL_END ) ) || ' 23:59:59.999999' )
         AS INTERVAL_END,
       S.PLANNAME, S.COLLID, S.PKG_NAME, S.CONTOKEN,
       S.STMTNO, S.SECTNO, ST.SEQNO,
       HEX( ST.STMT ) AS STMT,
       S.VERSION, SUM( S.EXECUTION_COUNT ),
       SUM( S.INDB2_TIME ), SUM( S.INDB2_CPU ),
       SUM( S.GETPAGE ), 'NO USER' AS PRIMAUTH,
       S.DYN_TEXT_TOKEN, PK.QUALIFIER
  FROM RAYW.STMT_HIST_STANDARD S,
       RAYW.SYSPACKAGE PK,
       RAYW.SYSPACKSTMT ST
 WHERE PK.LOCATION = ' '
   AND ST.LOCATION = PK.LOCATION
   AND PK.COLLID = S.COLLID
   AND ST.COLLID = S.COLLID
   AND PK.NAME = S.PKG_NAME
   AND ST.NAME = S.PKG_NAME
   AND HEX( PK.CONTOKEN ) = S.CONTOKEN
   AND HEX( ST.CONTOKEN ) = S.CONTOKEN
   AND PK.VALID IN ( 'A', 'H', 'Y' )
   AND PK.OPERATIVE = 'Y'
   AND ST.EXPLAINABLE IN ( 'Y', ' ' )
   AND ST.VERSION = PK.VERSION
   AND ST.QUERYNO > -1
   AND ST.SECTNO = S.SECTNO
 GROUP BY S.DB2_SUBSYSTEM,
       TIMESTAMP( CHAR( DATE( S.INTERVAL_START ) ) || ' 00:00:00.000000' ),
       TIMESTAMP( CHAR( DATE( S.INTERVAL_END ) ) || ' 23:59:59.999999' ),
       S.PLANNAME, S.COLLID, S.PKG_NAME,
       S.CONTOKEN, S.STMTNO, S.SECTNO,
       ST.SEQNO, ST.STMT, S.VERSION,
       'NO USER', S.DYN_TEXT_TOKEN, PK.QUALIFIER;
This article has shown you how to set up your SQL repository so that you can use the generic user-defined data source type to capture both dynamic and static SQL statements from your SQL repositories with customized filters. You can follow the examples for further customization when defining the views against your SQL repository. After you have extracted the SQL statements, you can start tuning by selecting a single query or by saving all the queries into a workload.
We would like to thank Robert Heath for editing this paper and Raymond A. Willoughby for his input to the article.
- Learn more about Optim Query Workload Tuner from the Information Center.
- Learn more about OQWT 3.2 system requirements.
- Review the IBM InfoSphere Optim Query Workload Tuner, Version 3.2 release notes.