Optim Query Tuner: Capture and tune DB2 z/OS SQL statements using a user-defined SQL repository

Learn how to create query workloads directly from your own SQL repository, without the overhead of archiving or staging your workload data. Leverage the Query Tuner user interface to define dynamic and static SQL statement filters, and tune the saved workload without implementing your own client. This article describes how you can define database views against your existing SQL repository that stores dynamic or static SQL statements in your environment. The Optim™ Query Tuner user-defined SQL repository can use these views to filter and capture SQL statements directly from your SQL repository for single-query tuning or workload tuning.


Cheung-Yuk Wu (wu@us.ibm.com), Senior Software Engineer, IBM

Cheung-Yuk Wu is a Senior Software Developer on the IBM InfoSphere Optim Query Workload Tuner team. She has over 20 years of experience developing relational database tools for DB2, Oracle, Sybase, Microsoft SQL Server, and Informix on Windows and UNIX platforms. She has developed IBM software products including Data Warehouse Edition SQW Admin Console and Design Studio, DB2 Content Manager and OnDemand, Tivoli for DB2, Data Hub for UNIX, and QMF. She was a DBA for DB2, CICS, and IMS at the IBM San Jose Manufacturing Data Center.



Leif Pedersen (pedersen@dk.ibm.com), Solutions Architect, IBM China

Leif Pedersen is a solutions architect on the Optim Enablement Team at the SVL lab in San Jose, California. Leif has worked with database technology for more than 25 years. One of his main areas of expertise is DB2 for z/OS performance and query optimization. Today, Leif helps customers with Optim products such as Query Workload Tuner, pureQuery, Optim Performance Manager, Optim Configuration Manager, and Optim Query Capture and Replay.



Cliff Leung (cleung@us.ibm.com), Distinguished Engineer, IBM  

Cliff Leung is a Distinguished Engineer in Optim Data Studio at the Silicon Valley Laboratory in San Jose, California. Cliff is the chief architect of the Optim performance solution and is responsible for the Optim Data Studio portfolio strategy and technical direction. Prior to this role, Cliff was the architect for Optim Query Workload Tuner. Cliff has more than 20 years of experience in query compilation, optimization, and performance, both in DB2 for Linux, UNIX, and Windows and in DB2 for z/OS.



09 May 2013

Introduction

Users may monitor their database applications and transactions and store the SQL statements that are detected during certain monitoring intervals in a private or proprietary SQL repository. Several non-IBM® monitoring tools on the market let users capture monitored data and store it in a repository for further analysis. This customized SQL repository can then be archived to the subsystem or data sharing group on which it was captured. The format of this SQL repository might not match a fixed, predefined data source type that InfoSphere® Optim Query Workload Tuner recognizes. IBM Optim Query Workload Tuner for z/OS (IOQWT) offers a generic user-defined data source type that lets users capture and extract the SQL statements from their customized SQL repository for tuning.

This article describes how IOQWT provides a user interface that helps users map the attributes of their customized SQL repository to a template so that the SQL statements, runtime metrics, and accessed objects can be fetched. As a result, users are not restricted to a predefined data source format or criteria when capturing query workloads from their SQL repositories. SQL statements can be freely transferred from a monitored system to another test environment for tuning.

Using this generic user-defined data source type, you can:

  1. Capture both dynamic and static SQL statements from your private SQL repositories with customized filters.
  2. Select a single query from the captured statements for tuning.
  3. Save the captured statements to a workload for tuning.

First, you will learn how to prepare a user-defined SQL repository. A detailed example describes both the set-up and the query capture processes. Then, you'll look at several advanced examples to understand the flexibility and capability of this feature in Query Workload Tuner.


Preparing a user-defined SQL repository

Your SQL repository might have multiple data collection points such as SQL statements, SQL metrics, and database objects. To pull the appropriate information from your SQL repository, there are a few things that you must consider, such as:

  • The SQL repository must reside in one or more DB2 tables.
  • If the information is stored in multiple tables, it is recommended that you create a view to join the necessary tables and to return the columns needed to extract the SQL statements and the associated runtime attributes.
  • You can collect the following runtime metrics:
    • Statement execution count
    • Total CPU time of the statement
    • Total elapsed time of the statement
    • GETPAGES
  • For each statement with unqualified objects, you must provide the default schema value so that the unqualified objects can be resolved during tuning analysis.
  • If you want to filter the SQL statements by database objects, you must store the referenced object information in the repository or create a view that retrieves the referenced object information from the DB2 catalog (a sketch of such a view follows this list). The object information can include:
    • Table name
    • Index name
    • Database name
    • Table space names
  • If you want to capture both dynamic and static statements, it is best to create separate DB2 views for the dynamic and the static statements. Capturing static statements might require access to the relevant DB2 catalog tables, whereas capturing dynamic statements does not.
  • When you define the capture filter criteria, you can choose whether the static statements are extracted from your SQL repository or from the DB2 catalog tables. The latter is recommended, because then you do not need to include the static SQL statement text in your SQL repository.
  • If you choose to extract the static statements from the DB2 catalog tables, your SQL repository must reside in the same DB2 subsystem as the DB2 catalog tables so that IOQWT can look up the corresponding static statement text in the DB2 catalog tables.
  • If you want to aggregate statement runtime metrics or consolidate statements, you can customize your DB2 view to perform the aggregation and consolidation using DB2 built-in functions like SUM, AVG, GROUP BY, and so on.
  • You can manage access to these views by using DB2 authorities and privileges (see the GRANT example after this list).
  • Note: Only the following statement types are captured for this source type:
    • SELECT
    • UPDATE
    • INSERT
    • INSERT INTO
    • MERGE
    • DELETE
    • WITH
    • VALUES
    • DECLARE
    • SET INTEGRITY
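
If you store the objects that each statement references, a simple view can expose them together with the database and table space names from the DB2 catalog, and you can then specify that view for object filtering. The following is a minimal sketch only: WU.STMT_OBJECTS_TABLE and its columns (METRICS_TOKEN, TABLE_CREATOR, TABLE_NAME) are hypothetical stand-ins for the object table in your own repository, and METRICS_TOKEN is the column that lets the object view be joined back to the statement metrics view. Index names could be added from SYSIBM.SYSINDEXES in the same way.

-- Hypothetical object view: WU.STMT_OBJECTS_TABLE is a stand-in for your own object table
CREATE VIEW WU.MY_APP_OBJ_VIEW (
	METRICS_TOKEN, CREATOR, TABLE_NAME, DATABASE_NAME, TABLESPACE_NAME
)
AS
SELECT
	O.METRICS_TOKEN, O.TABLE_CREATOR, O.TABLE_NAME, T.DBNAME, T.TSNAME
FROM
	WU.STMT_OBJECTS_TABLE O,
	SYSIBM.SYSTABLES T
WHERE
	T.CREATOR = O.TABLE_CREATOR
	AND T.NAME = O.TABLE_NAME;

To control access, grant read authority on the repository views only to the IDs that run the tuning client. For example, assuming a hypothetical authorization ID TUNER01:

-- TUNER01 is a hypothetical authorization ID; grant the other repository views in the same way
GRANT SELECT ON WU.MY_APP_OBJ_VIEW TO TUNER01;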

Example of setting up an SQL repository and capturing SQL statements

Say you have monitored and collected the SQL statements in a table called STMT_TEXT_TABLE and the corresponding runtime metrics in a table called STMT_METRICS_TABLE (an illustrative DDL sketch for both tables follows the column descriptions below).

For storage optimization, you can have long SQL statements stored in fragments joined by sequence numbers or you can store unique SQL statements in a separate table. These unique SQL statements may be referenced by multiple monitored intervals.

The metrics table may contain individual or summarized SQL metrics for the monitored time periods.

You may have also collected, for each time interval, the database objects that the captured SQL statements accessed.

STMT_TEXT_TABLE

  1. SQL statement text (SQLTEXT)
  2. Unique SQL statement ID (SQLTEXT_ID)
  3. Sequence number of the SQL statement fragment, if the SQL statement is stored in fragments (SEQ_NUM)
  4. The start timestamp of the monitoring interval of the runtime metrics for the SQL statement (START_TIME)
  5. The end timestamp of the monitoring interval of the runtime metrics for the SQL statements (END_TIME)

STMT_METRICS_TABLE

  1. SQL authorization ID that runs the SQL statement (AUTHID)
  2. Default schema to be used to qualify any unqualified objects referenced in the statement (DEFAULT_SCHEMA)
  3. DB2 subsystem ID where the SQL statement was run (DB2_SUBSYSTEM)
  4. The start timestamp of the monitoring interval of the runtime metrics for the SQL statement (START_TIME)
  5. The end timestamp of the monitoring interval of the runtime metrics for the SQL statements (END_TIME)
  6. Unique ID of the SQL statement (SQLTEXT_ID)
  7. Type of the SQL statement, such as SELECT, INSERT, and so on (STMT_TYPE)
  8. Runtime metrics values collected for the SQL statement during the monitoring interval
    • Unique runtime metrics ID (METRICS_TOKEN)
    • The number of times that the SQL statement was run (EXEC_COUNT)
    • Total CPU time that was required to run the SQL statement the recorded number of times (TOTAL_CPU)
    • Total elapsed time that was required to run the SQL statement the recorded number of times (TOTAL_ELAP_TIME)
    • Total number of GETPAGE operations that were issued (TOTAL_GETPAGES)
  9. DB2 plan and package information for static SQL statements
    • Plan name (PLAN_NAME)
    • Package name (PACKAGE_NAME)
    • Collection ID for the plan or package name (COLLECTION_ID)
    • Package consistency token for the DBRM or DB2 package (CONSISTENCY_TOKEN)
    • Statement number within the DB2 package (STMT_NUM)
    • Section number in the DB2 package where static SQL statement resides (SECT_NUM)

Steps to capture SQL statements from a view

The following steps explain how to capture SQL statements from a view.

  1. Create a view

    Listing 1 gives an example of how to define a simple view to join the SQL statements and SQL metrics in your SQL repository.

    Listing 1. Define a simple view to join the SQL statements and SQL metrics in your SQL repository
    CREATE VIEW WU.MY_APP_STMT_VIEW (
    	DB2_SUBSYSTEM, PLAN_NAME, COLLECTION_ID, PACKAGE_NAME, CONSISTENCY_TOKEN,   
    	AUTHID, DEFAULT_SCHEMA,  
    	METRICS_TOKEN, START_TIME, END_TIME, 
    	EXEC_COUNT, TOTAL_CPU, TOTAL_ELAP_TIME, TOTAL_GETPAGES, 
    	SQLTEXT_ID, SQLTEXT, SEQ_NUM, SECT_NUM, STMT_NUM 
    ) 
    AS
    SELECT   
    	M.DB2_SUBSYSTEM, M.PLAN_NAME, M.COLLECTION_ID, M.PACKAGE_NAME, 
    	M.CONSISTENCY_TOKEN,  
    	M.AUTHID, M.DEFAULT_SCHEMA,  
    	M.METRICS_TOKEN, M.START_TIME, M.END_TIME,  
    	M.EXEC_COUNT, M.TOTAL_CPU, M.TOTAL_ELAP_TIME, M.TOTAL_GETPAGES, 
    	T.SQLTEXT_ID, T.SQLTEXT, T.SEQ_NUM, M.SECT_NUM, M.STMT_NUM 
    FROM 
    	WU.STMT_TEXT_TABLE T,
    	WU.STMT_METRICS_TABLE M  
    WHERE
    	T.SQLTEXT_ID = M.SQLTEXT_ID;
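
    Before you point the tuner at the view, it can be worth a quick check that the view returns rows for the interval you plan to capture. The following is a minimal sketch; the timestamp values are placeholders:

    -- Sanity check against the view from Listing 1; substitute your own interval
    SELECT COUNT(*)
    FROM   WU.MY_APP_STMT_VIEW
    WHERE  START_TIME >= '2013-01-01-00.00.00.000000'
      AND  END_TIME   <= '2013-01-31-23.59.59.999999';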
  2. Launch the Query Tuner workflow assistant

    After you configure the database that contains the SQL repository for tuning, you can click Start Tuning on the connection profile in the Data Source Explorer to launch the Query Tuner workflow assistant.

    Figure 1. Launch Query Tuner workflow assistant
    Launch the Optim Query Tuner workflow assistant Capture tab to filter and capture SQL statements from the user-defined SQL repository.
  3. Select the source type

    The Capture section opens. Select the User-defined SQL Repository source type in the action palette on the left. The Capture SQL from the User-defined SQL Repository view page on the right is displayed. You can click New… in the Filter section to create a new filter before capturing the SQL statements.

    Figure 2. Select user-defined SQL repository source
    From the Optim Query Tuner workflow assistant Capture tab, select the user-defined SQL repository source type to define filter conditions and then capture the SQL statements.
  4. Create a capture filter and specify the SQL repository table or view

    The capture filter wizard is displayed. Specify the name under which the filter is saved; you can edit the filter later. You can also specify the maximum number of statements to return for each capture invocation.

    You must specify the table or view name in your SQL repository where the SQL statements and runtime metrics are stored. If your SQL repository contains the information that is needed to locate the corresponding DB2 plan and package but not the actual statement text, or if you want to get the static SQL statement text from the DB2 catalog tables, you must select the Captured statements are static SQL residing in DB2 catalog check box, and the mapped statement text column must be cast to a HEX string.

    If you want to filter SQL statements by database objects, specify the table or view where the database object information is stored.

    Figure 3. Specify source table or view
    Enter the source table or view name of your SQL repository.

    After you have entered the proper values, click Next to proceed.

  5. Use the capture filter wizard to map columns and filter statements

    The next window guides you through mapping your private SQL repository to the information that IOQWT needs to capture statements. Click the drop-down box in each input field to select from the available columns in the table or view that you specified in the previous window. The IOQWT user-defined SQL repository source type has a template of the information that is required to capture the SQL statements. The mapped columns in the template are used to construct a SELECT statement at run time that captures the SQL statements from the specified source table or view.

    If you want to get a short description for each input field, move your mouse over the field label to view a tooltip.

    Figure 4. Column mapping
    Use the column mapping page to map the columns of your source table or view to the Query Tuner source columns so that a query can be constructed at runtime to extract the matching SQL statements from your source table or view.

    Note: The field Unique metric ID is required if you need to join the statement runtime metrics table or view with the referenced object table or view for database object filtering.

    After you finish mapping the columns, click Next to specify the filter conditions for the SQL statements to be captured.

    For each column that you want to include in the filter, select the appropriate operator by clicking the drop-down box and then enter the value on which you want to filter. For example, for the column DB2_SUBSYSTEM in the metrics table, leave the operator as = and set the value to V91A. Only SQL statements that ran on subsystem V91A will be captured.

    The mapped column names in your SQL repository are displayed in the Description column. You can change the filter operator and value after you save the capture filter.

    Figure 5. Specify filter conditions
    Specify the filter conditions to extract SQL statements from your source table or view.

    The start and end interval timestamp fields have a validator that guides you to the correct format.

    Figure 6. Input value validator
    A validator that checks the input value for correctness.

    Note: All the conditions specified for the Filter by runtime metrics will be AND'ed.
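
    To make the mapping concrete, the following sketch shows the general shape of the query that such a filter produces against the view from Listing 1. It is purely illustrative and is not the exact statement that IOQWT generates; the subsystem value matches the V91A example above, the timestamps are placeholders, and the 1000-row limit stands in for the statement limit that you specified when you created the filter.

    -- Illustrative only: all filter conditions are combined with AND
    SELECT DB2_SUBSYSTEM, AUTHID, DEFAULT_SCHEMA, START_TIME, END_TIME,
           EXEC_COUNT, TOTAL_CPU, TOTAL_ELAP_TIME, TOTAL_GETPAGES,
           SQLTEXT_ID, SQLTEXT, SEQ_NUM
    FROM   WU.MY_APP_STMT_VIEW
    WHERE  DB2_SUBSYSTEM = 'V91A'
      AND  START_TIME >= '2013-01-01-00.00.00.000000'
      AND  END_TIME   <= '2013-01-31-23.59.59.999999'
    FETCH FIRST 1000 ROWS ONLY;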

    Note: If you return to the previous page of the wizard and change the name of a table or view, you must redo the column mapping on this page.

  6. Save the capture filter

    Click on Finish to save the capture filter. The Capture SQL from User-defined SQL Repository view page is displayed with the name of the saved filter showing in the Filter name field.

  7. Capture statements

    Click on Capture to capture the SQL statements that match your filter conditions.

    If you have problems capturing statements, turn on the Query Tuner trace in the Query Tuner Eclipse global preferences, and examine the trace for the capture filter conditions and the SQL statement that was constructed to perform the capture. Verify that the correct table and view names are used and that the filter conditions are as you specified.

  8. Review captured statements

    The captured statements are displayed in the Captured Statements section. You can start analyzing the statements with the collected runtime metrics. You can click on a table column to sort by the selected column.

    For example, you may want to view the statement with the highest total CPU cost. Scroll to the right to find the column STAT_CPU and then click on the column title.

    Figure 7. Captured statement
    Output SQL statements that match the filter conditions.
    Figure 8. Evaluate captured statements and take actions
    Review the captured SQL statements. You can tune a single query or the entire workload.

    You can click on Save All to Workload to save all the captured statements as a query workload so that you can tune them together as a whole.

    Figure 9. Save all queries to a query workload
    Save all the captured SQL statements to a workload, and then tune the workload for recommendations.

    The Manage section is displayed with the saved workload at the top of the Workload list. You can start tuning the workload that contains the SQL statements captured from your private SQL repository.

    Figure 10. Review saved workload
    The saved workload can be viewed in the Query Tuner workflow assistant Manage tab. You can view the workload statements, tune a workload, and view the workload recommendations.

More examples of defining views for SQL capture

If your SQL repository references static SQL statements whose text resides in the DB2 catalog, you can use a view similar to the one in Listing 2 to get the SQL statement text.

If you want to consolidate SQL metrics, you can use a view similar to the one in Listing 3 to aggregate the runtime metrics.

Listing 4 shows how to collect SQL statements from Query Monitor offload tables within the specified collection intervals. Listing 5 shows a view over a table that contains only the SQL statement text and an execution count, supplying placeholder values for the remaining columns. Listing 6 combines static SQL statement text from the DB2 catalog with aggregated runtime metrics.

Listing 2. Define a view that combines the statement metrics in your SQL repository with static SQL statement text from the DB2 catalog
CREATE VIEW WU.MY_APP_STATIC_STMT_VIEW (
	DB2_SUBSYSTEM, PLAN_NAME, COLLECTION_ID, PACKAGE_NAME, CONSISTENCY_TOKEN,  
	AUTHID, DEFAULT_SCHEMA, 
	METRICS_TOKEN, START_TIME, END_TIME, 
	EXEC_COUNT, TOTAL_CPU, TOTAL_ELAP_TIME, TOTAL_GETPAGES,
	SQLTEXT_ID, SQLTEXT, SEQ_NUM, SECT_NUM, STMT_NUM
) 
AS
SELECT
	M.DB2_SUBSYSTEM, M.PLAN_NAME, M.COLLECTION_ID, M.PACKAGE_NAME, 
	M.CONSISTENCY_TOKEN,  
	M.AUTHID, M.DEFAULT_SCHEMA, 
	M.METRICS_TOKEN, M.START_TIME, M.END_TIME, 
	M.EXEC_COUNT, M.TOTAL_CPU, M.TOTAL_ELAP_TIME, M.TOTAL_GETPAGES,
	M.SQLTEXT_ID, HEX( ST.STMT ), ST.SEQNO, M.SECT_NUM, M.STMT_NUM
FROM 
	WU.STMT_METRICS_TABLE M,
	SYSIBM.SYSPACKAGE PK,
	SYSIBM.SYSPACKSTMT ST
WHERE
	PK.LOCATION = ' '
	AND     ST.LOCATION = PK.LOCATION
	AND     PK.COLLID = M.COLLECTION_ID
	AND     ST.COLLID = M.COLLECTION_ID
	AND     PK.NAME = M.PACKAGE_NAME
	AND     ST.NAME = M.PACKAGE_NAME
	AND     HEX( PK.CONTOKEN ) = M.CONSISTENCY_TOKEN
	AND     HEX( ST.CONTOKEN ) = M.CONSISTENCY_TOKEN
	AND     PK.VALID IN ( 'A', 'H', 'Y' )
	AND     PK.OPERATIVE = 'Y'
	AND     ST.EXPLAINABLE IN ( 'Y', ' ' )
	AND     ST.VERSION = PK.VERSION
	AND     ST.QUERYNO > -1
	AND     ST.SECTNO = M.SECT_NUM;
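
When you use a view like this one, metrics rows whose package no longer exists in the catalog, or whose consistency token no longer matches (for example, after the program was re-precompiled and bound again), simply do not match the join and are silently dropped. Before capturing, you might want to check how many such orphaned rows exist. The following minimal sketch checks only the package-level match (collection, package name, and consistency token):

-- Count metrics rows that no longer match a package in the DB2 catalog
SELECT COUNT(*)
FROM   WU.STMT_METRICS_TABLE M
WHERE  NOT EXISTS (
	SELECT 1
	FROM   SYSIBM.SYSPACKAGE PK
	WHERE  PK.LOCATION = ' '
	AND    PK.COLLID   = M.COLLECTION_ID
	AND    PK.NAME     = M.PACKAGE_NAME
	AND    HEX( PK.CONTOKEN ) = M.CONSISTENCY_TOKEN );
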
Listing 3. Define a nested view to aggregate runtime metrics for certain statement types
CREATE VIEW WU.MY_APP_STMT_VIEW_BASE (
	DB2_SUBSYSTEM, PLAN_NAME, COLLECTION_ID, PACKAGE_NAME, CONSISTENCY_TOKEN,  
	AUTHID, DEFAULT_SCHEMA, 
	METRICS_TOKEN, START_TIME, END_TIME, 
	EXEC_COUNT, TOTAL_CPU, TOTAL_ELAP_TIME, TOTAL_GETPAGES,
	SQLTEXT_ID, SQLTEXT, SEQ_NUM, SECT_NUM, STMT_NUM
)
AS
SELECT 
	M.DB2_SUBSYSTEM, M.PLAN_NAME, M.COLLECTION_ID, M.PACKAGE_NAME,
	M.CONSISTENCY_TOKEN,  
	M.AUTHID, M.DEFAULT_SCHEMA, 
	M.METRICS_TOKEN, M.START_TIME, M.END_TIME, 
	CASE
		WHEN M.STMT_TYPE = 'INSERT' THEN M.EXEC_COUNT
		WHEN M.STMT_TYPE = 'SELECT' THEN M.EXEC_COUNT
		WHEN M.STMT_TYPE = 'UPDATE' THEN M.EXEC_COUNT
		ELSE 0
	END,
	M.TOTAL_CPU, M.TOTAL_ELAP_TIME, M.TOTAL_GETPAGES,
	T.SQLTEXT_ID, T.SQLTEXT, T.SEQ_NUM, M.SECT_NUM, M.STMT_NUM
FROM 
	WU.STMT_TEXT_TABLE T,
	WU.STMT_METRICS_TABLE M  
WHERE
	T.SQLTEXT_ID = M.SQLTEXT_ID;

CREATE VIEW WU.MY_APP_STMT_VIEW_AGGREGATED (
	DB2_SUBSYSTEM, PLAN_NAME, COLLECTION_ID, PACKAGE_NAME, CONSISTENCY_TOKEN,  
	AUTHID, DEFAULT_SCHEMA, 
	METRICS_TOKEN, START_TIME, END_TIME, 
	SQLTEXT_ID, SQLTEXT, SEQ_NUM, SECT_NUM, STMT_NUM,
	EXEC_COUNT, TOTAL_CPU, TOTAL_ELAP_TIME, TOTAL_GETPAGES   
) 
AS
SELECT
	DB2_SUBSYSTEM, PLAN_NAME, COLLECTION_ID, PACKAGE_NAME, CONSISTENCY_TOKEN,  
	AUTHID, DEFAULT_SCHEMA, 
	METRICS_TOKEN, START_TIME, END_TIME, 
	SQLTEXT_ID, SQLTEXT, SEQ_NUM, SECT_NUM, STMT_NUM,
	SUM( M.EXEC_COUNT )       AS AGG_EXEC_COUNT,
	SUM( M.TOTAL_CPU )        AS AGG_CPU,
	SUM( M.TOTAL_ELAP_TIME )  AS AGG_ELAP_TIME,
	SUM( M.TOTAL_GETPAGES )   AS AGG_GETPAGES
FROM
	WU.MY_APP_STMT_VIEW_BASE M
GROUP BY 
	DB2_SUBSYSTEM, PLAN_NAME, COLLECTION_ID, PACKAGE_NAME, CONSISTENCY_TOKEN,  
	AUTHID, DEFAULT_SCHEMA, 
	METRICS_TOKEN, START_TIME, END_TIME, 
	SQLTEXT_ID, SQLTEXT, SEQ_NUM, SECT_NUM, STMT_NUM;
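
Once the aggregated view exists, you can also query it directly, for example to spot the statements that accumulated the most CPU before you run a capture. A minimal sketch:

-- Ten statements with the highest accumulated CPU in the aggregated view
SELECT SQLTEXT_ID, EXEC_COUNT, TOTAL_CPU, TOTAL_ELAP_TIME, TOTAL_GETPAGES
FROM   WU.MY_APP_STMT_VIEW_AGGREGATED
ORDER BY TOTAL_CPU DESC
FETCH FIRST 10 ROWS ONLY;
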
Listing 4. Define a view to capture statements from Query Monitor offload tables: SQL text, metrics summary, and monitoring intervals tables
CREATE VIEW QM_STMT_VIEW (
	DB2_SUBSYSTEM, PLAN, COLLECTION, PROGRAM,
	CONSISTENCY_TOKEN, AUTHID, QUALIFIER, METRICS_TOKEN, TEXT_TOKEN, SQLTEXT, SEQNO,
	INTERVAL_START, INTERVAL_END, SECTION, STMT, SQL_CALLS, DB2_CPU,
	DB2_ELAP, GETPAGES
)
AS
SELECT 
	A.DB2_SUBSYSTEM, A.PLAN, A.COLLECTION, A.PROGRAM, 
	A.CONSISTENCY_TOKEN, A.AUTHID, A.QUALIFIER, A.METRICS_TOKEN, A.TEXT_TOKEN, 
	B.SQLTEXT, 0, A.INTERVAL_START, CURRENT TIMESTAMP AS INTERVAL_END_TS, 
	A.SECTION, A.STMT, A.SQL_CALLS, A.DB2_CPU, A.DB2_ELAPSED, A.GETPAGES
FROM 
	MY_SQL_REPOSITORY.CQM31_SUMM_METRICS A,  
	MY_SQL_REPOSITORY.CQM31_SUMM_TEXT B
WHERE 
	A.TEXT_TOKEN = B.TEXT_TOKEN
	AND A.SMFID = B.SMFID
	AND A.CQM_SUBSYSTEM = B.CQM_SUBSYSTEM    
	AND A.INTERVAL_NUMBER = B.INTERVAL_NUMBER
	AND A.INTERVAL_START = B.INTERVAL_START  
	AND A.INTERVAL_NUMBER IN (
		SELECT DISTINCT I.INTERVAL_NUMBER
		FROM
			MY_SQL_REPOSITORY.CQM31_INTERVALS I,
			MY_SQL_REPOSITORY.CQM31_SUMM_METRICS SM
		WHERE
			I.SMFID = SM.SMFID
			AND I.CQM_SUBSYSTEM = SM.CQM_SUBSYSTEM
			AND I.INTERVAL_NUMBER = SM.INTERVAL_NUMBER
			AND I.INTERVAL_START = SM.INTERVAL_START );
Listing 5. Define a view to capture statements from a table that contains only the SQL statement text and the statement execution count
CREATE VIEW WU.myStmtView1
(
   DB2_SUBSYSTEM, START_TIME, END_TIME, PLAN_NAME, COLLECTION_ID, PACKAGE_NAME,  
   CONSISTENCY_TOKEN, STMT_NUM, SECT_NUM, SEQ_NUM, SQLTEXT, SQL_TEXT_LENGTH,
   VERSION,  EXEC_COUNT , TOTAL_ELAP_TIME, TOTAL_CPU, TOTAL_GETPAGES,
   PRIMARY_AUTHID, STMT_TOKEN
) AS
SELECT
        ' '
       ,CURRENT TIMESTAMP
       , CURRENT TIMESTAMP
       ,' '
       ,' '
       ,' '
       ,' '
       ,0
       ,0
       ,0
       ,SQL_STMT_TEXT
       ,LENGTH(SQL_STMT_TEXT)
       ,0
       ,EXEC_COUNT
       ,0
       ,0
       ,0
       ,' '
       ,' '
FROM
        WU.MY_STMTS;
Listing 6. Define a view to capture static SQL statements from the DB2 catalog with aggregated runtime metrics
CREATE VIEW RAYW.STANCAT_VIEW2
 (
    DB2_SUBSYSTEM, START_TIME, END_TIME, PLAN, COLLID, PACKAGE_NAME
  , CONTOKEN, STMTNO, SECTNO, SEQNO, STMTTEXT, VERSION, EXECUTION_COUNT
  , ELAP_TIME, CPU_TIME, GETPAGE, PRIMAUTH, STMT_TOKN, QUAL
 ) AS
 SELECT   
          S.DB2_SUBSYSTEM
        , TIMESTAMP(CHAR(DATE(S.INTERVAL_START))
                 ||' 00:00:00.000000') AS INTERVAL_START
        , TIMESTAMP(CHAR(DATE(S.INTERVAL_END))
                 ||' 23:59:59.999999') AS INTERVAL_END
        , S.PLANNAME, S.COLLID, S.PKG_NAME, S.CONTOKEN
        , S.STMTNO, S.SECTNO, ST.SEQNO
        , HEX(ST.STMT) AS STMT
        , S.VERSION, SUM(S.EXECUTION_COUNT)
        , SUM(S.INDB2_TIME), SUM(S.INDB2_CPU)
        , SUM(S.GETPAGE), 'NO USER' AS PRIMAUTH
        , S.DYN_TEXT_TOKEN, PK.QUALIFIER
 FROM     
          RAYW.STMT_HIST_STANDARD  S
        , RAYW.SYSPACKAGE          PK
        , RAYW.SYSPACKSTMT         ST
 WHERE    
         PK.LOCATION         = ' '
  AND    ST.LOCATION         = PK.LOCATION
  AND    PK.COLLID           = S.COLLID
  AND    ST.COLLID           = S.COLLID
  AND    PK.NAME             = S.PKG_NAME
  AND    ST.NAME             = S.PKG_NAME
  AND    HEX(PK.CONTOKEN)    = S.CONTOKEN
  AND    HEX(ST.CONTOKEN)    = S.CONTOKEN
  AND    PK.VALID IN ('A','H','Y')
  AND    PK.OPERATIVE        = 'Y'
  AND    ST.EXPLAINABLE IN ('Y',' ')
  AND    ST.VERSION          = PK.VERSION
  AND    ST.QUERYNO > -1
  AND    ST.SECTNO           = S.SECTNO
 GROUP BY 
           S.DB2_SUBSYSTEM
         , TIMESTAMP(CHAR(DATE(S.INTERVAL_START))
                  ||' 00:00:00.000000')
         , TIMESTAMP(CHAR(DATE(S.INTERVAL_END))
                  ||' 23:59:59.999999')
         , S.PLANNAME, S.COLLID, S.PKG_NAME
         , S.CONTOKEN, S.STMTNO, S.SECTNO
         , ST.SEQNO, ST.STMT, S.VERSION
         , 'NO USER', S.DYN_TEXT_TOKEN, PK.QUALIFIER
  ;

Conclusion

This article has shown you how to set up your SQL repository so that you can use the generic user-defined data source type to capture both dynamic and static SQL statements from your SQL repositories with customized filters. You can follow the examples to further customize the views that you define against your SQL repository. After you have extracted the SQL statements, you can start tuning by selecting a single query or by saving all of the queries into a workload.


Acknowledgements

We would like to thank Robert Heath for editing this paper and Raymond A. Willoughby for his input to the article.
