In Part 2, a methodology for single-query tuning was introduced. With the support of a query tuning tool, such as IBM InfoSphere Optim Query Workload Tuner (IOQWT), application developers or database administrators can analyze the access path of a single query and improve the performance by collecting additional statistics, rewriting the query, or changing the database design.
The purpose of workload performance tuning is to ensure that the application is meeting service-level agreements and to ensure optimal total cost of ownership (TCO) for the system. This article will provide a methodology for workload tuning using IBM InfoSphere Optim Query Workload Tuner (IOQWT).
Single-query tuning concerns the performance of a specific query, whereas workload tuning focuses on the performance of all queries in the workload. Regardless of whether workload tuning or single-query tuning is undertaken, the goal is the same: to improve performance. Workload tuning has numerous advantages over single-query tuning:
- Improving the performance of all queries will reduce TCO and increase the chances of meeting business service-level requirements. However, an application may consist of thousands of queries or more, and it is not practical to perform single-query tuning for each query.
- There could be significant duplication of effort spent on determining what statistics will benefit individual queries and subsequent collection of those statistics. As a DBA, a consolidated RUNSTATS recommendation is beneficial to avoid repeated RUNSTATS execution.
- Query tuning may identify the need for additional indices or changes to existing indices. Analyzing queries in isolation does not account for the effect of index changes to other queries and may result in too many indices, which affects data maintenance and management.
- Identification and collection of additional statistics for a single query may improve that query while creating an imbalance for others. In customer workloads, two estimation errors often cancel each other out (two wrongs making a right), so correcting one estimation error may expose other errors for queries that were not analyzed.
Despite the stated drawbacks to single query tuning, it does allow intense focus on improving performance for the most important queries. When analyzing from the perspective of an entire workload, not every query may share the same degree of importance to the business.
It is possible to assign a different weight to each query based upon criteria such as execution count, accumulated elapsed or CPU time, and average elapsed or CPU time. An alternative may be to capture some top consuming SQLs for tuning, ensuring that the sample size is large enough to overcome the limitations associated with single-query tuning.
Whatever criteria are employed, a general tuning methodology contains at least these four logical steps:
- Identify the sample workload to be tuned.
- Tune the workload.
- Review recommendations and apply them.
- Validate and compare before and after.
Steps 2-4 make up a tuning cycle that may be executed iteratively. Each step is covered in this article. The next section introduces some best practices for capturing and tuning a workload using IOQWT.
For the purposes of this article, a workload consists of all identified SQL statements of the application, the corresponding execution data, and the EXPLAIN information. The execution data contains the elapsed time, CPU time, query execution count, and so on. The EXPLAIN information refers to the access path used by DB2 to execute the SQL: for example, the join sequence, join method, and access method.
Users typically capture a dynamic SQL workload from the dynamic statement cache, and a static SQL workload from the DB2 catalog. DB2 temporarily stores dynamic SQL statements in the dynamic statement cache so that repeat executions of the same SQL can avoid a new prepare. When DB2 finds an identical SQL instance in the cache, it does not regenerate the execution plan for the SQL but reuses the cached plan. Users can enable the trace for the dynamic statement cache to allow DB2 to store the execution data. Following is an example using IOQWT.
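Outside the tool, the trace that lets DB2 record execution statistics for cached statements can also be started manually with a DB2 command. The following is a commonly documented form (IFCID 318 enables statement cache statistics; class 30 is a user-defined trace class, and your site's trace configuration may differ):

```
-START TRACE(PERFM) CLASS(30) IFCID(318)
```

Note that stopping this trace typically resets the collected statistics, so it is normally left active for the whole capture window.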
A database administrator launches IOQWT and clicks the Start Tuning menu in Data Source Explorer, as shown in Figure 1.
Figure 1. Start IOQWT for workload capturing and tuning
The Query Tuner Workflow Assistant will be opened and the user can start capturing SQLs from many sources, such as Statement Cache, Catalog Plan or Package, QMF, etc. This example is from the statement cache.
Figure 2. Capture SQLs from Statement Cache
The above example demonstrates an application running in WebSphere Application Server and using JDBC to connect to DB2 9 for z/OS®. Suppose the user wants to find all queries in the system that consume more than 1 second of CPU time and analyze them further. First, the user selects the source for the workload (on the left); Statement Cache is circled and highlighted as the chosen source. Next, the user selects the Accumulated CPU filter, which captures all SQL statements with accumulated CPU time larger than 1 second and sorts the result by CPU time as well. This example uses a previously saved filter; on first use, a new filter must be created. Setting the filter criteria is covered in an upcoming example. Directly below the filter in the screenshot, the user clicks the button to enable collection of statistics, which enables the trace of the dynamic statement cache. Finally, the user clicks Capture Now to capture all statements executed in the next hour that have an accumulated CPU time larger than 1 second.
Figure 3. Captured statements listed in a table
From the captured list, the user selects the top four statements from the application to begin analysis on those statements with the greatest impact on application performance.
In another example, a DBA is working on a banking application project to support personal online electronic transactions. The DBA finds that some queries in the application are not performing efficiently and the response time is unacceptable. Using other monitoring tools, the DBA has already determined that those queries all come from a specific user and that the monitor data shows high I/O time. The DBA defines a filter specifying that the primary authorization ID (PRIMAUTH) is USER1 and the average getpages (AVG_STAT_GPAG) is larger than the value displayed in the monitor tool, say 10000. The result is sorted by average elapsed time, and the user can start reviewing and tuning further.
Figure 4. Define the filtering conditions
Figure 5. Choose the sort columns
In the dynamic statement cache, it is possible to set the STMT_TOKEN in your application so that you can differentiate the statements coming from that particular application. In Java™ technology, for example, the client can use a JDBC API to set the STMT_TOKEN, such as: ((DB2Connection) conn).setDB2ClientProgramId("Your application identifier");. Other programming languages may have similar APIs to set this application identifier. This can be useful to avoid missing dynamic statements or capturing redundant dynamic statements. The drawback is that using this API requires a code change.
Many applications use static SQL stored in the DB2 catalog. Static SQLs are already bound in the system, and the corresponding access path is generated during bind/rebind. When capturing SQL statements from a plan or package, the result may contain SQL text and EXPLAIN data (if the bind options contain EXPLAIN YES, DB2 will store the EXPLAIN data in PLAN_TABLE and other explain tables). Similar to capturing from the dynamic statement cache, IOQWT also provides convenient filters to help users capture the desired queries for static packages.
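For reference, the text of static SQL statements lives in the DB2 catalog, so the kind of lookup the tool performs can be sketched with an ordinary catalog query. The collection and package names below are placeholders, and long statements span multiple rows that are reassembled in SEQNO order:

```sql
SELECT COLLID, NAME, STMTNO, SEQNO, STMT
  FROM SYSIBM.SYSPACKSTMT
 WHERE COLLID = 'MYCOLL'
   AND NAME   = 'MYPACK'
 ORDER BY STMTNO, SEQNO;
```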
In this scenario, the user's application has recently been deployed in the production system. Before the application goes online, the user wants to check all static SQL statements in a specific package and find those whose access plan is likely suboptimal because of suspicious access path patterns, such as a table space scan, non-matching index scan, or sort.
Figure 6. Define the filter for capturing from catalog
Figure 7. Define the filtering conditions
Figure 8. Define the criteria of access pattern
The above scenario is typical for capturing and finding suboptimal queries from plans/packages. However, users often want to use this feature to focus their attention on all static SQL statements related to a single table to find tuning opportunities, such as indexing specific to that table. There are two slightly different scenarios: one is to capture all packages dependent on a specific table; the other is to capture only the static SQL statements referencing a specific table, rather than all SQL from any package that has a dependency on that table. Users who want to focus on the entire application can enter the criteria as shown below.
Figure 9. Define filter by objects that packages reference
The filter will capture all packages that depend on an object, such as a table, view, or stored procedure. All static SQL statements in those packages will be fetched, even those not directly related to the object. If the user wants to capture only the related static SQL statements, the packages must be bound with EXPLAIN YES, and the filter must be defined on another page that allows filtering by cost or object.
Figure 10. Define filter by objects that SQL statements reference
IOQWT will go through the EXPLAIN data for all static SQLs, and if the access plan of one SQL contains access to this specific table, the SQL will be fetched into the capturing result.
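The two capture scopes correspond roughly to a catalog dependency lookup versus a scan of the EXPLAIN data. For the package-level case, the dependency information is recorded in SYSIBM.SYSPACKDEP; here is a sketch of the lookup, with placeholder qualifier and table names:

```sql
-- Packages that depend on table MYSCHEMA.MYTABLE (BTYPE 'T' = table)
SELECT DISTINCT DCOLLID, DNAME
  FROM SYSIBM.SYSPACKDEP
 WHERE BQUALIFIER = 'MYSCHEMA'
   AND BNAME      = 'MYTABLE'
   AND BTYPE      = 'T';
```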
After the workload is captured, the second step is to attempt to tune the workload. IOQWT provides several workload advisers, such as statistics adviser, index adviser, query adviser, and access path adviser. These analyze the SQL statements, EXPLAIN data, and runtime data, then provide recommendations based on the expert knowledge built into the tool. For example, the statistics adviser may provide recommendations to collect missing or conflicting statistics, such as table cardinality or column cardinality, which are used by the DB2 optimizer to generate an efficient access plan. Before running the advisers against a workload, the user must ensure that the EXPLAIN data is available. If the workload was captured from the dynamic statement cache, or from a plan/package that was not bound with the bind option EXPLAIN YES, users have to explain the workload first.
It should be noted that issuing EXPLAIN STMTCACHE for queries in the dynamic statement cache will externalize the existing access plan information; that is, the EXPLAIN for the dynamic statement cache extracts the existing access plan. For static SQL, issuing EXPLAIN PLAN or EXPLAIN ALL generates a new access plan, which may not be the plan saved in the catalog. DB2 10 for z/OS adds the EXPLAIN PACKAGE statement, which extracts the existing access plan information if the package was bound on DB2 9 or later. Either approach populates the PLAN_TABLE to enable further access path analysis.
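As a sketch of the three variants (the statement ID, query number, collection, and package names below are illustrative, not taken from the examples in this article):

```sql
-- Externalize the EXISTING cached access plan for one cached statement
EXPLAIN STMTCACHE STMTID 1234;

-- Generate a NEW access plan, which may differ from the one bound in the catalog
EXPLAIN PLAN SET QUERYNO = 10 FOR
  SELECT * FROM MYSCHEMA.MYTABLE WHERE C1 = ?;

-- Extract the EXISTING bound access plan into PLAN_TABLE (DB2 10)
EXPLAIN PACKAGE COLLECTION MYCOLL PACKAGE MYPACK;
```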
Figure 11. EXPLAIN options for keeping explain data efficiently
If only some of the statements are missing EXPLAIN data, users can choose to explain just those statements or to proceed with analysis without them. If a workload has been explained and the user wants to re-explain due to a statistics change, the user can choose to keep, replace, or consolidate the EXPLAIN data. The above screenshot highlights these options.
Before running the advisers, the user should pay attention to the sequence in which the advisers are executed. Generally, statistics adviser should run first to ensure that the DB2 optimizer has sufficient information for its access plan selection; almost half, if not more, of SQL performance problems are caused by insufficient statistics. Query adviser is suggested to run second because query revision, especially predicate revision, may significantly affect the index selection and the options available to the optimizer. For example, a stage 2 predicate rewritten to an indexable predicate could then utilize an existing index. The remaining two advisers can be run after the statistics and query advisers. However, running the advisers can be an iterative process, as creating a new index may require statistics adviser to be rerun to identify and collect missing index statistics. Otherwise, DB2 may have misleading information about the new index due to missing statistics. A typical example is a missing cluster ratio for an index, which can cause a wrong estimate of the random I/O cost of accessing data pages via the index.
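To illustrate the kind of predicate revision query adviser targets, here is the classic rewrite of a stage 2 predicate into an indexable range predicate (the table and column names are hypothetical):

```sql
-- Stage 2: the function applied to the column prevents matching index access
SELECT * FROM MYSCHEMA.ORDERS
 WHERE YEAR(ORDER_DATE) = 2011;

-- Indexable: the same rows expressed as a range on the bare column,
-- so an index on ORDER_DATE can now be matched
SELECT * FROM MYSCHEMA.ORDERS
 WHERE ORDER_DATE BETWEEN '2011-01-01' AND '2011-12-31';
```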
Below is an example of a workload captured for tuning. The user captures the application SQL statements from the dynamic statement cache and clicks Save All to Workload to create the workload.
Figure 12. The captured workload with some high-cost statements
The second step is to explain the workload and invoke statistics adviser.
Figure 13. Recommendations from statistics adviser
Statistics adviser shows that 11 tables in this workload have potential statistics issues, and RUNSTATS statements are provided. Users who want to see the detailed reason for recommending specific statistics can check the detailed report below.
Figure 14. Detailed report of statistics adviser
If users have questions about the recommendations for a specific table or column, they can easily search for the names in the report. The detailed reasons are outlined in the report, as shown above. The major reasons are typically missing, conflicting, or obsolete statistics. If statistics adviser finds potential data skew or correlation, it suggests collecting distribution statistics for the interesting columns or column groups.
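A recommendation for skew or correlation typically takes the form of a RUNSTATS with distribution statistics on a column group. The utility statement below is a sketch; the database, table space, table, and column names are placeholders, not actual adviser output:

```
RUNSTATS TABLESPACE MYDB.MYTS
  TABLE(MYSCHEMA.MYTABLE) COLUMN(ALL)
  COLGROUP(STATUS_CD, REGION_CD) FREQVAL COUNT 10
  INDEX(ALL)
```

FREQVAL COUNT 10 collects the ten most frequent values of the (STATUS_CD, REGION_CD) group, which lets the optimizer see both the skew and the correlation across the two columns.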
The third step is to collect the statistics at a convenient time, given other work on the system. After collecting statistics, the user captures the application again using the method in the first step to validate the potential improvement. In the screenshot below, the workload DW3_B is captured before the user applies the statistics adviser recommendations, and the workload DW3_Sn is captured after the user executes the RUNSTATS job. We can see the elapsed time has decreased from 471.20s to 383.40s, and the CPU time has decreased from 127.21s to 93.17s.
Figure 15. Comparison of tuning result by statistics adviser
The fourth step is to re-explain the workload and run index adviser to check if any index can be recommended.
Figure 16. Recommendations from index adviser
The user can see the estimated performance improvement and the estimated disk space consumption from creating the recommended indices. Some workload characteristics, such as table cardinality, cumulative total cost, and table reference count, are shown to help review the index recommendations. Below is the panel of index recommendations.
Figure 17. Deep-dive into recommendation details
When users review each index recommendation, they can examine the affected statements and the reasons given to validate that the recommendation is reasonable. Additionally, users can choose to focus on important tables for review by sorting the tables as highlighted below.
Figure 18. Choose recommendations by table importance
The user can choose to sort the tables by cumulative total cost, which is the sum of the total cost of the statements that refer to the table. Those tables are often the best candidates for additional indexes to improve overall performance. Given that one focus of this article has been to identify ways to reduce the scope of the tuning effort, targeting queries by object in this manner is another way of achieving that goal. In the above example, the user selects only three tables, as they account for the largest percentage of the total cost in the workload.
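Applying an accepted index recommendation ultimately comes down to ordinary DDL. Here is a sketch, with hypothetical names and key columns rather than the adviser's actual output:

```sql
CREATE INDEX MYSCHEMA.IX_ORDERS_CUST
  ON MYSCHEMA.ORDERS (CUSTOMER_ID, ORDER_DATE);
```

Remember that a newly created index has no statistics yet, which is why statistics adviser is run again afterward.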
The fifth step is to re-explain the workload and invoke statistics adviser again. The purpose is to collect statistics for newly added indices. The user can see the RUNSTATS for the new indices.
Figure 19. Collect statistics for newly added indices
The final step is to validate that the application performance is improved. The newly captured runtime data below shows that the performance was improved significantly after creating the new indices.
Figure 20. Comparison of result from index adviser
The newest captured workload is DW3_In after the user creates all recommended indices and collects statistics for new indices. The elapsed time is reduced from 383.40s to 8.99s, and the CPU time is reduced from 93.17s to 8.13s.
For a database administrator, one of the biggest challenges is managing the access paths of thousands of queries to maintain stable application performance. Access path regression is one of the most common causes of application performance degradation. There are various reasons for access path changes, such as:
- DB2 release migration or new maintenance deployment.
- Statistics updates to reflect a change in data volume.
- Hardware upgrade.
- System parameter changes.
- Physical database design change.
In the vast majority of cases, provided that complete and accurate statistics are available, the DB2 optimizer can produce an identical or better access path for each new bind/rebind. However, there are still chances for an access path to regress.
This article has focused on proactive analysis for improving application performance, providing optimal indices and statistics to give the DB2 optimizer a stable foundation for its access path selection. These steps will also likely reduce the regressions that result when statistics or indexing are not optimal.
Not all DBAs will have the time or skill to perform this proactive analysis, and even when they do, an access path regression is still possible. Therefore, it is beneficial to understand what other solutions are available to reduce exposure to access path regression, or to perform impact analysis and quickly restore the prior good performance.
DB2 9 and 10 for z/OS offer a set of useful features to help stabilize access paths across BIND and REBIND. DB2 9 for z/OS provided the basic plan management feature to allow backup and recovery (switch) of static access paths across REBIND. DB2 10 extended this further to support reuse of the prior access path across BIND/REBIND with the APREUSE option.
Database administrators can utilize these features and IOQWT solutions to manage access paths and achieve stable application performance.
To perform an application performance risk assessment, you should be able to predict what access path changes might happen after a significant environment change. IOQWT provides a solution to compare access paths at the package level or workload level to identify changes across thousands of queries.
To manage the risk of performance regression after REBIND, it is suggested to make a risk assessment before REBIND. You can perform the following tasks to do the assessment:
- Keep the current access path information of the packages of interest in the EXPLAIN tables.
- BIND the packages of interest into a different collection ID and populate the new access path information into the EXPLAIN tables.
- Compare the current access paths and the access paths generated by the new BIND operation and identify the access path changes.
- Perform analysis on the access path changes and perform tuning on the regressed access paths.
The subsequent sections use an example to show how to perform access path comparison for a sample package, OQTPACK4, which contains five queries bound under collection ID "PROD."
It is a good practice to specify EXPLAIN(YES) on BIND/REBIND so the access path information is kept in the EXPLAIN tables. In DB2 10 for z/OS, the EXPLAIN PACKAGE statement can be executed to extract the access path information into the PLAN_TABLE if you did not originally save the explain information with EXPLAIN(YES). However, the package must have been bound on DB2 9 or DB2 10 for z/OS, and only the PLAN_TABLE information is saved, not the other detailed explain table data.
For example, the access path information will be populated by the following statement: EXPLAIN PACKAGE COLLECTION PROD PACKAGE OQTPACK4.
You can get the access path information after REBIND by binding the package under a different collection ID. For example, you can bind the package into a collection ID "TEST": BIND PACKAGE(TEST.OQTPACK4) MEMBER(OQTPROG4).
IOQWT offers a package-level access path information comparison function. You can start the function from a workload tuning project.
Figure 21. Start workload comparison in IOQWT
IOQWT will create a workload for comparison.
Figure 22. Create workload for comparison
With the input of the source collection ID and target collection ID, IOQWT will generate a report for the access path comparison result.
Figure 23. View comparison results
The queries with access path changes or estimated cost changes will be highlighted in the report. You should review these access path changes even if the report shows that the cost of the access path generated by the new REBIND is lower than the prior access path, because the estimated cost of the new access path may be computed from inaccurate statistics. In IOQWT, you can check the access path of a query in detail.
Figure 24. Compare the details of access plan
If no outstanding access path changes are found, you can safely proceed with the REBIND to the collection PROD. If any access path regression is identified, you can perform individual query tuning, or lock down or fall back the regressed query with plan management functions. Refer to the sections "Lock down access paths with DB2 10 plan management" and "Fallback with DB2 9 classic plan management" for how to prevent access path regression.
Access path comparison of dynamic queries requires more preparation than package-level comparison for static queries, since the access paths of dynamic queries are not stored in the DB2 catalog. You can follow the process below to prepare for the access path comparison of dynamic queries:
- Create a workload for dynamic queries.
- EXPLAIN the workload before undertaking any activities (RUNSTATS, creating new indices, applying PTF, etc.) that may trigger access path change.
- After activities that may trigger access path change are done, EXPLAIN the workload again.
- Compare the latest access paths of this workload and the prior access paths.
- Perform analysis on the access path changes and perform tuning on the regressed access paths.
In general, after creating the workload for dynamic queries, the subsequent process to compare the actual access paths at the workload level in IOQWT is similar to the process for package-level comparison. For instance, you can create a workload from the dynamic statement cache at one point in time, then compare it with the EXPLAIN result generated at a different time.
Figure 25. Compare EXPLAIN snapshots
The DB2 for z/OS plan management features discussed below do not apply to dynamic queries. Therefore, there is an increased likelihood that single-query tuning, or the use of optimization hints to lock down access paths, will be required to overcome regressions for dynamic queries.
Given the business requirement for application stability, a customer may not want to take any risk of an access path change across a static BIND or REBIND.
DB2 10 for z/OS offers a new REBIND (and BIND) option, APREUSE. Under the APREUSE(ERROR) option, you accept REBINDs only when all access paths for a package can be reused. This is the most cautious approach for preventing access path changes with REBIND. For example, you can REBIND the package PROD.OQTPACK4 as below if you want to reuse the access paths from the previous bind: REBIND PACKAGE(PROD.OQTPACK4) APREUSE(ERROR).
It should be noted that while APREUSE can reduce your exposure to access path regressions, it also limits the opportunity for access path improvement. However, this is a viable approach when the customer is not yet willing to explore opportunities for access path improvement. These opportunities may be deferred to a time when more proactive analysis is possible.
For the first BIND/REBIND in DB2 10, APREUSE is only available if the package was last bound on DB2 9. After the first BIND/REBIND is performed on DB2 10, subsequent BIND/REBINDs may use APREUSE.
When access path regression does occur, it is good to know that you may have a backup you can restore quickly, especially if there is insufficient time to perform a deep analysis. In such a scenario, you can use the DB2 9 plan management feature to fall back quickly to the previous good performance, provided a new REBIND can be executed successfully against the package.
To enable the ability to fall back with plan management, you must specify the PLANMGMT(BASIC) or PLANMGMT(EXTENDED) bind option, or set the PLANMGMT subsystem parameter (ZPARM) to BASIC or EXTENDED. In DB2 9, the default for PLANMGMT is OFF, but in DB2 10 the default is EXTENDED. For example, if the previous BIND/REBIND was done with PLANMGMT(BASIC), you can fall back the runtime structures to the previous BIND/REBIND with the following REBIND: REBIND PACKAGE(PROD.OQTPACK4) SWITCH(PREVIOUS).
This plan management switch capability is available across a DB2 8 to 9 migration, 8 to 10, or 9 to 10 migration.
This article has highlighted the benefits of workload tuning and how IBM InfoSphere Optim Query Workload Tuner can simplify and automate many of the steps involved, helping to improve the total cost of ownership of your applications.
- Read "Tuning SQL with Optim Query Tuner, Part 1: Understanding access paths" for details about understanding access paths.
- "Tuning SQL with Optim Query Tuner, Part 2: Tuning individual queries" can help you learn how to tune an individual query.
- Watch the demo titled "Improve performance and reduce costs for DB2 for Linux, UNIX, and Windows with IBM InfoSphere Optim" to see how one fictional company uses Optim solutions to resolve problems before they affect the business, including use of Optim Query Tuner for DB2 for Linux, UNIX, and Windows.
- Go to the Optim Query Workload Tuner for DB2 for z/OS or the Optim Query Tuner for DB2 for Linux, UNIX, and Windows product page for more information, including how to purchase the product.
- See a detailed introduction about how to launch the various functions in OQT in the Information Center.
- Read the DB2 for Linux, UNIX, and Windows product information on predicate types in the DB2 Information Center.
- Read the DB2 for z/OS product information on nested loop joins in the DB2 for z/OS Information Center.
- Read the DB2 for z/OS product information on merge scan joins in the DB2 for z/OS Information Center.
- Read the DB2 for z/OS product information on stage 1 and stage 2 predicates in the DB2 for z/OS Information Center.
- Read the "Best practices when using Data Studio and Optim Development Studio with DB2 for z/OS" article, which contains information to help you configure Data Studio for query tuning on DB2 for z/OS.
- Explore the developerWorks Optim family page to learn more about Optim solutions. Find technical documentation, how-to articles, education, downloads, product information, and more.
- Stay current with developerWorks technical events and webcasts. See this page for a schedule of virtual technical briefings around the Optim integrated data management portfolio.
- Learn more about Information Management at the developerWorks Information Management zone. Find technical documentation, how-to articles, education, downloads, product information, and more.
Get products and technologies
- Build your next development project with IBM trial software, available for download directly from developerWorks.
- Now you can use DB2 for free. Download DB2 Express-C, a no-charge version of DB2 Express Edition for the community that offers the same core data features as DB2 Express Edition and provides a solid base to build and deploy applications.
Terry Purcell is a senior technical staff member with the IBM Silicon Valley Lab, where he is lead designer for the DB2 for z/OS Optimizer. Terry has two decades of experience with DB2 database administration and application development as a customer, consultant and DB2 developer.
Kendrick Ren is technical lead for the IBM Optim Query Tuner and Optim Query Workload Tuner products, working out of the IBM Toronto lab. He has worked with these products in their previous incarnations as the DB2 optimization expert and Optimization Service Center products since the team was founded in 2004. He works closely with customers and business partners using these products, assisting them in the area of query optimization. Previously, he worked on the IBM WebSphere Commerce Server product.
Jun Liu is one of the technical leads of the IBM Optim Query Tuner and Optim Query Workload Tuner products in IBM China lab. He joined the team in 2005 and has worked with these products in their previous incarnations as the DB2 Optimization Expert and Optimization Service Center products. He now focuses on the development of tools related to database index optimization and SQL access path tuning areas.