As I've been visiting various customers and presenting at conferences, I'm finding that there is a lot of interest in our new performance monitoring capabilities in Optim Performance Manager. One of the hottest topics in that area is the new statement-level performance metrics. These were significantly enhanced in DB2 LUW 9.7 and are planned for DB2 for z/OS version 10, currently in beta. There are two aspects of this technology that people find exciting:
- You get details on the cost of individual SQL statements rather than seeing a rollup of the costs for an entire package or plan.
- The cost of collecting this data is very low -- in the range of 3% overhead or less.
That last bullet is really the part that excites people. In the past, you had to run an expensive SQL trace to get this kind of data, and most customers found the overhead was too high to have the trace on all the time. The new DB2 technology gives us statement cost histograms for short time intervals during the day (typically 60 seconds or so). Armed with this data, Optim Performance Manager can show us how the cost of an individual SQL statement changes during the day, week, or month.
The histograms also let us easily identify statements whose cost is volatile due to data skew. Combining this function with Optim Performance Manager's end-to-end monitoring, which attributes each SQL statement to the individual workload it originates from (end user, application, client machine, etc.), provides a pretty powerful tool. We believe this will be an important new capability in DB2 and our tools, since it holds the promise of allowing us to review performance problems after the fact without having to recreate the problem scenario. That will save all of us a lot of time, since in many cases it isn't easy to reconstruct the conditions that caused the performance problem.
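To make the skew-detection idea concrete, here is a minimal sketch of how per-interval cost data can flag volatile statements. The statement text, bucket values, and threshold below are invented for illustration and do not reflect Optim Performance Manager's actual data format or algorithms; the point is simply that once you have cost samples per short interval, a basic spread measure separates skew-sensitive statements from stable ones.

```python
# Hypothetical sketch: flagging SQL statements with volatile cost from
# per-interval cost samples. All data and names here are invented for
# illustration -- not Optim Performance Manager's real format.
from statistics import mean, stdev

# Average execution cost (ms) for each statement, one sample per
# 60-second collection interval during the day.
interval_costs = {
    "SELECT ... FROM orders WHERE cust_id = ?": [5, 6, 5, 210, 4, 6, 195, 5],
    "SELECT ... FROM products WHERE sku = ?":   [3, 3, 4, 3, 3, 4, 3, 3],
}

def coefficient_of_variation(samples):
    """Relative spread of cost across intervals; a high value suggests
    volatile cost, e.g. from data skew on a parameter marker."""
    avg = mean(samples)
    return stdev(samples) / avg if avg else 0.0

def flag_volatile(costs_by_stmt, threshold=1.0):
    """Return statements whose cost varies strongly between intervals."""
    return [stmt for stmt, samples in costs_by_stmt.items()
            if coefficient_of_variation(samples) > threshold]

for stmt in flag_volatile(interval_costs):
    print("volatile cost:", stmt)
```

The first statement's cost spikes in two intervals (perhaps a customer with far more orders than the rest), so its relative spread is high and it gets flagged, while the steady second statement does not.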