One of the issues that users have raised with me is the impedance mismatch between our web-based tools and our Eclipse-based tools, in particular between InfoSphere Optim Performance Manager (Performance Manager) and InfoSphere Optim Query Workload Tuner (Query Tuner). Query Tuner's availability on the Eclipse platform grew out of our rich investment in Eclipse and our strategy to drive improved query performance early in the development cycle. Thus, key integration with Data Studio and Rational development features was, and continues to be, highly valued.
Key use cases for performance management involve using Performance Manager to identify high-cost queries and transfer them to Query Tuner for analysis and advice. Performance Manager makes it easy to identify high-cost queries (either long-running queries or short queries that are costly in aggregate) or to identify query workloads that are not meeting their response time targets. The handshake between Performance Manager and Query Tuner provides full support from problem identification through problem resolution. However, Performance Manager provides a web console and Query Tuner is an Eclipse-based client, so there is a transition that can be unexpected and, for some, unwelcome. With the new releases coming in June, we are embarking on delivering query tuning features directly in the web console! This release is a first step, so not all features are available yet, but it is indicative of our direction to make the cycle from problem identification through problem resolution a seamless experience. When the Eclipse client is installed and open, we will still transfer queries to the Eclipse client, since it has the full query tuning function.
Features available from the web will now include the ability to:
Re-explain a query or workload
Format and annotate a SQL statement
Get statistics advice for a query or workload
Get index advice for a query or workload
Generate, store, and share reports containing the advice
The screen capture below previews the query tuning tab right inside the Performance Manager web console. The embedded job manager in Performance Manager is used to execute the query tuning jobs. All submitted query tuning jobs, along with their status and progress, can be viewed and selected in the grid.
Features still requiring the Eclipse client are:
Visual Explain and Access Path Explorer (yes, we know this is a priority and we are working on it)
Access Plan Advice and Workload Access Plan Advice
Access Plan Compare and Workload Access Plan Compare
Statistical View Advice
Materialized Query Tables Advice
Multidimensional Clustering Advice
Partition Distribution Advice
Table Organization Advice (new in this Query Tuner release in support of BLU Acceleration)
Test Candidate Indexes
Plan Hints Support
Workload Capture (from other sources) and Workload Management
Also, you still need the Query Tuner license activated on the target database to be able to use the function from the Performance Manager web console. The license activation capability is still in the Query Tuner Eclipse client.
Hope you will enjoy this new function. Give us some feedback on our Query Tuner forum or post a question here.
Today IBM announced updates to several performance management products, all of which are available individually or as part of the DB2 Advanced Enterprise Server Edition.
We all know that poorly performing applications can lead to lost revenue and customer dissatisfaction. We have made numerous improvements in all of these products to make it easier to identify and resolve issues before they can impact your business. There are far too many improvements to list, but some notable ones include:
IBM InfoSphere Optim Performance Manager V5.2
We have added enhancements to better analyze stored procedure performance: rank stored procedures and drill down to find poorly performing SQL. We have also extended reporting with scheduling, emailing, and retention management. To help you automate actions, we have provided the ability to trigger user-defined jobs in response to alerts.
IBM InfoSphere Optim Query Workload Tuner for DB2 Linux, UNIX, Windows V3.2
Once you have identified SQL that could use some help, use Query Workload Tuner to get suggestions and analyze possible improvements, or use Query Workload Tuner to analyze SQL before it ever goes into production. V3.2 has many enhancements to help with "what-if" analysis. Use Query Workload Tuner and virtual indexes to analyze the impact of adding, dropping, or changing an index, as sketched below. Use the plan comparator to see the changes to access plans.
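If you're curious about what virtual index analysis is doing under the covers, DB2 for Linux, UNIX, and Windows exposes similar what-if machinery through the explain facility. Here is a minimal sketch, assuming the explain and advise tables already exist; the table, column, and query are hypothetical, and Query Workload Tuner adds cost comparisons and plan visualization on top of this:

    -- Ask the optimizer to recommend virtual indexes for a query.
    -- In this mode the statement is explained, not executed.
    SET CURRENT EXPLAIN MODE RECOMMEND INDEXES;

    SELECT O.ORDER_ID, O.ORDER_DATE
      FROM SHOP.ORDERS O
     WHERE O.CUSTOMER_ID = 1234;

    SET CURRENT EXPLAIN MODE NO;

    -- The indexes the optimizer wished it had are now described
    -- in the ADVISE_INDEX table for review.
    SELECT NAME, TBNAME, COLNAMES
      FROM ADVISE_INDEX;

There is also an EVALUATE INDEXES explain mode for costing candidate indexes that you define yourself, which is essentially the "virtual index" idea.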
IBM InfoSphere Optim pureQuery Runtime for Linux, Unix Windows V3.2
Suppose you have poorly performing SQL from a third-party application. pureQuery Runtime can help with that. Use pureQuery to swap out poorly performing SQL. With V3.2, new configuration scripts make it easier to deploy on WebSphere Application Server, and configuration changes can go into effect immediately.
IBM Data Studio V3.2
IBM Data Studio has also been updated. Many enhancements were made to alerts and scheduling in the web console. Users can now define their own alert view preferences. The client was updated to make debugging even simpler and more robust: debug triggers and PL/SQL anonymous blocks, and view global and package variables, associative arrays, and arrays of rows in the debugger. It is also much easier, almost automatic, to recompile a routine for debugging. Starting with V3.2, the Data Studio administration client will be available via DB2 Express-C and refreshed as needed. The full client is available via the normal Data Studio download page. Remember, for the full client, when using the Installation Manager option, only the options selected will be downloaded and installed on your system. If you are using the administration client, try the full client.
Finally, connection management is now easier for many of these products. IBM InfoSphere Optim Performance Manager, Query Workload Tuner, and Data Studio can now obtain and stay synchronized with connection information from the Optim Configuration Manager's database repository.
For more information, check the announcement here.
Here's a handy tip for a Monday. You might already know that, in InfoSphere Optim Query Tuner and InfoSphere Optim Query Workload Tuner, you can use the client to develop SQL stored procedures. When you are working in a routine editor, you can right-click anywhere in an SQL statement and select Start Tuning to open the Query Tuner workflow assistant. The statement appears in the Query Tuner workflow assistant. From this location, you can run advisors and tools to get analyses and recommendations.
Before version 3.1, if the SQL statement contained variables, the variables remained in the statement after it was imported into the workflow assistant, and the statement couldn't be tuned. In version 3.1, however, the variables are replaced with parameter markers.
For example, the SQL statement in this routine uses the variables ORDER_DETAIL_CODE_IN and SALES_STAFF_CODE_IN. The first is a VARCHAR; the second is an INTEGER.
After you right-click in the routine editor and select Start Tuning, the Query Tuner workflow assistant opens to the Run Single-Query Advisors and Analysis Tools page. The variables are replaced with parameter markers, using the appropriate casts to indicate data types.
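Roughly, the substitution looks like this; the table, columns, and the VARCHAR length here are hypothetical:

    -- Inside the routine, the statement references the SQL variables:
    SELECT ORDER_NUMBER
      FROM GOSALES.ORDER_DETAILS
     WHERE ORDER_DETAIL_CODE = ORDER_DETAIL_CODE_IN
       AND SALES_STAFF_CODE = SALES_STAFF_CODE_IN;

    -- What arrives in the workflow assistant: each variable becomes a
    -- parameter marker, cast to the variable's declared data type.
    SELECT ORDER_NUMBER
      FROM GOSALES.ORDER_DETAILS
     WHERE ORDER_DETAIL_CODE = CAST(? AS VARCHAR(10))
       AND SALES_STAFF_CODE = CAST(? AS INTEGER);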
From this point, you can click the Select What to Run button, choose the tools and advisors that you want to run on the statement, and proceed with tuning as you would with any other statement.
Robert Heath, email@example.com (Start the subject line with "MDL blog: ")
Information Development for InfoSphere Optim Query Tuner and InfoSphere Optim Query Workload Tuner
Have you ever noticed that you can't do anything that someone else hasn't declared to be an art? To use a now commonplace method of generating examples, go to Google and search on the words "art of". To list only those on the first page of results that I get:
- art of manliness
- art of trolling
- art of participation
- art of questioning
- art of living
- art of shaving
- art of flight
- art of dying
- art of seduction
- art of the steal
- art of marriage
- art of getting by
These are dubious arts, at best. If shaving is an art, then it's most likely a Dadaist one, and the masterwork would be to reverse what Marcel Duchamp did to the Mona Lisa. I wouldn't care to hear from anyone smug enough to think he'd mastered the art of living, and I suspect that I never will hear the wisdom of any masters in the art of dying. And is it just me, or do you also get the idea that the inventors of the "art of getting by" were perhaps just tired of failing to be masters of anything else and thought they might have a chance with that?
Here and there on the internet, some are of the opinion that tuning SQL statements is an art. That might be. However, you don't have to rely on artistic inspiration to solve performance problems quickly and resolutely if you are using IBM Data Studio, InfoSphere Optim Query Tuner, or InfoSphere Optim Query Workload Tuner. In fact, tuning generally follows a sequence of steps that can cover many cases and that you can modify when you encounter idiosyncratic problems. The general sequences for tuning single statements and for tuning query workloads are in the information centers for these products, but I want to recap them here, because they might not be widely known.
General steps for tuning query workloads that run on DB2 for Linux, UNIX, and Windows
(The original steps are here.)
- Create a workload of the SQL statements that you want to tune.
- Run the Workload Statistics Advisor to generate RUNSTATS commands for ensuring that the DB2 optimizer has the most current and the most needed statistics. Then, run the commands (a sample command appears after this list).
- Run the Workload Statistics Advisor again to generate recommendations for creating statistical views and modifying existing statistical views. Then, run the DDL scripts and RUNSTATS commands for collecting statistics for the views.
- Run the Workload Index Advisor and follow its recommendations to ensure that the right indexes exist to help avoid unnecessary table scans. Then, run the DDL scripts to create the recommended indexes.
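The generated commands are ordinary RUNSTATS invocations that you can review before running. A typical command looks something like the following; the table name is hypothetical, and the actual options the advisor generates depend on your workload:

    -- Collect table and column statistics, with distribution statistics
    -- and detailed statistics for all indexes on the table.
    RUNSTATS ON TABLE SHOP.ORDERS
      WITH DISTRIBUTION
      AND DETAILED INDEXES ALL;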
The idea is to make sure that the relevant statistics are current before you move on to getting recommendations from the Workload Index Advisor.
General steps for tuning query workloads that run on DB2 for z/OS
(The original steps are here.)
- Create a workload of the SQL statements that you want to tune.
- Collect EXPLAIN information about the SQL statements that are in the query workload. The workflow assistant must run the EXPLAIN statement for each of the SQL statements in the query workload, because the Workload Statistics Advisor requires up-to-date information about them (see the sketch after this list).
- Run the Workload Statistics Advisor to generate RUNSTATS commands for ensuring that the DB2 optimizer has the most current and the most needed statistics.
- Run the Workload Index Advisor and follow its recommendations to ensure that the right indexes exist to help avoid unnecessary table scans.
- Run the Workload Query Advisor and follow its recommendations to find and revise structures in SQL statements that are likely to cause the DB2 optimizer to choose a suboptimal access path.
- Run the Workload Access Path Advisor to discover whether the DB2 optimizer is using a suboptimal access path to process the statement. Then, follow the advisor's recommendations.
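Step 2 is less mysterious than it sounds: collecting EXPLAIN information just means running the EXPLAIN statement for each query so that its access path details land in PLAN_TABLE and the related explain tables, which the advisors then read. A minimal sketch on DB2 for z/OS, with a hypothetical query and QUERYNO:

    -- Populate the explain tables for one statement in the workload.
    EXPLAIN PLAN SET QUERYNO = 101 FOR
      SELECT ORDER_ID, ORDER_DATE
        FROM SHOP.ORDERS
       WHERE CUSTOMER_ID = 1234;

    -- Review what the optimizer chose.
    SELECT QBLOCKNO, PLANNO, METHOD, ACCESSTYPE, ACCESSNAME
      FROM PLAN_TABLE
     WHERE QUERYNO = 101
     ORDER BY QBLOCKNO, PLANNO;

The workflow assistant runs this step for you against every statement in the workload.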
Again, the idea is to make sure that the relevant statistics are current before you move on to getting recommendations from the other advisors.
General steps for tuning single SQL statements with the free features in IBM Data Studio full client or administration client
(The original steps are here.)
- Format the SQL statement to make it easier to read and comprehend. Seeing a formatted version of the SQL statement can lead to insights that are not as apparent with the unformatted version.
- Analyze the access plan graph for the SQL statement to find out how DB2 accesses data in the objects that the statement references (a sketch of the underlying explain step appears after this list).
- Run the Statistics Advisor and follow its recommendations to ensure that the DB2 optimizer has the most current and most important statistics.
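On DB2 for Linux, UNIX, and Windows, the access plan graph in step 2 is rendered from the explain tables, which you can also query directly. A rough sketch, assuming the explain tables exist and using a hypothetical query:

    -- Explain the statement without executing it.
    EXPLAIN PLAN FOR
      SELECT ORDER_ID, ORDER_DATE
        FROM SHOP.ORDERS
       WHERE CUSTOMER_ID = 1234;

    -- Find the most expensive operators in the plan just captured.
    SELECT OPERATOR_ID, OPERATOR_TYPE, TOTAL_COST
      FROM EXPLAIN_OPERATOR
     ORDER BY EXPLAIN_TIME DESC, TOTAL_COST DESC
     FETCH FIRST 5 ROWS ONLY;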
The idea here is to examine the SQL statement first to try to understand where the current bottlenecks might be, then to update the relevant statistics. You can then regenerate an access plan graph and find out whether the bottlenecks are improved.
General steps for tuning single SQL statements
(The original steps are here.)
- Capture the SQL statement that you want to tune.
- Format and annotate the problem statement to make it easier to read and comprehend. The annotations show relevant statistics that can help you understand what information the DB2 optimizer is using when generating the access plan.
- Follow either or both of these steps:
  - Generate a visual representation of the access plan to see the choices that the optimizer makes in accessing the data.
  - Browse the access plan for problems by using the Access Plan Explorer.
- Run the Statistics Advisor and follow its recommendations to ensure that the DB2 optimizer always has the most current and the most needed statistics.
- Run the Index Advisor and follow its recommendations to ensure that the right indexes exist to help avoid unnecessary table scans.
- Run the Query Advisor and follow its recommendations to find and revise structures in the statement that are likely to cause the DB2 optimizer to choose a suboptimal access path.
- Run the Access Path Advisor to discover whether the DB2 optimizer is using a suboptimal access path to process the statement. Then, follow the advisor's recommendations.
- If the DB2 optimizer continues to use a suboptimal access path, create, validate, and deploy a plan hint (for DB2 for z/OS) or an optimization profile (for DB2 for Linux, UNIX, and Windows) that gives the optimizer explicit instructions for choosing an access path.
The idea here is to examine the SQL statement first to try to understand where the current bottlenecks might be, then to update the relevant statistics. You can then regenerate an access plan graph and find out whether the bottlenecks are improved. You could also move on to the other advisors, regenerating an access plan graph at times to compare it against your benchmark. Plan hints and optimization profiles in this sequence are tools of last resort.
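For the optimization profile route on DB2 for Linux, UNIX, and Windows, the mechanics look roughly like the following sketch. The schema, profile name, and profile body are hypothetical (and abbreviated); the tooling generates and validates the real profile for you:

    -- One-time setup: the table that holds optimization profiles.
    CREATE TABLE SYSTOOLS.OPT_PROFILE (
      SCHEMA  VARCHAR(128) NOT NULL,
      NAME    VARCHAR(128) NOT NULL,
      PROFILE BLOB(2M)     NOT NULL,
      PRIMARY KEY (SCHEMA, NAME)
    );

    -- Store the XML profile that pins the access path
    -- (profile body elided for brevity).
    INSERT INTO SYSTOOLS.OPT_PROFILE (SCHEMA, NAME, PROFILE)
      VALUES ('SHOP', 'ORDERS_PROFILE',
              BLOB('<OPTPROFILE>...</OPTPROFILE>'));

    -- Activate the profile for the current connection.
    SET CURRENT OPTIMIZATION PROFILE = 'SHOP.ORDERS_PROFILE';

If you later change a stored profile, remember that DB2 caches profiles, so a FLUSH OPTIMIZATION PROFILE CACHE statement is needed for the change to take effect.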
Do these general steps match those that you take to resolve 80% of performance problems when you are tuning with Data Studio, InfoSphere Optim Query Tuner, or InfoSphere Optim Query Workload Tuner? If not, how do your steps differ?
What sorts of uncommon situations do you find yourself in, and how do you modify these steps then?
Robert Heath, firstname.lastname@example.org (Start the subject line with "MDL blog: ")
Information Development for InfoSphere Optim Query Tuner and InfoSphere Optim Query Workload Tuner
Using software can be like trying to get to a freeway onramp that you know well by driving through an unfamiliar part of a city. You know what the outcome should look like (cruising on the open highway), but to get there you have to learn new landmarks ("Ah, so I turn left at the donut shop!"), new shortcuts ("So, I saved 5 minutes by taking Jefferson. I'll have to remember that."), the streets that go only one way (*pant, pant* "No, I guess I can't drive that way down Sycamore after all."), and more.
The information roadmap for InfoSphere Optim Query Tuner and InfoSphere Optim Query Workload Tuner will help you get to know the streets and landmarks of those two products, so that you can tune single SQL statements or workloads and get to your open highways faster. You'll find it at http://www.ibm.com/developerworks/data/roadmaps/roadmap_ioqt_ioqwt.html
If you have any suggestions for links to include in it or have any other comments about it, please let me know.
Information Development for InfoSphere Optim Query Tuner and InfoSphere Optim Query Workload Tuner
Take Charge of your Data Environment: IBM Performance Management Solutions
Performance issues are often at the source of declining productivity, increasing capital expenditures, and lost revenue. This Thursday, November 10, we’re hosting a webinar, IBM Performance Management Solutions, featuring OPM and OQWT, to walk you through IBM Performance Management offerings. I'm joined by Bob Harbus, DB2 Technical Specialist, and Danny Zilio, Software Analyst for Optim Query Workload Tuner. We'll be discussing how you can use InfoSphere Optim Performance Manager and InfoSphere Optim Query Workload Tuner to manage performance, both during development and in production. Join us to share your questions, challenges, and insights.
Hope to hear you on the call!
Holly Hayes, Optim Tools Product Manager
As I've been visiting various customers and presenting at conferences, I'm finding that there is a lot of interest in our new performance monitoring capabilities in Optim Performance Manager. One of the hottest topics in that area is the new statement-level performance metrics. These were significantly enhanced in DB2 LUW 9.7 and are planned for DB2 for z/OS version 10, currently in beta. There are two aspects of this technology that people find exciting:
- You get details on the cost of individual SQL statements rather than seeing a rollup of the costs for an entire package or plan.
- The cost of collecting this data is very low -- in the range of 3% overhead or less.
That last bullet is really the part that excites people. In the past, you had to run an expensive SQL trace to get this kind of data, and most customers found the overhead was too high to have the trace on all the time. The new DB2 technology gives us statement cost histograms for short time intervals during the day (typically 60 seconds or so). Armed with this data, Optim Performance Manager can show us how the cost of an individual SQL statement changes during the day, week, or month.
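If you want to poke at this raw material yourself on DB2 LUW 9.7, the statement-level metrics come from the monitoring table functions. A minimal sketch; the column choices here are just illustrative:

    -- Top statements in the package cache by total CPU time,
    -- across all members (-2).
    SELECT NUM_EXECUTIONS,
           TOTAL_CPU_TIME,
           SUBSTR(STMT_TEXT, 1, 80) AS STATEMENT
      FROM TABLE(MON_GET_PKG_CACHE_STMT(NULL, NULL, NULL, -2)) AS T
     ORDER BY TOTAL_CPU_TIME DESC
     FETCH FIRST 10 ROWS ONLY;

Optim Performance Manager collects this kind of data on an interval basis and keeps the history for you.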
The histograms also allow us to easily identify statements whose cost is volatile due to data skew. The combination of this function with Optim Performance Manager's end-to-end monitoring, which makes it possible to attribute each SQL statement to the individual workload it originates from (end user, application, client machine, and so on), provides a pretty powerful tool. We believe this will be an important new capability in DB2 and our tools, since it holds the promise of allowing us to review performance problems after the fact without having to recreate the problem scenario. That will save all of us a lot of time, since in many cases it isn't easy to reconstruct the conditions that caused a performance problem.
Today we announced a major enhancement to our performance monitoring and management solution for DB2, with the 4.1 release of Optim Performance Manager for DB2 for Linux, UNIX, and Windows (I’ll use ‘OPM’ in the rest of this blog entry). This is a major new version of OPM that includes a significantly improved up-and-running experience and quicker problem resolution.
The biggest change you’ll see out of the box is the new Web-based user interface and redesigned problem resolution workflow. Our beta customers have given us great feedback in the development and refinement of this interface, and the result seems to be pretty well received. One of our beta clients states that “The browser interface is easy to use, with intuitive dashboard displays and easy to understand presentation of information.” Even better, since it is Web-based, you can monitor databases anywhere without having to install software on various PCs.
The repository server collects performance metrics from the monitored database and stores them in a DB2 database. You can navigate through the stored data by time and see reports or dashboard data from the chosen time period. This allows for post-mortem problem detection and resolution, as well as proactive monitoring and trend analysis. There are also interactive reports, such as for table space disk growth and for top n SQL statements, that you can generate from this stored information.
The team has done a lot of work on getting up and running with the monitoring solution much faster. There is an integrated installer, and there are predefined monitoring profiles for a variety of workloads, such as BI, OLTP, SAP, QA, and development. I’m really happy with the reports coming from the beta that installation and configuration is “easy.”
Finally, you can launch Optim Query Tuner from several of the dashboards, including the Active SQL and Extended Insight Dashboards, to do in-context query tuning on individual problem queries.
To realize the full power of the new integrations and lifecycle capabilities of this release, you should definitely check out the new package available in this release, Optim Performance Manager Extended Edition (OPM EE). It builds on the base capabilities of OPM by including Extended Insight (previously a separately orderable feature), integration with Tivoli monitoring solutions, and configuration tooling for DB2 Workload Manager.
If you like the value of Extended Insight, which provides key metrics and visualizations of SQL as it travels through the software stack for dynamic Java applications, you’ll really like that we’ve extended the capabilities in this release of OPM EE to include CLI applications. We also include out-of-the-box, customizable workload views for SAP, Cognos, DataStage, and InfoSphere SQL Warehouse to help get you going.
To round out our monitoring story to support the strong message we tell with static SQL, OPM EE now includes monitoring support for static SQL from Java applications. So if you want to take advantage of static SQL from Java, either by using the pureQuery API or by using client optimization for any JDBC application, you can get the Extended Insight information that you could previously only get for dynamic SQL.
We’ve also made it possible to import pureQuery application metadata into OPM so that detailed information about the application source (Java package name, method name, line number) can be displayed on the Extended Insight Dashboard for any individual SQL statement. This particular feature will require a pureQuery Runtime license.
Integration with Tivoli monitoring solutions smooths the handoff between system operators and the detailed database performance analysis performed by DBAs. The integration makes it possible to drill into the deep database diagnostic capabilities of OPM EE directly from the Tivoli Enterprise Portal. One of our beta clients, who does extensive work with clients using Tivoli, found this integration very useful, and points out that “Outsourced operations will love the Tivoli integration as it allows them to monitor multiple WAS and DB2 instances from a single point of control.”
Finally, OPM EE provides new tooling to significantly ease the configuration of DB2 Workload Manager. Although the existing WLM configuration tooling is still shipped with InfoSphere SQL Warehouse, this new tooling is integrated into OPM EE. Key monitoring information vital to workload management is presented in context so that you can do related configuration and validation within a single tool.
There is really way more than I can possibly cover in a blog entry. Here are links where you can find more information and see the user interface in action.
I’m the performance architect for, among other things, the pureQuery platform. My team is responsible not just for ensuring that our products perform well, but also for helping produce verifiable performance numbers that we can share with confidence.
I’m happy to say that we are ready to share our performance numbers for pureQuery access to DB2 for Linux, UNIX and Windows. (We already have some great numbers published for z/OS for both Java and .NET).
The goal of this particular performance test was to measure throughput improvement using static SQL execution, which is possible even for existing JDBC applications with no change to the application source code. The increased throughput comes mainly from saving the cost of preparing the SQL when using static rather than dynamic SQL. We typically don’t see the same level of interest in static execution from DB2 for LUW customers as we do from DB2 for z/OS customers, because the LUW platform does not have the same memory constraints as z/OS; LUW customers might therefore be more likely to throw hardware at the problem to achieve greater dynamic cache hit ratios and hence improve throughput.
However, static SQL also provides predictable performance, because the access plan is predetermined, and I often find users are happier with predictable response times than with ultra-fast response that can deteriorate over time.
Static SQL execution also provides much more than predictable performance. By using it, you can significantly improve problem determination and traceability. You can also reduce the risk of SQL injection that comes with dynamically executing applications. You can read about some of those benefits in this article. And there are additional benefits to pureQuery usage, such as literal consolidation or the ability to make emergency fixes to application SQL without changing the application, which you can read about in Sonali’s article on 2.2 features.
OK, now that I’ve hopefully convinced you that there are many, many reasons to consider pureQuery and static SQL execution for DB2 LUW environments, I would like to go ahead and share our performance results.
The measurement environment
Our measurements were done with a typical 3-tier environment of a client, application server, and database server as shown here.
A word about the “ERWW” application we use. ERWW is an OLTP application based on an order entry and tracking system that is designed to exercise the database tier much more than the application tier (that is to say, there is not a lot of business logic in the application). The ERWW workload models a wholesale supplier managing orders, and consists of seven transaction types. The frequency of transactions is set to simulate a realistic scenario; the mix used in the benchmark environment was 47 percent update transactions, 53 percent read-only transactions. The workload is triggered by a Java client program which generates HTTP requests for the required transaction mix.
Before I go into our results, I have to offer up the standard disclaimer that any of you who are familiar with performance work are used to hearing. The tests that we ran were done in a controlled environment where we were able to carefully control extenuating factors that can influence the results. In particular, the type of application you run can significantly affect the results in terms of the mix of database-intensive work versus application-intensive work. The ERWW workload is a very database intensive workload and most of the work is done by the database server processing SQL requests. Therefore, by using pureQuery to optimize the database server side processing, we are in fact optimizing a large chunk of the workload. Consequently the performance gains for this workload are significant. We chose ERWW because it was readily available to us, and not because we thought it would give us the best results. I guess what I am trying to say is that your results will vary.
OK, now that that’s out of the way: we measured static execution both using client optimization of an existing JDBC application and as a ‘new’ version of the application written in pureQuery annotated-method style. Performance is reported as a normalized throughput rate in transactions per second (ITR). The ITR is the notional throughput rate assuming that the CPUs are 100 percent busy. For example, consider an application with a transaction rate of 200 transactions per second at 75 percent CPU consumption. The ITR for this application would be 200 * 100/75 = 267 tps. This is the notional transaction rate that could be achieved if the CPUs were 100 percent busy and no other bottleneck were hit first.
We measured the JDBC workload with both 90% and 95% package cache hit ratios. To achieve a 90% package cache hit ratio with the ERWW workload, the DB2 Package Cache (PCKCACHESZ) was sized to 180 x 4k pages, and for a 95% hit ratio it was sized to 210 x 4k pages.
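For reference, sizing the package cache is a one-line database configuration change; the database name here is hypothetical:

    -- 180 x 4K pages gave us the 90% package cache hit ratio for
    -- this workload; 210 pages gave us the 95% ratio.
    UPDATE DATABASE CONFIGURATION FOR ERWWDB USING PCKCACHESZ 180;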
Here are the results with a 90% cache hit ratio. The results are shown on the vertical axis as the database ITR improvements over the baseline of JDBC.
As you can see, client optimization almost doubled throughput over the existing JDBC application. The new application that uses pureQuery method style API more than doubled the database transaction throughput.
The results with a 95% cache hit ratio are shown here.
Note that we achieved significant throughput improvements even with a high package cache hit ratio.
In summary, pureQuery and static execution can offer many benefits, one of which may be improving the performance of your data servers with your applications. By changing the dynamic SQL to static SQL, pureQuery should help you either achieve better throughput on your existing hardware, or reduce CPU consumption of your existing hardware, allowing you to load more tasks onto it. I highly recommend that you also check out the bigger picture around Java acceleration (including the other benefits I mention) as shown in this video.
IOD was a busy time for the Optim team. I hosted three sessions and also was in the pedestal area, where I had a chance to interact with several DB2 and IDS customers. While all the solutions certainly got their attention, two solutions seemed to bubble to the top in terms of customer discussion and questions. Optim pureQuery was one of them, mainly from the DB2 for z/OS crowd. There is definitely a trend in the market of customers wanting to take advantage of the wealth of data stored on the DB2 for z/OS platform rather than transport that data to another platform. Java is the preferred language when developing these new applications; however, finding an acceptable data access layer, both in terms of ease of use and high performance, has been a challenge. Many companies are looking at things like Hibernate to address their ease-of-use challenges, but unfortunately Hibernate doesn't address their high-performance requirement. Several customers wanted to understand how pureQuery could help them in this situation, and they were very excited once we talked to them not only about the benefits of static SQL, but also about the client optimization features of pureQuery, and most importantly about how we could visualize the SQL that Hibernate was generating for the developer and DBA. It seems to be a perfect fit for providing the high performance that DB2 for z/OS customers expect for these new Java applications.
The second solution that seemed to get a lot of attention was our Performance Expert solution. When it comes to problem diagnosis, everyone is always looking for a better mousetrap, and it seems that solutions in this area get hot and then run their course. Well, Performance Expert seems to be the up-and-coming hot product. I believe this is due to the sheer wealth of information this product collects. If, or should I say when, a problem occurs in production, the most difficult challenge is gathering that information, especially because that moment in time is now past and you really need historical data to determine what went wrong. Performance Expert collects information and stores it in intervals, making it incredibly easy to get the information you need, all synchronized to the right time. You can see a short video of this capability here. The other thing I found interesting was all the various reports and consoles customers were writing based on the data found in the Performance Warehouse. From capacity planning to SLA reporting, it seems to me that there is a lot of customization going on that would probably make a great birds-of-a-feather session at future conferences.
As I mentioned earlier, all the solutions got their attention. I hosted a joint session with Randy Wilson from Blue Cross Blue Shield of Tennessee, where Randy described a very interesting outage situation (not fun!) and how Optim High Performance Unload saved the day for them by supplementing their recovery and providing the historical data they needed. It's interesting how customers really think outside the box and come up with creative uses for our solutions. After that session I had quite a few discussions with customers on how they could use High Performance Unload for situations like Randy's, where back-level DB2 or dropped tables caused challenges in terms of recovery.
Well, it was a busy week, and now comes all the customer follow-up that happens after IOD!
I totally love performance work, and I like the whole process of trying to find the issues. There is nothing cooler than being the hero of the hour by finding the extreme I/O usage, the I/O hot spots, the file system contention, the logical control unit bottlenecks, the horrible buffer pool management, the poor SQL, or any combination of the above.
But what makes it so interesting is the tooling surrounding performance. To be more specific, I can now tune DB2 for LUW in the same fashion as I would tune DB2 for z/OS. Heresy, you say? Please hear me out.
All through the 1990s and on through today, I've been using Tivoli OMEGAMON XE for tuning DB2 for z/OS. This tool is great for finding just about any issue or bottleneck, including by looking at historical data. In 2003, I had to support DB2 for LUW, and I felt like I was stepping back in time. My biggest complaint was the fact that I had to set up scripts to pull down activity. And finding historical information on DB2 for LUW activity? Forget about it.
But that was then.
What we have now for DB2 for LUW is, quite simply, amazing. DB2 Performance Expert is a monitoring tool any DB2 person would just love. I have my main console, which is strikingly similar to the OMEGAMON classic view. There is drill-down capability in DB2 Performance Expert where I look for troubleshooting issues. But what do I really like? The fact that I can now look at historical information without a script or agent running on the monitored server!
Doing the Monday morning tasks - you on-call DBAs know what I'm talking about here - is made so much easier with DB2 Performance Expert. I can look at what happened over the weekend so that neither I nor my manager gets blindsided by applications or by my end users. In addition, I can find problems in the here and now without physically issuing snapshots, either at the command line processor or within a script. I can find the long-running SQL and easily find the tables it's going after. Knowing the tables, I can go to the next level: is this a one-off SQL situation, or is this a table that needs an index, or one that needs to be isolated, either in a buffer pool or through file system placement? DB2 for LUW will no longer be the product that you "poke a stick" at for tuning.
Will DB2 Performance Expert give you the desire to hang on to your turn in the on-call rotation for another week? I can't answer that. But finding the issues simply and easily with this tool will make you the hero of the hour.
As you may recall, when we announced Data Studio pureQuery Runtime 2.1 for LUW, one of the new features in the release was the ability to use pureQuery with .NET applications. This support is available for all of the DB2 servers; however, I want to focus a bit on this from a z/OS perspective, since it is primarily these customers who let us know that they heard and liked the pureQuery for Java story we were telling, but that they needed something like this for .NET as well. They wanted the advantages of static SQL for security, manageability, and performance reasons.
So we did add that support for .NET in the 2.1 release of the Data Studio pureQuery Runtime and in the latest IBM ADO .NET provider. Although it doesn't have all the rich tools support that Java does, it provides many of the key benefits that Java shops can get - static SQL performance and consistency, static SQL authorization model, and the ability to create uniquely named packages that can help DBAs and system programmers isolate performance problems to a particular application and particular SQL statements. And, since it's using client optimization, that means your applications can get these benefits without having to change source code.
To validate the performance benefit, I'm very happy to announce that we've published the results of our performance study (using the IRWW benchmark) of the pureQuery support for .NET. I don't want to spoil the surprise, but the numbers are very impressive, with huge increases in throughput and dramatic reductions in CPU per transaction.
Also, be sure to see this developerWorks tutorial. It's a good step-by-step guide to the process of enabling .NET applications to use pureQuery.
I just got off the phone with a DBA who started using DB2 Performance Expert (PE) a couple of months ago. This customer, whom I will call “Milo” after my cat, called to tell me they were finally able to diagnose a pesky performance problem that had been hiding for all these months.
Let me back up to give you some history. When I first talked to Milo, they were having performance problems in all their environments. Before using PE, they usually ran a series of scripts to diagnose performance problems. These scripts would help them collect performance metrics for each partition. To add to the complexity, they had a bunch of different environments they were monitoring. This meant that Milo would end up with reports all over the place, in different systems and in different directory structures. Then, when he downloaded them to his workstation, he would need to print them out.
Milo's desk was covered with printouts. He had stacks of printouts on the left, stacks of printouts on the right -- they were even stacked on his bookcase. Milo's baseball bobble heads were holding up printouts because Milo ran out of room in his cube. Each area of his cube represented a different environment. Milo even made jokes that he was drowning in his own printouts, and he felt like the performance reports were multiplying if you left them alone. We both got a good chuckle out of that.
Fast forward to life after getting PE. With PE, Milo was easily able to pinpoint the performance problem in their large warehousing environments. PE was able to monitor all their different environments, which helped them diagnose a ton of problems because they were able to see all the performance metrics and how they correlated. For example, PE allowed them to view each partition and compare the partitions.
However, there was one sticky problem in their large warehouse environment (environment A) that they were unable to diagnose. After they began using PE, that slowdown didn’t show up for months. Then one day, that pesky performance problem finally revealed itself again. Apparently, one of the DBAs ran the old performance scripts on environment A, at which time the problem reappeared, so the team could finally work on isolating it.
Milo told me, laughing, "We couldn't figure it out... the slowdown happened almost like clockwork. We had our scripts scheduled to capture the performance data, and somehow it just happened. We used PE to diagnose the problem, only to discover it was our own poorly written performance scripts that caused the problem." You see, environment A was built on smaller UNIX boxes, with less memory and fewer other resources. When they ran the performance scripts, they caused the system to run out of memory, thus impacting their DB2 system. Since PE was able to show them what else was using resources on the system outside of DB2, they were able to see the problem immediately. Their scripts didn't check OS resources, but PE does.
The person who wrote the original scripts had been gone for a long time. The scripts ran fine and never caused performance problems on the other, larger environments, so there was never any reason to examine them. Oops :-)
Cheers - Alice Ma
I work with LOTS of customers who use or want to use DB2 Performance Expert for LUW. The first question I get asked usually has to do with the basic architecture - what pieces run where and on what platform?
The first thing to understand is that DB2 PE is a client/server application. The PE server runs on LUW platforms (AIX, Sun, Linux, Windows, and so on). Most customers install the PE client on Windows and then connect to the PE server running on AIX, Sun, Linux, or Windows.
Look at the diagram below.
As shown there, we recommend that you install the PE server on its own DB2 instance -- not on the monitored box -- and have that server remotely monitoring the other DB2 instances on the network. Why? Well, you don't want to impact the monitored DB2 instances. I like to call this window shopping... you look but you don't want to touch. Note that in the 3.2 release, you get a license of DB2 ESE for just this purpose so you don't need to use one of your own DB2 installations for this.
PE can monitor different DB2 levels, all from a single PE server. You can mix and match your operating systems, too. For example, the PE server can be installed on AIX and remotely monitor your DB2 instances running on AIX, Sun, Linux, Windows, and so on.
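Remote monitoring only requires ordinary DB2 client connectivity from the PE server to each monitored instance. A sketch with hypothetical host, port, and database names:

    -- On the PE server, catalog the remote node and database to monitor.
    CATALOG TCPIP NODE PRODNODE REMOTE prodhost.example.com SERVER 50000;
    CATALOG DATABASE SALESDB AS SALESDB AT NODE PRODNODE;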
If you want the gory details on specific levels of hardware and software support for both the client piece and the server piece of DB2 PE, see this technote.
I'll be back soon with answers to more FAQs. Feel free to submit your questions here using the comment link below or send an email to dstudio at us.ibm.com.
-- Alice Ma
In these trying economic times, we have to find ways to reduce costs everywhere we can. And if you can reduce costs while at the same time improving staff productivity, well, that looks pretty good.
For organizations running WebSphere and DB2 applications on z/OS, and there are quite a few, we think we have just that opportunity in pureQuery software. Stephen Brodsky and others have been writing about the value of pureQuery on this blog, and we published some good numbers on pureQuery performance as well. In case you prefer listening to reading, I recently recorded a podcast on pureQuery. You can find the podcast on iTunes by searching the podcast directory for “Did you say mainframe?", which will bring up the Did you say mainframe? category. My podcast is the one entitled "Enhancing Java environments with pureQuery."
Also, Stephen and I are doing a webcast together on February 4th focused on pureQuery in the mainframe environment. You can register here. I hope you’ll join us and that this creates an opportunity for your organization to save money.