Writing to you now from outside of Rome at IOD EMEA, but I wanted to take some time to focus on some of my impressions of this year's IDUG NA in Tampa:
The first one is branding. It’s been a year since we renamed many of our Data Studio tools to Optim (and InfoSphere), partly as a result of acquiring Princeton Softech, which was using the Optim name. But there is still confusion. I spend time on this topic in my presentations, explaining that Data Studio is no longer the product family name. The term Data Studio now refers to the no-charge offering that supports DB2 (on LUW, z/OS, and i5) as well as IDS, and there is no such thing as a "set of Data Studio tools." Branding is (obviously) a very powerful thing, and a branding change can take a long time to absorb. Kathy tells me that we recently updated our packaging article on developerWorks, so if you are still confused, you may want to check it out. It lays out the high-level functionality, names, etc. for the ‘portfolio previously known as Data Studio.’
Secondly, I was surprised to hear the number of customers that are using .NET to access DB2 for z/OS. I attended a Speed-SIG focused on development in Java and .NET, and all of the customers attending said that they have .NET apps accessing DB2 for z/OS. I knew that this existed, but the high frequency did surprise me. Of course, it gave us a good chance to talk about pureQuery Runtime and how it can help .NET applications in much the same way as Java applications.
Of course, it was great to see many familiar faces and lots of new ones in Tampa. The IDUG board of directors was happy to see a 20% increase in attendance over last year's North America conference. And don’t forget, IDUG Europe is planned for November 8-12 in Vienna.
I'm excited to be co-presenting with one of our customers at IOD EMEA in May. Roberta Barnabe, a DB2 Specialist for UGF (Unipol Gruppo Finanziario) Assicurazioni, will be sharing her experiences with IBM Optim Query Workload Tuner for DB2 for z/OS. If you are interested in z/OS query tuning and want to hear from someone besides us IBMers, I encourage you to come hear what she has to say about:
- Statistics Advisor
- Dynamic statements analysis
- Access Plan Graph and Query Format
- What-if analysis
- Workload analysis
- Query Environment Capture
She ends with a summary statement that the tool made it easier to perform tuning analysis, and "BIG TIME SAVING !!!!!!"
I hope to see you there. Here are the particulars:
Session TSB-3313, Thursday, May 20
11:45 AM - 12:35 PM
Hi, everyone. It seems as if lately I am always on the road. Today, I am actually back here at SVL, where, fortunately or unfortunately, depending on your point of view, it is raining, and raining hard. Tomorrow I head off to the UK for regional user group meetings. I also have Rome to look forward to, and that’s why I’m writing today, to give you the highlights at IOD EMEA for products in my portfolio.
When I look at the sessions, one major theme I see is around performance. Regardless of how ‘cool’ new applications and new technologies are, the issue of performance never goes away and often becomes even more critical as users become even more intolerant of slow response times or unavailability. So, although performance may not have the “coolness” factor of some topics, it’s bread and butter for most of our customers. The key is to make performance management more streamlined, less costly people-wise, and more focused on prevention rather than constant reaction.
If you’re going to IOD EMEA, I invite you to make a point of joining us at the following sessions to learn more about what we’re doing to help realize those goals. The sessions here are listed in chronological order, but you will of course need to check for changes at the conference venue itself. Some of the speakers may change as well. I look forward to seeing you there. If you haven’t signed up for an executive one on one with me through your IBM sales rep and are interested in doing so, here is a link to where you can find more information on how to do that.
|Date and time ||Session ||Title ||Speaker |
|Wednesday, May 19 ||TSB-3181 ||Java Developer Best Practices for DB2 Performance ||Dave Beulke |
|Wednesday, May 19 ||BLD-3375 ||How British Petroleum Manages Enterprise Data Growth with IBM Optim ||Jim Lee, David Sohl |
|Thursday, May 20 ||TSB-3350 ||End-to-End Monitoring and Problem Determination with Optim Performance Solution ||Torsten Steinbach, Holger Karn |
|Thursday, May 20 ||TSB-2947 ||Query Optimization and Query Tuning with Optim Query Tuner ||Gene Fuh |
|Thursday, May 20, 10:30 AM - 1:30 PM ||3259 ||Hands-on Lab! Optim Performance Manager – Live! ||Ute Baumbach, Michael Skubowius |
|Thursday, May 20, 11:45 AM - 12:35 PM ||TSB-3347 ||Venedim: A first-hand experience with Optim Performance Management solutions ||Jean-Marc Blaise, Torsten Steinbach, Holger Karn |
|Thursday, May 20, 11:45 AM - 12:35 PM ||TSB-3313 ||Unipol Slashes Costs and Improves Application Performance with Optim (features Query Workload Tuner) ||Client speaker and Bryan Smith |
|Friday, May 21 ||TSB-2760 ||Why are DB2 DBAs (z/OS & LUW) Interested in Data Studio and Optim Tooling? ||Bryan Smith |
|Friday, May 21 ||TSB-3341 ||Optim and Data Studio Portfolio Strategy: Optimize performance and availability while lowering costs ||Curt Cotner |
|Friday, May 21 ||TSB-2890 ||Improve Database Archive Performance: Optim Data Growth Best Practices ||Pamela Hoffman |
|Wednesday through Friday (repeated session) ||TSB-3870 ||Usability lab: Optim Performance Manager User Experience for DB2 Performance Management ||Dirk Willuhn, Ute Baumbach |
We recently uploaded Fix Packs for many of our products. The major theme for these Fix Packs is support for Windows 7. Here are the product Fix Packs that include the Windows 7 support with links to the appropriate download documentation.
You may notice that High Performance Unload is not in that list. I will post again when that fix pack is ready.
Remember, you can find help from volunteers and other users at discussion forums that are available for all of these products. See the Integrated Data Management community space for links to all discussion forums on developerWorks.
Today we announced a major enhancement to our performance monitoring and management solution for DB2, with the 4.1 release of Optim Performance Manager for DB2 for Linux, UNIX, and Windows (I’ll use ‘OPM’ in the rest of this blog entry). This is a major new version of OPM that includes a significantly improved up-and-running experience and quicker problem resolution.
The biggest change you’ll see out of the box is the new Web-based user interface and redesigned problem resolution workflow. Our beta customers have given us great feedback in the development and refinement of this interface, and the result seems to be pretty well-received. One of our beta clients states that “The browser interface is easy to use, with intuitive dashboard displays and easy to understand presentation of information.” Even better, since it is Web-based, you can monitor databases anywhere without having to install software on various PCs.
The repository server collects performance metrics from the monitored database and stores them in a DB2 database. You can navigate through the stored data by time and see reports or dashboard data from the chosen time period. This allows post-mortem problem detection and resolution, as well as proactive monitoring and trend analysis. There are also interactive reports, such as for table space disk growth and for Top n SQL statements, that you can generate from this stored information.
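To illustrate the idea behind a Top n SQL report (a hypothetical sketch only; the field names are invented and OPM's actual repository schema and implementation differ), ranking stored statement metrics might look like this:

```python
# Hypothetical sketch of deriving a "Top n SQL statements" view from
# stored per-statement metrics. Field names are illustrative, not OPM's.
def top_n_sql(metrics, n=3, key="total_exec_time_ms"):
    """Return the n statements with the highest value of `key`."""
    return sorted(metrics, key=lambda m: m[key], reverse=True)[:n]

# A toy snapshot of aggregated metrics for a chosen time period
snapshot = [
    {"stmt": "SELECT * FROM ORDERS WHERE ...", "total_exec_time_ms": 9500},
    {"stmt": "UPDATE STOCK SET ...",           "total_exec_time_ms": 400},
    {"stmt": "SELECT COUNT(*) FROM ...",       "total_exec_time_ms": 2200},
]
worst = top_n_sql(snapshot, n=2)  # the two most expensive statements
```

The same stored data sliced by time period is what enables the post-mortem and trend-analysis scenarios mentioned above.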
The team has done a lot of work on making it much faster to get up and running with the monitoring solution. There is an integrated installer, and there are predefined monitoring profiles for a variety of workloads, such as BI, OLTP, SAP, QA, and development. I’m really happy with the reports coming from the beta that installation and configuration is “easy.”
Finally, you can launch Optim Query Tuner from several of the dashboards, including the Active SQL and Extended Insight Dashboards, to do in-context query tuning on individual problem queries.
To realize the full power of the new integrations and lifecycle capabilities of this release, you should definitely check out Optim Performance Manager Extended Edition (OPM EE), a new package available in this release. OPM EE builds on the base capabilities of OPM by including Extended Insight (previously a separately orderable feature), integration with Tivoli monitoring solutions, and configuration tooling for DB2 Workload Manager.
If you like the value of Extended Insight, which provides key metrics and visualizations of SQL as it travels through the software stack for dynamic Java applications, you’ll really like that we’ve extended the capabilities in this release of OPM EE to include CLI applications. We also include out-of-the-box, customizable workload views for SAP, Cognos, DataStage, and InfoSphere SQL Warehouse to help get you going.
To round out our monitoring story to support the strong message we tell with static SQL, OPM EE now includes monitoring support for static SQL from Java applications. So if you want to take advantage of static SQL from Java, either by using the pureQuery API or by using client optimization for any JDBC application, you can get the Extended Insight information that you could previously get only for dynamic SQL.
We’ve also made it possible to import pureQuery application metadata into OPM so that detailed information about the application source (Java package name, method name, line number) can be displayed on the Extended Insight Dashboard for any individual SQL statement. This particular feature will require a pureQuery Runtime license.
Integration with Tivoli monitoring solutions smooths the handoff between system operators and the detailed database performance analysis performed by DBAs. The integration lets you drill into the deep database diagnostic capabilities of OPM EE directly from the Tivoli Enterprise Portal. One of our beta clients who does extensive work with clients using Tivoli found this integration very useful, and points out that “Outsourced operations will love the Tivoli integration as it allows them to monitor multiple WAS and DB2 instances from a single point of control.”
Finally, OPM EE provides new tooling to significantly ease the configuration of DB2 Workload Manager. Although the existing WLM configuration tooling still ships with InfoSphere SQL Warehouse, the new tooling is integrated into OPM EE. Key monitoring information vital to workload management is presented in context so that you can do related configuration and validation within a single tool.
There is really far more here than I can possibly cover in a blog entry. Here are links where you can find more information and see the user interface in action.
Earlier this week, I was invited to attend and present at the Inaugural Iowa DB2 Users Group Semi-Annual Event in Des Moines. It was a very well-organized event and had great attendance. During one of the breaks, someone told me, "I understand the vision of heterogeneous database tooling, but please don't divert your attention away from further work on the tooling in support of DB2, in exchange for, say, Oracle." It was a valid concern, since we spend a generous amount of time in our presentations talking about support for Oracle, SQL Server, Sybase... on the horizon. I tried to reassure her that we are still very focused on DB2, and that much of our support for other database platforms comes where we can leverage things like common SQL interfaces across platforms. I referred her to Curt Cotner's slide (included below – I also added Guardium for completeness) that he showed during the kickoff presentation as further evidence of this. I think she was convinced, or at least felt that I understood her concern.
Technorati Tags: optim
Well, I'm glad I got your attention... One of the things that I've been pondering lately is what percentage of DBAs manage more than one database platform. Since this is an IBM blog, how many DBAs manage both DB2 for Linux, UNIX, and Windows and DB2 for z/OS? I've heard that they exist, but maybe I haven't been asking, and y'all haven't been telling. Does your shop have any of these elusive creatures? If so, what platforms do they support? If not, why not? Does this model work well for you? Why or why not? Is this something that you'd like to see happen?
One of our objectives in creating heterogeneous database tooling is to help reduce the learning curve when using another database platform. While this is still a work in progress for parts of our portfolio, I'd sure like to know how important this is to DBAs.
So, please tell... Post your answer here, or on the IBM Optim Facebook fan page (click on the Fans tab if you don’t see it), or, if you’re shy, you can just send me an email at bfsmith at us.ibm.com.
Technorati Tags: smith, ibm
A quick note to let you know that I recently updated my article on packaging for the Optim solutions for database development, administration, and performance management (sometimes known as ‘heritage Data Studio’). The major changes were to include functional changes introduced with the Fix Packs that I blogged about earlier, to include the bundled solutions we offer, and to update links to announcement letters, relevant blog entries and more.
The intention of this article is to answer typical questions I get about feature/functionality, including supported databases, across the portfolio. I hope this helps.
Hope you have the upcoming Virtual Tech Briefing on pureQuery by Information Champion Dan Galvin on your calendars. We had a meeting to discuss this today, and hearing about pureQuery (a topic I spend a fair amount of time on) from someone who is not an IBMer is refreshing, and I hope you feel the same. I think it’s important for people to hear a perspective from someone who is both really knowledgeable on the topic, and has real-world implementation expertise as well.
If you are an architect, developer, or an interested DBA, you will want to hear Dan’s perspective as he has lots of experience in building enterprise applications. Even as we discussed the topics and flow of the briefing, we got into some interesting discussions about Java data access patterns and data mapping approaches.
The nice thing about pureQuery is that you don’t necessarily have to get into battles over the benefits of one persistence approach over another. pureQuery acts as a complementary technology that can help developers and their DBAs with problem resolution and database performance for any JDBC-based data access (like Hibernate, JPA, iBatis, etc.). But it’s also nice to have the pureQuery API as a choice for people who are interested in a lightweight, more database-aware API that can leverage database best practices and has slick tooling in Optim Development Studio for rapid generation of the data access layer.
Anyway, I hope you can join me, Dan, and Sonali (who will demo some cool tooling features) on March 11th at 10 AM Pacific. Please register, and even if you cannot attend that day, you can at least access the replay.
I wanted to point you to a new video series on pureQuery development, specifically developing a new application using pureQuery annotated methods. If you are new to pureQuery and would like a quick overview of the development experience and the productivity benefits of Optim Development Studio, these videos are a must-see. When you use DB2, the videos also show, in action, the potential for improving your application's SQL performance by using static SQL. The series is based on a tutorial in the Integrated Data Management information center, so you have that option for learning as well.
If you are a seasoned pureQuery architect or developer and want to introduce the capabilities to your co-workers or friends, this video series will come to your aid.
Plus, it’s divided into three relatively short modules, so you can view one whenever you have a few free minutes, say, at lunch time! So keep it handy and spread the word!
I’m the performance architect for, among other things, the pureQuery platform. My team has the responsibility for not just ensuring that our products perform well, but also to help produce verifiable performance numbers that we can share with confidence.
I’m happy to say that we are ready to share our performance numbers for pureQuery access to DB2 for Linux, UNIX and Windows. (We already have some great numbers published for z/OS for both Java and .NET).
The goal of this particular performance test was to measure throughput improvement using static SQL execution, which is possible to do even for existing JDBC applications with no change to the application source code. The increased throughput comes mainly as a result of saving the cost of preparing the SQL when using static vs dynamic SQL. We typically don’t see the same level of interest in static execution from DB2 for LUW customers as we do from DB2 for z/OS customers because the LUW platform does not have the same memory constraints as z/OS – and therefore LUW customers might be more likely to throw hardware at the problem to achieve greater dynamic cache hit ratios and hence improve throughput.
However, static SQL also provides predictable performance because the access plan is pre-determined and I often find users are happier with predictable response times rather than ultra-fast response which can deteriorate over time.
Static SQL execution also provides much more than predictable performance. By using it, you can significantly improve problem determination and traceability. You can also reduce the risk of SQL injection that comes with dynamically executed SQL. You can read about some of those benefits in this article. And there are additional benefits to pureQuery usage, such as literal consolidation or the ability to make emergency fixes to application SQL without changing the application, which you can read about in Sonali’s article on 2.2 features.
OK, now that I’ve hopefully convinced you that there are many, many reasons to consider pureQuery and static SQL execution for DB2 LUW environments, I would like to go ahead and share our performance results.
The measurement environment
Our measurements were done with a typical 3-tier environment of a client, application server, and database server as shown here.
A word about the “ERWW” application we use. ERWW is an OLTP application based on an order entry and tracking system that is designed to exercise the database tier much more than the application tier (that is to say, there is not a lot of business logic in the application). The ERWW workload models a wholesale supplier managing orders, and consists of seven transaction types. The frequency of transactions is set to simulate a realistic scenario; the mix used in the benchmark environment was 47 percent update transactions, 53 percent read-only transactions. The workload is triggered by a Java client program which generates HTTP requests for the required transaction mix.
Before I go into our results, I have to offer up the standard disclaimer that any of you who are familiar with performance work are used to hearing. The tests that we ran were done in a controlled environment where we were able to carefully control external factors that can influence the results. In particular, the type of application you run can significantly affect the results in terms of the mix of database-intensive work versus application-intensive work. The ERWW workload is a very database-intensive workload, and most of the work is done by the database server processing SQL requests. Therefore, by using pureQuery to optimize the database-server-side processing, we are in fact optimizing a large chunk of the workload. Consequently, the performance gains for this workload are significant. We chose ERWW because it was readily available to us, and not because we thought it would give us the best results. I guess what I am trying to say is that your results will vary.
OK, now that that’s out of the way: we measured static execution using both client optimization of an existing JDBC application and a ‘new’ version of the application written in the pureQuery annotated-method style. Performance is reported as a normalized throughput rate in transactions per second (ITR). The ITR is the notional throughput rate assuming that the CPUs are 100 percent busy. For example, consider an application with a transaction rate of 200 transactions per second at 75 percent CPU consumption. The ITR for this application would be 200 * 100/75 = 267 tps. This is the notional transaction rate that could be achieved if the CPUs were 100 percent busy and no other bottleneck were hit first.
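The ITR arithmetic from the example can be written out directly:

```python
def normalized_itr(observed_tps, cpu_busy_pct):
    """Notional throughput if the CPUs were 100% busy (and no other
    bottleneck were hit first): observed rate scaled up by CPU headroom."""
    return observed_tps * 100.0 / cpu_busy_pct

# The example from the text: 200 tps at 75% CPU busy -> about 267 tps
itr = normalized_itr(200, 75)
```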
We measured the JDBC workload with both 90% and 95% package cache hit ratios. To achieve a 90% package cache hit ratio with the ERWW workload, the DB2 Package Cache (PCKCACHESZ) was sized to 180 x 4k pages, and for a 95% hit ratio it was sized to 210 x 4k pages.
Here are the results with a 90% cache hit ratio. The results are shown on the vertical axis as the database ITR improvements over the baseline of JDBC.
As you can see, client optimization almost doubled throughput over the existing JDBC application. The new application that uses the pureQuery annotated-method API more than doubled the database transaction throughput.
The results with a 95% cache hit ratio are shown here.
Note that we achieved significant throughput improvements even with a high package cache hit ratio.
In summary, pureQuery and static execution can offer many benefits, one of which may be improving the performance of your data servers with your applications. By changing the dynamic SQL to static SQL, pureQuery should help you either achieve better throughput on your existing hardware, or reduce CPU consumption of your existing hardware, allowing you to load more tasks onto it. I highly recommend that you also check out the bigger picture around Java acceleration (including the other benefits I mention) as shown in this video.
Our team blog coordinator has been getting on me about posting to kick off the new year and to reflect a bit on how things went last year. So let me try to do that as concisely as possible.
Reflecting on 2009, I have to say that I am pleased to see that people are really starting to understand the business value of integrated data management (IDM) to help reduce inefficiencies in communications and to enable enforcement of rules across IT roles. And there has been even stronger interest, particularly from the z/OS community, around potential cost reductions with use of pureQuery in the application infrastructure. I know that our tech sales and enablement teams have been very busy with pureQuery savings assessments (briefly introduced in this video demo) and are constantly on the road talking to customers about pureQuery.
We’ve had some good customer successes this year that reflect the realization of the benefits of integrated data management. Here are a couple of notable ones:
Collaboration, innovation, and data governance with data modeling at La Caixa bank
La Caixa was challenged with control over an increasing number of database entities. They wanted to set up a way to enable administrative control over created entities that would help both their administrators and developers. They decided to go with InfoSphere Data Architect, in no small part because of its integration with Rational Software Development Platform and the rest of the Optim portfolio, as well as its extensibility. They are taking advantage of the support for source control to store physical data models in a central repository and allowing only approved objects to be promoted to production. And they’ve done some customization on the DDL files to enable the SCM to parse the files in order to decide where to install the other database objects. Data governance is improved, and the integration with Rational enables the administrators to grant access to data models to appropriate developers for improved collaboration.
Greater security and performance for openJPA applications using pureQuery at Payment Business Services (PBS)
PBS has developed a new Payment Services system programmed in Java using openJPA. The developers liked working with openJPA, but the database administrators were concerned about performance and security with dynamic SQL. So they decided to take advantage of the client optimization capabilities in pureQuery to leverage static SQL for that application. They run Optim pureQuery Runtime for z/OS under WebSphere Application Server to capture the dynamic SQL, and then they bind that captured SQL into DB2 packages for static execution. They use Optim Development Studio to view and work with the captured SQL for tuning, problem determination, and for communicating with developers. This marriage of pureQuery with new application infrastructures in the z/OS environment is really critical to helping enterprises grow technologically while maintaining appropriate levels of security and performance.
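As a rough sketch of the capture-then-bind flow PBS used (the property names below are from my recollection of the pureQuery client optimization documentation, and the paths are placeholders; check the Integrated Data Management information center for the exact syntax and where these properties are set for a WebSphere data source), the runtime configuration moves through two phases:

```
# Phase 1: capture. Run the application with capture on so the dynamic
# SQL is recorded into a pureQueryXml capture file.
captureMode = ON
executionMode = DYNAMIC
pureQueryXml = /opt/payments/capture/payments.pdqxml

# Phase 2: after binding the captured SQL into DB2 packages, switch the
# same (unchanged) application over to static execution.
captureMode = OFF
executionMode = STATIC
pureQueryXml = /opt/payments/capture/payments.pdqxml
```

The application code itself is untouched in both phases; only the runtime configuration changes.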
What’s coming in 2010?
Looking ahead, I’m driving the team toward delivering on specific aspects of the IDM vision. It is my belief that this year we will be able to deliver on even more integration among the portfolio members (and other IBM products) to fully realize a database performance management scenario that crosses roles from system operator all the way through to modifying and tuning the database access layer. You will likely see more integration with Cognos, SAP, and DataStage. And we’ll be delivering betas and new products to help DBAs do their jobs better. And, yes, there is a good deal of work being done to support z/OS this year. And, before you ask, yes, Windows 7 support is coming soon.
I hope to see you at our next big IOD EMEA event in Rome where I will go into more detail on some of the capabilities I can only hint at here.
I'm on the plane today returning from two customer visits in the D.C. area, where I was talking about the advantages of pureQuery in DB2 for z/OS environments for DBAs (as opposed to application developers). One uses WAS on z/OS and the other uses WAS on Linux for System z. Both customers were very familiar with how to manage and monitor CICS applications accessing DB2 data, and they felt out of control with Java application access to their mainframe. One customer even said, "we tried to tell our application teams to stop using Hibernate," because they couldn't see or review the SQL or know which application was running the dynamic SQL in their system. They said the application developers ignored them(!). I told them (hopefully, I wasn't too preachy) that they needed to stop resisting these contemporary application architectures (Java, .NET) and instead take back control by transforming those dynamic SQL statements into static SQL statements via pureQuery. I told them that if they continue with a "no Java will access MY mainframe" stance, they are headed down a path toward getting rid of their mainframe. The application development teams don't care what DBMS is used for their application, and if they are met with resistance from the z/OS team, they will simply deploy the applications on some other DBMS.
Related to this, I just watched this new, short 7-minute video on pureQuery for DB2 for z/OS, and it's a great quick summary of the value that can be gotten with pureQuery. In my mind, it's a no-brainer to implement pureQuery when you have Java or .NET applications accessing DB2 for z/OS.
If you want to get a good idea of what the process is like in more detail, check out the upcoming virtual tech briefing on this very topic. It's entitled pureQuery Deep Dive Part 1: Client optimization, scheduled for February 4th. Patrick Titzler, one of the authors of the original tutorial on this topic, and Chris Farrar, client optimization technical lead, will be presenting. It should be good.
Hi, all. I am a developer on our Optim Query Tuner products. A new feature called ‘Compare Access Plan’ shipped in Optim Query Tuner (for DB2 for z/OS) and Query Workload Tuner V2.2 Fix Pack 1. It allows DB2 for z/OS DBAs to compare the access path cost of SQL statements in packages between the latest two EXPLAIN versions. The tool generates an HTML report that provides high-level package cost and individual package details.
In this blog, I’ll describe how to launch the feature, and then I’ll describe some common scenarios.
Launching the Access Plan Comparison
First you must be connected to a DB2 for z/OS server. You can then invoke the feature by going to Data Source Explorer->Query Tuner->Compare Access Plans.
The following input screen is displayed:
Each of the options circled above is useful for different scenarios, which I’ll describe here.
Option: Compare two collections that contain the same packages
Use this option in scenarios such as:
- Identifying packages whose performance may be affected by a DB2 migration from one release to another. The comparison can help you identify packages that would benefit from a REBIND after migration, as well as packages that should not be rebound.
- Identifying, after running RUNSTATS, packages whose performance could be affected if a REBIND is performed.
Let’s look at the first scenario in more detail: version-to-version migration. In a DB2 for z/OS environment, after migration you have two options for running application programs that contain embedded SQL statements:
- REBIND – Rebind the existing packages to make use of new features that are introduced in the newer version. However, the rebind may alter (positively or negatively) the access plan of SQL statements in the underlying packages.
- Do nothing – Continue to operate with the packages that were bound in the earlier DB2 version. Thus there are no changes in the access plan of the SQL statements in the underlying packages. The flip side is that some of the SQL statements that could benefit from changes in the newer DB2 version would still operate in the old way.
Assume that you want to determine if you need to rebind the selected collections (group of packages) to exploit new features available in the new version of DB2 for better performance. You would need to do the following:
- Before the migration, bind any packages that are going to be analyzed with EXPLAIN(YES).
- After the migration, bind the packages to a new collection with EXPLAIN(YES) to ensure that the current packages are not overlaid and remain available for execution. The packages must be exactly the same in the before- and after-migration collections: the underlying application programs must not be changed in any way or rebuilt. Also be sure that the EXPLAIN output for both binds points to the same set of EXPLAIN tables. The access plan comparison report will show an error if these conditions are not met.
- From Optim Query Tuner, in the input screen shown above, you specify the sets of collections to be compared by entering the name of the first collection in the Source Collection Name field. For example, in a migration scenario, the source collection might be the name of the collection prior to the migration and the target collection might be the name of the collection following the migration. You can also use filtering to narrow down to a subset of packages.
- Click Add. See the screen-shot below for an example (you can repeat the above steps to add additional collections):
- Click Generate Report to get the comparison report. Query Tuner looks into the catalog tables and EXPLAIN output to compare the access plans of all the operative and valid SQL statements in the underlying packages before and after migration for the selected collection(s), and then generates an HTML comparison report.
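To make the bind steps above concrete, here is a rough sketch in DSN BIND PACKAGE subcommand style (the collection and member names are placeholders, and your shop's other bind options will of course differ):

```
Before migration, bind into the "before" collection with EXPLAIN output:
  BIND PACKAGE(COLL_PREMIG) MEMBER(PAYROLL) EXPLAIN(YES) ...

After migration, bind the same, unchanged DBRM into a new collection:
  BIND PACKAGE(COLL_POSTMIG) MEMBER(PAYROLL) EXPLAIN(YES) ...
```

Both binds must explain into the same set of EXPLAIN tables, and the member must be identical in both collections, for the comparison to be valid.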
The HTML report is self-descriptive. It displays results in two sections:
Summary Table (shown below): Lists all of the compared packages in decreasing order of performance degradation, so the package whose performance has degraded the most appears at the top. The package cost is determined by aggregating the cost of the individual SQL statements within the package. TOTAL_COST is used as the measure for computing performance degradation; however, TOTAL_COST was added to the EXPLAIN tables only in DB2 for z/OS V8 new-function mode. For releases that don't have this information, the PROCMS value is used instead. For easy navigation, you can click a package name to jump to the section where the package details are listed.
Package Table (shown below): Lists all of the SQL statements within a package in decreasing order of performance degradation. The same measure used for determining package degradation is used for computing the degradation of the individual SQL statements.
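The summary ranking can be sketched roughly as follows. This is a simplified illustration with hypothetical cost rows, not the tool's actual implementation; the real report reads TOTAL_COST (or PROCMS on releases that lack it) from the EXPLAIN tables:

```python
# Sketch of the summary-table ranking: aggregate per-statement costs into a
# package cost, then sort packages by how much their cost grew after migration.
# Falls back to PROCMS when TOTAL_COST is unavailable (pre-V8 EXPLAIN tables).

def package_cost(statements):
    """Sum the cost of each SQL statement, preferring TOTAL_COST over PROCMS."""
    return sum(s.get("TOTAL_COST", s.get("PROCMS", 0)) for s in statements)

def rank_by_degradation(before, after):
    """Return package names sorted by cost increase, worst degradation first.

    `before` and `after` map package name -> list of per-statement cost rows.
    Packages that appear in only one collection are excluded here; the real
    report flags them as errors instead.
    """
    degradation = {
        pkg: package_cost(after[pkg]) - package_cost(before[pkg])
        for pkg in before
        if pkg in after
    }
    return sorted(degradation, key=degradation.get, reverse=True)
```

For example, a package whose aggregate cost grew from 10 to 40 would be listed ahead of one that grew from 5 to 6.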
Use this report to determine whether or not the performance of the SQL statements in the target collection has degraded or whether any packages have errors. Packages that have errors are highlighted in red, indicating one of the following reasons:
- The collection name is invalid or no qualifying packages were found.
- The package content was different between the two collections.
- A package appears in only one of the collections.
- The explain entries are missing for a package in one or both of the collections.
You can save the report by clicking Save Report. Firefox users can also use the browser's save and print features to save the report.
What to do next
You would either rebind the packages to the original collection to make them live, or begin analyzing the statements that have been identified as having degraded performance and hold off on rebinding until you have resolved any performance issues. You can use Optim Query Tuner to analyze the problem SQL statements and receive specific advice about the problem queries.
Option: Compare two different versions of the same collection
Use this option when you rebind packages regularly and want to understand the changes in the access plans before and after the rebind. As with the previous scenario, bind any packages that are going to be analyzed with EXPLAIN(YES) before the rebind. Then rebind the packages to the original collection with EXPLAIN(YES) to ensure that the current packages are overlaid and remain available for execution. The packages pre- and post-rebind must be exactly the same: the underlying application programs must not be rebuilt or changed in any way.
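The rebind step might look something like the following DB2 for z/OS subcommand. This is only an illustrative sketch; COLLID and MYPKG are placeholder names:

```
REBIND PACKAGE(COLLID.MYPKG) EXPLAIN(YES)
```

Because REBIND replaces the package in place, the EXPLAIN(YES) option is what preserves a fresh set of access plan rows for the post-rebind comparison.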
Now, in the input screen, specify the name of the collection that contains the packages to be analyzed in the Collection Name field, as shown here:
Click Generate Report to get the comparison report. Query Tuner examines the catalog tables and EXPLAIN output to compare the access plans of all the operative and valid SQL statements in the underlying packages before and after the rebind of the selected collection(s), and then generates a comparison report. From the generated HTML report you can determine whether the later version of the collection has degraded performance.
I hope you find this new feature useful. Let us know what you think. You can also ask questions and discuss with other users on the Optim Query Tuning Solutions discussion forum.
The Optim Development Studio 2.2 Fix Pack 1 was made available on December 11. Although it is a fix pack, the focus is on customer requirements, so there are many "new" features focused on pureQuery development. These features come from suggestions by our customers, based on their extensive use of pureQuery on large-scale projects, to improve developer productivity. We hope you will benefit from them while building pureQuery applications. To see what pureQuery can bring to your organization, read my earlier article.
Here are some of the highlights of the fix pack.
Speedy iterative testing of pureQuery applications
- As part of pureQuery development, you are constantly changing the SQL in your annotated-style interfaces. You can now generate JUnit tests for those interfaces to speed up testing. Simply right-click the annotated interface in the Package Explorer and select Generate JUnit from the menu.
Improved SQL support in Java database applications
- Bridging the gap between Java and SQL has been one of the key benefits of pureQuery since the first release. SQL validation has been improved to allow live validation against the database when a live connection is available. This is a sure way to confirm that your SQL is valid, and you may want to keep this flag set whenever you have a live database connection. You can set the flag (called "Validate SQL problems against live database") from Window > Preferences > pureQuery, or from the project properties.
- SQL MERGE statements can now be auto-generated when pureQuery code is generated from one or more tables and at least one primary key exists on the table(s). Simply check the boxes for SQL MERGE statements in the pureQuery code generation wizard (they are checked by default).
Improved developer and DBA handshake
- We are providing yet another step toward seamless integration between the developer and DBA roles. When using Optim Development Studio to bind SQL into static SQL packages, you can now generate .bat or .sh files to share with the DBA, who can use them as a starting point for customizing the bind calls, or run them as is.
Improved usability for tuning queries in ODS using Optim Query Tuner
- You can generate reports that suggest how to tune SQL statements for better performance when Optim Query Tuner is installed in a shell-shared environment with Optim Development Studio. Simply right-click an SQL statement in your pureQuery application or in the SQL Outline, and select Generate Query Tuner report from the pureQuery menu.
Improved support for application development when using the pureQuery API
- Applications often need to access SQL from multiple schemas within the same project. This is now possible when building pureQuery annotated-style applications. You can specify different schemas (to qualify unqualified tables/views) or paths (to qualify unqualified procedures/functions) for each annotated interface in a pureQuery application, and these values will be used for SQL development and editing within your interfaces. To enable this, right-click the annotated-style interface and provide values using the Connection Settings.
Improved support for using existing beans with the pureQuery API
- If you use customized row handlers in your pureQuery applications, you can now get a head start on building them by customizing the ones that pureQuery generates. Using this new feature, you can externalize row handlers from an interface into separate files and enable reuse. To see this feature in action, right-click the annotated-style interface in the Package Explorer and select Create Handler Classes from the pureQuery menu.
In addition to the above Optim Development Studio features, there are numerous enhancements in the Optim pureQuery Runtime Fix Pack:
- Usability improvements in the static binder that let you specify package-level bind options and give you more control over which packages to bind from various artifacts
- Improved exception handling that gives you easy access to the underlying SQLSTATE and SQLCODE
- Improvements in serviceability and in the literal substitution feature
We hope you will download the fix packs and try some of these enhancements.
As always, let us know what features you would like to see, and please keep the feedback coming! Feel free to submit questions using the Optim Development Studio and pureQuery Runtime forum.