Managing the data lifecycle
As part of our Control Center replacement work, we are gathering information about customer usage of the Task Center. If you are using Task Center, please complete our survey about your usage and what you value most. We would love to have your input by the end of November. Link to survey here.
Happy IOD Season! Kimberly Madia here from the Optim product marketing team, and I am excited to be blogging for the first time. I wanted to share some details about key activities for the Optim tools at the IOD Global Conference.
You will also not want to miss our big tent events. This is a fantastic opportunity to hear from the executive team and other thought leaders including Arvind Krishna and Steven Adler.
• Expo Grand Opening Reception, Sunday, 6:00 – 8:00 PM
• InfoSphere Community Reception, Sunday, 8:00 – 10:00 PM
• Data Management Keynote Session, Monday 2:45 – 3:45 PM
• InfoSphere Keynote, Tuesday, 3:00 – 4:00 PM
• IOD 2010 Networking Party, Tuesday, 7:00 – 11:00 PM
• Information Governance Keynote, Wednesday, 10:00 – 11:00 AM
Next, you will have the opportunity to meet Optim tools product experts including several distinguished engineers and of course Curt Cotner, IBM Fellow, VP & CTO for Database Servers. Here are a few of the experts, session numbers and times when these folks will be available to you.
• MTE-3683, Curt Cotner, Tue, 26/Oct 4:30-5:30 PM
• MTE-3490, Bryan Smith, Tue, 26/Oct 1:00-2:00 PM
As for technical presentations, we have lots. Certainly too many to mention here, so I will just point out a few highlights. Look for the Optim tools sessions in both DB2 and Information Governance tracks.
We have hands on labs covering the following products:
We also have usability labs for the following products:
Finally, at IOD you will have numerous opportunities to see live product demos. Optim tools will be available in the InfoSphere demo area as well as all Data Management demo suites.
Well that wraps up the highlights. We in product marketing are certainly excited about this conference. Looking forward to seeing you there! Have a great weekend.
Big news last week on the Optim tools front.
Let me take you through the highlights.
Administration and Availability
I talk to many, many people about pureQuery and the benefits it can bring to performance, security, and problem determination of database applications. A key part of being able to deliver these enhancements to the wide variety of existing Java applications is the ability to layer pureQuery optimization onto existing applications without making changes to the application code. This process is called client optimization.
We have some enterprise customers now reaping the benefits of client optimization leading to this delivery which helps further improve security, manageability, and scalability to support complex enterprise environments. The main result of these requirements is delivery of a shared repository that can be used to securely store pureQuery artifacts. Not only does this provide added security, it provides the DBA with centralized control of the client optimization process. In other words, after the initial pureQuery Runtime setup within the application server by the administrator, you, the DBA, can modify the application’s pureQuery Runtime environment configuration without requiring further changes to the application server.
In addition, the centralized repository makes it possible to do continuous capturing, providing a step toward the vision of a more automated capture, configure, and bind process.
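As a rough sketch of what this looks like in practice (the property names are those documented for pureQuery Runtime, but the file name here is just an example; your release's documentation is the authority), a DBA-controlled client optimization configuration might resemble:

```properties
# Capture the SQL issued by the existing JDBC application (no source changes)
pdq.captureMode=ON

# Captured pureQuery artifacts; with the shared repository these can be stored
# centrally under DBA control rather than on each application server
pdq.pureQueryXml=capture.pdqxml

# Once the captured SQL has been bound into DB2 packages,
# switch the application to static execution
pdq.executionMode=STATIC
```

Because these settings live outside the application, the DBA can move from capture to static execution without touching application code.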
Another enhancement based on customer requests is the ability to group SQL into packages based on string tokens or special register values, which streamlines and simplifies package management. This can be useful if, for example, you are currently using special registers to set the schema qualifier during dynamic execution. Now you can group the appropriate SQL statements into separate packages (one per schema, for example) and do the same thing statically using the QUALIFIER BIND option.

To help make sense of these enhancements, the product documentation for client optimization has been updated with scenario-based guidance on running, deploying, managing, and troubleshooting the client optimization process.
If you are interested in hearing more about the capabilities in this release from product experts, I invite you to join us on June 24th for the Virtual Tech Briefing entitled pureQuery Deep Dive Part 3: Client optimization administration. My colleagues Patrick Titzler and Chris Farrar, who some of you may remember from the first pureQuery deep dive tech briefing, will provide an overview of the new capabilities, explain how they fit into our overall direction, and be available to answer questions. Patrick also plans to demo the tooling support for these features delivered in Optim Development Studio Fix Pack 3.
As I've been visiting various customers and presenting at conferences, I'm finding that there is a lot of interest in our new performance monitoring capabilities in Optim Performance Manager. One of the hottest topics in that area is the new statement-level performance metrics. These were significantly enhanced in DB2 LUW 9.7 and are planned for DB2 for z/OS version 10, currently in beta. There are two aspects of this technology that people find exciting:
The histograms also allow us to easily identify statements whose cost is volatile due to data skew. Combining this function with Optim Performance Manager's end-to-end monitoring, which allows each SQL statement to be attributed to the individual workload it originates from (end user, application, client machine, etc.), provides a pretty powerful tool. We believe this will be an important new capability in DB2 and our tools, since it holds the promise of allowing us to review performance problems after the fact without having to recreate the problem scenario. That will save all of us a lot of time, since in many cases it isn't easy to reconstruct the conditions that caused the performance problem.
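To make the data-skew point concrete, here is a small, purely illustrative example (not product code) of how volatile statement cost could be flagged from per-execution metrics, using the coefficient of variation (standard deviation divided by mean):

```java
import java.util.Arrays;

public class VolatileStatements {
    // Coefficient of variation of a statement's per-execution costs:
    // stddev / mean. A high value suggests volatile cost, e.g. from data skew.
    static double coefficientOfVariation(double[] costs) {
        double mean = Arrays.stream(costs).average().orElse(0.0);
        double variance = Arrays.stream(costs)
                .map(c -> (c - mean) * (c - mean))
                .average().orElse(0.0);
        return Math.sqrt(variance) / mean;
    }

    public static void main(String[] args) {
        double[] stable = {10, 11, 9, 10};   // similar cost on every execution
        double[] skewed = {5, 5, 5, 200};    // one key hits a far larger partition
        System.out.println(coefficientOfVariation(stable)); // low: predictable
        System.out.println(coefficientOfVariation(skewed)); // high: volatile
    }
}
```

A monitoring tool with per-statement histograms can surface exactly this kind of spread without anyone re-running the workload.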
Writing to you now from outside of Rome at IOD EMEA, but I wanted to take some time to focus on some of my impressions of this year's IDUG NA in Tampa:
The first one is branding. It’s been a year since we renamed many of our Data Studio tools to Optim (and InfoSphere), partly as a result of acquiring Princeton Softech, which was using the name Optim. But there is still confusion. I spend time on this topic in my presentations, explaining that Data Studio is no longer the product family name. I explain that the term Data Studio now refers to the no-charge offering that supports DB2 (on LUW, z/OS, and i5) as well as IDS, and that there is no such thing as a "set of Data Studio tools." Branding is (obviously) a very powerful thing, and a change in branding can take a long time to absorb. Kathy tells me that we recently updated our packaging article on developerWorks, so if you are still confused, you may want to check it out. It lays out the high-level functionality, names, etc. for the ‘portfolio previously known as Data Studio.’
Secondly, I was surprised to hear the number of customers that are using .NET to access DB2 for z/OS. I attended a Speed-SIG focused on development in Java and .NET, and all of the customers attending said that they have .NET apps accessing DB2 for z/OS. I knew that this existed, but the high frequency did surprise me. Of course, it gave us a good chance to talk about pureQuery Runtime and how it can help .NET applications in much the same way as Java applications.
Of course, it was great to see many familiar faces and lots of new ones in Tampa. The IDUG board of directors was happy to see a 20% increase in attendance over last year's North America conference. And don’t forget, IDUG Europe is planned for November 8-12 in Vienna.
I'm excited to be co-presenting with one of our customers at IOD EMEA in May. Roberta Barnabe, a DB2 Specialist for UGF (Unipol Gruppo Finanziario) Assicurazioni, will be sharing her experiences with IBM Optim Query Workload Tuner for DB2 for z/OS. If you are interested in z/OS query tuning and want to hear from someone besides us IBMers, I encourage you to come hear what she has to say about:
She ends with a summary statement that the tool provided an easier way to perform tuning analysis, and "BIG TIME SAVING !!!!!!"
I hope to see you there. Here are the particulars:
Thursday, May 20
11:45AM -12:35 PM
Hi, everyone. It seems as if lately I am always on the road. Today, I am actually back here at SVL, where, fortunately or unfortunately, depending on your point of view, it is raining, and raining hard. Tomorrow I head off to the UK for regional user group meetings. I also have Rome to look forward to, and that’s why I’m writing today, to give you the highlights at IOD EMEA for products in my portfolio.
When I look at the sessions, one major theme I see is around performance. Regardless of how ‘cool’ new applications and new technologies are, the issue of performance never goes away and often becomes even more critical as users become even more intolerant of slow response times or unavailability. So, although performance may not have the “coolness” factor of some topics, it’s bread and butter for most of our customers. The key is to make performance management more streamlined, less costly people-wise, and more focused on prevention rather than constant reaction.
If you’re going to IOD EMEA, I invite you to make a point of joining us at the following sessions to learn more about what we’re doing to help realize those goals. The sessions here are listed in chronological order, but you will of course need to check for changes at the conference venue itself. Some of the speakers may change as well. I look forward to seeing you there. If you haven’t signed up for an executive one on one with me through your IBM sales rep and are interested in doing so, here is a link to where you can find more information on how to do that.
We recently uploaded Fix Packs for many of our products. The major theme for these Fix Packs is support for Windows 7. Here are the product Fix Packs that include the Windows 7 support with links to the appropriate download documentation.
You may notice that High Performance Unload is not in that list. I will post again when that fix pack is ready.
Remember, you can find help from volunteers and other users at discussion forums that are available for all of these products. See the Integrated Data Management community space for links to all discussion forums on developerWorks.
Today we announced a major enhancement to our performance monitoring and management solution for DB2, with the 4.1 release of Optim Performance Manager for DB2 for Linux, UNIX, and Windows (I’ll use ‘OPM’ in the rest of this blog entry). This is a major new version of OPM that includes a significantly improved up-and-running experience and quicker problem resolution.
The biggest change you’ll see out of the box is the new Web-based user interface and redesigned problem resolution workflow. Our beta customers have given us great feedback in the development and refinement of this interface, and the result seems to be pretty well-received. One of our beta clients states that “The browser interface is easy to use, with intuitive dashboard displays and easy to understand presentation of information.” Even better, since it is Web-based, you can monitor databases anywhere without having to install software on various PCs.
The repository server collects performance metrics from the monitored database and stores them in a DB2 database. You can navigate through the stored data by time and see reports or dashboard data from the chosen time period. This allows for post-mortem problem detection and resolution, as well as proactive monitoring and trend analysis. There are also interactive reports, such as table space disk growth and Top n SQL statements, that you can generate from this stored information.
The team has done a lot of work on getting you up and running with the monitoring solution much faster. There is an integrated installer, and there are predefined monitoring profiles for a variety of workloads, such as BI, OLTP, SAP, QA, and Development. I’m really happy with the reports coming from the beta that installation and configuration is “easy.”
Finally, you can launch Optim Query Tuner from several of the dashboards, including the Active SQL and Extended Insight Dashboards, to do in-context query tuning on individual problem queries.
To realize the full power of the new integrations and lifecycle capabilities of this release, you should definitely check out the new package available in this release, Optim Performance Manager Extended Edition (OPM EE). It builds on the base capabilities of OPM by including Extended Insight (previously a separately orderable feature), integration with Tivoli monitoring solutions, and configuration tooling for DB2 Workload Manager.
If you like the value of Extended Insight, which provides key metrics and visualizations of SQL as it travels through the software stack for dynamic Java applications, you’ll really like that we’ve extended the capabilities in this release of OPM EE to include CLI applications. We also include out-of-the-box, customizable workload views for SAP, Cognos, DataStage, and InfoSphere SQL Warehouse to help get you going.
To round out our monitoring story to support the strong message we tell with static SQL, OPM EE now includes monitoring support for static SQL from Java applications. So if you want to take advantage of static SQL from Java, either by using the pureQuery API or by using client optimization for any JDBC application, you can get the Extended Insight information that you could previously only get for dynamic SQL.
We’ve also made it possible to import pureQuery application metadata into OPM so that detailed information about the application source (Java package name, method name, line number) can be displayed on the Extended Insight Dashboard for any individual SQL statement. This particular feature will require a pureQuery Runtime license.
Integration with Tivoli monitoring solutions smooths the handoff between system operators and the detailed database performance analysis performed by DBAs. The integration lets you drill into the deep database diagnostic capabilities of OPM EE directly from the Tivoli Enterprise Portal. One of our beta clients who does extensive work with clients using Tivoli found this integration very useful, pointing out that “Outsourced operations will love the Tivoli integration as it allows them to monitor multiple WAS and DB2 instances from a single point of control.”
Finally, OPM EE provides new tooling to significantly ease the configuration of DB2 Workload Manager. Although the existing WLM configuration tooling is still shipped with InfoSphere SQL Warehouse, this new tooling is integrated into OPM EE. Key monitoring information vital to workload management is presented in context so that you can do related configuration and validation within a single tool.
There is really way more than I can possibly cover in a blog entry. Here are links where you can find more information and see the user interface in action.
Earlier this week, I was invited to attend and present at the Inaugural Iowa DB2 Users Group Semi-Annual Event in Des Moines. It was a very well-organized event and had great attendance. During one of the breaks, I had someone tell me, "I understand the vision of heterogeneous database tooling, but please don't divert your attention away from further work on the tooling in support of DB2, in exchange for, say Oracle." It was a valid concern, since we spend a generous amount of time in our presentations talking about support for Oracle, SQL Server, Sybase... on the horizon. I tried to reassure her that we are still very focused on DB2, and that much of our support extending to other database platforms is done when we can leverage things like common SQL interfaces to the other platforms. I referred her to Curt Cotner's slide (included below – I also added Guardium for completeness) that he showed during the kickoff presentation as further evidence of this. I think she was convinced, or at least felt that I understood her concern.
Well, I'm glad I got your attention... One of the things that I've been pondering lately is what percentage of DBAs manage more than one database platform. Since this is an IBM blog, how many DBAs manage both DB2 for Linux, UNIX, and Windows and DB2 for z/OS? I've heard that they exist, but maybe I haven't been asking, and y'all haven't been telling. Does your shop have any of these elusive creatures? If so, what platforms do they support? If not, why not? Does this model work well for you? Why or why not? Is this something that you'd like to see happen?
One of our objectives in creating heterogeneous database tooling is to help reduce the learning curve when using another database platform. While this is still a work in progress for parts of our portfolio, I'd sure like to know how important this is to DBAs.
So, please tell... Post your answer here, or on the IBM Optim Facebook fan page (click on the Fans tab if you don’t see it), or, if you’re shy, you can just send me an email at bfsmith at us.ibm.com.
A quick note to let you know that I recently updated my article on packaging for the Optim solutions for database development, administration, and performance management (sometimes known as ‘heritage Data Studio’). The major changes were to include functional changes introduced with the Fix Packs that I blogged about earlier, to include the bundled solutions we offer, and to update links to announcement letters, relevant blog entries and more.
The intention of this article is to answer typical questions I get about feature/functionality, including supported databases, across the portfolio. I hope this helps.
I hope you have the upcoming Virtual Tech Briefing on pureQuery by Information Champion Dan Galvin on your calendars. We had a meeting to discuss it today, and hearing about pureQuery (a topic I spend a fair amount of time on) from someone who is not an IBMer is refreshing; I hope you feel the same. I think it’s important for people to hear a perspective from someone who is both really knowledgeable on the topic and has real-world implementation expertise as well.
Anyway, I hope you can join me, Dan, and Sonali (who will demo some cool tooling features) on March 11th at 10 AM Pacific. Please register, and even if you cannot attend that day, you can at least access the replay.
I wanted to point you to a new video series on pureQuery development, specifically developing a new application using pureQuery annotated methods. If you are new to pureQuery and would like a quick overview of the development experience and the productivity benefits of using Optim Development Studio, then these videos are a must-see. When using DB2, the videos show you, in action, the potential for improving SQL performance in your application by using static SQL. The series is based on a tutorial in the Integrated Data Management information center, so you have that option for learning as well.
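For readers who have not seen annotated-method style before, a pureQuery data access interface looks roughly like the sketch below. The table, column, and method names are hypothetical; the @Select and @Update annotations come from the pureQuery API, and compiling this requires the Optim pureQuery Runtime libraries on the classpath:

```java
import com.ibm.pdq.annotation.Select;
import com.ibm.pdq.annotation.Update;

// The SQL is declared on the interface; pureQuery generates the
// implementation, and the statements can then be bound into DB2
// packages for static execution.
public interface CustomerData {
    @Select(sql = "SELECT CID, NAME FROM CUSTOMER WHERE CID = ?")
    Customer getCustomer(int cid);

    @Update(sql = "UPDATE CUSTOMER SET NAME = ? WHERE CID = ?")
    int renameCustomer(String name, int cid);
}
```

Because the SQL lives on the interface rather than being assembled at run time, DBAs can review, tune, and bind it before the application ever runs.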
If you are a seasoned pureQuery architect or developer and want to introduce the capabilities to your co-workers or friends, this video series will come to your aid.
I’m the performance architect for, among other things, the pureQuery platform. My team has the responsibility for not just ensuring that our products perform well, but also to help produce verifiable performance numbers that we can share with confidence.
I’m happy to say that we are ready to share our performance numbers for pureQuery access to DB2 for Linux, UNIX and Windows. (We already have some great numbers published for z/OS for both Java and .NET).
The goal of this particular performance test was to measure throughput improvement using static SQL execution, which is possible to do even for existing JDBC applications with no change to the application source code. The increased throughput comes mainly as a result of saving the cost of preparing the SQL when using static vs dynamic SQL. We typically don’t see the same level of interest in static execution from DB2 for LUW customers as we do from DB2 for z/OS customers because the LUW platform does not have the same memory constraints as z/OS – and therefore LUW customers might be more likely to throw hardware at the problem to achieve greater dynamic cache hit ratios and hence improve throughput.
However, static SQL also provides predictable performance because the access plan is pre-determined and I often find users are happier with predictable response times rather than ultra-fast response which can deteriorate over time.
Static SQL execution also provides much more than predictable performance. By using it, you can significantly improve problem determination and traceability. You can also reduce the risk of SQL injection compared to dynamically executed applications. You can read about some of those benefits in this article. And there are additional benefits to pureQuery usage, such as literal consolidation or the ability to make emergency fixes to application SQL without changing the application, which you can read about in Sonali’s article on 2.2 features.
OK, now that I’ve hopefully convinced you that there are many, many reasons to consider pureQuery and static SQL execution for DB2 LUW environments, I would like to go ahead and share our performance results.
The measurement environment
A word about the “ERWW” application we use. ERWW is an OLTP application based on an order entry and tracking system that is designed to exercise the database tier much more than the application tier (that is to say, there is not a lot of business logic in the application). The ERWW workload models a wholesale supplier managing orders, and consists of seven transaction types. The frequency of transactions is set to simulate a realistic scenario; the mix used in the benchmark environment was 47 percent update transactions, 53 percent read-only transactions. The workload is triggered by a Java client program which generates HTTP requests for the required transaction mix.
Before I go into our results, I have to offer up the standard disclaimer that any of you who are familiar with performance work are used to hearing. The tests that we ran were done in a controlled environment where we were able to carefully control external factors that can influence the results. In particular, the type of application you run can significantly affect the results in terms of the mix of database-intensive work versus application-intensive work. The ERWW workload is a very database-intensive workload, and most of the work is done by the database server processing SQL requests. Therefore, by using pureQuery to optimize the database server side processing, we are in fact optimizing a large chunk of the workload. Consequently the performance gains for this workload are significant. We chose ERWW because it was readily available to us, and not because we thought it would give us the best results. I guess what I am trying to say is that your results will vary.
OK, now that that’s out of the way, on to what we measured. We measured static execution using both client optimization of an existing JDBC application and a ‘new’ version of the application written in pureQuery annotated-method style. The performance is reported as a normalized throughput rate in transactions per second (ITR). The ITR is the notional throughput rate assuming that the CPUs are 100 percent busy. For example, consider an application with a transaction rate of 200 transactions per second at 75 percent CPU consumption. The ITR for this application would be 200 * 100/75 = 267 tps. This is the notional transaction rate that could be achieved if the CPUs were 100 percent busy and no other bottleneck is hit first.
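The normalization above is simple enough to sketch in a few lines; this is just the arithmetic from the example, not benchmark code:

```java
public class ItrCalc {
    // ITR: the notional throughput if the CPUs were 100% busy,
    // i.e. the observed rate scaled up by the unused CPU headroom.
    static double itr(double observedTps, double cpuBusyPercent) {
        return observedTps * 100.0 / cpuBusyPercent;
    }

    public static void main(String[] args) {
        // The example from the text: 200 tps measured at 75% CPU consumption
        System.out.printf("ITR = %.0f tps%n", itr(200, 75)); // prints "ITR = 267 tps"
    }
}
```

Reporting ITR rather than raw throughput lets runs at different CPU utilizations be compared on an equal footing.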
We measured the JDBC workload with both 90% and 95% package cache hit ratios. To achieve a 90% package cache hit ratio with the ERWW workload, the DB2 Package Cache (PCKCACHESZ) was sized to 180 x 4k pages, and for a 95% hit ratio it was sized to 210 x 4k pages.
Here are the results with a 90% cache hit ratio. The results are shown on the vertical axis as the database ITR improvements over the baseline of JDBC.
As you can see, client optimization almost doubled throughput over the existing JDBC application. The new application that uses pureQuery method style API more than doubled the database transaction throughput.
Note that we achieved significant throughput improvements even with a high package cache hit ratio.
In summary, pureQuery and static execution can offer many benefits, one of which may be improving the performance of your data servers with your applications. By changing the dynamic SQL to static SQL, pureQuery should help you either achieve better throughput on your existing hardware, or reduce CPU consumption of your existing hardware, allowing you to load more tasks onto it. I highly recommend that you also check out the bigger picture around Java acceleration (including the other benefits I mention) as shown in this video.
Our team blog coordinator has been getting on me about posting to kick off the new year and to reflect a bit on how things went last year. So let me try to do that as concisely as possible.
Reflecting on 2009, I have to say that I am pleased to see that people are really starting to understand the business value of integrated data management (IDM) to help reduce inefficiencies in communications and to enable enforcement of rules across IT roles. And there has been even stronger interest, particularly from the z/OS community, around potential cost reductions with use of pureQuery in the application infrastructure. I know that our tech sales and enablement teams have been very busy with pureQuery savings assessments (briefly introduced in this video demo) and are constantly on the road talking to customers about pureQuery.
We’ve had some good customer successes this year that reflect the realization of the benefits of integrated data management. Here are a couple of note:
La Caixa was challenged with control over an increasing number of database entities. They wanted to set up a way to enable administrative control over created entities that would help both their administrators and developers. They decided to go with InfoSphere Data Architect, in no small part because of its integration with Rational Software Development Platform and the rest of the Optim portfolio, as well as its extensibility. They are taking advantage of the support for source control to store physical data models in a central repository and allowing only approved objects to be promoted to production. And they’ve done some customization on the DDL files to enable the SCM to parse the files in order to decide where to install the other database objects. Data governance is improved, and the integration with Rational enables the administrators to grant access to data models to appropriate developers for improved collaboration.
Greater security and performance for openJPA applications using pureQuery at Payment Business Services (PBS)
PBS has developed a new Payment Services system programmed in Java using openJPA. The developers liked working with openJPA, but the database administrators were concerned about performance and security with dynamic SQL. So they decided to take advantage of the client optimization capabilities in pureQuery to leverage static SQL for that application. They run Optim pureQuery Runtime for z/OS under WebSphere Application Server to capture the dynamic SQL, and then they bind that captured SQL into DB2 packages for static execution. They use Optim Development Studio to view and work with the captured SQL for tuning, problem determination, and communicating with developers. This marriage of pureQuery with new application infrastructures in the z/OS environment is really critical to helping enterprises grow technologically while maintaining appropriate levels of security and performance.
What’s coming in 2010?
Looking ahead, I’m driving the team toward delivering on specific aspects of the IDM vision. It is my belief that this year we will be able to deliver on even more integration among the portfolio members (and other IBM products) to fully realize a database performance management scenario that crosses roles from system operator all the way through to modifying and tuning the database access layer. You will likely see more integration with Cognos, SAP, and DataStage. And we’ll be delivering betas and new products to help DBAs do their jobs better. And, yes, there is a good deal of work being done to support z/OS this year. And, before you ask, yes, Windows 7 support is coming soon.
I'm on the plane today returning from two customer visits in the D.C. area, where I was talking about the advantages of pureQuery in DB2 for z/OS environments for DBAs (as opposed to application developers). One uses WAS on z/OS, and the other uses WAS on Linux for System z. Both customers were very familiar with how to manage and monitor CICS applications accessing DB2 data, and they felt out of control with Java application access to their mainframe. One customer even said, "we tried to tell our application teams to stop using Hibernate," because they couldn't see or review the SQL or know which application was running the dynamic SQL in their system. They said the application developers ignored them(!). I told them (hopefully without being too preachy) that they needed to stop resisting these contemporary application architectures (Java, .NET) and instead take back control by transforming those dynamic SQL statements into static SQL statements via pureQuery. I told them that if they continue with the "no Java will access MY mainframe" stance, they are headed down a path of getting rid of their mainframe. The application development teams don't care what DBMS is used for their application, and if met with resistance from the z/OS team, they will simply deploy the applications on some other DBMS.
Related to this, I just watched this new short 7-minute video on pureQuery for DB2 for z/OS, and it's a great quick summary of the value you can get from pureQuery. In my mind, it's a no-brainer to implement pureQuery when you have Java or .NET applications accessing DB2 for z/OS.
If you want to get a good idea of what the process is like in more detail, check out the upcoming virtual tech briefing on this very topic. It's entitled pureQuery Deep Dive Part 1: Client optimization, scheduled for February 4th. Patrick Titzler, one of the authors of the original tutorial on this topic, and Chris Farrar, client optimization technical lead, will be presenting. It should be good.
Hi, all. I am a developer on our Optim Query Tuner products. A new feature called ‘Compare Access Plan’ was shipped in Optim Query Tuner (for DB2 for z/OS) and Query Workload Tuner V2.2 fix pack 1 that allows DB2 for z/OS DBAs to compare the access path cost of SQL statements in packages between the latest two EXPLAIN versions. The tool generates an HTML report that provides high-level package cost and individual package details.
In this blog, I’ll describe how to launch the feature, and then I’ll describe some common scenarios.
Launching the Access Plan Comparison
First you must be connected to a DB2 for z/OS server. You can then invoke this feature by going to Data Source Explorer->Query Tuner->Compare Access Plans.
The following input screen is displayed:
Each of the options circled above is useful for different scenarios, which I’ll describe here.
Option: Compare two collections that contain the same packages
Use this option in scenarios such as:
Let’s look at the first scenario in more detail - version-to-version migration. In a DB2 for z/OS environment, after migration you have two options for running application programs that contain embedded SQL statements:
Assume that you want to determine whether you need to rebind the selected collections (groups of packages) to exploit new features available in the new version of DB2 for better performance. You would need to do the following:
The HTML report is self-descriptive. It displays results in two sections:
Summary Table shown below:
Lists all the compared packages in decreasing order of performance degradation, so the package whose performance has degraded the most is shown at the top. The package cost is determined by aggregating the cost of the individual SQL statements within the package. TOTAL_COST is used as the measure for computing performance degradation; however, TOTAL_COST was added to the EXPLAIN tables only in DB2 for z/OS V8 new-function mode, so for releases that don’t have this information, the PROCMS value is used to determine the performance degradation instead. For easy navigation, you can click a package name to jump to the section where that package's details are listed.
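The ordering logic described above can be sketched roughly as follows. This is a minimal illustration of the idea, not the tool's actual implementation; the package names and cost values are made up:

```python
# Illustrative sketch: order packages by access-path cost regression between
# two EXPLAIN versions, using TOTAL_COST when both versions have it and
# falling back to PROCMS (processor milliseconds) otherwise.

def degradation(old, new):
    """Percent change in cost between the two EXPLAIN versions."""
    if old.get("TOTAL_COST") is not None and new.get("TOTAL_COST") is not None:
        before, after = old["TOTAL_COST"], new["TOTAL_COST"]
    else:
        before, after = old["PROCMS"], new["PROCMS"]
    return (after - before) / before * 100.0

def summary(packages):
    """Sort packages so the worst regression appears first."""
    rows = [(name, degradation(old, new)) for name, (old, new) in packages.items()]
    return sorted(rows, key=lambda r: r[1], reverse=True)

packages = {
    "PKGA": ({"TOTAL_COST": 100.0}, {"TOTAL_COST": 250.0}),  # degraded 150%
    "PKGB": ({"TOTAL_COST": 80.0}, {"TOTAL_COST": 60.0}),    # improved
    "PKGC": ({"PROCMS": 40.0}, {"PROCMS": 50.0}),            # pre-V8: PROCMS fallback
}
for name, pct in summary(packages):
    print(f"{name}: {pct:+.1f}%")
```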
Package Table shown below:
Lists all the SQL statements within a package in decreasing order of performance degradation. The same measure that was used to determine package-level degradation is also used to compute the degradation of individual SQL statements.
Use this report to determine whether the performance of the SQL statements in the target collection has degraded, and whether any packages have errors. Packages with errors are highlighted in red, indicating one of the following reasons:
What to do next
You can either rebind the packages to the original collection to make them live, or begin analyzing the statements that have been identified as having degraded performance and hold off on rebinding them until you have resolved any performance issues. You can use Optim Query Tuner to analyze the problem SQL statements and to receive specific advice about the problem queries.
Option: Compare two different versions of the same collection
Use this option for those cases when you rebind packages regularly and want to understand the changes in the access plan before and after the rebind. As with the previous scenario, bind any packages that are going to be analyzed with EXPLAIN(YES) before the rebind. Then rebind the packages to the original collection with EXPLAIN(YES) to ensure that the current packages are overlaid and are available for execution. The packages pre- and post-rebind must be exactly the same; the underlying application programs must not be rebuilt or changed in any way.
Now, in the input screen you will specify the name for the collection that contains the packages to be analyzed in the Collection Name field as shown here:
Click Generate Report to get the comparison report. Query Tuner looks at the catalog tables and EXPLAIN output to compare the access plans of all the operative and valid SQL statements in the underlying packages before and after the rebind of the selected collection(s), and then generates a comparison report. From the generated HTML report you can determine whether or not the later version of the collection has degraded performance.
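The per-statement comparison can be pictured like this. Again, this is only a rough sketch of the idea under my own assumptions, not the tool's code; the statement IDs and costs are invented, and the "error" case stands in for a package the report would highlight in red:

```python
# Sketch: match each statement's cost before and after the rebind,
# flag regressions, and surface statements with missing EXPLAIN data
# (analogous to the red-highlighted error packages in the report).

def compare_collection(before, after):
    report = []
    for stmt_id, old_cost in before.items():
        new_cost = after.get(stmt_id)
        if new_cost is None:
            report.append((stmt_id, None, "error: no post-rebind EXPLAIN row"))
        else:
            pct = (new_cost - old_cost) / old_cost * 100.0
            report.append((stmt_id, pct, "degraded" if pct > 0 else "ok"))
    # Errors first, then worst regressions.
    report.sort(key=lambda r: (r[1] is not None, -(r[1] or 0.0)))
    return report

before = {"STMT1": 10.0, "STMT2": 20.0, "STMT3": 5.0}
after = {"STMT1": 25.0, "STMT2": 18.0}  # STMT3 has no EXPLAIN row after the rebind
for row in compare_collection(before, after):
    print(row)
```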
I hope you find this new feature useful. Let us know what you think. You can also ask questions and discuss with other users on the Optim Query Tuning Solutions discussion forum.
Optim Development Studio 2.2 Fix Pack 1 was made available on December 11. Although it is a fix pack, the focus is on customer requirements, so there are many "new" features aimed at pureQuery development. These features come from suggestions by our customers, based on their extensive use of pureQuery on large-scale projects, to improve developer productivity. We hope you will benefit from them while building pureQuery applications. To see what pureQuery can bring to your organization, read my earlier article.
Here are some of the highlights of the fix pack.
Speedy iterative testing of pureQuery applications - As part of pureQuery development, you are constantly changing the SQL in your annotated-style interfaces. You can now generate JUnit tests for those interfaces and speed up testing. Simply right-click the annotated interface in the Package Explorer and select Generate JUnit from the menu.
Improved SQL support in Java database applications
Improved developer and DBA handshake
We are taking yet another step toward seamless integration between the developer and DBA roles. When using Optim Development Studio to bind SQL into static SQL packages, you can now generate .bat or .sh files that can be shared with the DBA. The DBA can use them as a starting point for customizing the bind calls, or run them as is.
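To make the idea concrete, here is a toy sketch of generating a shareable bind script: collect one entry per package into a shell script the DBA can review and customize. The collection and package names are made up, and the command lines are placeholders, not the actual output that Optim Development Studio emits:

```python
# Toy illustration: emit a skeleton .sh bind script, one entry per package,
# that a DBA can edit before running. Placeholder commands only.
import io

def write_bind_script(packages, collection, out):
    out.write("#!/bin/sh\n")
    out.write(f"# Bind script for collection {collection} -- review before running.\n")
    for pkg in packages:
        out.write(f"echo 'binding {collection}.{pkg}'\n")
        out.write(f"# <your site's bind command for {collection}.{pkg} goes here>\n")

buf = io.StringIO()
write_bind_script(["ACCOUNTDAO", "ORDERDAO"], "MYCOLL", buf)
print(buf.getvalue())
```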
Improved usability for tuning queries in ODS using Optim Query Tuner
You can generate reports that suggest how to tune SQL statements for better performance when Optim Query Tuner is installed in a shell-shared environment with Optim Development Studio. Simply right-click an SQL statement in your pureQuery application or in the SQL Outline, and select Generate Query Tuner report from the pureQuery menu.
Improved support for application development when using the pureQuery API
Applications often need to access SQL from multiple schemas within the same project. This is now possible when building pureQuery annotated-style applications. You can specify different schemas (to qualify unqualified tables and views) or paths (to qualify unqualified procedures and functions) for each annotated interface in a pureQuery application, and these values will be used for SQL development and editing within your interfaces. To enable this, right-click the annotated-style interface and provide values using the Connection Settings.
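If the phrase "qualify unqualified tables" is new to you, the mechanism behind it can be illustrated in a few lines. This is a generic sketch of name qualification, not pureQuery's implementation; the schema and table names are invented:

```python
# Tiny illustration of qualifying unqualified names: a name without an
# explicit schema gets the interface's configured default schema prepended,
# while an already-qualified name is left alone.

def qualify(name, default_schema):
    """Prepend the configured schema only when the name is unqualified."""
    return name if "." in name else f"{default_schema}.{name}"

print(qualify("ORDERS", "SALES"))       # -> SALES.ORDERS
print(qualify("HR.EMPLOYEE", "SALES"))  # -> HR.EMPLOYEE
```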
Improved support to use existing beans when using pureQuery API
If you use customized row handlers in your pureQuery applications, you can now get a head start on building them by customizing the ones that pureQuery generates. Using this new feature, you can externalize row handlers from an interface into separate files and enable reuse. To see this feature in action, right-click the annotated-style interface in the Package Explorer and select Create Handler Classes from the pureQuery menu.
In addition to the above Optim Development Studio features, there are numerous enhancements in the Optim pureQuery Runtime Fix Pack, including:
As always, let us know what features you would like to see and please keep the feedback coming! Feel free to submit questions using the Optim Development Studio and pureQuery Runtime forum on developerWorks.