Today IBM announced updates to several performance management products, all of which are available individually or as part of the DB2 Advanced Enterprise Server Edition.
We all know that poorly performing applications can lead to lost revenue and customer dissatisfaction. We have made numerous improvements across these products to make it easier to identify and resolve issues before they can impact your business. There are far too many improvements to list, but some notable ones are described below.
IBM InfoSphere Optim Performance Manager V5.2
We have added enhancements to better analyze stored procedure performance: rank stored procedures and drill down to find the poorly performing SQL. Reporting has been extended with scheduling, emailing, and retention management. And to help you automate actions, we have provided the ability to trigger user-defined jobs in response to alerts.
IBM InfoSphere Optim Query Workload Tuner for DB2 Linux, UNIX, Windows V3.2
Once you have identified SQL that could use some help, use Query Workload Tuner to get suggestions and analyze possible improvements, or use it to analyze SQL before it ever goes into production. V3.2 has many enhancements to help with "what-if" analysis: use virtual indexes to analyze the impact of adding, dropping, or changing an index, and use the plan comparator to see the resulting changes to access plans.
IBM InfoSphere Optim pureQuery Runtime for Linux, Unix Windows V3.2
Suppose you have poorly performing SQL from a third-party application. pureQuery Runtime can help with that: use pureQuery to swap out the poorly performing SQL. With V3.2, new configuration scripts make it easier to deploy on WebSphere Application Server, and configuration changes can take effect immediately.
IBM Data Studio V3.2
IBM Data Studio has also been updated. Many enhancements were made to alerts and scheduling in the web console, and users can now define their own alert view preferences. The client was updated to make debugging even simpler and more robust: you can debug triggers and PL/SQL anonymous blocks, and you can now view global and package variables, associative arrays, and arrays of rows in the debugger. It is also much easier to recompile a routine for debugging; it is almost automatic. Starting with V3.2, the Data Studio administration client will be available via DB2 Express-C and refreshed as needed. The full client is available via the normal Data Studio download page. Remember, for the full client, when using the Installation Manager option, only the options selected will be downloaded and installed on your system. If you are using the administration client, try the full client.
Finally, connection management is now easier for many of these products. IBM InfoSphere Optim Performance Manager, Query Workload Tuner, and Data Studio can now obtain and stay synchronized with connection information from the Optim Configuration Manager's database repository.
For more information, check the announcement here
I talk to many, many people about pureQuery and the benefits it can bring to performance, security, and problem determination of database applications. A key part of being able to deliver these enhancements to the wide variety of existing Java applications is the ability to layer pureQuery optimization onto existing applications without making changes to the application code. This process is called client optimization.
Some of our enterprise customers are already reaping the benefits of client optimization, and their requirements led to this delivery, which further improves security, manageability, and scalability to support complex enterprise environments. The main result of these requirements is the delivery of a shared repository that can be used to securely store pureQuery artifacts. Not only does this provide added security, it gives the DBA centralized control of the client optimization process. In other words, after the administrator's initial pureQuery Runtime setup within the application server, you, the DBA, can modify the application's pureQuery Runtime configuration without requiring further changes to the application server.
In addition, the centralized repository makes it possible to do continuous capturing, providing a step toward the vision of a more automated capture, configure, and bind process.
Another enhancement based on customer requests is the ability to group SQL into packages based on string tokens or special register values to streamline and simplify package management. An example of where this can be useful is if you are currently using special registers to set the schema qualifier during dynamic execution. Now you can package up the appropriate SQL statements into separate packages (one per schema for example) and do the same thing statically using the QUALIFIER BIND option.
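As a rough illustration (the ORDERS table, the ORDER_ID and STATUS columns, and the EAST/WEST schemas below are made up), here is the dynamic pattern this replaces, with the static alternative summarized in the trailing comment:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Statement;

public class SchemaSwitchExample {
    // Dynamic approach: the application sets the CURRENT SCHEMA special register
    // at run time, so the same unqualified statement resolves to EAST.ORDERS or
    // WEST.ORDERS depending on the register value.
    public static void printOrderStatus(Connection con, String schema, int orderId) throws Exception {
        Statement set = con.createStatement();
        set.execute("SET CURRENT SCHEMA " + schema); // e.g. "EAST" or "WEST" (trusted value)
        set.close();

        PreparedStatement ps = con.prepareStatement(
                "SELECT STATUS FROM ORDERS WHERE ORDER_ID = ?");
        ps.setInt(1, orderId);
        ResultSet rs = ps.executeQuery();
        while (rs.next()) {
            System.out.println(rs.getString(1));
        }
        rs.close();
        ps.close();
    }
    // Static equivalent (conceptual): capture the same unqualified statement with
    // pureQuery, place it in one package per schema, and bind each package with
    // QUALIFIER EAST or QUALIFIER WEST so resolution happens at bind time instead.
}
```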
To help make sense of these enhancements, the product documentation for client optimization has been updated with scenario-based guidance on running, deploying, managing, and troubleshooting the client optimization process.
If you are interested in hearing more about the capabilities in this release from product experts, I invite you to join us on June 24th for the Virtual Tech Briefing entitled pureQuery Deep Dive Part 3: Client optimization administration. My colleagues Patrick Titzler and Chris Farrar, who some of you may remember from the first pureQuery deep dive tech briefing, will provide an overview of the new capabilities and how they fit into the overall direction we’re going, and will be available to answer questions. Patrick also plans to do some demos of the tooling support for these features delivered in the Optim Development Studio Fix Pack 3.
- Download document that links to Optim Development Studio 2.2 Fix Pack 3
- Download document that links to pureQuery Runtime for Linux, UNIX, and Windows 2.2 Fix Pack 3. The APAR number for the pureQuery Runtime for z/OS delivery is PM15915. The PTF is UK57844
- pureQuery client optimization topic in the Integrated Data Management Information Center
- Register for the June 24th Virtual Tech Briefing on the enhancements delivered in this release
I hope you have the upcoming Virtual Tech Briefing on pureQuery by Information Champion Dan Galvin on your calendars. We had a meeting to discuss it today, and hearing about pureQuery (a topic I spend a fair amount of time on) from someone who is not an IBMer is refreshing; I hope you feel the same. I think it’s important for people to hear a perspective from someone who is really knowledgeable on the topic and has real-world implementation expertise as well.
If you are an architect, developer, or an interested DBA, you will want to hear Dan’s perspective as he has lots of experience in building enterprise applications. Even as we discussed the topics and flow of the briefing, we got into some interesting discussions about Java data access patterns and data mapping approaches.
The nice thing about pureQuery is that you don’t necessarily have to get into battles over the benefits of one persistence approach over another. pureQuery acts as a complementary technology that can help developers and their DBAs with problem resolution and database performance for any JDBC-based data access (Hibernate, JPA, iBatis, and so on). But it’s also nice to have the pureQuery API as a choice for people who are really interested in a lightweight, more database-aware API that can leverage database best practices and has slick tooling in Optim Development Studio for rapid generation of the data access layer.
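For anyone who hasn't seen the annotated-method style, here is a minimal sketch (the OrderData interface, Order bean, and ORDERS table are invented for illustration, so treat the details as an approximation rather than production code): the SQL sits right beside the Java method signature, and pureQuery supplies the implementation and can bind the statements statically.

```java
import java.math.BigDecimal;
import java.util.Iterator;
import com.ibm.pdq.annotation.Select;
import com.ibm.pdq.annotation.Update;

// Hypothetical annotated-method interface; pureQuery generates the
// implementation and, if you choose, the statements can be bound into packages.
public interface OrderData {

    @Select(sql = "SELECT ORDERID, CUSTOMERID, TOTAL FROM ORDERS WHERE ORDERID = ?")
    Order getOrder(int orderId);

    @Select(sql = "SELECT ORDERID, CUSTOMERID, TOTAL FROM ORDERS WHERE CUSTOMERID = ?")
    Iterator<Order> getOrdersForCustomer(int customerId);

    @Update(sql = "UPDATE ORDERS SET TOTAL = ? WHERE ORDERID = ?")
    int updateTotal(BigDecimal total, int orderId);
}

// A plain bean that the result columns map onto; an implementation of OrderData
// is obtained from the pureQuery runtime for a JDBC Connection rather than hand-coded.
class Order {
    public int orderId;
    public int customerId;
    public BigDecimal total;
}
```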
Anyway, I hope you can join me, Dan, and Sonali (who will demo some cool tooling features) on March 11th at 10 AM Pacific. Please register, and even if you cannot attend that day, you can at least access the replay.
I’m the performance architect for, among other things, the pureQuery platform. My team has the responsibility for not just ensuring that our products perform well, but also to help produce verifiable performance numbers that we can share with confidence.
I’m happy to say that we are ready to share our performance numbers for pureQuery access to DB2 for Linux, UNIX and Windows. (We already have some great numbers published for z/OS for both Java and .NET).
The goal of this particular performance test was to measure throughput improvement using static SQL execution, which is possible to do even for existing JDBC applications with no change to the application source code. The increased throughput comes mainly as a result of saving the cost of preparing the SQL when using static vs dynamic SQL. We typically don’t see the same level of interest in static execution from DB2 for LUW customers as we do from DB2 for z/OS customers because the LUW platform does not have the same memory constraints as z/OS – and therefore LUW customers might be more likely to throw hardware at the problem to achieve greater dynamic cache hit ratios and hence improve throughput.
However, static SQL also provides predictable performance because the access plan is pre-determined, and I often find users are happier with predictable response times than with ultra-fast response times that can deteriorate over time.
Static SQL execution also provides much more than predictable performance. By using it, you can significantly improve problem determination and traceability. You can also reduce the risk of SQL injection that comes with dynamically executed SQL. You can read about some of those benefits in this article. And there are additional benefits to pureQuery usage, such as literal consolidation or the ability to make emergency fixes to application SQL without changing the application, which you can read about in Sonali’s article on 2.2 features.
OK, now that I’ve hopefully convinced you that there are many, many reasons to consider pureQuery and static SQL execution for DB2 LUW environments, I would like to go ahead and share our performance results.
The measurement environment
Our measurements were done with a typical 3-tier environment of a client, application server, and database server as shown here.
A word about the “ERWW” application we use. ERWW is an OLTP application based on an order entry and tracking system that is designed to exercise the database tier much more than the application tier (that is to say, there is not a lot of business logic in the application). The ERWW workload models a wholesale supplier managing orders, and consists of seven transaction types. The frequency of transactions is set to simulate a realistic scenario; the mix used in the benchmark environment was 47 percent update transactions, 53 percent read-only transactions. The workload is triggered by a Java client program which generates HTTP requests for the required transaction mix.
Before I go into our results, I have to offer up the standard disclaimer that any of you who are familiar with performance work are used to hearing. The tests that we ran were done in a controlled environment where we were able to carefully control extraneous factors that can influence the results. In particular, the type of application you run can significantly affect the results in terms of the mix of database-intensive work versus application-intensive work. The ERWW workload is a very database-intensive workload, and most of the work is done by the database server processing SQL requests. Therefore, by using pureQuery to optimize the database server side processing, we are in fact optimizing a large chunk of the workload. Consequently, the performance gains for this workload are significant. We chose ERWW because it was readily available to us, and not because we thought it would give us the best results. I guess what I am trying to say is that your results will vary.
OK, now that that’s out of the way. We measured static execution two ways: with client optimization of the existing JDBC application, and with a ‘new’ version of the application written in the pureQuery annotated-method style. Performance is reported as a normalized throughput rate in transactions per second (ITR). The ITR is the notional throughput rate assuming that the CPUs are 100 percent busy. For example, consider an application with a transaction rate of 200 transactions per second at 75 percent CPU consumption. The ITR for this application would be 200 * 100/75 = 267 tps. This is the notional transaction rate that could be achieved if the CPUs were 100 percent busy and no other bottleneck were hit first.
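Just to spell out that arithmetic (a trivial sketch, nothing product-specific):

```java
public class ItrExample {
    // Normalize observed throughput to what it would be at 100 percent CPU busy.
    static double normalizedThroughputRate(double observedTps, double cpuBusyPercent) {
        return observedTps * 100.0 / cpuBusyPercent;
    }

    public static void main(String[] args) {
        // The example from the text: 200 tps at 75 percent CPU -> about 267 tps.
        System.out.println(normalizedThroughputRate(200.0, 75.0)); // 266.66...
    }
}
```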
We measured the JDBC workload with both 90% and 95% package cache hit ratios. To achieve a 90% package cache hit ratio with the ERWW workload, the DB2 Package Cache (PCKCACHESZ) was sized to 180 x 4k pages, and for a 95% hit ratio it was sized to 210 x 4k pages.
Here are the results with a 90% cache hit ratio. The results are shown on the vertical axis as the database ITR improvements over the baseline of JDBC.
As you can see, client optimization almost doubled throughput over the existing JDBC application. The new application that uses pureQuery method style API more than doubled the database transaction throughput.
The results with a 95% cache hit ratio are shown here.
Note that we achieved significant throughput improvements even with a high package cache hit ratio.
In summary, pureQuery and static execution can offer many benefits, one of which may be improving the performance of your data servers with your applications. By changing the dynamic SQL to static SQL, pureQuery should help you either achieve better throughput on your existing hardware, or reduce CPU consumption of your existing hardware, allowing you to load more tasks onto it. I highly recommend that you also check out the bigger picture around Java acceleration (including the other benefits I mention) as shown in this video.
Our team blog coordinator has been getting on me about posting to kick off the new year and to reflect a bit on how things went last year. So let me try to do that as concisely as possible.
Reflecting on 2009, I have to say that I am pleased to see that people are really starting to understand the business value of integrated data management (IDM) to help reduce inefficiencies in communications and to enable enforcement of rules across IT roles. And there has been even stronger interest, particularly from the z/OS community, around potential cost reductions with use of pureQuery in the application infrastructure. I know that our tech sales and enablement teams have been very busy with pureQuery savings assessments (briefly introduced in this video demo) and are constantly on the road talking to customers about pureQuery.
We’ve had some good customer successes this year that reflect the realization of the benefits of integrated data management. Here are a couple of notable ones:
Collaboration, innovation, and data governance with data modeling at La Caixa bank
La Caixa was challenged with maintaining control over an increasing number of database entities. They wanted to set up a way to enable administrative control over created entities that would help both their administrators and developers. They decided to go with InfoSphere Data Architect, in no small part because of its integration with the Rational Software Development Platform and the rest of the Optim portfolio, as well as its extensibility. They are taking advantage of the support for source control to store physical data models in a central repository, allowing only approved objects to be promoted to production. And they’ve done some customization on the DDL files to enable the SCM to parse the files in order to decide where to install the other database objects. Data governance is improved, and the integration with Rational enables the administrators to grant access to data models to appropriate developers for improved collaboration.
Greater security and performance for openJPA applications using pureQuery at Payment Business Services (PBS)
PBS has developed a new Payment Services system programmed in Java using openJPA. The developers liked working with openJPA, but the database administrators were concerned about performance and security with dynamic SQL. So they decided to take advantage of the client optimization capabilities in pureQuery to leverage static SQL for that application. They run Optim pureQuery Runtime for z/OS under WebSphere Application Server to capture the dynamic SQL, and then they bind that captured SQL into DB2 packages for static execution. They use Optim Development Studio to view and work with the captured SQL for tuning, problem determination, and communicating with developers. This marriage of pureQuery with new application infrastructures in the z/OS environment is really critical to helping enterprises grow technologically while maintaining appropriate levels of security and performance.
What’s coming in 2010?
Looking ahead, I’m driving the team toward delivering on specific aspects of the IDM vision. It is my belief that this year we will be able to deliver on even more integration among the portfolio members (and other IBM products) to fully realize a database performance management scenario that crosses roles from system operator all the way through to modifying and tuning the database access layer. You will likely see more integration with Cognos, SAP, and DataStage. And we’ll be delivering betas and new products to help DBAs do their jobs better. And, yes, there is a good deal of work being done to support z/OS this year. And, before you ask, yes, Windows 7 support is coming soon.
I hope to see you at our next big IOD EMEA event in Rome where I will go into more detail on some of the capabilities I can only hint at here.
I'm on the plane today returning from two customer visits in the D.C. area where I was talking about the advantages of pureQuery in DB2 for z/OS environments for DBAs (as opposed to application developers). One uses WAS on z/OS and the other uses WAS on Linux for System z. Both customers were very familiar with how to manage and monitor CICS applications accessing DB2 data, and they felt out of control with Java application access to their mainframe. One customer even said, "we tried to tell our application teams to stop using Hibernate" because they couldn't see or review the SQL or know which application was running the dynamic SQL in their system. They said the application developers ignored them(!). I told them (hopefully, I wasn't too preachy) that they needed to stop resisting these contemporary application architectures (Java, .NET) and instead take back control by transforming those dynamic SQL statements to static SQL statements via pureQuery. I told them that if they continue the "no Java will access MY mainframe" stance, they are headed down a path that ends in getting rid of their mainframe. The application development teams don't care what DBMS is used for their application, and if met with resistance from the z/OS team, they will simply deploy the applications on some other DBMS.
Related to this, I just watched a new, short 7-minute video on pureQuery for DB2 for z/OS, and it's a great quick summary of the value you can get with pureQuery. In my mind, it's a no-brainer to implement pureQuery when you have Java or .NET applications accessing DB2 for z/OS.
If you want to get a good idea of what the process is like in more detail, check out the upcoming virtual tech briefing on this very topic. It's entitled pureQuery Deep Dive Part 1: Client optimization, scheduled for February 4th. Patrick Titzler, one of the authors of the original tutorial on this topic, and Chris Farrar, client optimization technical lead, will be presenting. It should be good.
The Optim Development Studio 2.2 Fix Pack 1 was made available on December 11. Although a fix pack, the focus is on customer requirements, so there are many "new" features focused on pureQuery development. These features are suggestions from our customers, based on their extensive usage of pureQuery on large-scale projects, to improve developer productivity. We hope you will benefit from these while building pureQuery applications. To see what pureQuery can bring to your organization, read my earlier article.
Here are some of the highlights of the fix pack.

Speedy pureQuery application iterative testing

- As part of pureQuery development, you are always changing your SQL in the annotated-style interfaces. You now have the ability to generate JUnits for those interfaces and speed up testing. Simply right-click the annotated interface in the Package Explorer and select Generate JUnit from the menu.

Improved SQL support in Java database applications
- Bridging the gap between Java and SQL has been one of the key benefits of pureQuery since the first release. This feature has been improved to allow live validation of SQL against the database, if you have a live connection available. This is a surefire way to ensure SQL is accurately validated, and you may want to always set this flag if you have a live database connection.
You can set this flag (called “Validate SQL problems against live database”) from Window->Preferences->pureQuery, or from the project properties.
- SQL merge statements can now be auto-generated when pureQuery code is generated from one or more tables and at least one primary key exists on the table(s). Simply check the boxes for SQL merge statements in the pureQuery code generation wizard (by default, they are checked).
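As a rough sketch of the kind of statement involved (the CUSTOMER table, its columns, and the method name are hypothetical, and the wizard's actual output may differ), a merge driven by the primary key updates the matching row if it exists and inserts it otherwise:

```java
import com.ibm.pdq.annotation.Update;

public interface CustomerMergeData {

    // Hypothetical sketch of a DB2 MERGE keyed on the primary key (ID):
    // update the matching row when it exists, otherwise insert a new one.
    @Update(sql = "MERGE INTO CUSTOMER AS T "
                + "USING (VALUES (CAST(? AS INTEGER), CAST(? AS VARCHAR(64)), CAST(? AS VARCHAR(128)))) "
                + "  AS S (ID, NAME, EMAIL) "
                + "ON T.ID = S.ID "
                + "WHEN MATCHED THEN UPDATE SET T.NAME = S.NAME, T.EMAIL = S.EMAIL "
                + "WHEN NOT MATCHED THEN INSERT (ID, NAME, EMAIL) VALUES (S.ID, S.NAME, S.EMAIL)")
    int mergeCustomer(int id, String name, String email);
}
```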
Improved developer and DBA handshake

We are providing yet another step toward seamless integration between the developer and DBA roles. When using Optim Development Studio to bind SQL into static SQL packages, you can now generate .bat or .sh files that can be shared with the DBA. The DBA can use them as a starting point to customize the bind calls, or run them as is.

Improved usability for tuning queries in ODS using Optim Query Tuner
You can generate reports that suggest how to tune SQL statements for better performance when Optim Query Tuner is installed in a shell-shared environment with Optim Development Studio. Simply right-click an SQL statement in your pureQuery application or in the SQL Outline, and select Generate Query Tuner report from the pureQuery menu.

Improved support for application development when using the pureQuery API
Applications often need to access SQL from multiple schemas within the same project. This is now possible when building pureQuery annotated-style applications. You can specify different schemas (to qualify unqualified tables and views) or paths (to qualify unqualified procedures and functions) for each annotated interface in a pureQuery application, and these values will be used for SQL development and editing within your interfaces. To enable this, right-click the annotated-style interface and provide values using the Connection Settings.

Improved support to use existing beans when using pureQuery API
If you use customized row handlers in your pureQuery applications, you can now get a head start in building them by customizing the ones generated by pureQuery. Using this new feature, you can externalize row handlers from an interface into separate files and enable reuse. To see this feature in action, right-click the annotated-style interface in the Package Explorer and select Create Handler Classes from the pureQuery menu.
In addition to the above Optim Development Studio features, there are numerous enhancements in the Optim pureQuery Runtime Fix Pack:
- Usability improvements in the static binder that allow you to specify package-level bind options and give you more control over which packages to bind from various artifacts
- Improved exception handling so you can easily get access to the underlying SQLSTATE and SQLCODE
- Improvements in serviceability and in the literal substitution features
We hope you will download the fix packs and try some of these enhancements.
As always, let us know what features you would like to see and please keep the feedback coming! Feel free to submit questions using the Optim Development Studio and pureQuery Runtime forum
IOD was a busy time for the Optim team. I hosted three sessions and also was in the pedestal area, where I had a chance to interact with several DB2 and IDS customers. While all the solutions certainly got their attention, two solutions seemed to bubble up to the top in terms of customer discussion and questions. Optim pureQuery was one of them, mainly from the DB2 for z/OS crowd. There is definitely a trend in the market of customers wanting to take advantage of the wealth of data stored on the DB2 for z/OS platform rather than transport that data to another platform. This trend indicates that Java is the preferred language when developing these new applications; however, finding an acceptable data access layer, both in terms of ease of use and high performance, has been a challenge. Many companies are looking at things like Hibernate to address their ease-of-use challenges, but unfortunately Hibernate doesn't address their high-performance requirement. Several customers wanted to understand how pureQuery could help them in this situation and were very excited once we talked to them about not only the benefits of static SQL, but also the client optimization features of pureQuery, and most importantly how we could visualize the SQL that Hibernate was generating for the developer and DBA. It seems to be a perfect fit for providing the high performance that DB2 for z/OS customers expect for these new Java applications.
The second solution that seemed to get a lot of attention was our Performance Expert solution. When it comes to problem diagnosis, everyone is always looking for a better mousetrap, and it seems that solutions in this area get hot and then run their course. Well, Performance Expert seems to be the up-and-coming hot product. I believe this is due to the sheer wealth of information this product collects. If, or should I say when, a problem occurs in production, the most difficult challenge is gathering that information, especially because that moment in time is now past and you really need historical data to determine what went wrong. Performance Expert collects information and stores it in intervals, making it incredibly easy to get the information needed, all synchronized to the right time. You can see a short video of this capability here. The other thing I found interesting was all the various reports and consoles customers were writing based on the data found in the Performance Warehouse. From capacity planning to SLA reporting, it seems to me that there is a lot of customization going on that would probably make a great birds-of-a-feather session at future conferences.
As I mentioned earlier, all the solutions got their attention. I hosted a joint session with Randy Wilson from Blue Cross Blue Shield of Tennessee where Randy described a very interesting outage situation (not fun!) and how Optim High Performance Unload saved the day for them in terms of being able to supplement their recovery and provide the historical data they needed. It's interesting how customers really think outside the box and come up with creative uses for our solutions. After that session I had quite a few discussions with customers on how they could use High Performance Unload for situations like Randy's, where back-level DB2 or dropped tables caused challenges in terms of recovery.
Well it was a busy week and now comes all the customer follow-up that happens after IOD!
You’ve heard a lot here about pureQuery value: better performance, faster development, improved manageability, and increased security. Now we’re making it even easier to purchase pureQuery. We’re announcing two new DB2 Connect offerings that bundle Optim pureQuery Runtime:
- DB2 Connect Application Server Advanced Edition and
- DB2 Connect Unlimited Advanced Edition for System z.
Since Optim pureQuery Runtime, in essence, extends and enhances your database drivers, this packaging makes perfect sense. You install pureQuery Runtime on the same system as your database driver, and pureQuery enables:
- SQL capture from executing applications
- transparent literal consolidation and replacement
- transparent conversion to static execution
- heterogeneous batching of SQL requests
- SQL lockdown to an approved list to mitigate the risk of SQL injection
- SQL replacement for emergency application fixes
- client execution metrics collection for hot spot identification
Purchasing is made easier because most DB2 for z/OS customers already have DB2 Connect on their approved products list, speeding up procurement. And you can easily upgrade from DB2 Connect Application Server Edition or from DB2 Connect Unlimited Edition for System z to the respective Advanced Edition offering. Check out our announcement letter here
And for those of you going to the Information On Demand Global Conference, be sure to check out how State Farm (State Farm Optimizes App Performance with Optim Development Studio, pureQuery, TDM-1718) and Handelsbanken (Evaluating Java Data Access Technologies at Handelsbanken, TDM-2177) are realizing pureQuery value.
It’s only been a few months since our last release, and my team and I have been busy taking our products closer to realizing the Integrated Data Management vision for integrated lifecycle development and heterogeneous database support.
So what's different in this release? With the announcement and release of Optim Development Studio 2.2 (formerly Data Studio Developer), we added:
- Features that help developers and DBAs work together to create high performing Java data access code with high productivity, on Oracle, DB2, and Informix Dynamic Server databases.
- Speedy iterative testing
- Enhanced impact analysis
- Support for building and debugging SQL/PL procedures to run against either DB2 9.7 or Oracle databases.
As the architect for pureQuery tools, I wrote a new developerWorks article that goes into some detail on the new capabilities, mostly from a pureQuery perspective. (You can see more about the other Oracle support capabilities in Venkatesh's blog.) I’ll give you some highlights of this release from a pureQuery perspective in hopes that it will convince you to go read the article and download the trial code!
Some of the important features:

Oracle pureQuery support

The big news of course is support for pureQuery capabilities on Oracle databases: pureQuery code generation, SQL content assist, validation and all the editing capabilities (now also available for JDBC, and for native SQL in JPA and Hibernate applications), client optimization, dependency analysis, and hot spot analysis. If you aren’t familiar with these capabilities, my article does review them.

The screenshot below shows a sample pureQuery application that was run against Oracle. Using the SQL outline (formerly called the pureQuery outline), you can see the performance metrics from the Oracle queries. You can also see predicted cost using the EXPLAIN Data option. (That EXPLAIN Data option is new for DB2 and IDS as well.)

Visibility of data privacy attributes to developers
The other interesting integration work that was done is the ability to maintain data privacy attributes from modeling through development and test. Anson Kokkat touched on this in his recent blog. Production databases often contain sensitive information such as credit card numbers or social security numbers. When data architects create data models for such databases using InfoSphere Data Architect, they can identify which attributes or columns contain sensitive information and specify appropriate privacy policies to be used with them.
By associating this model with their database, developers can easily see which columns are identified as containing sensitive information and this can help them maintain compliance in how they handle that data in their applications. This protection also extends further to their applications - they can see how those private columns are being used in context to ensure they are not doing something inappropriate, such as printing out data in those columns.
The screenshot below shows the private columns in the SQL outline (a little padlock icon is used to indicate a private column), their privacy properties, and how you can navigate to the model to get more information.

Other key capabilities include:
- Ability to copy and paste objects or data subsets as an aid to developers. If you need something more heavy duty, the copy/paste wizard can generate a script that can be used with Optim Test Data Management solutions. (Note that this capability is currently available only with DB2 LUW 9.7 and Oracle.)
- For DB2, you can now specify that literals be replaced with parameter markers as part of the SQL capture process. Because this makes the statement less ‘unique’, it is now eligible to be bound statically. This was a requirement we had from several customers. Also, we’ve made a lot of enhancements in package management to give you more granularity in identifying and rebinding only the packages that have been impacted by a change. In addition, you can bind in the background.
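A hypothetical before/after (the CUSTOMER table and the 'WEST' literal are made up) shows why this matters:

```java
public class LiteralReplacementExample {
    // What the application issues: every region value yields a distinct statement.
    static final String AS_WRITTEN  = "SELECT NAME FROM CUSTOMER WHERE REGION = 'WEST'";

    // What capture can record when literal replacement is on: a single
    // parameterized statement that is now eligible to be bound statically,
    // with 'WEST' supplied as the parameter value at run time.
    static final String AS_CAPTURED = "SELECT NAME FROM CUSTOMER WHERE REGION = ?";
}
```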
There are really a lot more enhancements, but you’ll need to read the article – I don’t have room to list everything here. We’re also working on some videos that will show these features in action.
Many of us on this blog talk about what IBM is doing to improve the Java database applications space, both from an application development and database administration point of view.
Here is the way I like to categorize the areas in which these improvements are seen:
- A comprehensive monitoring capability that gives you a broad, yet contextual perspective on database and application performance, and early warning of emergent problems.
- Better insight into applications, and the ability to stabilize and improve resource utilization in a non-invasive way.
- Building performance into the applications early on in the development cycle.
Performance Expert with Extended Insight, pureQuery, Optim Development Studio, and Query Tuner are all important components of the solution that we call the Java Acceleration Solution. But if you are a big-picture person and want to see how these components work together, we put together this 15-minute video on developerWorks that we hope you'll find entertaining as well as a good example of this bigger picture as you follow along with our friends at the fictional Great Outdoors Company.
By the way, we also developed some hands-on lab material that walks you through these scenarios so that you can explore and try it out yourself. If you're interested, we'll be offering this as a lab at the Information on Demand Conference in Las Vegas this year. If you are attending and would like to try it out, be sure to look for the Java Acceleration Solution Hands-on Lab (HOL-1276) in the Integrated Data Management Track of IOD.
Hi, everyone. Glad to be back after some travelling. I wanted to give you a heads-up on a virtual tech briefing that I will be giving next week focusing on an overview of Optim Development Studio. We have a special guest speaker - Nik Teshima - who is the product manager for Rational Application Developer for WebSphere Software (RAD). If you're wondering why we invited Nik along to this briefing, it's to help address questions we hear a lot from people who don't quite understand the complementary relationship between RAD and Optim Development Studio.
I will walk you through some of the capabilities that Optim Development Studio brings to RAD’s robust Java development environment and why this moves it into the "must have" category if you are doing any kind of database access development.
You will also see how the two products shell-share seamlessly (along with MANY other Rational and Optim products) -- you can install them alone or together in a modular fashion to include the capabilities you need. Anyway, please join Nik and me next week on Thursday, August 20 at 1 PM Eastern for the Integrated Data Management Virtual Technical Briefing on Optim Development Studio 101, where I do plan to demo some of these capabilities and will be happy to answer your questions.
Hi, I’m the lead architect for Optim pureQuery Runtime, and I want to start using this blog to help address questions that I get from people as they learn about or use pureQuery capabilities. In this first blog, I’ll discuss the new SQL replacement capability.
By using client optimization, an administrator can modify the SQL from a captured application. The enhanced tooling to support this capability is described in Sonali's article, What's new and cool in Optim Development Studio 2.2. The intended usage of this feature is to let a DBA make a change to an SQL statement without the need to edit and recompile an application. This could be useful, for example, in late-night or weekend emergencies when an application can't easily be changed. It is also useful in cases where a third-party application embeds or generates sub-optimal SQL and a change to the application is not possible without contacting the vendor. In any of these cases, you should aim to change the application directly at the first practical opportunity to use the improved SQL.
When I talk about this capability to people, there are two questions that frequently come up:
- What is the extent of the change I can make to the SQL?
- Isn't there a security risk to allow editing of the SQL? How can we control access?
Great questions. I'll discuss each of them separately.

What can I change in the captured SQL?
There are restrictions on what you can change when creating the replacement SQL. The Optim Development Studio pdqxml editor will prevent many of the restricted changes, which is why it is strongly recommended that you use this editor to create the replacement SQL. The primary restrictions on the replacement SQL are:
- You may not change the SQL statement type. For example, you can’t change a SELECT to an INSERT.
- The number and types of any input parameters or output result columns must be unchanged. For example, if your SELECT statement is expecting two columns of CHAR and INT as result, your changed SELECT statement must also expect a result of two columns of CHAR and INT.
At first glance, these may seem to greatly narrow the capability. But when you think about it, any changes like those described above couldn't work very well anyway without corresponding application changes, at which point you would likely just change the SQL directly in the application.
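For example, here is a hypothetical captured statement and a replacement that stays within those rules (the STOCK table and its columns are made up; in practice the edit is made in the capture file using the pdqxml editor):

```java
public class ReplacementSqlExample {
    // Captured from the application: one input parameter, two result columns (CHAR, INT).
    static final String CAPTURED =
        "SELECT STATUS, QUANTITY FROM STOCK WHERE ITEM_ID = ?";

    // A valid replacement: still a SELECT, same parameter, same result shape --
    // only an optimizer hint has been added.
    static final String REPLACEMENT =
        "SELECT STATUS, QUANTITY FROM STOCK WHERE ITEM_ID = ? OPTIMIZE FOR 1 ROW";

    // Not valid: adding a third result column or turning this into an INSERT
    // would break the application code that consumes the result.
}
```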
Nevertheless, there are quite a number of useful changes that you could make that would not violate the restrictions. You can:
- Influence access path / index usage:
- Add an ORDER BY clause
- Add OPTIMIZE FOR 1 ROW
- Other "tricks" to influence the DB2 Optimizer (like adding OR 0 =1 to a predicate)
- For Oracle - add a comment hint to end of the statement
- Influence fetch size for distributed queries:
- Add FETCH FIRST n ROWS ONLY clause
- Add an OPTIMIZE FOR n ROWS clause
- Add FOR FETCH ONLY or FOR UPDATE clause
- Add a predicate that narrows the data returned - just be sure it uses literals and not parameters
- Change the locking behavior:
- Add WITH ISOLATION clause
- Add SKIP LOCKED DATA clause
- Add FOR UPDATE clause
- Directly manage the schema name for object references:
- Add or change the schema qualifier on a table or other object reference.
- Help manage EXPLAIN data

How can we control access?
Some people have expressed concern that there is a security risk involved with the ability to change captured SQL. While there is some potential for abuse, there are means for controlling changes. I’ll discuss some control points within the context of how this feature can be used with either static or dynamic SQL.
Client optimization is most frequently used to convert dynamic SQL execution to static execution. To use SQL replacement here, you must modify the SQL before performing the bind operation. Any changes made to the SQL in the capture file after the bind are ignored. So control over the capture file contents needs to be managed across the capture/configure/bind process. In many instances, this would be done by the same administrator. Ultimately the bind is performed by an administrator with all the authority to perform all the SQL contained in the file. This is not very different than a traditional 3GL program deployment. And any bound SQL is then visible for inspection in the DB2 catalog.
There are scenarios where client optimization is used (for example, to gather performance metrics) even when the eventual execution mode is dynamic SQL. In these cases, the capture file is examined at execution time for the existence of replacement SQL. If present, that SQL is used in the prepare and execute. But you can disable that execution-time replacement. A pureQuery configuration property, enableDynamicSQLReplacement, controls whether this is allowed. The default is false, so you have to do something to turn on the execution-time replacement. This gives control at a data source or application level.
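As a sketch only - the exact mechanism for supplying pureQuery properties depends on your environment (a properties file or a data source property), so treat the property syntax below as an assumption to verify against the Runtime documentation - turning the replacement on might look something like this:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class ReplacementOptInExample {
    public static Connection connect() throws Exception {
        Properties props = new Properties();
        props.setProperty("user", "dbuser");      // illustrative credentials
        props.setProperty("password", "secret");
        // Assumed property syntax: point at the capture file and explicitly opt
        // in to execution-time SQL replacement, which is off by default.
        props.setProperty("pdqProperties",
                "pureQueryXml(capture.pdqxml),enableDynamicSQLReplacement(TRUE)");
        return DriverManager.getConnection("jdbc:db2://dbhost:50000/SAMPLE", props);
    }
}
```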
An important point here is that all the basic building blocks of security remain in place.
That is, SQL privileges are necessary to execute the bound packages or the dynamic statements. But even with that being true, additional care must be taken. To prevent unexpected changes to the file, you must control write access to it. It can be locked down on the executing server by making it read-only. It is also important to control the updateability of execution-time properties that can affect the application’s execution. The capture file and any application properties files need to be thought of, along with any executables, as a collection of related resources, all of which need protection.
I hope you’ve found this useful. Let me know if there are other questions you have about pureQuery, and I’ll do my best to answer them.
-- Bill Bireley
You may have read about the new capabilities in Optim Development Studio in my developerWorks article entitled What's new and cool in Optim Development Studio 2.2. Now you have the opportunity to see them in action. My colleague, Zeus Courtois, has put together a video series for ChannelDB2 that walks through, step by step, the features I described in the article. Here's the link to the first video in the series: http://www.channeldb2.com/video/whats-new-in-optim-development-1
The 5 parts correspond to the following topics:
Do let us know if you would like to know more!
Also, don't miss the Optim Development Studio 101 virtual tech briefing coming up on August 20th at 1 PM Eastern, 10 AM Pacific to get a tour of Optim Development Studio from Vijay Bommireddipalli, whom you may know as an occasional blogger on this site. We also have a surprise guest for this briefing. You'll see how Optim Development Studio can extend the capabilities in Rational Application Developer to turbo-charge the development and optimization of data persistence layers.
You can register for this briefing here. See a schedule of upcoming briefings here.
I had two recent visits with customers where I was explaining pureQuery. When I finished what I thought was a nicely polished presentation on the subject, both times someone said, "So, I have to use those pureQuery APIs in order to turn my dynamic SQL into static SQL." Ugh. You know that feeling where it seems like you must be speaking in a foreign language because the words just aren't being understood? I felt some relief when Rafael Coss told me that he gets this every time he explains pureQuery, and he has a great knack for making the complex seem simple.
Just in case you are also of the impression that the pureQuery APIs must replace existing JDBC, Spring, Hibernate, etc. calls to the database, the answer is no. The conversion from dynamic to static SQL using client optimization does not require any changes to your application. Plain old JDBC calls can remain in your programs, and with pureQuery Runtime we can capture the SQL so it can be statically bound to DB2 (z/OS or LUW). This explanation usually creates the "Ah ha" moment.
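To make that concrete, here is an ordinary JDBC fragment (the ACCOUNT table and its columns are made up). Nothing in it refers to pureQuery, yet with pureQuery Runtime configured on the connection or data source, this same statement can be captured, reviewed, and bound into a DB2 package for static execution:

```java
import java.math.BigDecimal;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class PlainJdbcExample {
    // Plain JDBC -- no pureQuery API in sight. Client optimization works on top
    // of code like this without any source changes.
    public static BigDecimal getBalance(Connection con, int accountId) throws Exception {
        PreparedStatement ps = con.prepareStatement(
                "SELECT BALANCE FROM ACCOUNT WHERE ACCOUNT_ID = ?");
        ps.setInt(1, accountId);
        ResultSet rs = ps.executeQuery();
        BigDecimal balance = rs.next() ? rs.getBigDecimal(1) : null;
        rs.close();
        ps.close();
        return balance;
    }
}
```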
So, while pondering this, I have come up with a new way to explain pureQuery. I now plan to hold off introducing the APIs until after I finish talking about client optimization and the great capabilities you get when you use client optimization:
- How Optim Development Studio tooling provides the pureQuery Outline to visualize the relationships among Java code, SQL statements and database objects
- How SQL injection can be reduced/eliminated
- How framework-generated SQL can be reviewed and possibly tuned
- How SQL can be revised
- How non-captured SQL can be blocked from reaching the data server
(By the way, Patrick Titzler's tutorial, although a little dated, is still the best source I know of to understand the process of client optimization.)
Then, after a deep breath and a new title slide, I plan to talk more about ORM frameworks and our pureQuery APIs. (By the way, if you're curious about how pureQuery relates to ORM frameworks, check out Rafael's ChannelDB2 video.)
Hopefully the "Ah ha" moments won't get delayed any more. :-)