I’ve been working on DBA solutions lately -- in particular, performance optimization topics. You’ve heard some cool things about our offerings already if you follow the Data Studio blog, such as Alice’s blog
on performance diagnostics or Jeff’s blog
on pureQuery from a systems programmer perspective. One of the things I want to do is bring our different offerings together in a more cohesive context.
Performance optimization can be broken down into two big categories: doing it right the first time, and fixing it after the fact.
Regarding the latter, first you have to recognize that you have a problem (or a potential problem) and where the source of the problem is – that’s what DB2 Performance Expert and the Extended Insight Feature are designed to help you do.
Then you have to fix the problem. Of course, there’s always adding more resources – more CPU, more memory, more storage. But if that’s not an option, for database performance problems you’re going to want to:
So I’m pulling this together into a scenario for a Webcast on Tuesday, April 21st entitled IBM Integrated Data Management Solutions for Performance Optimization
. You can register here
. I know, a little late notice. But hey, there will be a replay too.
-- Rafael Coss
I was looking at Scott Ambler’s surveys
on IT project success rates. It is very interesting that project success as seen through Scott’s surveys presents a more hopeful picture than the Standish Group’s Chaos Report
, which in its 2006 refresh reported a 35% success rate and a 46% “challenged” rate. (Nice blog entry summarizing a variety of research on the topic in Dan Galorath’s blog
and 2006 Standish numbers from an SD Times article
.) Standish defined success as “on time, on budget, meeting the spec”, while challenged means they had cost or time overruns or didn’t fully meet the user’s needs. But I digress…
Scott’s data indicates that projects that use evolutionary development methodologies, e.g. Agile
or Rational Unified Process
, fare better than those using traditional waterfall or ad-hoc processes. That’s not surprising given the emphasis on tight collaboration among stakeholders and continuous evolution and validation. Really, it’s pretty intuitive. So I was thinking about key characteristics of iterative methodologies and how they relate to database and data access development. (I know, Scott has already thought about this too.
See his Agile Data
site. And Rafael did a Webcast
on it earlier in the year.) But more specifically, I wanted to look at how our Data Studio portfolio supports evolutionary development methodologies. Yes, there’s more to do, but I think what we offer goes a long way towards accelerating solution delivery with high quality results. Vijay and I are going to do a Webcast on this April 28th titled Accelerating Solution Delivery for Data-Driven Applications
. Hope you’ll join us.
In some ways, this is also the companion Webcast to Rafael’s Performance Optimization
webcast. In his blog
, he talked about how, from a lifecycle perspective, performance optimization can be broken down into doing it right the first time or fixing it after the fact. His Webcast focused on the latter and this one on the former.
What are your stories about evolutionary methodologies and database development? Have you used Data Studio software in this context?
We talk about the benefits of static SQL execution as a benefit of using Data Studio Developer and pureQuery. And those benefits are great. But if you're an Informix developer or DBA, you may still be wondering if Data Studio Developer can help you improve application performance, since IDS doesn't currently support DB2's static execution model.
When we announced Data Studio Developer 2.1, we added many features that benefit Informix database developers and DBAs. Guy Bowerman
covered some of the key 'base' features for both developers and DBAs, including support for UPDATE STATISTICS and improved support for triggers and table fragmentation. But if you're a Java database developer or DBA who cares about performance, take a look at pureQuery capabilities for Informix
as well, including:
- Heterogeneous Batching: You can now develop pureQuery code that gains performance benefits by reducing network operations through heterogeneous batching of operations (see the sketch just after this list).
- Query Response Time Optimization: DBAs might appreciate this one. If you have existing JDBC-based applications, you can see the response times for each SQL statement executed in your application and replace the most expensive queries with better ones, without accessing or changing the application source code.
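If heterogeneous batching is a new term, the core idea is queuing several different statements, possibly against different tables, and sending them to the server in a single network round trip. As a rough sketch of the idea only: this is plain JDBC statement batching rather than the pureQuery heterogeneous batching API, and the connection details and table names are made up.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class BatchSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder Informix connection details -- substitute your own.
        Connection con = DriverManager.getConnection(
                "jdbc:informix-sqli://host:9088/stores:INFORMIXSERVER=ol_ids",
                "user", "password");
        con.setAutoCommit(false);
        Statement stmt = con.createStatement();
        // Different statements against different tables, queued client-side...
        stmt.addBatch("INSERT INTO orders(order_id, customer_id) VALUES (101, 7)");
        stmt.addBatch("UPDATE stock SET quantity = quantity - 1 WHERE item_id = 42");
        // ...then sent together, cutting network round trips.
        stmt.executeBatch();
        con.commit();
        stmt.close();
        con.close();
    }
}

pureQuery builds on the same principle, batching heterogeneous operations for you as part of its data access layer.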
I highly recommend downloading Data Studio Developer
, and taking a look at the IDS tutorial for Data Studio Developer
that was recently updated to include some of the new pureQuery features. And stay tuned. I will soon be talking more on developerWorks about “What's new and cool” in our upcoming release.
Hi, I haven't written a blog entry for a couple of months, so if you're new to our blog, I'm the team lead and architect for Data Studio Administrator.
One of the features in the 2.1 release is a quick and easy way to copy object(s) from one database to another. You can read all about it in this developerWorks article
that members of my team recently published. But let me summarize here.
Suppose I've just made some changes to my development database and application, and I've completed my testing. Now I want to copy the database changes to a test environment for further validation. Let's see how I can do that using Data Studio Administrator 2.1
with Fix Pack 1.
I connect to my source database in the Data Source Explorer (what used to be called the Database Explorer). I like to drill down using the flat folder view (new in 2.1, which presents objects in folders by type) versus the hierarchical mode, so I toggle to that view using the icon in the Data Source Explorer tool bar. I expand the database and select the Tables folder. In the Object List View I copy the tables that I've changed and want to move to the test system.
Now I'm ready to paste these objects into my target.
To paste the objects into a new database, I connect to the target database, expand it, and select the Schemas folder. I click the schema I'd like these objects pasted into, right-click, and choose Paste.
When I do this, I have the option to also paste dependent objects and data. The wizard helps me to create a change management script to implement these changes. Optionally I can customize the change further. Finally I can deploy them along with my application changes.
Copy and paste is a quick and convenient way to move objects between environments like development and test.
Hi, this is my first time blogging here. I'm an architect in the Data Studio development team, and I work on integrations, heterogeneous database access, and more. I wanted to use this opportunity to tell you about some work I did with pureQuery and Enterprise Generation Language. Enterprise Generation Language
(EGL) is a modern programming language specifically designed to help business-oriented developers quickly write full-function applications and services based on Java and modern Web technologies. Business-oriented developers write their business logic in EGL source code using the powerful development facilities of Rational Business Developer Extension
, Rational Developer for System z
with EGL, or Rational Developer for i for SOA Construction
. From there, the tools then generate Java or COBOL code, along with all the runtime artifacts you need to deploy the application to the desired execution platform.
Data Access is one of the key components of EGL. You can access your database data using EGL SQL Records which provides a very high level of abstraction and allows you access to the data using simple verbs or you can write your own data access logic. Below are simple examples showing both scenarios.Figure 1. SQL RecordsFigure 2. A basic data access program
If you are a regular reader of this forum, you probably already know that pureQuery
is IBM's high-performance data access platform focused on simplifying the tasks of developing, securing, managing, and optimizing applications that access data. You may have read about the benefits of using pureQuery client optimization with Hibernate, JPA, and even .NET applications. You can also use pureQuery technology with the Java code generated from EGL to:
- Optimize applications that access DB2 on any platform by capturing the statements generated by your EGL application, binding the statements to database packages, and then executing the application in static mode. (A capture configuration sketch follows this list.)
- Get insight into your EGL application by using Data Studio Developer (which shell-shares with Rational Business Developer) to see a list of SQL statements originating from your EGL application, with details on the number of times executed and execution times. And you can of course use the outline to jump between the SQL statement and the originating line of Java source code.
- Replace SQL in the program without having to change the application code (if you don't have access to the source code, for example).
- Prevent SQL injection by allowing only SQL statements that have been captured and approved to run against the database.
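For those wondering what the capture step looks like in practice: client optimization is typically switched on through JDBC connection properties rather than code changes. Here is a minimal sketch, assuming the pdqProperties syntax from the pureQuery Runtime documentation; the URL, credentials, and file name are placeholders.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class CaptureSketch {
    public static void main(String[] args) throws Exception {
        // pdqProperties asks the pureQuery Runtime to record every SQL
        // statement the application issues into capture.pdqxml.
        String url = "jdbc:db2://localhost:50000/SAMPLE:"
                + "pdqProperties=captureMode(ON),executionMode(DYNAMIC),"
                + "pureQueryXml(capture.pdqxml);";
        Connection con = DriverManager.getConnection(url, "user", "password");
        // Ordinary JDBC (here, standing in for EGL-generated code) runs unchanged.
        Statement stmt = con.createStatement();
        ResultSet rs = stmt.executeQuery("SELECT EMPNO, LASTNAME FROM EMPLOYEE");
        while (rs.next()) {
            System.out.println(rs.getString(1) + " " + rs.getString(2));
        }
        rs.close();
        stmt.close();
        con.close();
        // The captured file can then be bound into DB2 packages for static execution.
    }
}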
Kathy Zeidenstein and I have put together a tutorial
on Rational Cafe that shows how this integration works and how the technologies can be used by EGL customers writing applications with DB2 data servers.
Enjoy, and let me know if you have any questions.
I just got back from IDUG in Denver and wanted to dash off a quick blog about it. I sat in on several sessions where Data Studio and pureQuery
were being presented (some by our team, some by consultants, some as BOF sessions). We really couldn't have asked for things to go better:
- I saw many of the same customers attending a series of sessions on these topics, so several of the customers were basically "majoring" in Data Studio and pureQuery.
- There were several external consultants presenting their findings of how these technologies worked in their field engagements. All reports were in line with what we found in our own lab measurements.
- I sat in one presentation where the consultant presenter reported 25% savings, but the folks in the audience reported that their savings were closer to 40%.
It was really great to hear these success stories. I look forward to meeting more of you and hearing your stories at IOD EMEA
in Berlin and IOD NA
in Las Vegas. By the way, the deadline for submitting talks for IOD North America is May 29th. I know there is a strong emphasis on getting customer speakers, so if you have a story to tell, you should submit an abstract.
See you at both conferences.
-- Curt Cotner
I wanted to let you know about a new two-part article series written by Alice Ma and me which is intended to help you use DB2 Performance Expert for Linux, UNIX, and Windows
to your best advantage.
- Part 1 focuses on health monitoring, describing how you can set up PE to notify you when problems occur even if you are not sitting in front of the PE Client, and how you can easily see what is going on in your database using System Health data views.
- Part 2 focuses on more advanced concepts, e.g. how to take performance baselines, how to monitor DB2 WLM and how to customize PE to monitor partitioned environments effectively.
Performance baselines are useful if you plan a change in your environment, such as applying a new DB2 fixpack or changing a DB2 configuration parameter. Take a performance baseline before and after the change, and you can easily compare whether the change improved or harmed the performance of your DB2 system.
I hope that the concepts described in the articles will help you to more effectively use DB2 Performance Expert. Note that although there is not a downloadable trial version of Performance Expert, if you are interested you can contact your IBM Sales Representative to get it.
-- Ute Baumbach
OK - I'm finally contributing to our Data Studio blog. Yes, I should have done this a long time ago and yes, now that I've posted this, I've pretty much committed myself to regularly contributing. I'm the product manager for the Data Studio portfolio and oversee the direction of the products within Data Studio.
So my first blog will be about a topic near and dear to my heart, "A Day in the Life of a DBA". I used to be a DBA prior to working for software companies, and I remember being pulled in all different directions during the day. From design sessions with development on new applications, to restoring databases when things go wrong, a DBA's life is never boring. So how does this relate to Data Studio?
We've been working hard on providing functionality for the DBA. Most people think Data Studio is only for the developer; that may have been true a year ago, but it has changed with the recent 2.1 release (Dec. '08), and the upcoming release will offer even more functionality for the DBA.
Did you know that with Data Studio and Data Studio Administrator you can:
- Manage Objects - Create, alter, drop, copy/paste, compare, migrate, view dependencies, and the list goes on and on
- Manage Privileges - Grant, revoke, manage roles and users
- Manage Data - Export, Import, Load, Edit Data and more
- Utilities Management - Backup, Restore (including redirected restore), Reorg, and more
- Commands - Start, Stop, List Applications and more
A new demo
has been created that walks through some of this functionality. To try this out for yourself, I recommend you download Data Studio Administrator.
-- Deb Jenson
Greetings from Berlin! I’m very pleased today to tell you about several new announcements we are making in the portfolio previously known as Data Studio. With these new releases, we are taking great strides toward the vision of Integrated Data Management -- An integrated, modular environment to manage enterprise application data and optimize data-driven applications, from requirements to retirement across heterogeneous environments.
While delivering on an Integrated Data Management vision is a very broad value proposition, and one that will involve all aspects of IBM Software Group, you’ll see with this release that we are adopting the Optim name as a rallying point for this technology, emphasizing our focus on optimizing the value of your data assets by managing them across their lifecycle. This announcement represents another major step in delivering on our vision, focusing both on extending heterogeneity and on portfolio integration that provides the basis for cross-role, cross-lifecycle collaboration, efficiency, and alignment.
In future posts, we’ll take you through a couple of scenarios enabled by the new releases, but to get you started, here is a summary of the announcement and links to announcement letters and web pages:
- Enhancements to and renaming of Optim Development Studio, Optim pureQuery Runtime, and Optim Database Administrator (formerly Data Studio Developer, Data Studio pureQuery Runtime, and Data Studio Administrator). You’ll find lots of new functionality for both developers and DBAs in these releases. Most notably:
- pureQuery and development support for Oracle databases (including PL/SQL) for a common integrated development environment across DB2, Informix, and Oracle
- A host of fantastic new pureQuery capabilities that customers have been asking for, including translation of literals to host variables to improve application performance
- More complete DB2 for Linux, UNIX, and Windows administration capabilities in Optim Database Administrator including support for large warehousing environments
- Better governance of privacy attributes across design, development, and test environments
- Better support for DB2 package management so that DBAs can restrict changes and rebinds to just those packages affected by a change.
Many of you may be wondering about the no-charge capabilities. We heard loud and clear that our customers want a stand-alone download package for the no-charge capability. We are reverting to that packaging model with this release. More on that as it becomes available in the near future.
- New product: Optim Query Tuner for DB2 for Linux, UNIX, and Windows brings single-query tuning advice and formatting to the DB2 developer. I think Ray Willoughby will be blogging more on this, but this product provides a great first step toward enabling developers to more effectively tune queries during development. Its integration with Optim Development Studio provides a seamless environment for crafting queries, identifying SQL hot spots, and optimizing queries, all before production, to help produce enterprise-ready code and facilitate collaboration between developers and DBAs. See the announcement letter and Web page for more detail.
- New release of InfoSphere Data Architect. This product has always been a shining star in the portfolio for its heterogeneous database capabilities. This release includes improvements to the data governance value proposition around data privacy and the ability to maintain volumetric data for capacity planning. Most notably, users can choose from pre-defined privacy templates and share privacy definitions with developers and testers via Optim Development Studio. See the announcement letter or Web page for more detail
I’m asking some of my technical leaders and product managers to go into more detail on these announcements in future blog posts. In the meantime, I must go back to the conference. Lots to talk about…
updated 6/16/ to fix minor typo.
Following up on Curt's blog
about the new releases in June, let's take a deeper look at what is new in InfoSphere Data Architect
Building on top of the privacy specifications for generating test data that were already built into the product in December 2008, you will now be able to pick from a predefined list of categories for specific data privacy information. It's probably best to explain this with an example. Let's say you have a credit card column that you want to mark as private by generating a random number that maintains the first 4 digits of the card. Within InfoSphere Data Architect you can specify that you want to use the credit card masking policy, and IDA will be able to connect to Optim Test Data Management and Data Privacy
solutions to get the appropriate masking method that should be used. Not only can you generate this in the design phase of your model, but you can also share it with Optim Development Studio, so that when developing applications you can view which data is private and even look at the SQL that accesses the sensitive data.
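To make the credit card example concrete, here is a minimal illustrative sketch of the transformation such a masking policy describes. This is not the actual Optim masking routine, just the shape of the rule:

import java.util.Random;

public class CardMaskSketch {
    // Keep the first 4 digits (the issuer prefix) and replace the rest
    // with random digits, preserving the original length.
    static String mask(String cardNumber, Random rnd) {
        StringBuilder masked = new StringBuilder(cardNumber.substring(0, 4));
        for (int i = 4; i < cardNumber.length(); i++) {
            masked.append(rnd.nextInt(10));
        }
        return masked.toString();
    }

    public static void main(String[] args) {
        System.out.println(mask("4111111111111111", new Random()));
        // Prints something like 4111580274926135 -- plausible-looking, but fictional.
    }
}

A production policy would typically also keep the masked value checksum-valid; the point here is simply preserving the first 4 digits while randomizing the rest.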
Also new in IDA 7.5.2 is the capability to size storage requirements and estimate data capacity and growth. This is often called volumetrics support, and we have implemented it in the new release in response to customer requests.
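If volumetrics is new to you, the underlying arithmetic is straightforward: compound the expected row growth over the planning horizon and multiply by row width. A back-of-the-envelope sketch with invented numbers:

public class VolumetricsSketch {
    public static void main(String[] args) {
        long initialRows = 1000000L;     // rows in the table today
        int avgRowSizeBytes = 200;       // average row width
        double monthlyGrowth = 0.05;     // 5% more rows each month
        int months = 24;                 // planning horizon

        double rows = initialRows;
        for (int m = 1; m <= months; m++) {
            rows *= 1 + monthlyGrowth;
        }
        double sizeGb = rows * avgRowSizeBytes / (1024.0 * 1024 * 1024);
        System.out.printf("Projected rows: %.0f, ~%.1f GB of base table data%n",
                rows, sizeGb);
        // Indexes, overhead, and free space would be added on top of this.
    }
}

The sketch is only meant to show the kind of estimate involved.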
Finally, building on the fact that InfoSphere Data Architect is more than just a data modeling tool, we have leveraged all the different use cases that customers have implemented to improve on the different integration scenarios that we provide with IDA. We already know that Data Architect is built on top of the Rational Software Delivery platform (reminder, this product used to be called Rational Data Architect) and we continue to improve in those areas, but we have also enhanced integration scenarios related to Information Management as well. Since most of the Optim Solutions for Integrated Data Management are built on Eclipse you can utilize the sharing of connection information feature that was introduced in the June releases. Also new in IDA 7.5.2 is improved integration with IBM Industry Models
and glossary information. All Industry Models and the newly added glossary information can now be managed in InfoSphere Data Architect.
The trial of this release will be available in a few weeks at the current trial download
location. The announcement letter is here
. Oh, by the way, the announcement letter also contains information about the updated Learning Services course for IDA
that has been enhanced to cover more product capabilities. I always strongly recommend that new users get education, and this new and improved course can help you get what you need to get started.
-- Anson Kokkat
As mentioned in my earlier blog, I work on heterogeneous data access and I am one of the technical leads responsible for Oracle integration. Until we get more detail out on developerWorks, I want to provide you with an overview of the support we will have for Oracle.
Curt hinted in his announcement blog
about Oracle support in the upcoming release of Optim products that were formerly named Data Studio. Support for Oracle has been expanded in Optim Development Studio (formerly Data Studio Developer) to extend to the development environment the capabilities already available in InfoSphere Data Architect
(for modeling and design) and the Optim solutions for data privacy
, data growth
, and test data management.
With Optim Development Studio 2.2, support for Oracle has been added in the areas of:
- Object Management - You can explore your Oracle database using the Data Source Explorer and view/create/alter/drop tables, views, materialized views, sequences, synonyms, indexes, and user-defined types. As usual, for any of the edits that you make, the appropriate DDL is generated to be deployed to the Oracle backend. Furthermore, using the Data Source Explorer, you can view the contents of your existing stored procedures, functions, and PL/SQL packages.
- PL/SQL life cycle management - Start by creating a new PL/SQL package in a Data Development project (specification and body), add contents to it, and then deploy the package to the server (with debug enabled) and debug a PL/SQL package stored procedure or function. The debugger works like the standard Eclipse Java debugger: you can set breakpoints where you want, step through, and look at changes to variables defined in your stored procedure or function. You can also copy a package from the Data Source Explorer to a Data Development project, edit it and deploy it back, or debug the package entities from there. The same use case scenario extends to PL/SQL-based stored procedures and functions as well (a call sketch follows this list).
- Data Management - View, export, import, or edit the contents of a table or view. Copy schema objects from one Oracle schema to another (enforcing data privacy rules on the copy if required - this applies to copying over a table with data from one schema to another). You can also use the new Copy functionality to keep your DB2 implementation current with your Oracle implementation by copying the impacted database objects between the two database servers. This can only be done with the new DB2 for Linux, UNIX, and Windows 9.7 release with the DB2 database in Oracle compatibility mode.
- Visual Explain - View the Visual Explain for a particular SQL statement by right-clicking the statement and then clicking Open Visual Explain. This action can be performed from the SQL editor, from the pureQuery-enabled Java editor, and from the SQL outline view. It shows an explain graph for the statement with details on the different nodes/operators (type of operator, cost, cardinality, etc.). Also, right-clicking a particular node in the graph will show additional details (for a table operator, you get to see the columns and indexes with related data).
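As a small companion to the PL/SQL item above: once a package is deployed, calling it from a Java application is plain JDBC. A minimal sketch, in which the package, procedure, and connection details are all made up:

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Types;

public class CallPackageSketch {
    public static void main(String[] args) throws Exception {
        Connection con = DriverManager.getConnection(
                "jdbc:oracle:thin:@//localhost:1521/ORCL", "scott", "tiger");
        // HR_PKG.GET_BONUS is hypothetical: IN employee id, OUT bonus amount.
        CallableStatement cs = con.prepareCall("{call HR_PKG.GET_BONUS(?, ?)}");
        cs.setInt(1, 7839);
        cs.registerOutParameter(2, Types.NUMERIC);
        cs.execute();
        System.out.println("Bonus: " + cs.getBigDecimal(2));
        cs.close();
        con.close();
    }
}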
pureQuery support has been added for Oracle with this release as well. You should be able to use the pureQuery scenarios as listed in this article
except for the dynamic-to-static and the JPA ones. There are several enhancements to pureQuery-based development that have been introduced with this release. Sonali Surange will be publishing an article on this soon.
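To give a flavor of what pureQuery code against Oracle can look like, here is a sketch in the pureQuery inline style. The bean and query are invented, and the API names (DataFactory.getData, queryList) are from my reading of the pureQuery Runtime documentation, so treat the details as approximate:

import java.sql.Connection;
import java.sql.DriverManager;
import java.util.List;
import com.ibm.pdq.runtime.Data;
import com.ibm.pdq.runtime.factory.DataFactory;

public class PureQueryOracleSketch {
    public static class Employee {
        public int empno;
        public String ename;
    }

    public static void main(String[] args) throws Exception {
        Connection con = DriverManager.getConnection(
                "jdbc:oracle:thin:@//localhost:1521/ORCL", "scott", "tiger");
        Data data = DataFactory.getData(con);
        // One call replaces the usual prepare/execute/iterate boilerplate,
        // mapping result columns onto the Employee bean.
        List<Employee> emps = data.queryList(
                "SELECT EMPNO, ENAME FROM EMP WHERE DEPTNO = ?",
                Employee.class, 10);
        for (Employee e : emps) {
            System.out.println(e.empno + " " + e.ename);
        }
        con.close();
    }
}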
As Curt mentions in his blog, Oracle support in the product(s) adds to the heterogeneity of the product set and provides people who need to develop cross-database applications a single platform for developing applications with a consistent look and feel. And this development environment is part of a larger integrated data management suite that provides heterogeneous data lifecycle capabilities from design through application retirement.
Keep an eye on the Integrated Data Management Community Space
, which should have a link to the updated trial download before long. The announcement letter
also indicates the Oracle versions and drivers supported. Please try the product out and give us your valued feedback.
-- Venkatesh Gopal
This is my first time 'showing my face' among the illustrious bloggers on this site, but know that I am the one behind the scenes flogging them with a big stick to blog frequently, write articles, make videos, etc.
I wanted to tell you about something a little new we're trying this year. We toyed around with what to call them - technical chats, lunch and learns, technical webinars - but since we ended up partnering with developerWorks to do these, we are using the term they use for these interactive sessions - 'Virtual Technical Briefing'.
The goal is to provide you with access to some of our technical experts, who will present and demo various technical topics to do with integrated data management (of course involving our products). And we wanted to keep it short so you'd be motivated to squeeze it in during your lunch hour (if the time is right) or come do the replay when you can.
These sessions are pretty cool - it's all done over VoIP and your network connection. You just need an email address to sign in and join us. (Join early or sign in a few days in advance to go through the system check.)
Our kickoff session is called Data Studio becomes Optim: What does it mean for you
The live event is Tuesday, June 30th at 10 AM Pacific, 1 PM Eastern. Holly Hayes and Kevin Foster will present.
We have a tentative schedule and list of topics posted on developerWorks.
But it's not too late to influence that. Let me know if there's something you are just dying to hear more about, and we'll see what we can do. You can add a comment here or send an email to firstname.lastname@example.org...
See you there!
I’m very excited about our recent announcements as they really feature our drive to deliver on the Integrated Data Management vision, with key enhancements in heterogeneity support, cross-lifecycle integration, and automated delivery of best practices. But maybe you’re wondering what we mean by Integrated Data Management.
Organizations often have a myriad of tools from many vendors designed to increase the productivity and effectiveness of the application development, data management, and administration staff. Most tools are built to purpose and put little emphasis on leveraging information gleaned from the others. We think that this lack of cohesion results in increased costs and increased risks. For example, how do you align an organization around compliance requirements? A security analyst needs to specify privacy requirements, a developer needs to analyze how their application manipulates sensitive data, a tester needs to fictionalize data for use outside production, and a DBA needs to encrypt databases hosting sensitive data in production. Or in another example, how does the administration staff isolate problems across interacting components, associate problematic SQL with an issuing application, or identify the source application code or responsible developer?
To a large extent, Integrated Data Management is about delivering capability to enable alignment, productivity, and performance. It is about enabling alignment across the data lifecycle based on policies defined up front that are shared and enabled with downstream tasks. It is about improving not only individual productivity, but also organizational productivity, by supporting tasks across heterogeneous environments, increasing automation, and facilitating collaboration. And it is about optimizing performance and resource utilization by embedding best practices and industry or application expertise into solutions. These capabilities are key enablers for accelerating business growth, reducing infrastructure costs, and enabling data governance. I’ve elaborated on our vision in Integrated Data Management: Managing data across its lifecycle
, now in its third revision, on developerWorks.
Our most recent announcement extends our support for Oracle environments. You can read about the new Optim Development Studio support for Oracle in Venkatesh’s blog entry
. This complements the Oracle support already available for data design, test data management, and data archival capabilities. New portfolio integration points feature privacy management throughout the development cycle: specifying privacy policies in InfoSphere Data Architect,
visualizing sensitive data and actions against it in Optim Development Studio
, and generating test data definitions in either Data Architect or Development Studio for execution with Optim Test Data Management
and Data Privacy
solutions. I’m sure a blog entry is coming on this soon. And our new Optim Query Tuner
takes best practices and delivers them as expert tuning advice to developers or DBAs for tuning queries.
I’d be interested in knowing if this vision for Integrated Data Management resonates with you and your organization. As you think about how your various roles have to interact, what linkages could we enable that would provide the most value to your organization?
Hello, long time reader, first time blogger here. I work as a tech lead managing the advancement of heterogeneous database support for Optim Development Studio and Optim Database Administrator product offerings.
Pleasantries out of the way, I am here to tell you about the new way of packaging our no-charge capabilities, which we hope you’ll like. We’re calling the no-charge capability Data Studio and are using the Optim name for the value-added capability. The goal is that this naming convention should be simpler and less confusing. You can get your basic admin and development tooling with Data Studio, and then add additional functionality if needed by acquiring other Optim products (or Rational or InfoSphere, etc.). You can try some of these additional capabilities by downloading and using the trials.
Data Studio comes in two flavors:
- The first is the standard Eclipse IDE (Integrated Development Environment), which is the way most of the Optim offerings are released.
- The second is what we call stand-alone. This is built as an Eclipse RCP (Rich Client Platform) package.
The stand-alone package is the one I will focus on in this blog. Eclipse RCP, in simplistic terms, refers to the absolute minimum set of plug-ins required to create an Eclipse-based rich client application. One of the primary goals for the Data Studio (RCP) stand-alone package was to provide a lightweight executable that would help DB2 and Informix DBAs perform simple day-to-day admin tasks. So all references to JDT (Java Development Tools) have been removed, which also helps keep the stand-alone image lightweight.
The stand-alone package is only 196 MB (network download size) and is available on 32- and 64-bit platforms for Windows XP/Vista and Linux Red Hat/SUSE, which you can download here
. The installation process itself is trivial. It’s a self-extracting binary that lays out the files in appropriate directories. You will notice that the installer creates a default workspace for you in $HOME/IBM/Data Studio 2.2 stand-alone directory. You can change that at a later time. Since the stand-alone version is completely self-contained, a JRE v1.6 (Java runtime) is also bundled and installed with the product. The built-in help and welcome experience provide appropriate context-sensitive help and tutorials.
As depicted in the table below, the stand-alone version is rich with features that enable DBAs to perform their day-to-day tasks effectively. The main difference between the two packages is that the IDE package also has support for Java stored procedures, Web Services, SQLJ development, and XML because it targets developers as well.
Default perspectives are also different. For the stand-alone package, you will be presented with the Database Administration perspective; for the IDE, the default is the Data perspective. Both packages support Data Development project creation, with the IDE flavor able to create and debug Java stored procedures (in addition to SQL stored procedures, which both support).
A key highlight of both no-charge offerings is that they let you know when another offering can help you perform a task. The IDE version is installed via IBM Installation Manager (IM) and by definition can shell-share with other Eclipse-based products such as Optim, Rational, and InfoSphere. With the stand-alone package, if you want to shell-share with another product, you will need to switch to the Data Studio IDE package. Not to worry, though, if you started with the stand-alone package but then want to shell-share with other products: any and all work done with the stand-alone package can be reused after moving to the IDE package.
Please remember to read the system requirements
before you download. It references important information like Java Runtime versions, Linux download tips, etc. Also, you can check out the discussion forum
if you have questions.
It is our sincere hope that you give this a spin and drop us a line (or add a comment below) about what you think about Data Studio.
-- Srini Bhagavan
It was just over a month ago that I posted the information about our new releases
under the Optim name. Today we announce the z/OS versions of some of those products.
Optim Query Tuner is designed for single-query tuning, and Optim Query Workload Tuner provides both single query and workload tuning capability. Both offer seamless integration with Optim Development Studio
. Optim Query Workload Tuner is a renamed enhancement of IBM Optimization Expert for z/OS and is the upgrade path included with your subscription and support when you are ready to move to the next release. Note that future query tuning enhancements will be made to these products. OSC is still available but will not be enhanced. It will be replaced by similar capabilities in Data Studio under development today.
And a reminder, we did add new discussion forums, including the Optim Query Tuning solution discussion forum
The other z/OS deliverable is Optim pureQuery Runtime for z/OS, which is for deployment natively on z/OS systems (for example, with a WebSphere Application Server for z/OS deployment). Some of the capabilities we added were in response to requests from z/OS customers, including the ability to replace literals with parameter markers, making more statements eligible for static execution. You can find out more about this capability in Sonali’s article on developerWorks
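If the literal-replacement idea is new to you, the shape of the change is easy to picture from plain JDBC; the runtime performs the equivalent under the covers, so you don't rewrite your application:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class LiteralsSketch {
    // An application that concatenates literals produces a distinct statement
    // per value, e.g.:
    //   SELECT LASTNAME FROM EMPLOYEE WHERE WORKDEPT = 'A00'
    //   SELECT LASTNAME FROM EMPLOYEE WHERE WORKDEPT = 'B01'
    // None of these can be bound once and shared. Replacing the literal with
    // a parameter marker collapses them into one statement that can be bound
    // to a package and executed statically:
    static void printByDept(Connection con, String dept) throws Exception {
        PreparedStatement ps = con.prepareStatement(
                "SELECT LASTNAME FROM EMPLOYEE WHERE WORKDEPT = ?");
        ps.setString(1, dept);
        ResultSet rs = ps.executeQuery();
        while (rs.next()) {
            System.out.println(rs.getString(1));
        }
        rs.close();
        ps.close();
    }
}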
Thanks for reading.
I'm looking forward to next Wednesday, July 22nd when I get to participate in one of the virtual technical briefings that Kathy Z blogged about recently
. The topic is InfoSphere Data Architect 101
, and I'm planning to do something with one of the technical architects that combines presentation and demonstration, so hopefully we'll keep it interesting for you.
If you want to get a little background before coming, you can check out this great introductory video
. Also, Holly covered some of the new privacy capabilities in the first virtual tech briefing, Data Studio becomes Optim: What does it mean for you
, which will be available for replay for a limited time. Logistics:
Just sign in
with your computer and email address!
Date: Wednesday, July 22nd, 10 AM Pacific, 1 PM Eastern (but sign in 30 minutes early if you can)
The whole thing is done via the computer, so you may want to go to the web site ahead of time and click on the system check
Talk with you soon.
-- Anson Kokkat
I had two recent visits with customers where I was explaining pureQuery. When I finished what I thought was a nice polished presentation on the subject, both times someone said, "So, I have to use those pureQuery APIs in order to turn my dynamic SQL into static SQL." Ugh. You know that feeling where it seems like you must be speaking in a foreign language because the words just aren't being understood? I felt some relief when Rafael Coss told me that he gets this every time he explains pureQuery, and he has a great knack for making the complex seem simple.
Just in case you are also of the impression that the pureQuery APIs must replace existing JDBC, Spring, Hibernate, etc. calls to the database, the answer is no. The conversion from dynamic to static SQL using client optimization does not require any changes to your application. Plain old JDBC calls can remain in your programs, and with pureQuery Runtime we can capture the SQL and statically bind it to DB2 (z/OS or LUW). This explanation usually creates the "Ah ha" moment.
So, while pondering this, I have come up with a new way to explain pureQuery. I now plan to hold off introducing the APIs until after I finish talking about client optimization and the great capabilities you get when you use client optimization:
- How Optim Development Studio tooling provides the pureQuery Outline to visualize the relationships among Java code, SQL statements and database objects
- How SQL injection can be reduced/eliminated
- How framework-generated SQL can be reviewed and possibly tuned
- How SQL can be revised
- How non-captured SQL can be blocked from reaching the data server
(By the way, Patrick Titzler's tutorial
, although a little dated, still is the best source I know of to understand the process of client optimization.)
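While we're on the subject, here is the other half of the client optimization workflow: after capture, the recorded SQL is bound into DB2 packages and the same unchanged application is reconfigured to run statically. A sketch, assuming the StaticBinder utility and pdqProperties syntax from the pureQuery Runtime documentation; names, options, and paths are placeholders.

import java.sql.Connection;
import java.sql.DriverManager;

public class StaticModeSketch {
    public static void main(String[] args) throws Exception {
        // Step 1 (one-time, outside the application): bind the captured SQL
        // into DB2 packages, for example with the pureQuery StaticBinder:
        //   java com.ibm.pdq.tools.StaticBinder -pureQueryXml capture.pdqxml
        //        -url jdbc:db2://localhost:50000/SAMPLE -username user -password pwd
        // Step 2: flip the same application over to static execution.
        String url = "jdbc:db2://localhost:50000/SAMPLE:"
                + "pdqProperties=executionMode(STATIC),"
                + "pureQueryXml(capture.pdqxml);";
        Connection con = DriverManager.getConnection(url, "user", "password");
        // ...existing JDBC code now runs against the bound packages...
        con.close();
    }
}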
Then, after a deep breath and a new title slide, I plan to talk more about ORM frameworks and our pureQuery APIs. (By the way, if you're curious about how pureQuery relates to ORM frameworks, check out Rafael's ChannelDB2 video.)
Hopefully the "Ah ha" moments won't get delayed any more. :-)
You may have read about the new capabilities in Optim Development Studio in my developerWorks article entitled What's new and cool in Optim Development Studio 2.2
Now you have the opportunity to see it in action. My colleague, Zeus Courtois, has put together a video series for ChannelDB2 that walks through, step by step, the features I described in the article. Here's the link to the first video in the series: http://www.channeldb2.com/video/whats-new-in-optim-development-1
The 5 parts correspond to the following topics:
Do let us know if you would like to know more!
Also, don't miss the Optim Development Studio 101 virtual tech briefing coming up on August 20th at 1 PM Eastern, 10 AM Pacific to get a tour of Optim Development Studio from Vijay Bommireddipalli, whom you may know as an occasional blogger on this site. We also have a surprise guest for this briefing. You'll see how Optim Development Studio can extend the capabilities in Rational Application Developer to turbo-charge the development and optimization of data persistence layers.
You can register for this briefing here
. See a schedule of upcoming briefings here
In my earlier posting on Oracle support
in Optim Development Studio, I briefly mentioned our support for Visual Explain. Now, I will provide more details and walk you through a couple of use case scenarios for Oracle Visual Explain.
Visual Explain for Oracle is modeled similarly to the Visual Explains for other platforms - DB2 LUW, DB2 for z/OS and Informix. The simple use case scenario is one in which you gather explain information for a SQL statement (SELECT, INSERT, DELETE, UPDATE or MERGE). Let's take a look at gathering and showing explain information for the following SQL statement:
select * from scott.emp union all
select * from scott.emp;
If you were using SQL*Plus, you could enter the commands and view the output as shown in Figure 1.
Figure 1. Output for the sample SQL from Oracle SQL*Plus
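If you want to reproduce this outside the tooling, the classic recipe is EXPLAIN PLAN followed by DBMS_XPLAN. A quick sketch of the same flow from JDBC, with the connection details as placeholders:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ExplainSketch {
    public static void main(String[] args) throws Exception {
        Connection con = DriverManager.getConnection(
                "jdbc:oracle:thin:@//localhost:1521/ORCL", "scott", "tiger");
        Statement stmt = con.createStatement();
        // Populate PLAN_TABLE with the plan for the sample statement.
        stmt.execute("EXPLAIN PLAN FOR "
                + "select * from scott.emp union all select * from scott.emp");
        // Format the most recent plan with DBMS_XPLAN.
        ResultSet rs = stmt.executeQuery("SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY)");
        while (rs.next()) {
            System.out.println(rs.getString(1));
        }
        rs.close();
        stmt.close();
        con.close();
    }
}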
In Optim Development Studio, there are several launch points for Visual Explain for any particular SQL statement:
- The Project Explorer
- The SQL Editor window
- The Java editor when using pureQuery
Once launched, a wizard lets you set the options, including the trace settings, which plan table to use (defaults to PLAN_TABLE), and the optimizer mode to use (defaults to CHOOSE). The result is graphical output with nodes representing the operations and a parent/child relationship. Hovering over a node, you can see details such as the node type, cardinality, cost, operation name, and additional details on the operation (the diagram shows abbreviated node names). Right-clicking on a node and then clicking Show Description brings up further details. For example, if you click on the TBFULL node, this brings up details on the table EMP, with its columns, indexes, and additional catalog information as shown in Figure 2.
Figure 2. Details are available for any highlighted node in the Explain graph
There's another way to get explain information via running a background explain from the SQL outline view. I'll review this procedure since this is really a great tool to get cost information for SQL statements.
- As Sonali explained in her article and in the associated video, you can get explain information (and performance hot spot visualization) for Oracle by capturing the SQL from any JDBC application.
- After the SQL statements in the application are captured and are available in the SQL outline view, you can get explain data on them by enabling Background Explain. Do this by right-clicking the project, selecting Properties, going to pureQuery->Background Explain, and clicking the check box as shown below in Figure 3.
Figure 3. Enable background explain as a project property
- Once the explain is enabled, you can view the Explain data for the SQL statements as shown in Figure 4.
Figure 4. SQL outline view with explain information
This feature is extremely useful for developers who want to screen the SQL statements they are writing in their application. Any SQL statement that shows up doing a number of joins and has a high cost can be taken to a DBA for further analysis. When you download Optim Development Studio, let us know what you think of the Visual Explain capability, as well as the other capabilities for Oracle such as PL/SQL support and pureQuery support.
After IBM bought Telelogic
I was (and still am) frequently asked, “What is the difference between InfoSphere Data Architect and Telelogic System Architect (now Rational System Architect)?” Here is the information I have about the product roadmaps and how these two products complement each other.
Before the acquisition:
First, some background to set the stage. As you know, even before the acquisition IBM marketed a data modeling tool, at that time called Rational Data Architect, which had originated from the old Rational Rose and Rational Software Architect product line. It has since evolved into a robust data modeling solution that is today called InfoSphere Data Architect.
Meanwhile, Telelogic was in the business of creating Enterprise Architecture
solutions. The best way to explain Enterprise Architecture is to look at a common application: an IT consolidation project. The goal of the business is to leverage the architecture blueprint of strategy, process, applications, systems, and data to best identify consolidation candidates, then understand the impact of change to the organization in order to successfully plan and implement the transformation. The business goal is to leverage architectural information to maintain, even increase, operational efficiency while reducing costs. Now how do you accomplish this? This is precisely what Enterprise Architecture enables you to do. It lets you come up with the plans and processes needed to move toward the goals you are trying to achieve from a business perspective. To successfully build a current and accurate architecture for planning, information needs to be imported from various sources within the organization. This information can already exist within Visio (process), Excel or asset management tools (IT content), or data modeling tools (information). Telelogic System Architect was (and is) dedicated to building this higher-level Enterprise Architecture, but other tools are more suitable for the detailed work in an individual domain (breadth versus depth). However, synchronization among these point solutions and enterprise architecture solutions is critical for consolidation of data and elimination of duplicate work.
The situation today after the acquisition:
Because of the acquisition, IBM and Rational’s robust solution and data architecture line, including InfoSphere Data Architect, provides the capabilities required by focused solution and data architecture teams. System Architect and InfoSphere Data Architect give customers the solutions they need to address both their high-level EA initiatives and their focused data architecture initiatives, while providing a path of alignment between the two efforts. Today, Telelogic System Architect has been renamed Rational System Architect
. Sure, there is some overlap, but this overlap becomes a value-add to IBM customers. Going back to the IT consolidation example, more accurate decisions can be derived from the Enterprise Architecture if data in the architecture comes from current and accurate sources. System Architect’s import from InfoSphere Data Architect extracts information from an accurate and current source. This is one reason that Rational System Architect has a data modeling component.
So to say all of that in one sentence: InfoSphere Data Architect is meant to be the data modeling tool focused on data architects and database modeling, whereas the data modeling components in Rational System Architect are meant to be the tools that support the data aspects of the broader enterprise architecture strategy.
Going forward:
IBM’s plans for the Rational System Architect product are to expand the concept of Enterprise Architecture Management, making Enterprise Architecture more relevant for more stakeholders within the organization. This is being accomplished by expanding the harvesting, analysis and reporting capabilities in core System Architect, while expanding supported integrations between System Architect and other focused domain solutions. This means that you will see more and more requirements for the lower level tooling being fulfilled by legacy Rational products. As for InfoSphere Data Architect, we are continuing to compete with our peers for the hearts and minds of data architects, enhancing data modeling capabilities, while extending the value proposition of what data designers do by enabling policy-based, management-by-intention capabilities supporting data governance initiatives.
Hope this helps.