I’ve been working on DBA solutions lately -- in particular, performance optimization topics. You’ve heard some cool things about our offerings already if you follow the Data Studio blog, such as Alice’s blog on performance diagnostics or Jeff’s blog on pureQuery from a systems programmer perspective. One of the things I want to do is bring our different offerings together in a more cohesive context.
Performance optimization can be broken down into two big categories: doing it right the first time, and fixing problems after the fact.
Regarding the latter, first you have to recognize that you have a problem (or a potential problem) and where the source of the problem is – that’s where DB2 Performance Expert and the Extended Insight Feature come in.
Then you have to fix the problem. Of course, there’s always adding more resources – more CPU, more memory, more storage. But if that’s not an option, for database performance problems you’re going to want to tune the database and the application themselves.
So I’m pulling this together into a scenario for a Webcast on Tuesday, April 21st entitled IBM Integrated Data Management Solutions for Performance Optimization. You can register here. I know, a little late notice. But hey, there will be a replay too.
-- Rafael Coss
I was looking at Scott Ambler’s surveys on IT project success rates. It is very interesting how project success as seen through Scott’s surveys presents a more hopeful picture than the Standish Group’s Chaos Report, which in its 2006 refresh reported a 35% success rate and a 46% “challenged” rate. (There is a nice entry summarizing a variety of research on the topic in Dan Galorath’s blog, and the 2006 Standish numbers come from an SD Times article.) Standish defined success as “on time, on budget, meeting the spec”, while challenged means the project had cost or time overruns or didn’t fully meet the user’s needs. But I digress…
Scott’s data indicates that projects that use evolutionary development methodologies, e.g. Agile or the Rational Unified Process, fare better than those using traditional waterfall or ad-hoc processes. That’s not surprising given the emphasis on tight collaboration among stakeholders and on continuous evolution and validation. Really, it’s pretty intuitive. So I was thinking about key characteristics of iterative methodologies and how they relate to database and data access development. (I know, Scott has already thought about this too. See his Agile Data site. And Rafael did a Webcast on it earlier in the year.) But more specifically, I wanted to look at how our Data Studio portfolio supports evolutionary development methodologies. Yes, there’s more to do, but I think what we offer goes a long way toward accelerating solution delivery with high-quality results. Vijay and I are going to do a Webcast on this April 28th titled Accelerating Solution Delivery for Data-Driven Applications. Hope you’ll join us.
In some ways, this is also the companion Webcast to Rafael’s Performance Optimization webcast. In his blog, he talked about how, from a lifecycle perspective, performance optimization can be broken down into doing it right the first time or fixing it after the fact. His Webcast focused on the latter and this one on the former.
What are your stories about evolutionary methodologies and database development? Have you used Data Studio software in this context?
We often cite static SQL execution as a benefit of using Data Studio Developer and pureQuery. And those benefits are great. But if you're an Informix developer or DBA, you may still be wondering whether Data Studio Developer can help you improve application performance, since IDS doesn't currently support DB2's static execution model.
When we announced Data Studio Developer 2.1, we added many features that benefit Informix database developers and DBAs. Guy Bowerman covered some of the key 'base' features for both developers and DBAs, including support for UPDATE STATISTICS and improved support for triggers and table fragmentation. But if you're a Java database developer or DBA who cares about performance, take a look at the pureQuery capabilities for Informix as well, including:
- Heterogeneous Batching: You can now develop pureQuery code that gains performance benefits and reduces network operations by using heterogeneous batching of operations.
- Query Response Time Optimization: DBAs might appreciate this one. If you have existing JDBC-based applications, you can see the response times for each SQL statement executed in your application and replace the most expensive queries with better ones, without accessing or changing the application source code.
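The payoff from batching comes from flushing many operations, even against different tables, in one trip instead of one trip per statement. pureQuery's heterogeneous batching is a Java feature, but the underlying idea can be sketched in a few lines of Python with the standard sqlite3 module (table and column names here are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, item TEXT)")
conn.execute("CREATE TABLE audit (id INTEGER, action TEXT)")

orders = [(1, "disk"), (2, "memory"), (3, "cpu")]

# Instead of executing one INSERT per round trip, group the operations --
# including operations against *different* tables -- and flush them in a
# single transaction, the way heterogeneous batching groups statements
# to cut down network operations.
with conn:
    conn.executemany("INSERT INTO orders VALUES (?, ?)", orders)
    conn.executemany("INSERT INTO audit VALUES (?, ?)",
                     [(order_id, "insert") for order_id, _ in orders])

count = conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
print(count)  # 3
```

In a pureQuery application the batching is handled for you; the sketch only shows why grouping operations reduces the per-statement network cost.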
I highly recommend downloading Data Studio Developer and taking a look at the IDS tutorial for Data Studio Developer, which was recently updated to include some of the new pureQuery features. And stay tuned: I will soon be talking more on developerWorks about “what's new and cool” in our upcoming release.
Hi, I haven't written a blog entry for a couple of months, so if you're new to our blog, I'm the team lead and architect for Data Studio Administrator.
One of the features in the 2.1 release is a quick and easy way to copy objects from one database to another. You can read all about it in this developerWorks article that members of my team recently published. But let me summarize here.
Suppose I've just made some changes to my development database and application, and I've completed my testing. Now I want to copy the database changes to a test environment for further validation. Let's see how I can do that using Data Studio Administrator 2.1 with Fix Pack 1.
I connect to my source database in the Data Source Explorer (what used to be called the Database Explorer). I like to drill down using the flat folder view (new in 2.1, which presents objects in folders by type) versus the hierarchical mode, so I toggle to that view using the icon in the Data Source Explorer tool bar. I expand the database and select the Tables folder. In the Object List View I copy the tables that I've changed and want to move to the test system.
Now I'm ready to paste these objects into my target.
To paste the objects into a new database, I connect to the target database, expand the database and select the Schemas folder. I click on the schema I'd like these objects pasted into, right click and choose paste.
When I do this, I have the option to also paste dependent objects and data. The wizard helps me to create a change management script to implement these changes. Optionally I can customize the change further. Finally I can deploy them along with my application changes.
Copy and paste is a quick and convenient way to move objects between environments like development and test.
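To make the effect of the copy/paste concrete outside the GUI: copying an object between environments amounts to recreating its definition (and optionally its data) in the target database. Here is a minimal, purely illustrative Python sketch using sqlite3 and hypothetical file names; Data Studio Administrator itself targets DB2 and generates a full change-management script rather than working this way:

```python
import os
import sqlite3
import tempfile

# Hypothetical stand-ins for a development and a test database.
workdir = tempfile.mkdtemp()
src_path = os.path.join(workdir, "dev.db")
tgt_path = os.path.join(workdir, "test.db")

# Source database holds the table we changed in development.
src = sqlite3.connect(src_path)
src.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
src.execute("INSERT INTO customers VALUES (1, 'Acme')")
src.commit()
src.close()

# "Paste" the object, along with its data, into the target database.
tgt = sqlite3.connect(tgt_path)
tgt.execute("ATTACH DATABASE ? AS dev", (src_path,))
tgt.execute("CREATE TABLE customers AS SELECT * FROM dev.customers")
tgt.commit()

rows = tgt.execute("SELECT * FROM customers").fetchall()
print(rows)  # [(1, 'Acme')]
```

Note that this shortcut does not carry over constraints or dependent objects; that is exactly the gap the generated change-management script closes for you.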
Hi, this is my first time blogging here. I'm an architect in the Data Studio development team, and I work on integrations, heterogeneous database access, and more. I wanted to use this opportunity to tell you about some work I did with pureQuery and Enterprise Generation Language. Enterprise Generation Language (EGL) is a modern programming language specifically designed to help business-oriented developers quickly write full-function applications and services based on Java and modern Web technologies. Business-oriented developers write their business logic in EGL source code using the powerful development facilities of Rational Business Developer Extension, Rational Developer for System z with EGL, or Rational Developer for i for SOA Construction. From there, the tools generate Java or COBOL code, along with all the runtime artifacts you need to deploy the application to the desired execution platform.
Data access is one of the key components of EGL. You can access your database data using EGL SQL Records, which provide a very high level of abstraction and let you access the data using simple verbs, or you can write your own data access logic. Below are simple examples showing both scenarios.
Figure 1. SQL Records
Figure 2. A basic data access program
If you are a regular reader of this forum, you probably already know that pureQuery is IBM's high-performance data access platform focused on simplifying the development, security, management, and optimization of applications that access data. You may have read about the benefits of using pureQuery client optimization with Hibernate, JPA, and even .NET applications. You can also use pureQuery technology with the Java code generated from EGL to:
- Optimize applications that access DB2 on any platform by capturing the statements generated by your EGL application, binding the statements to database packages, and then executing the application in static mode.
- Get insight into your EGL application using Data Studio Developer (which shell-shares with Rational Business Developer) to see a list of the SQL statements originating from your EGL application, with details on the number of times each was executed and its execution times. And you can of course use the outline to jump between an SQL statement and the originating line of Java source code.
- Replace SQL in the program without having to change the application code (if you don't have access to the source code, for example).
- Prevent SQL injection by allowing only SQL statements that have been captured and approved to run against the database.
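The capture, bind, and static-execution flow described above is typically driven through pureQuery client-optimization properties. As a rough sketch of what such a configuration might look like (property names follow the pureQuery client optimization documentation, but treat the exact spellings and values as assumptions to verify against your runtime level):

```properties
# Phase 1: capture the SQL that the EGL-generated Java issues at runtime
pdq.captureMode=ON
pdq.pureQueryXml=capture.pdqxml

# Phase 2, after binding the captured statements into DB2 packages:
# execute statically, and refuse any SQL that was not captured and approved
pdq.executionMode=STATIC
pdq.allowDynamicSQL=FALSE
```

Setting `allowDynamicSQL` to FALSE is what delivers the SQL-injection protection in the last bullet: only captured, approved statements can run.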
Kathy Zeidenstein and I have put together a tutorial on Rational Cafe that shows how this integration works and how the technologies can be used by EGL customers writing applications with DB2 data servers.
Enjoy, and let me know if you have any questions.
I just got back from IDUG in Denver and wanted to dash off a quick blog about it. I sat in on several sessions where Data Studio and pureQuery were being presented (some by our team, some by consultants, some as BOF sessions). We really couldn't have asked for things to go better:
- I saw many of the same customers attending a series of sessions on these topics, so several of the customers were basically "majoring" in Data Studio and pureQuery.
- There were several external consultants presenting their findings of how these technologies worked in their field engagements. All reports were in line with what we found in our own lab measurements.
- I sat in on one presentation where the consultant reported 25% savings, but folks in the audience reported that their savings were closer to 40%.
It was really great to hear these success stories. I look forward to meeting more of you and hearing your stories at IOD EMEA in Berlin and IOD NA in Las Vegas. By the way, the deadline for submitting talks for IOD North America is May 29th. I know there is a strong emphasis on getting customer speakers, so if you have a story to tell, you should submit an abstract.
See you at both conferences.
-- Curt Cotner
I wanted to let you know about a new two-part article series written by Alice Ma and me that is intended to help you use DB2 Performance Expert for Linux, UNIX, and Windows to your best advantage.
- Part 1 focuses on health monitoring, describing how you can set up PE to notify you when problems occur even if you are not sitting in front of the PE Client, and how you can easily see what is going on in your database using the System Health data views.
- Part 2 focuses on more advanced concepts, e.g. how to take performance baselines, how to monitor DB2 WLM, and how to customize PE to monitor partitioned environments effectively.
Performance baselines are useful if you plan a change in your environment, such as applying a new DB2 fixpack or changing a DB2 configuration parameter. Take a performance baseline before and after the change, and you can easily compare whether the change improved or harmed the performance of your DB2 system.
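The before-and-after comparison a baseline enables is, at heart, simple percent-delta arithmetic over your key metrics. A small Python sketch of the idea, using hypothetical metric names and values (PE does this for you against real monitoring data):

```python
# Hypothetical metric snapshots taken before and after a configuration change.
baseline = {"avg_sql_response_ms": 42.0, "bufferpool_hit_ratio": 92.5}
after    = {"avg_sql_response_ms": 35.7, "bufferpool_hit_ratio": 96.1}

def percent_change(before, now):
    """Percent change relative to the baseline; positive means the metric rose."""
    return round((now - before) / before * 100, 1)

deltas = {name: percent_change(baseline[name], after[name]) for name in baseline}
print(deltas)  # {'avg_sql_response_ms': -15.0, 'bufferpool_hit_ratio': 3.9}
```

Here response time dropped 15% and the buffer pool hit ratio rose about 4%, so the change helped; a positive response-time delta would have flagged a regression.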
I hope that the concepts described in the articles will help you to more effectively use DB2 Performance Expert. Note that although there is not a downloadable trial version of Performance Expert, if you are interested you can contact your IBM Sales Representative to get it.
-- Ute Baumbach
OK - I'm finally contributing to our Data Studio blog. Yes, I should have done this a long time ago and yes, now that I've posted this, I've pretty much committed myself to regularly contributing. I'm the product manager for the Data Studio portfolio and oversee the direction of the products within Data Studio.
So my first blog will be about a topic near and dear to my heart, "A Day in the Life of a DBA". I used to be a DBA prior to working for software companies, and I remember being pulled in all different directions during the day. From design sessions with development on new applications, to restoring databases when things go wrong, a DBA's life is never boring. So how does this relate to Data Studio?
We've been working hard on providing functionality for the DBA. Most people think Data Studio is only for the developer; that may have been true a year ago, but it changed with the recent 2.1 release (Dec. '08), and the upcoming release will offer even more functionality for the DBA.
Did you know that with Data Studio and Data Studio Administrator you can:
- Manage Objects - Create, alter, drop, copy/paste, compare, migrate, view dependencies, and the list goes on and on
- Manage Privileges - Grant, revoke, manage roles and users
- Manage Data - Export, Import, Load, Edit Data and more
- Utilities Management - Backup, Restore (including redirected restore), Reorg, and more
- Commands - Start, Stop, List Applications and more
A new demo has been created that walks through some of this functionality. To try it out for yourself, I recommend you download Data Studio Administrator.
-- Deb Jenson
Greetings from Berlin! I’m very pleased today to tell you about several new announcements we are making in the portfolio previously known as Data Studio. With these new releases, we are taking great strides toward the vision of Integrated Data Management -- An integrated, modular environment to manage enterprise application data and optimize data-driven applications, from requirements to retirement across heterogeneous environments.
While delivering on an Integrated Data Management vision is a very broad value proposition, and one that will involve all aspects of IBM Software Group, you’ll see with this release that we are adopting the Optim name as a rallying point for this technology, emphasizing our focus on optimizing the value of your data assets by managing them across their lifecycle. This announcement represents another major step in delivering on our vision, focusing both on extending heterogeneity and on portfolio integration that provides the basis for cross-role, cross-lifecycle collaboration, efficiency, and alignment.
In future posts, we’ll take you through a couple of scenarios enabled by the new releases, but to get you started, here is a summary of the announcement and links to announcement letters and web pages:
- Enhancements to and renaming of Optim Development Studio, Optim pureQuery Runtime, and Optim Database Administrator (formerly Data Studio Developer, Data Studio pureQuery Runtime, and Data Studio Administrator). You’ll find lots of new functionality for both developers and DBAs in these releases. Most notably:
- pureQuery and development support for Oracle databases (including PL/SQL) for a common integrated development environment across DB2, Informix, and Oracle
- A host of fantastic new pureQuery capabilities that customers have been asking for, including translation of literals to host variables to improve application performance
- More complete DB2 for Linux, UNIX, and Windows administration capabilities in Optim Database Administrator including support for large warehousing environments
- Better governance of privacy attributes across design, development, and test environments
- Better support for DB2 package management so that DBAs can restrict changes and rebinds to just those packages affected by a change.
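One of the pureQuery capabilities mentioned above, translating literals to host variables, improves performance because statements that differ only in their literal values can share a single prepared form. The real implementation is SQL-aware and far more robust, but the core idea can be sketched with an intentionally naive regex (illustrative only, not how pureQuery does it):

```python
import re

def parameterize(sql):
    """Replace string and numeric literals with '?' markers (naive sketch)."""
    sql = re.sub(r"'[^']*'", "?", sql)          # string literals
    sql = re.sub(r"\b\d+(\.\d+)?\b", "?", sql)  # numeric literals
    return sql

q1 = "SELECT * FROM orders WHERE id = 42 AND status = 'OPEN'"
q2 = "SELECT * FROM orders WHERE id = 7 AND status = 'SHIPPED'"
print(parameterize(q1))  # SELECT * FROM orders WHERE id = ? AND status = ?
print(parameterize(q1) == parameterize(q2))  # True
```

Because both queries collapse to the same parameterized statement, the database prepares and caches one access plan instead of one per literal combination.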
Many of you may be wondering about the no-charge capabilities. We heard loud and clear that our customers want a stand-alone download package for the no-charge capability. We are reverting to that packaging model with this release. More on that as it becomes available in the near future.
- New product: Optim Query Tuner for DB2 for Linux, UNIX, and Windows brings single-query tuning advice and formatting to the DB2 developer. I think Ray Willoughby will be blogging more on this, but this product provides a great first step toward enabling developers to more effectively tune queries during development. Its integration with Optim Development Studio provides a seamless environment for crafting queries, identifying SQL hot spots, and optimizing queries, all pre-production, to help produce enterprise-ready code and facilitate collaboration between developers and DBAs. See the announcement letter and Web page for more detail.
- New release of InfoSphere Data Architect. This product has always been a shining star in the portfolio for its heterogeneous database capabilities. This release includes improvements to the data governance value proposition around data privacy and the ability to maintain volumetric data for capacity planning. Most notably, users can choose from pre-defined privacy templates and share privacy definitions with developers and testers via Optim Development Studio. See the announcement letter or Web page for more detail.
I’m asking some of my technical leaders and product managers to go into more detail on these announcements in future blog posts. In the meantime, I must go back to the conference. Lots to talk about…
updated 6/16/ to fix minor typo.
Following up on Curt's blog about the new releases in June, let's take a deeper look at what is new in InfoSphere Data Architect.
Building on top of the privacy specifications for generating test data that were already built into the product in December 2008, you will now be able to pick from a predefined list of categories for specific data privacy information. It's probably best to explain this with an example. Let's say you have a credit card column that you want to mark as private by generating a random number while maintaining the first 4 digits of the card. Within InfoSphere Data Architect you can specify that you want to use the credit card masking policy, and IDA will be able to connect to the Optim Test Data Management and Data Privacy solutions to get the appropriate masking method that should be used. Not only can you specify this in the design phase of your model, you can now share it with Optim Development Studio, so that when developing applications you have the ability to view what data is private and even look at the SQL that accesses the sensitive data.
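To make the credit card example concrete, here is a minimal Python sketch of that kind of masking policy: keep the first 4 digits and randomize the rest while preserving the length. The function name and behavior are purely illustrative, not the actual Optim masking method:

```python
import random

def mask_credit_card(number, keep=4, seed=None):
    """Keep the first `keep` digits (the issuer prefix) and replace the
    remaining digits with random ones, preserving the original length."""
    rng = random.Random(seed)  # seed only to make the example repeatable
    digits = [c for c in number if c.isdigit()]
    masked = digits[:keep] + [str(rng.randint(0, 9)) for _ in digits[keep:]]
    return "".join(masked)

card = "4532015112830366"          # a made-up 16-digit card number
masked = mask_credit_card(card, seed=1)
print(masked[:4])  # 4532
```

The masked value still looks like a card number from the same issuer, so test applications behave realistically, but the sensitive digits are gone. (A production policy would also handle details like checksum validity.)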
Also new in IDA 7.5.2 is the capability to size storage requirements and estimate data capacity and growth. This is often called volumetrics support, and we have implemented it in the new release in response to customer requests.
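At its simplest, a volumetric estimate multiplies row count by average row width, adds overhead for indexes and page structures, and compounds an expected growth rate. A back-of-the-envelope Python sketch with made-up numbers (the tool's actual model is more detailed):

```python
def estimate_table_size(row_count, avg_row_bytes, growth_rate, years, overhead=1.3):
    """Rough size projection in bytes: rows * average row width * an
    assumed index/page overhead factor, compounded by an annual growth
    rate. Every parameter here is an illustrative assumption."""
    size_now = row_count * avg_row_bytes * overhead
    return size_now * (1 + growth_rate) ** years

# 10M rows of ~200 bytes, growing 25% per year
now_gb = estimate_table_size(10_000_000, 200, 0.25, 0) / 1e9
in_3y_gb = estimate_table_size(10_000_000, 200, 0.25, 3) / 1e9
print(round(now_gb, 2), round(in_3y_gb, 2))  # 2.6 5.08
```

Even this crude arithmetic shows why capacity planning matters: at 25% annual growth the table nearly doubles in three years.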
Finally, building on the fact that InfoSphere Data Architect is more than just a data modeling tool, we have leveraged all the different use cases that customers have implemented to improve the integration scenarios that we provide with IDA. We already know that Data Architect is built on top of the Rational Software Delivery Platform (reminder: this product used to be called Rational Data Architect), and we continue to improve in those areas, but we have also enhanced integration scenarios related to Information Management. Since most of the Optim solutions for Integrated Data Management are built on Eclipse, you can utilize the connection-information sharing feature that was introduced in the June releases. Also new in IDA 7.5.2 is improved integration with IBM Industry Models and glossary information. All Industry Models and the newly added glossary information can now be managed in InfoSphere Data Architect.
The trial of this release will be available in a few weeks at the current trial download location. The announcement letter is here. Oh, by the way, the announcement letter also contains information about the updated Learning Services course for IDA, which has been enhanced to cover more product capabilities. I always strongly recommend that new users get education, and this new and improved course can help you get started.
-- Anson Kokkat