Big news last week on the Optim tools front.
- DB2 introduced a new server, Advanced Enterprise Server Edition, that bundles up tons of tools for a phenomenal value, including Optim Performance Manager! Anyone looking at or owning Enterprise Server Edition needs to take a look at this high value offering. See the announcement here.
- All DB2 for Linux, Unix, and Windows customers with current subscriptions now get access to Optim Database Administrator and Optim Development Studio in addition to Data Studio. Use Optim Database Administrator to save time and reduce errors in administering DB2 databases. Use Optim Development Studio to accelerate application development against DB2 and Oracle. Let the downloading begin! (Links further down)
- We also released mod releases and fix packs today for several tools, each adding significant new function, with more coming later in the month.
Let me take you through the highlights.
Administration and Availability
- Data Studio provides an easy-to-use development and administration tool. Data Studio 2.2.1, still available in either a standalone or integrated development environment (IDE) configuration, is enhanced with an improved first-time user experience, customizable catalog navigation, customizable template-based routine development (check out the new article from Michael on this), enhanced data browsing, a new query tuning UI, a new health monitor, and updates for use with DB2 for z/OS V10. Early user feedback has given rave reviews to the new task launcher for improving the experience of first-time users, particularly those not familiar with the Eclipse environment. The new health monitor is a separately installed component with a Web UI, but it is seamlessly launchable from the Eclipse client to monitor DB2 for Linux, UNIX, and Windows health and to view alert, application, utility, and storage details. Download it here.
- Optim Database Administrator saves time and reduces errors associated with complex database changes for DB2 for Linux, UNIX, and Windows databases. Installable together with the Data Studio IDE, it provides copy and paste, compare and synch, and data migration features that identify and manage dependent objects automatically. It provides seamless integration with InfoSphere Data Architect for governing and accelerating the database design through deployment process. Optim Database Administrator 2.2.3 inherits most of the Data Studio enhancements and enhances compare filtering. It also improves performance for building change scripts when working with a small number of objects in very large databases. Use with Optim Test Data Management to streamline test data creation, reduce storage consumption, and manage data privacy. Download Optim Database Administrator 2.2.3 here.
- Optim High Performance Unload reduces demand on system resources, shrinks batch window requirements, accelerates database migrations, and minimizes impact on business operations by speeding up unload processes by 4X or more. Optim High Performance Unload 4.2 is enhanced with DB2 pureScale support and a new option for unloading from a schema.
Design and Development
- InfoSphere Data Architect enhances design productivity, data quality, enterprise consistency and data governance. It supports business and IT collaboration to keep IT aligned with business objectives and helps to define and enforce compliance to enterprise standards. New in InfoSphere Data Architect 7.5.3 is enhanced multi-dimensional modeling including automatic discovery and notation of facts, measures, dimensions, and outriggers, automatic de-normalization of operational data for designing warehouses, and lossless integration with InfoSphere Warehouse and Cognos Framework Manager. Watch for a demo of this new capability later this month. Download InfoSphere Data Architect 7.5.3 here.
- Optim Development Studio increases development efficiency up to 50% for Java data access and facilitates cross-system development and migration. A quick review: it supports development for DB2, Oracle, and Informix. Its SQL outline feature facilitates developer and DBA collaboration by quickly isolating all the SQL for review and enabling impact analysis that correlates SQL with source code, database objects, and ALTER requests. Its application metadata can be transferred to Optim Performance Manager to pinpoint problem SQL down to the Java method and line of code. Plus, it positions the organization to take advantage of Optim pureQuery Runtime to enhance and lock down performance and mitigate SQL injection risk. New in Optim Development Studio 2.2.1 is the ability to deploy SQL scripts, procedures, and functions to multiple development and test servers with a single gesture. The customer feedback has been fabulous. Clifford will have an article coming out in a couple of weeks. Included with Optim Development Studio are development-only copies of Optim pureQuery Runtime and the DB2 Connect JDBC driver, so download all of these together from here.
- Optim Performance Manager provides out-of-the-box performance monitoring and management to improve quality of service and prevent impacts to business operations. Its intuitive web UI provides use-anywhere monitoring, alerting, and diagnosis of potential performance bottlenecks. Integrated tooling enables easy setup and exploitation of the DB2 workload manager to make sure that database system resources are applied according to business priorities. And its performance warehouse provides needed context for proactive performance management, trend analysis, and capacity planning. The new Optim Performance Manager fix pack (timed for delivery with DB2 Advanced Enterprise Server Edition) does double duty, providing health monitoring for all your other development and test systems at no additional charge and giving you a single integrated console for monitoring all your DB2 servers. Plus it adds more flexibility in alert management and DB2 Workload Manager configuration, more performance metrics, more reports, and more consumability improvements. Want more? Upgrade to Optim Performance Manager Enterprise Edition. It enables monitoring of end-to-end database transaction times to assure that critical workloads are meeting their response time targets. New in this delivery: tell us the response time goal for a workload and let us figure out how to make it happen. Plus there’s now support for .NET and Type 2 connection workloads, enhanced Tivoli integration, and more granular performance detail.
- Optim Query Tuner gives DBAs and developers expert advice on query structure, access paths, statistics, and indexes to improve query performance. Optim Query Tuner for Linux, UNIX, and Windows 2.2.1 sports a new user interface to streamline query tuning workflow.
- Optim pureQuery Runtime improves performance and predictability while reducing risk. Optim pureQuery Runtime 2.2.1 adds support for ODBC and CLI applications. Using pureQuery, you can configure a DB2 application to run the application's dynamic SQL statements statically. You can also convert SQL statement string literals to parameter markers to improve performance, replace poorly performing SQL statements with optimized statements without changing the application, and reduce the risk of SQL injection attacks by restricting which SQL statements are allowed to run against the database.
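To see why converting literals to parameter markers helps, here's a minimal, hypothetical Java sketch. This is not the pureQuery implementation (which parses SQL rather than pattern-matching and is far more robust); it just illustrates the idea that queries differing only in literal values collapse into a single parameterized statement that can be prepared, and cached or statically bound, once:

```java
import java.util.regex.Pattern;

// Simplified illustration of literal consolidation. The class name and
// regex approach are ours, for illustration only -- not pureQuery API.
public class SqlParameterizer {
    private static final Pattern STRING_LITERAL = Pattern.compile("'[^']*'");
    private static final Pattern NUMERIC_LITERAL = Pattern.compile("\\b\\d+\\b");

    public static String parameterize(String sql) {
        // Replace quoted strings first so digits inside them are not touched.
        String out = STRING_LITERAL.matcher(sql).replaceAll("?");
        return NUMERIC_LITERAL.matcher(out).replaceAll("?");
    }

    public static void main(String[] args) {
        String q1 = "SELECT * FROM ORDERS WHERE ID = 42 AND STATUS = 'OPEN'";
        String q2 = "SELECT * FROM ORDERS WHERE ID = 97 AND STATUS = 'SHIPPED'";
        // Both variants normalize to the same statement, so the database
        // sees one statement to prepare (or bind) instead of two.
        System.out.println(parameterize(q1));
        System.out.println(parameterize(q1).equals(parameterize(q2)));
    }
}
```

The same normalization is also what closes the door on injection: once values can only travel through markers, a malicious string can't rewrite the statement text.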
OK, let me take a breath! As you can see, we’ve been busy. Hope you’re planning to stop in and see us in Las Vegas at the Information On Demand conference. Kimberly will post a summary of our key sessions and events soon.
Hello, long time reader, first time blogger here. I work as a tech lead managing the advancement of heterogeneous database support for Optim Development Studio and Optim Database Administrator product offerings.
Pleasantries out of the way, I'm here to tell you about the new ways of packaging our no-charge capabilities that we hope you'll like. We're calling the no-charge capability Data Studio and are using the Optim name for the value-added capabilities. The goal is for this naming convention to be simpler and less confusing. You can get your basic administration and development tooling with Data Studio, and then add functionality as needed by acquiring other Optim products (or Rational or InfoSphere, etc.). You can try some of these additional capabilities by downloading and using the trials.
Data Studio comes in two flavors:
- The first being the standard Eclipse IDE (Integrated Development Environment) which is the way most of the Optim offerings are released.
- The second is what we call stand-alone. This is built as an Eclipse RCP (Rich Client Platform) package.
The stand-alone package is the one I will focus on in this blog. Eclipse RCP, in simplistic terms, refers to the minimum set of plug-ins required to create an Eclipse-based rich client application. One of the primary goals for the Data Studio (RCP) stand-alone package was to provide a lightweight executable that helps DB2 and Informix DBAs perform simple day-to-day admin tasks. So the stand-alone package omits the JDT (Java Development Tools) plug-ins entirely, which also helps keep the image lightweight.
The stand-alone package is only 196 MB (network download size) and is available for 32- and 64-bit Windows XP/Vista and Linux Red Hat/SUSE platforms; you can download it here. The installation process itself is trivial: it's a self-extracting binary that lays out the files in the appropriate directories. You will notice that the installer creates a default workspace for you in the $HOME/IBM/Data Studio 2.2 stand-alone directory; you can change that later. Because the stand-alone version is completely self-contained, a JRE 1.6 (Java runtime) is also bundled and installed with the product. The built-in help and welcome experience provide appropriate context-sensitive help and tutorials.
As depicted in the table below, the stand-alone version is rich with features that enable DBAs to perform their day-to-day tasks effectively. The main difference between the two packages is that the IDE package also has support for Java stored procedures, Web services, SQLJ development, and XML, because it targets developers as well.
Default perspectives are also different. For the stand-alone, you will be presented with Database Administration; for the IDE, the default is the Data Perspective. Both packages support Data Development Project creation, with the IDE flavor able to create/debug Java stored procedures (in addition to SQL stored procedures, supported by both.)
A key highlight of both no-charge packages is that they let you know when another offering can help you perform a task. The IDE version is installed via IBM Installation Manager (IM) and can therefore shell-share with other Eclipse-based products such as Optim, Rational, and InfoSphere offerings. With the stand-alone package, if you want to shell-share with another product, you will need to switch to the Data Studio IDE package. Not to worry, though, if you started with the stand-alone package but then want to shell-share with other products: any work done with the stand-alone package can be reused after moving to the IDE package.
Please remember to read the system requirements before you download. They cover important information such as Java runtime versions, Linux download tips, and more. Also, you can check out the discussion forum if you have questions.
It is our sincere hope that you give this a spin and drop us a line (or add a comment below) about what you think about Data Studio.
-- Srini Bhagavan
I’m the performance architect for, among other things, the pureQuery platform. My team is responsible not just for ensuring that our products perform well, but also for helping produce verifiable performance numbers that we can share with confidence.
I’m happy to say that we are ready to share our performance numbers for pureQuery access to DB2 for Linux, UNIX and Windows. (We already have some great numbers published for z/OS for both Java and .NET).
The goal of this particular performance test was to measure throughput improvement using static SQL execution, which is possible to do even for existing JDBC applications with no change to the application source code. The increased throughput comes mainly as a result of saving the cost of preparing the SQL when using static vs dynamic SQL. We typically don’t see the same level of interest in static execution from DB2 for LUW customers as we do from DB2 for z/OS customers because the LUW platform does not have the same memory constraints as z/OS – and therefore LUW customers might be more likely to throw hardware at the problem to achieve greater dynamic cache hit ratios and hence improve throughput.
However, static SQL also provides predictable performance because the access plan is pre-determined and I often find users are happier with predictable response times rather than ultra-fast response which can deteriorate over time.
Static SQL execution also provides much more than predictable performance. By using it, you can significantly improve problem determination and traceability. You can also reduce the risk of SQL injection from dynamically executing applications. You can read about some of those benefits in this article. And there are additional benefits to pureQuery usage, such as literal consolidation and the ability to make emergency fixes to application SQL without changing the application, which you can read about in Sonali’s article on 2.2 features.
OK, now that I’ve hopefully convinced you that there are many, many reasons to consider pureQuery and static SQL execution for DB2 LUW environments, I would like to go ahead and share our performance results.
The measurement environment
Our measurements were done with a typical 3-tier environment of a client, application server, and database server as shown here.
A word about the “ERWW” application we use. ERWW is an OLTP application based on an order entry and tracking system that is designed to exercise the database tier much more than the application tier (that is to say, there is not a lot of business logic in the application). The ERWW workload models a wholesale supplier managing orders, and consists of seven transaction types. The frequency of transactions is set to simulate a realistic scenario; the mix used in the benchmark environment was 47 percent update transactions, 53 percent read-only transactions. The workload is triggered by a Java client program which generates HTTP requests for the required transaction mix.
Before I go into our results, I have to offer up the standard disclaimer that any of you who are familiar with performance work are used to hearing. The tests that we ran were done in a controlled environment where we were able to carefully control the external factors that can influence the results. In particular, the type of application you run can significantly affect the results in terms of the mix of database-intensive work versus application-intensive work. The ERWW workload is very database-intensive, and most of the work is done by the database server processing SQL requests. Therefore, by using pureQuery to optimize the database server side of the processing, we are in fact optimizing a large chunk of the workload. Consequently, the performance gains for this workload are significant. We chose ERWW because it was readily available to us, not because we thought it would give us the best results. I guess what I am trying to say is that your results will vary.
OK, now that that’s out of the way, on to the measurements. We measured static execution both using client optimization of an existing JDBC application and using a ‘new’ version of the application written in pureQuery annotated-method style. The performance is reported as a normalized throughput rate in transactions per second (ITR). The ITR is the notional throughput rate assuming that the CPUs are 100 percent busy. For example, consider an application with a transaction rate of 200 transactions per second at 75 percent CPU consumption. The ITR for this application would be 200 * 100/75 = 267 tps. This is the notional transaction rate that could be achieved if the CPUs were 100 percent busy and no other bottleneck were hit first.
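The normalization above is simple arithmetic; as a quick sketch (class and method names are mine, for illustration), it looks like:

```java
// Normalized throughput rate (ITR): the notional transactions-per-second
// if the CPUs were 100% busy and nothing else bottlenecked first.
public class ThroughputMath {
    public static double itr(double observedTps, double cpuBusyPercent) {
        return observedTps * 100.0 / cpuBusyPercent;
    }

    public static void main(String[] args) {
        // The example from the text: 200 tps at 75% CPU -> ~267 tps normalized.
        System.out.println(Math.round(itr(200, 75)));  // prints 267
    }
}
```

Normalizing this way lets us compare runs that happened to drive the CPUs to different utilization levels on the same footing.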
We measured the JDBC workload with both 90% and 95% package cache hit ratios. To achieve a 90% package cache hit ratio with the ERWW workload, the DB2 Package Cache (PCKCACHESZ) was sized to 180 x 4k pages, and for a 95% hit ratio it was sized to 210 x 4k pages.
Here are the results with a 90% cache hit ratio. The results are shown on the vertical axis as the database ITR improvements over the baseline of JDBC.
As you can see, client optimization almost doubled throughput over the existing JDBC application. The new application that uses pureQuery method style API more than doubled the database transaction throughput.
The results with a 95% cache hit ratio are shown here.
Note that we achieved significant throughput improvements even with a high package cache hit ratio.
In summary, pureQuery and static execution can offer many benefits, one of which may be improving the performance of your data servers with your applications. By changing the dynamic SQL to static SQL, pureQuery should help you either achieve better throughput on your existing hardware, or reduce CPU consumption of your existing hardware, allowing you to load more tasks onto it. I highly recommend that you also check out the bigger picture around Java acceleration (including the other benefits I mention) as shown in this video.
I've been around the block enough times to see that the bricks look the same. I have seen the same performance issues repeated at untold companies with the biggest issue being identification and performance tuning for Java applications.
To be specific, it is Java communication with DB2 for z/OS that has profoundly perplexed me and my z/OS colleagues. Java, from my perspective, has sometimes been a performance nightmare on z/OS systems. Because all of it comes in through the Distributed Data Facility (DDF), we use a one-size-fits-all approach with a single Workload Manager service policy. This isn’t because we want to, but because we have to. To make matters worse, these connections arrive looking sufficiently generic that there is NO WAY to figure out which Java application created a given thread or group of threads.
I can't tell you the number of times I would hear a Java programmer wander into my cube and tell me that they have a problem but they don't know where it is. "..Jeff, can you look at...” in which case I'd say, "STOP! Look at what? A thread? I have hundreds of them. Which one?". And you wonder why DBAs get a reputation as being difficult to work with! But that is another story...
Recently, I joined the IBM Data Studio Enablement team. One of our charters is to articulate the value of Data Studio, pureQuery, DB2 Optimization Expert, and a few other tools. OK, I am an old "green screen" guy. A teammate accused me of not embracing our products to which I answered that I am reluctant to jump on any bandwagon unless I see a true value - Not just as an IBMer, but as a z/OS DB2 System Programmer. How can I "sell" a product if I am not "sold" on the value myself?
Guess what? I found the product combination that is going to change the way Java and DB2 for z/OS work in performance. It is Data Studio Developer with pureQuery. Why? Because I can now uniquely identify the Java thread correlation ID with a unique name which means I can now see it on OMEGAMON for DB2 and I can redirect this work to a service class in Workload Manager specifically used for Java work. (See this demo for an example of how you can see the unique names in OMEGAMON.)
The tooling in Data Studio Developer allows the programmer to quickly develop the Java structures necessary to access DB2 for z/OS. In addition, Java application developers can now easily bind the DB2 for z/OS package used by the Java application. Finally!! We’ve had this application access technique forever with COBOL using CICS. Now we have it with Java, too.
As a result of having a statically bound package for Java code with a Type 4 driver, I can now set up a Workload Manager service policy for Java DB2 calls as they pass through subsystem type DDF, using a PK (package name) classification rule. To go one step further, WLM could be set up with JAVAHIGH and JAVALOW service classes. These classes can be prioritized, with a response-time goal applied for period 1 and a velocity goal applied for period 2. Then, using the PK naming rule, specific packages can be assigned to those service classes.
Not too many products really flip my switch, but this combination of Data Studio Developer and Data Studio pureQuery Runtime is one of them. Dynamic SQL tuning with Java and DB2 for z/OS has been my nemesis for a very long time. pureQuery gets Java code as close to "well-tuned" as I have ever seen. I would recommend Data Studio Developer and pureQuery to anyone struggling with an out-of-control distributed environment going against DB2 for z/OS as the DB-tier.
-- Jeff Sullivan
We have been busy working on an update to Data Studio, just in time for DB2 10.1 for Linux, UNIX, and Windows. This release includes enhancements throughout the product as well as added support for DB2® V10.1 for Linux, UNIX, and Windows databases, including the following features:
- Adaptive compression for table rows
- Special registers for temporal tables in server profiles
- Time-based data management with temporal tables
- Data management using multi-temperature storage
- Data security with row and column access control (RCAC)
Data Studio fully supports these DB2 V10.1 features, making it easier for you to take advantage of them.
Other product enhancements include updated syntax checking, additional support for pureScale, improvements to SQL routine support, improved performance information gathering, and enhancements to query tuning, shell sharing, and the Data Studio web console. Here are the details:
- We updated the syntax checking in the SQL and XQuery editor for all DB2 for Linux, UNIX, and Windows Command Line Processor (CLP) commands to provide validation of your commands and utilities as well as SQL.
- For pureScale we added support to help verify DB2 Cluster Services Status and Manage DB2 Cluster Services Configuration to further assist in the use of pureScale in your enterprise.
- We have made the following improvements around SQL routine support:
- You can now use the routine editor to create, edit, deploy, run and debug routines directly from the Data Source Explorer without having to create projects, simplifying the task of routine development when there is no other need for a development project.
- Likewise, you can re-compile a routine directly from the Data Source Explorer to enable/disable debugging.
- You can now create unit test cases for routines with saved parameters and pre/post SQL, and then compare the current results with a saved baseline result. This helps catch regressions when editing routines.
- To make it easier to tune your SQL before deployment, there is now an improved user interface for profiling DB2 LUW routines. To simplify integration, you can invoke InfoSphere Optim Query Workload Tuner directly from the profiling view.
- You can now use the Configuration Checker to verify that all essential server components are in place for routine development on DB2 for z/OS.
- We made it easier to gather database performance information for stored procedures in DB2 for Linux, UNIX, and Windows databases.
The query tuning features in Data Studio have been enhanced to include the following:
- We added a Start Tuning wizard to the Task Launcher for a better up-and-running experience.
- Our access plan graphs can now display three new nodes:
- Show the rebalancing of rows between SMP subagents with REBAL
- Show multi-distinct processing with MGSTRM
- Show a zigzag join with ZZJOIN
In addition to the Data Studio query tuning enhancements, the client for InfoSphere Optim Query Workload Tuner is now part of the Data Studio v3.1.1 product. After you install Data Studio v3.1.1, all that you need is a license to enable InfoSphere Optim Query Workload Tuner, simplifying the install process for those of you who use both Data Studio and InfoSphere Optim Query Workload Tuner.
On top of all the new functionality, the Data Studio v3.1.1 clients are now built on Eclipse 3.6, which means the Data Studio full client is now compatible (for shell-sharing) with newer Eclipse-based offerings such as the latest IBM Rational Application Developer and Rational Developer for System z.
For the Data Studio web console, we have included the following enhancements:
- New alerts that you can configure for the following:
- High availability disaster recovery (HADR) databases
- DB2 pureScale instance status
- When the DB2 pureScale cluster facility is not found
- For DB2 V10.1 for Linux, UNIX, and Windows databases, we also added the following enhancements:
- Your storage group information is provided in alerts both under the 'Storage' category as well as on the 'Current Table Spaces' dashboard.
- You can create and manage user-defined alert types using custom scripts so you are no longer limited to the out-of-box alerts.
- You can import existing tasks from the DB2 Task Center into the Job Manager to ease the move from the DB2 Task Center to the Job Manager in Data Studio.
- You have more control over your jobs as we have added the ability to cancel a job that has started and specify a timeout period for your jobs.
Visit the Data Studio product page and download from developerWorks. Data Studio is bundled with a number of IBM data servers, such as DB2 for Linux, UNIX, and Windows, DB2 Connect, and IBM Informix. For a good overview, check out "Getting started with IBM Data Studio for DB2".
I had two recent visits with customers where I was explaining pureQuery. When I finished what I thought was a nice polished presentation on the subject, both times someone said, "So, I have to use those pureQuery APIs in order to turn my dynamic SQL into static SQL." Ugh. You know that feeling where it seems like you must be speaking in a foreign language because the words just aren't being understood? I felt some relief when Rafael Coss told me that he gets this every time he explains pureQuery, and he has a great knack for making the complex seem simple.
Just in case you are also under the impression that the pureQuery APIs must replace existing JDBC, Spring, Hibernate, etc. calls to the database: the answer is no. The conversion from dynamic to static SQL using client optimization does not require any changes to your application. Plain old JDBC calls can remain in your programs, and with pureQuery Runtime we can capture the SQL so that it can be statically bound to DB2 (z/OS or LUW). This explanation usually creates the "Ah ha" moment.
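For the curious, that capture-then-bind flow is driven by pureQuery Runtime configuration rather than code changes. As a rough sketch (the property names below are from memory of the 2.x releases; check the pureQuery Runtime documentation for the exact syntax in your version), you might first run the unchanged application in capture mode:

```properties
# Phase 1: run the existing JDBC application unchanged,
# recording the SQL it issues into an XML capture file.
pdq.captureMode=ON
pdq.executionMode=DYNAMIC
pdq.pureQueryXml=capture.pdqxml
```

Then, after binding the captured SQL into DB2 packages (pureQuery ships a StaticBinder utility for this), the very same application is flipped to static execution:

```properties
# Phase 2: execute the previously captured SQL statically.
pdq.captureMode=OFF
pdq.executionMode=STATIC
pdq.pureQueryXml=capture.pdqxml
```

The application code never changes between the two phases, which is exactly the point.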
So, while pondering this, I have come up with a new way to explain pureQuery. I now plan to hold off introducing the APIs until after I finish talking about client optimization and the great capabilities you get when you use client optimization:
- How Optim Development Studio tooling provides the pureQuery Outline to visualize the relationships among Java code, SQL statements and database objects
- How SQL injection can be reduced/eliminated
- How framework-generated SQL can be reviewed and possibly tuned
- How SQL can be revised
- How non-captured SQL can be blocked from reaching the data server
(By the way, Patrick Titzler's tutorial, although a little dated, is still the best source I know of for understanding the process of client optimization.)
Then, after a deep breath and a new title slide, I plan to talk more about ORM frameworks and our pureQuery APIs. (By the way, if you're curious about how pureQuery relates to ORM frameworks, check out Rafael's ChannelDB2 video.)
Hopefully the "Ah ha" moments won't get delayed any more. :-)
I was looking at Scott Ambler’s surveys on IT project success rates. It is very interesting how project success as seen through Scott’s surveys presents a more hopeful picture than the Standish Group’s Chaos Report, which in its 2006 refresh reported a 35% success rate and a 46% “challenged” rate. (There’s a nice entry summarizing a variety of research on the topic in Dan Galorath’s blog, and the 2006 Standish numbers are from an SD Times article.) Standish defined success as “on time, on budget, meeting the spec”, while challenged means the project had cost or time overruns or didn’t fully meet the user’s needs. But I digress…
Scott’s data indicates that projects that use evolutionary development methodologies, e.g. Agile or the Rational Unified Process, fare better than those using traditional waterfall or ad hoc processes. That’s not surprising given the emphasis on tight collaboration among stakeholders and continuous evolution and validation. Really, it’s pretty intuitive. So I was thinking about key characteristics of iterative methodologies and how they relate to database and data access development. (I know, Scott has already thought about this too; see his Agile Data site. And Rafael did a Webcast on it earlier in the year.) But more specifically, I wanted to look at how our Data Studio portfolio supports evolutionary development methodologies. Yes, there’s more to do, but I think what we offer goes a long way toward accelerating solution delivery with high-quality results. Vijay and I are going to do a Webcast on this April 28th titled Accelerating Solution Delivery for Data-Driven Applications. Hope you’ll join us.
In some ways, this is also the companion Webcast to Rafael’s Performance Optimization webcast. In his blog, he talked about how, from a lifecycle perspective, performance optimization can be broken down into doing it right the first time or fixing it after the fact. His Webcast focused on the latter; this one focuses on the former.
What are your stories about evolutionary methodologies and database development? Have you used Data Studio software in this context?
The fix packs for products in the Data Studio family 2.1 have arrived!
The fix packs include enhancements and fixes to the Version 2.1 releases of IBM Data Studio Developer, IBM Data Studio Administrator, and IBM Data Studio pureQuery Runtime, and to InfoSphere Data Architect 7.5.1. These fix packs are intended to fix problems you may have experienced in the 2.1 release. For additional information or details on the included fixes, please check out these links:
- Data Studio Administrator Fix Pack 1
- InfoSphere Data Architect Fix Pack 1
- Data Studio Developer Fix Pack 1
For those who already have these products installed on Windows, you will use IBM Installation Manager to apply the fix pack:
- Go to the link to download the compressed file for Fix Pack 1 and then extract it to a temporary directory. For example, C:\temp
- Start IBM Installation Manager by going to Start > All Programs > IBM Installation Manager > IBM Installation Manager
- In IBM Installation Manager, click File > Preferences to open the Preferences dialog. Click "Add Repository..." and enter the path to Fix Pack 1 (for example, C:\temp). Click OK after you have entered the path, then click Apply on the Preferences page and OK.
- Back in the IBM Installation Manager start page, click Update Packages.
- Select IBM Data Studio (for Data Studio Administrator or Developer) or IBM Software Delivery Platform (for InfoSphere Data Architect) as the package you want to update and click Next.
- On the licensing page, read the license agreement and select "I accept the terms in the license agreement" and click Next.
- On the summary page, verify the installation information and click Update. This will begin the installation of the fix pack on your system.
- When the installation is complete you can click Finish, and close IBM Installation Manager.
For those who don't have the products installed, go ahead and download the trial versions of the products from the following links:
- Data Studio Developer
- Data Studio Administrator
- InfoSphere Data Architect
(By the way, you can find all these links together on the Data Studio Community Space.)
- Unzip the Data Studio package you just downloaded and run setup.exe, which launches (or installs) IBM Installation Manager.
- When prompted to select packages to install, click Check for Other Versions and Extensions. This shows you both the newest IBM Installation Manager and the fix pack!
Now you are ready to go!
We also have fix packs for DB2 Performance Expert and the DB2 Performance Expert Extended Insight Feature: DB2 Performance Expert Version 3.2 Fix Pack 1 and DB2 Performance Expert Extended Insight Feature Version 3.2 Fix Pack 1
-- Tina Chen
It's been less than 5 months since we announced our 1.2 releases of Data Studio, which I blogged about
back in July.
Since then, we have talked to thousands of people, provided demonstrations to hundreds, and visited dozens of customers. People are starting to understand Data Studio and the value of Integrated Data Management better.
With this latest release, announced today, we are really targeting the DBA with enhancements across the portfolio to help DBAs improve application performance, security, manageability, and TCO. In this release, the enhancements are particularly targeting Java applications that access DB2 data, but you'll see we're starting to branch into .NET as well.
The announcements today are for: Data Studio Administrator
2.1, in which we've really focused on both usability and functionality. We've done lots of usability testing with DBAs and have provided a more natural approach for doing many tasks, including copy and paste of database changes, flatter traversal of the data source explorer, better sorting and filtering of objects, and new task assistants for utilities, commands, and configuration parameters, so you won't have to leave your environment to go out to the command line or Control Center to perform those tasks. Data Studio Developer
and Data Studio pureQuery Runtime
2.1, which extends the power of pureQuery for developers and DBAs to collaborate together to:
- Eliminate SQL injection risk for Java database applications by giving you the ability to indicate that only SQL that has been captured and approved may be executed.
- Optimize SQL performance by providing developers with the ability to profile the SQL to see immediately how many times a SQL statement is executed, and how long it takes to run (elapsed time), giving developers an easy way to start identifying potential hot spots in the application before coming to the DBA.
- Improve quality of service for OpenJPA and .NET applications. Steve Brodsky blogged about the integration of pureQuery with OpenJPA, which actually was available with the 1.2 pureQuery release with WebSphere Application Server v7. For the many, many people who ask when they can see the benefits of static SQL with .NET applications, we have taken an initial step in this release by enabling client optimization for .NET applications; in other words, the ability to capture dynamically executed SQL and bind it into packages.
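To make the SQL injection point concrete, here is a minimal sketch in plain JDBC terms (not pureQuery's capture/approve mechanism itself) of why dynamically concatenated SQL is risky and why a parameterized statement, the kind of fixed SQL text that can be captured and locked down, avoids the problem. The table and column names here are hypothetical, and no live connection is needed.

```java
// Sketch: why concatenated SQL is vulnerable to injection.
// Table and column names are hypothetical.
public class InjectionSketch {

    // Unsafe: user input is spliced directly into the SQL text,
    // so the statement text itself changes with every input.
    static String unsafeQuery(String userName) {
        return "SELECT SALARY FROM EMPLOYEE WHERE NAME = '" + userName + "'";
    }

    // Safe pattern: the SQL text is fixed; input travels as a parameter.
    // With a live java.sql.Connection you would run:
    //   PreparedStatement ps = con.prepareStatement(SAFE_SQL);
    //   ps.setString(1, userName);
    static final String SAFE_SQL = "SELECT SALARY FROM EMPLOYEE WHERE NAME = ?";

    public static void main(String[] args) {
        String attack = "x' OR '1'='1";
        // The WHERE clause now matches every row in the table:
        System.out.println(unsafeQuery(attack));
        // The parameterized text never changes, so an approved list of
        // known statements stays stable no matter what users type:
        System.out.println(SAFE_SQL);
    }
}
```

Because the parameterized text is identical on every execution, restricting execution to captured-and-approved statements shuts the injection door without touching application logic.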
Last but not least, DB2 Performance Expert for Linux, UNIX, and Windows 3.2
and the new DB2 Performance Expert Extended Insight Feature
3.2. This is an announcement particularly close to my heart as many of you who have sat in on my talks probably know. Whenever I sit down with DBAs and talk about the problems with diagnosing performance problems in a Java application environment, they always nod their heads in agreement. There is a real pain point here by not having the same diagnostic capabilities for Java as many DBAs are familiar with for COBOL/CICS applications.
If you extend DB2 Performance Expert with the Extended Insight Feature (a separate PID, separately priced, with DB2 Performance Expert as a prerequisite), you can enable new end-to-end database monitoring for Java applications for DB2 servers on Linux, UNIX, and Windows. This monitoring capability will really help improve availability of mission-critical database applications by making it much easier to detect performance issues and figure out whether the problem is in the database or somewhere else in the software stack.
Also, you can set thresholds (your SLAs, so to speak) so you can easily see how the application is performing against those targets. If you haven't read it yet, I encourage you to see the article
that the German team who develops this feature wrote. It's a great introduction to this new capability, and it's really just our first step. This whole concept of providing greater insight to DBAs and developers is planned to be rolled out across more databases and more data access environments.
Just a heads up: we're not done. We have more announcements coming soon!
Is "agile data" just another buzzphrase? Does it even make sense to try to apply agile development principles to the database?
An expert in agile development, Scott Ambler
, sees agile data as an essential component of application development against a database. You can learn more about agile data here: http://www.agiledata.org/
I think one of the classic challenges that agile data faces is dealing with a "brittle" database. What do I mean by brittle? Basically, I am talking about how difficult and time-consuming it can be to refactor the database schema to improve the software. Check out the results of this survey question: "How long does it take to safely rename a column in a production database?"
Source: Data Quality Techniques survey by Ambysoft, September 2006.
The database and/or your software development techniques around the database are "brittle" if it takes longer than one week to make a simple rename change. Almost half of these respondents fell into that category. I would venture to say that more interesting refactoring would therefore take most shops much longer than a week.
Another part of the agile data challenge is about being able to quickly tell what the impact of a change is going to be. If we want to rename a column, what are all the database objects (table spaces, views, stored procedures, etc.) that will be impacted, and is there a tool to help me automate a script to make these changes?
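Just how much work hides behind a "simple" rename becomes obvious if you sketch the change script by hand. The toy generator below emits the DDL for the base rename (DB2 9.7 and later support ALTER TABLE ... RENAME COLUMN) plus one dependent view; the object names are hypothetical. A real tool has to discover every dependent object from the catalog and regenerate all of them in the right order, which is exactly the automation the brittle-database problem is crying out for.

```java
import java.util.List;

// Sketch: generating the DDL for a column rename plus one dependent view.
// Object names are hypothetical; a real tool discovers the dependents
// from the system catalog rather than taking them as arguments.
public class RenameScript {

    static List<String> renameColumn(String table, String oldCol, String newCol,
                                     String dependentView, String newViewBody) {
        return List.of(
            // Dependent objects must be dropped (or allowed to go
            // inoperative) before the base change.
            "DROP VIEW " + dependentView,
            // DB2 9.7+ syntax for the rename itself.
            "ALTER TABLE " + table + " RENAME COLUMN " + oldCol + " TO " + newCol,
            // Recreate the dependent object against the new name.
            "CREATE VIEW " + dependentView + " AS " + newViewBody
        );
    }

    public static void main(String[] args) {
        RenameScript.renameColumn("EMPLOYEE", "EMP_NM", "EMP_NAME",
                "V_EMP", "SELECT EMP_NAME FROM EMPLOYEE")
            .forEach(System.out::println);
    }
}
```

Multiply the three statements above by every view, trigger, stored procedure, and application query that touches the column, and the survey's "more than a week" answers start to look plausible.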
If this sounds interesting to you and you want to learn more about agile data and how Data Studio can help, come listen to a replay (until May 09) of a webcast
I did last week on how Data Studio can help make data more agile.
If you listen to the replay or are exploring agile data I am very curious to get your feedback. Just call me an agile guy. What do you think of applying agile techniques to the database? Are you doing it? If so, what is your experience? What tools are you using? What tools do you need?
What do you think?
-- Rafael Coss
Recently I finished conducting a day-long Proof of Technology session in New York on Data Studio Developer and pureQuery, and I thought I'd share my experience.
For those of you who have never attended an IBM Proof of Technology, it is usually a day-long event at an IBM location and is a combination of presentations and hands-on exercises designed to help attendees learn and play with the technology. The computers at these sites are pre-loaded with the software and exercises that complement the presentations. Your IBM sales rep or tech sales contact is the one who would nominate you to attend one of these.
Back to the pureQuery PoT -
It walks attendees through some of the basics of Data Studio Developer, all the way to advanced pureQuery concepts.
Here are details of some of the modules:
- Basics of Data Studio Developer (including a primer on the Eclipse environment). This is especially useful if you are not familiar with the Eclipse environment.
- pureQuery concepts and exercises
- Tooling in Data Studio Developer for pureQuery
- Bottom-up code generation using pureQuery
- Deploying existing Java applications using static SQL without changing a line of code (aka client optimization)
- Explain capabilities within Data Studio. Check the explain plan as you develop Java programs and stored procedures, or in the integrated query editor.
Judging from the questions and comments from the attendees, it seemed as if they found it worthwhile.
I always like the feedback and validation (and sometimes invalidation) of our ideas.
Things I learnt during this trip:
- Challenges associated with deploying applications when using SQLJ. This is significantly simplified using pureQuery.
- The PoT has way more material than one could cover in a day. Attendees cherry-picked some of the later exercises.
- NYC can get quite cold at night without a thick jacket! (Call me a California wimp.)
My name is Thuan Bui and I am part of the Data Studio Enablement team. One of my jobs is to create demos that illustrate the capabilities and business values of the Data Studio portfolio. A key one we just recorded and put on the Web is a two-part demo
that uses a story to bring to life the things you can do with these products and how the integration can enhance teamwork.
The demos we build are not mocked-up screen shots (at least most of the time). Because people use these for live demos as well as recordings, we really have to come up with more than just a story. We need to create a relevant database schema, load it with data, build supporting applications, and so forth. And of course we use Data Studio to help us with all that :) . (And, yes, we occasionally find bugs and usability issues that we report back to the development team.)
This scenario-based demo shows how and why the Data Studio portfolio is used throughout the entire data lifecycle, including the design, development, deployment, and management stages. We start by showing how pureQuery client optimization
is used to stabilize performance for an existing JDBC application, then show how to use Rational Data Architect for data design tasks, then feed the model to Data Studio Administrator for data model changes and deployment, and how to use pureQuery Outline for impact analysis of a potential schema change. (By the way, if you have no idea what I mean by pureQuery Outline, see this article
.) We use Data Studio Developer tools for SQL, Java application and Web services development and deployment, and finally show how to use the web-based Data Studio Administration Console for database and system health monitoring.
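For readers curious what "client optimization" looks like in practice, the flow in the demo boils down to two phases: run the application once with capture turned on so its dynamic SQL is recorded into a pureQueryXml file, bind the captured statements into DB2 packages, and then run the application statically. The property names below reflect my recollection of the pdq.properties conventions; check the pureQuery documentation for the exact spelling in your release.

```
# Phase 1 - capture the application's dynamic SQL (property names assumed)
pdq.captureMode=ON
pdq.executionMode=DYNAMIC
pdq.pureQueryXml=capture.pdqxml

# Phase 2 - after binding the captured SQL into packages, switch to static
# pdq.captureMode=OFF
# pdq.executionMode=STATIC
```

The key point is that the application itself is unchanged; only the runtime configuration flips between capture and static execution.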
One of the challenges in producing this demo is that we have lots of different components to highlight, at the right level of information, within a time limit. We try not to make it too long or too detailed, so that both technical and non-technical viewers can follow it without losing interest.
Although the story is for a fictional enterprise, our goal is to try and show problems and resolutions that could apply to companies in the real world. Perhaps the most challenging thing for those of us inside IBM is to come up with scenarios that will resonate with you, the people who have to deal with the data management lifecycle every day. Let us know if you think we’re hitting the mark or if there’s something more or different we should be showing, maybe an example from your own experience.
Our next demo will focus on the story for z/OS environments.
We’re looking forward to hearing from you – just add a comment using the Add a comment link below or send an email to firstname.lastname@example.org.
Lately I've been feeling as if I haven't been talking to customers enough and those of you who know me know I love to talk. It's tougher and tougher to fly these days, so maybe I can use this blog to have some virtual meetings. I'm really interested in hearing from DBAs about your experiences using our existing data management tools and what you would like to see in terms of a strategic direction for tools. Send me your questions, comments, concerns, and I'll try to blog regularly with my answers and to further the discussion. You can either send via the comment box here or directly to me at email@example.com.
Using software can be like trying to get to a freeway onramp that you know well by driving through an unfamiliar part of a city. You know what the outcome should look like (cruising on the open highway), but to get there you have to learn new landmarks ("Ah, so I turn left at the donut shop!"), new shortcuts ("So, I saved 5 minutes by taking Jefferson. I'll have to remember that."), the streets that go only one way (*pant, pant* "No, I guess I can't drive that way down Sycamore after all."), and more.
The information roadmap for InfoSphere Optim Query Tuner and InfoSphere Optim Query Workload Tuner will help you get to know the streets and landmarks of those two products, so that you can tune single SQL statements or workloads and get to your open highways faster. You'll find it at http://www.ibm.com/developerworks/data/roadmaps/roadmap_ioqt_ioqwt.html
If you have any suggestions for links to include in it or have any other comments about it, please let me know.
Information Development for InfoSphere Optim Query Tuner and InfoSphere Optim Query Workload Tuner
We heard from many users that they sometimes have trouble finding the information they need for specific products in the multi-product Integrated Data Management information center. To address this problem, we needed to create individual information centers for the products, and yet still provide a way of tying the information together for the people who need to find information for multiple products. So this is what we did…
We separated the documentation for most of the solution into individual information centers; the separation is generally according to product. For those users who do need information across products, it's all easier to get to and keep track of because we added a header to our information centers that has tabs that you can use to navigate from one set of information to another.
Much like the Data Studio Task Launcher has tabs to organize the tasks, the information centers now have tabs to organize the information by data lifecycle phase. The tabs are: Design, Develop, Test, Administer, Monitor, Tune, Backup & Recover, and Archive & Retire.
If you are in the Data Studio information center, and you want to see the InfoSphere Data Architect information center, just click on the Design tab and choose the product or version that you are interested in. To get back, click on the Develop tab or the Administer tab and choose Data Studio. (Hint: In either case, you will get to the same information center because the Develop and Administer tasks are both part of the Data Studio set of information.)
With this new tabbed header, you can get to information for previous product releases, too. This new design makes it easy to find the version of the information that you want, the product that you want, and the key task that you want.
If you want to bookmark the information center for your favorite product now, here is a list of the URLs. And you only need to bookmark one because with this new design, you'll easily be able to find your way to the rest!
Data Studio/InfoSphere Optim Information Architect