Thanks to those of you who responded to my previous blog asking for feedback on using Java on z/OS for your database tools. It was really helpful.
I need your help again.
We are having internal discussions about plans for the Data Studio administration console, a no-charge download that includes both a replication dashboard and high-level monitoring of database health and availability. It is the database health and availability monitor that I need your feedback on. I need to hear from both DB2 for LUW and DB2 for z/OS users, so don’t be shy!
If you aren’t familiar with the health and availability monitor, there's a good tutorial here. Just as a reminder, health and availability monitoring enables you to easily assess the high-level health of DB2 for LUW and DB2 for z/OS systems. It includes a health overview, which lets you look over a landscape of database servers, and a dashboard that lets you focus on a single server. In addition, there is a time slider that lets you view changes over time in both the dashboard and an alert history. Health and availability monitoring also includes quick analysis and suggests possible resolutions for many database server conditions and scenarios.
The intended purpose of the health and availability monitor is to serve as a first-level, “at a glance” type of monitor. It’s not intended to provide the deep diagnostics that a monitor like DB2 Performance Expert or Tivoli OMEGAMON XE Performance Expert for DB2 provides, but it does let you quickly glance over your systems and immediately spot whether one of your databases needs attention. Our thought was that customers would most likely want to use this in their test environments, as they may not want to invest heavily in monitoring non-production servers.
OK, so here are my questions (these aren’t formal survey-style questions, so feel free to improvise):
- If you have not installed Data Studio administration console for database health and availability monitoring, we’d like to know why not. Is it because you were not aware of it? Do you already own a monitor for all your environments, including test? If you are willing, it would be great if you could install it locally and give it a try. Then you can also be one of the people to give us feedback on it. :-)
- If you have installed it:
- Was it easy to install? Do you like the overall usability of it? What about quality? Were there features you found useful? Were there features you felt were lacking?
- Does your organization have a need for this type of "at a glance" monitor, and do you have people who would benefit from using it? Or do you typically install a more sophisticated monitor on all your database servers, including test?
- Are you still using it? If not, why not?
Please take a few minutes to dash off an email with as much information as you can. Don’t forget to tell us a bit about your environment and if and how that influences your answers. I really appreciate your help.
You can send your feedback to dstudio at us.ibm.com, and I’ll be sure to get your responses.
-- Bryan Smith
I've been around the block enough times to see that the bricks look the same. I have seen the same performance issues repeated at untold companies with the biggest issue being identification and performance tuning for Java applications.
To be specific, it is Java communication with DB2 for z/OS that has profoundly perplexed me and my z/OS colleagues. Java, from my perspective, has sometimes been a performance nightmare on z/OS systems. Because the work arrives through the Distributed Data Facility (DDF), we are stuck with a one-size-fits-all approach: a single Workload Manager service policy. This isn’t because we want to, but because we have to. To make matters worse, these connections come in looking so generic that there is NO WAY to figure out which Java application created a given thread or set of threads.
I can't tell you the number of times a Java programmer would wander into my cube and tell me they had a problem but didn't know where it was. "...Jeff, can you look at..." At which point I'd say, "STOP! Look at what? A thread? I have hundreds of them. Which one?" And you wonder why DBAs get a reputation for being difficult to work with! But that is another story...
Recently, I joined the IBM Data Studio Enablement team. One of our charters is to articulate the value of Data Studio, pureQuery, DB2 Optimization Expert, and a few other tools. OK, I am an old "green screen" guy. A teammate accused me of not embracing our products, to which I answered that I am reluctant to jump on any bandwagon unless I see true value - not just as an IBMer, but as a z/OS DB2 system programmer. How can I "sell" a product if I am not "sold" on the value myself?
Guess what? I found the product combination that is going to change the way Java and DB2 for z/OS work together from a performance perspective. It is Data Studio Developer with pureQuery. Why? Because I can now give the Java thread a unique, identifiable name, which means I can see it in OMEGAMON for DB2 and redirect the work to a Workload Manager service class set up specifically for Java work. (See this demo for an example of how you can see the unique names in OMEGAMON.)
The tooling in Data Studio Developer allows the programmer to quickly develop the Java structures necessary to access DB2 for z/OS. In addition, Java application developers can now easily bind the DB2 for z/OS package used by the Java application. Finally!! We’ve had this application access technique forever with COBOL under CICS. Now we have it with Java, too.
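To make this concrete, here is a minimal sketch of a pureQuery annotated-method interface against the DSN8910.EMP sample table. The bean, interface, and connection details are invented for illustration, and the API names reflect my reading of the pureQuery documentation, so verify them against your level of the product:

import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Iterator;
import com.ibm.pdq.annotation.Select;
import com.ibm.pdq.runtime.factory.DataFactory;

// Employee.java: a plain bean; pureQuery maps result columns to properties by name
class Employee {
    private String empno;
    private String lastname;
    public String getEmpno() { return empno; }
    public void setEmpno(String v) { empno = v; }
    public String getLastname() { return lastname; }
    public void setLastname(String v) { lastname = v; }
}

// EmployeeData.java: the annotated-method interface; Data Studio Developer
// generates the implementation and binds the SQL into a DB2 package whose
// name you choose, which is what makes the thread identifiable
interface EmployeeData {
    @Select(sql = "SELECT EMPNO, LASTNAME FROM DSN8910.EMP WHERE WORKDEPT = ?")
    Iterator<Employee> getEmployeesByDept(String dept);
}

// PureQueryDemo.java: minimal driver program
class PureQueryDemo {
    public static void main(String[] args) throws Exception {
        // Type 4 JCC connection to DB2 for z/OS; host, port, and location are placeholders
        Connection con = DriverManager.getConnection(
                "jdbc:db2://myhost:446/DB2LOC", "myuser", "mypassword");
        EmployeeData data = DataFactory.getData(EmployeeData.class, con);
        Iterator<Employee> emps = data.getEmployeesByDept("A00");
        while (emps.hasNext()) {
            System.out.println(emps.next().getLastname());
        }
        con.close();
    }
}

Because the SQL ends up in a bound package with a known name, the work is no longer anonymous, which is exactly what makes the WLM classification described next possible.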
As a result of having a statically bound package for Java code with a type 4 driver, I can now set up a Workload Manager service policy for Java DB2 calls as they pass through subsystem type DDF, using a PK (package name) classification rule. To go one step further, WLM could be set up with JAVAHIGH and JAVALOW service classes. These classes can then be prioritized, with a response time goal for period 1 and a velocity goal for period 2. Finally, using the PK naming rule, specific packages can be assigned to these service classes.
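To illustrate, the DDF classification rules might look something like the sketch below. The package names and service class names here are invented for the example; your own naming standards will differ:

Subsystem Type: DDF
  Qualifier type   Qualifier name   Service class
  PK               PAYJAVA*         JAVAHIGH
  PK               RPTJAVA*         JAVALOW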
Not too many products really flip my switch, but this combination of Data Studio Developer and Data Studio pureQuery Runtime is one of them. Dynamic SQL tuning with Java and DB2 for z/OS has been my nemesis for a very long time. pureQuery gets Java code as close to "well-tuned" as I have ever seen. I would recommend Data Studio Developer and pureQuery to anyone struggling with an out-of-control distributed environment going against DB2 for z/OS as the DB-tier.
-- Jeff Sullivan
You might want to check out a new article Tina Chen wrote about What's New in Data Studio Administrator 2.1.
The article highlights the many database administration functions that have been added to Data Studio. For those new to Eclipse, I think this article will help you cut through the unfamiliar environment and start tackling real database tasks. If you're a Data Studio Administrator 1.2 or DB2 Change Management Expert user, you'll notice the radical UI improvements, making the product easier to learn and easier to use. Check it out.
I just got off the phone with a DBA who started using DB2 Performance Expert (PE) a couple of months ago. This customer, whom I will call “Milo” after my cat, called me to tell me that they were finally able to diagnose a pesky performance problem that had been hiding for all these months.
Let me back up to give you some history. When I first talked to Milo, they were having performance problems in all their environments. Before using PE, they usually ran a series of scripts to diagnose performance problems. These scripts would help them collect performance metrics for each partition. To add to the complexity, they had a bunch of different environments they were monitoring. This meant that Milo would end up with reports all over the place -- in different systems and in different directory structures. Then, when he downloaded them to his workstation, he would need to print them out.
Milo's desk was covered with printouts. He had stacks of printouts on the left, stacks of printouts on the right -- they were even stacked on his bookcase. Milo's baseball bobble heads were holding up printouts because Milo ran out of room in his cube. Each area of his cube represented a different environment. Milo even made jokes that he was drowning in his own printouts, and he felt like the performance reports were multiplying if you left them alone. We both got a good chuckle out of that.
Fast forward to life after getting PE. With PE, Milo was easily able to pinpoint the performance problem in their large warehousing environments. PE was able to monitor all their different environments, which helped them diagnose a ton of problems because they were able to see all the performance metrics and how they correlated. For example, PE allowed them to view each partition and compare the partitions.
However, there was one sticky problem in their large warehouse environment (environment A) that they were unable to diagnose. After they began using PE, that slowdown didn’t show up for months. Then one day, the pesky performance problem finally revealed itself again: one of the DBAs ran the old performance scripts on environment A, at which point the problem reappeared, and the team could finally work on isolating it.
Milo told me, laughing, "We couldn't figure it out... the slowdown happened almost like clockwork. We had our scripts scheduled to capture the performance data, and somehow it just happened. We used PE to diagnose the problem, only to discover it was our own poorly written performance scripts that caused the problem." You see, environment A was built on smaller UNIX boxes, with less memory and fewer other resources. When they ran the performance scripts, the scripts caused the system to run out of memory, thus impacting their DB2 system. Since PE was able to show them what else was using resources on the system outside of DB2, they were able to see the problem immediately. Their scripts didn't check OS resources, but PE does.
The person who wrote the original scripts has been gone for a long time. The scripts ran fine and never caused performance problems on the other larger environments, so there was never any reason to examine them. Ooops :-)
Cheers - Alice Ma
Edited on 2/6/2009 to correct the query and to acknowledge that the existence of Java stored procedures may not necessarily mean you have the SDK. Thanks for keeping me honest, folks.
We are investigating implementing some server-side functions in our data tools that would run in a Java runtime on z/OS, and I would appreciate getting your feedback to help us with this planning work.
- Do you have the "IBM SDK for z/OS Java 2" installed on all of your z/OS systems where you would be running your IBM DB2 tools? (If you're a DBA, you may need to ask your system programmer.) If you know you have Java stored procedures, you are more likely to have it. If you're not sure whether you have any Java stored procedures, you can run this query:
SELECT SCHEMA, NAME, CREATEDBY, LANGUAGE, ROUTINETYPE, SPECIFICNAME, WLM_ENVIRONMENT
FROM SYSIBM.SYSROUTINES
WHERE LANGUAGE = 'JAVA';
- If yes, what level(s) of the SDK do you have installed?
- Do you have zAAPs on the LPARs you would run the tools on?
- If no (that is, you don't have the SDK installed), do you foresee any problems installing that as a prereq for a DB2 for z/OS tool product?
- Any comments about our using Java on z/OS?
You can send your feedback directly to me at bfsmith at us.ibm.com.
Thanks a lot!
--- Bryan Smith
…and at the same time maintain or improve query performance.
Previously I have blogged about how DB2 Optimization Expert can help developers produce better-performing queries early on during development. So with good query tuning it should be possible to drive down CPU costs for database applications.
However, there’s another way that DB2 Optimization Expert can help drive down CPU costs and it has to do with the general maintenance that DBAs are tasked to perform.
A common practice in many DB2 for z/OS shops is to execute RUNSTATS with the TABLESPACE <database-name.table-space-name> TABLE ALL option. If you have ever attended one of Bryan Smith’s presentations on utilities, you have heard him say that using this option is expensive and wastes CPU resources.
Why is this option expensive? The TABLE ALL option gathers statistics on all columns of the table(s) in the named table space. The CPU resources required increase as the table size increases, and they also depend on the number of columns defined. Are all of those column statistics really needed? Probably not: if a column is not referenced in the WHERE clause of any query, then unneeded statistics have been gathered and unnecessary CPU resources have been consumed. Also, the TABLE ALL option does not gather COLGROUP or histogram statistics, which might otherwise improve filter factor estimates and query performance.
The DB2 for z/OS V9.1 Utilities Guide has a paragraph in the “Improving RUNSTATS performance” section:
“Run RUNSTATS on only the columns or column groups that might be used as search conditions in a WHERE clause of queries. Use the COLGROUP option to identify the column groups. Collecting additional statistics on groups or columns that are used as predicates improves the accuracy of the filter factor estimate and leads to improved query performance. Collecting statistics on all columns of a table is costly and might not be necessary.”
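Syntactically, following that advice is straightforward. A hand-written statement in that spirit might look like the following, using table and column names from the DB2 sample database purely for illustration:

RUNSTATS TABLESPACE DSN8D91A.DSN8S91E TABLE(DSN8910.EMP) COLUMN(WORKDEPT) COLGROUP(WORKDEPT, JOB) FREQVAL COUNT 10 SHRLEVEL CHANGE REPORT YES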
Easier said than done. How does one perform the analysis required to gather only the statistics that are truly needed to maintain query performance while reducing the CPU requirements of RUNSTATS? Without a tool, it’s a manually intensive process, and it can be even more of a challenge when working with dynamic SQL. Just to give you a feel for the effort, here are the steps you might need to perform:
- Capture the SQL for a given object or set of related objects
- Identify all of the columns which are used in the WHERE clause
- Identify which of the above columns are used in an index
- Capture usage metrics for the predicates
As you can see, this could be a time-consuming and arduous task.
Fortunately, the Statistics Advisor in DB2 Optimization Expert for z/OS can make this analysis less daunting – a lot less daunting. Statistics Advisor will analyze a single statement or group of statements (aka workload) and provide a set of recommended RUNSTATS statements.
To illustrate how easy it is to gather the necessary statistics in OE, I captured a workload from the DB2 catalog for a specific collection. I was presented with 852 queries, and I then invoked Workload Statistics Advisor on this set of queries to receive the recommended RUNSTATS statements for the objects in the workload. The process was completed in less than thirty minutes. For dynamic SQL, I would follow basically the same process, but the workload would be captured from the dynamic statement cache or, on DB2 for z/OS V9.1 subsystems, from the profile monitor.
To validate the CPU usage assertions, I actually executed RUNSTATS twice on one of the table spaces used in the above workload. The first execution used the commonly used TABLE ALL option:
RUNSTATS TABLESPACE DBASE1.TSPACE1 TABLE(ALL) INDEX(ALL) SHRLEVEL CHANGE REPORT YES
The second execution used the statement recommended by OE’s Statistics Advisor:
RUNSTATS TABLESPACE DBASE1.TSPACE1 TABLE(QUAL1.TB1) COLUMN(EMP_NO,PROJ_NO) INDEX(QUAL1.TB1X3 HISTOGRAM NUMCOLS 1 NUMQUANTILES 20, QUAL1.TB1X2, QUAL1.TBIX1 KEYCARD) SHRLEVEL CHANGE REPORT YES
There was a 28% reduction in CPU time between the two runs. Not only has the CPU consumption been reduced, but additional filter factor statistics have been gathered that should improve query performance.
One topic I have not covered is SAMPLING. It is a technique that can be used to further reduce RUNSTATS CPU consumption by limiting the number of rows evaluated. Statistics Advisor supports SAMPLING and its use is controlled via user-managed preferences.
So, in summary, DB2 Optimization Expert’s Statistics Advisor can help DBAs gather the right statistics, resulting in less CPU consumption and improved maintenance window throughput. And the additional statistics gathered may improve query performance, which is the ultimate goal of running RUNSTATS after all.
I work with LOTS of customers who use or want to use DB2 Performance Expert for LUW. The first question I get asked usually has to do with the basic architecture - what pieces run where and on what platform?
The first thing to understand is that DB2 PE is a client/server application. The PE server runs on LUW platforms (AIX, SUN, Linux, Windows, etc.). Most customers install the PE client on Windows and then connect to the PE server running on AIX, SUN, Linux, Windows, etc.
Look at the diagram below.
As shown there, we recommend that you install the PE server on its own DB2 instance -- not on the monitored box -- and have that server remotely monitoring the other DB2 instances on the network. Why? Well, you don't want to impact the monitored DB2 instances. I like to call this window shopping... you look but you don't want to touch. Note that in the 3.2 release, you get a license of DB2 ESE for just this purpose so you don't need to use one of your own DB2 installations for this.
PE can monitor different DB2 levels, all from a single PE server. You can mix and match your operating systems, too. For example, the PE server can be installed on AIX and remotely monitor your DB2 instances running on AIX, SUN, Linux, Windows, etc.
If you want the gory details on the specific levels of hardware and software supported for both the client piece and the server piece of DB2 PE, see this technote.
I'll be back soon with answers to more FAQs. Feel free to submit your questions here using the comment link below or send an email to dstudio at us.ibm.com.
-- Alice Ma
In these trying economic times, we have to find ways to reduce costs everywhere we can. And if you can reduce costs while at the same time improving staff productivity, well, that looks pretty good.
For organizations running WebSphere and DB2 applications on z/OS, and there are quite a few, we think we have just that opportunity in pureQuery software. Stephen Brodsky and others have been writing about the value of pureQuery on this blog, and we published some good numbers on pureQuery performance as well. In case you prefer listening to reading, I recently recorded a podcast on pureQuery. You can find the podcast on iTunes by searching the podcast directory for “Did you say mainframe?”, which will bring up the Did you say mainframe? category. My podcast is the one entitled "Enhancing Java environments with pureQuery."
Also, Stephen and I are doing a webcast together on February 4th focused on pureQuery in the mainframe environment. You can register here. I hope you’ll join us and that this creates an opportunity for your organization to save money.
A question that keeps coming up again and again.... and again is:
"Should we use stored procedures, or SQLJ or pureQuery for z/OS database access? "
The answer, as usual, is - "It depends."
Let me try to give some perspective on the question about stored procedures, SQLJ, and pureQuery.
Stored procedures are primarily meant to reduce network traffic and encapsulate a series of SQL operations with logic, but have also been used heavily to lock down the SQL issued from applications - similar to static SQL.
One thing to consider is the additional overhead incurred for invoking stored procedures. This cost can be pretty high from what I understand. If there are a bunch of statements grouped together, along with business logic, or if results from one statement need to be used as input to another, then stored procedures are more appropriate. If you have only one or very few SQL statements, with no logical need to group them together, then the overhead of the stored procedure call becomes more of a burden on the path length. There are more efficient ways to lock down SQL statements.
pureQuery and SQLJ provide options to lock down statements (using static SQL) without having to incur the additional stored procedure overhead.
The other consideration is cost. With pureQuery and SQLJ, the workload is eligible to run on zIIP processors. Stored procedures will run on the general purpose CPU, unless they are native SQL procedures, which became available in V9 of DB2 for z/OS.
So, in cases where you have considered and ruled out stored procedures, you now have to weigh SQLJ against pureQuery. OK, so pureQuery is not free. However, from a total cost of ownership point of view, there are advantages to pureQuery. For example, Data Studio Developer has some slick tooling for pureQuery. The code generation, content assist, and so on, which are popular with developers, are designed around the pureQuery API. Also, if you have existing JDBC applications that you want to optimize and/or provide additional security for, you can get to static SQL without having to change any code, using the pureQuery Runtime and Data Studio Developer tools.
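Here is a rough sketch of how that no-code-change path works, based on my understanding of pureQuery client optimization; the URL, file path, and pdqProperties names below are placeholders and assumptions worth verifying against your level of the pureQuery Runtime:

class ClientOptimizationSketch {
    // Step 1: run the existing, unchanged JDBC application with SQL capture
    // enabled; only the connection URL changes.
    static final String CAPTURE_URL =
            "jdbc:db2://myhost:446/DB2LOC:"
            + "pdqProperties=captureMode(ON),pureQueryXml(/tmp/capture.pdqxml);";

    // Step 2 happens outside the application: bind the SQL captured in
    // capture.pdqxml into DB2 packages (for example, from Data Studio Developer).

    // Step 3: run the same application again, now executing the bound
    // packages statically instead of preparing dynamic SQL.
    static final String STATIC_URL =
            "jdbc:db2://myhost:446/DB2LOC:"
            + "pdqProperties=executionMode(STATIC),pureQueryXml(/tmp/capture.pdqxml);";
}

The application's SQL never changes; the connection properties decide whether it runs dynamically or statically, which is what makes the lockdown a deployment-time decision.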
Deployment is also significantly simplified with pureQuery. With SQLJ, emergency changes to the application and redeployment are challenging because both developers and DBAs need to get involved in altering the SQL, rebinding, and redeploying the application. With pureQuery, there is no back and forth between development and DBAs: the DBA can edit the package directly, and the tooling ensures that the modified SQL provides equivalent results.
My colleague Holly Hayes came up with this summary, which I like, of the three options for achieving static SQL execution to increase z/OS capacity, lock down access paths, and enhance security:
Characteristics of COBOL stored procedures
- No additional license cost
- Common practice to encapsulate SQL and logic together and separate ownership of SQL
- Not desirable for single SQL requests due to added path length of stored procedure invocation
- Execution on general purpose CPU (if not native stored procedures on V9) increasing z/OS costs
- SQL lockdown is a design time decision - i.e. need to write code accordingly
Characteristics of SQLJ
- No additional license cost
- SQLJ has not had widespread adoption and some Java developers are resistant
- Tooling is significantly limited - in comparison with pureQuery
- SQL lockdown is a design time decision - i.e. need to write code accordingly
Characteristics of pureQuery
- Added license costs, but:
- Can be used with existing Java applications to convert to static without modifying the application - gives DBAs more control
- Will typically run on a specialty engine rather than general purpose CPU reducing DB2 hardware and software costs
- DBA can modify SQL (not just the execution mode, but also the statement) without modifying application for emergency fixes
- Enables Data Studio Developer tooling for tracing SQL to Java source, impact analysis, and hot spot analysis
- Enables use of Data Studio Developer for pureQuery access layer generation
- Allows for separation of business logic and data access
- Can lock down SQL to DBA-approved list (static and dynamic execution)
- Can also be used inside Data Studio Web services without writing any code
- SQL lockdown is a deployment time decision - i.e. no code changes needed to deploy dynamically or statically
By the way, I strongly encourage you to check out the webcast that Holly and Steve Brodsky will be doing on using pureQuery with WebSphere apps on z/OS to reduce overall costs, reduce time to market for new Java apps, improve security, and more, where you can find out a lot more detail about the above key points.
-- Vijay Bommireddipalli
Hi! It’s been a while since I posted an entry, and with the announcement of DB2 Optimization Expert for z/OS V2.1, I wanted to give you some highlights of what this release offers.
The biggest news about this release is the ability to run in an integrated desktop with IBM Data Studio, Rational, InfoSphere, and other products built on Eclipse V3.4.1, such as:
- IBM Data Studio Developer
- Rational Application Developer
- Rational Software Architect
- InfoSphere Data Architect
This integration is accomplished with shell sharing; Michael Hsing has a recent entry on this topic, which also includes links to other shell sharing articles.
So what does that integration provide? If you are a user of any of the above products, you can now invoke DB2 Optimization Expert without leaving the environment of those products. To me this is ideal for developers who want to perform query analysis on SQL contained within their code without leaving the IDE. My last blog entry provided information on those tools and advisors and can be reviewed here. You can also see the advisors in action in this demo that Thuan blogged about earlier.
One important note: The ability to shell-share with other Eclipse-based products such as Data Studio is only available with DB2 Optimization Expert. The Optimization Service Center included with the DB2 z/OS Accessories Suite does not include this capability.
For those of you who are DBAs and not developers, I encourage you to try DB2 Optimization Expert 2.1, since the integrated desktop is our stated direction. However, because this release contains the same features and functions as V1.2.2, you can also continue using V1.2.2 for now and migrate to the new release at your own pace.
The primary focus of this release was shell sharing; our future release will focus on enhancements to our tools and advisors, as well as improved usability and workflow.
Stay tuned, more is coming.
Hi all! If you have been reading this blog for a while, you must remember Steve Brodsky’s entry on pureQuery and pureXML. If you are a new reader of this blog or don’t remember Steve’s post, you can find it here.
In his entry, Steve described how both technologies were born and why both of them got the “pure” in the name. He finished the post by describing some of the integration points between pureQuery and pureXML. That’s where I jump in! Motivated by his post, I decided to create some code snippets to show you how you can plug pureQuery and pureXML together to create Java applications that persist data in a DB2 pureXML database. My initial plan was to put that in a blog post, but as I started writing it down, more ideas were flowing in my mind than I could actually fit into a single blog entry, so I decided to work on a more complete article.
The article contains the code samples (available for download) that will get you started developing with pureQuery and pureXML, but its main focus is on the different approaches that one can use when developing such applications.
In a typical application development scenario with three layers – SQL, data access API and business logic - I suggest three different approaches to handle the XML data, each one focusing on a different layer. There are certainly more approaches you can use, and you can even mix and match some of them, but my main goal was to get you started with these two great technologies and to open your mind to different ways of thinking when it comes to integrating XML into your Java applications.
Without further ado, here are the approaches I suggest in the article:
- Give control to the SQL layer: With this approach, use the SQL layer to transform between XML and relational format so that the data can be used by the existing facilities provided by pureQuery. (There is a short sketch of this approach right after this list.)
- Give control to the data access API: With this approach, use an XML mapping framework (the article uses the mapping libraries of J2SE V6) to map between XML documents and Java objects, integrated into pureQuery's API through custom handlers.
- Give control to the application layer: Implement your own mapping framework integrated in the Java beans that will represent your data in the business logic.
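Before you head to the article, here is a minimal sketch of the first approach, against the CUSTOMER table of the DB2 SAMPLE database (whose INFO column is XML). This snippet is my own illustration rather than code from the article, and it uses the inline pureQuery API as I understand it, so treat the API details as assumptions:

import java.sql.Connection;
import java.sql.DriverManager;
import com.ibm.pdq.runtime.Data;
import com.ibm.pdq.runtime.factory.DataFactory;

class PureXmlSqlLayerDemo {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details for a DB2 for LUW SAMPLE database
        Connection con = DriverManager.getConnection(
                "jdbc:db2://myhost:50000/SAMPLE", "myuser", "mypassword");
        Data data = DataFactory.getData(con);
        // The SQL layer serializes the XML column to character data,
        // so pureQuery only ever sees a plain String
        String doc = data.queryFirst(
                "SELECT XMLSERIALIZE(INFO AS CLOB(1M)) FROM CUSTOMER WHERE CID = ?",
                String.class, 1000L);
        System.out.println(doc);
        con.close();
    }
}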
And here is the link to the article itself, which was just published on the IBM developerWorks website: Handle pureXML data in Java applications with pureQuery
I hope you find it a good read! We continuously get questions from you, our customers, regarding this topic, so I hope I have answered them. After you are done reading it, go play with pureQuery (you can get it by downloading the trial version of Data Studio Developer) and pureXML yourself, and make sure you give us your feedback, either here on the blog or on the Data Studio forum.
-- Vitor Rodrigues
Hi, I belong to the DB2 Performance Expert development team, and one of my roles is to support customers and help them get the most out of the product. DB2 Performance Expert for Linux, UNIX, and Windows 3.2 and the new DB2 Performance Expert Extended Insight Feature for end-to-end database monitoring of Java applications are now available. The Performance Expert Extended Insight Feature gives DBAs a new view into the performance of Java database applications. It helps to:
- Improve availability of mission-critical Java database applications by detecting negative trends sooner. You can look at your Java database applications from different workload scope levels and get transaction response time values. For example, you can look at the end users of your Java database applications and see what response time each individual user gets. If you see large differences in response times, you can drill down and see whether the users execute different SQL statements, or whether one user is on a different WebSphere Application Server machine that is configured differently and therefore produces longer response times in general.
- Manage applications to meet service level agreements more easily and effectively. You can set warning and problem thresholds on the response time values that correspond to your service level agreements. Graphical displays and signal lights warn you immediately when these thresholds are breached.
- Reduce the time needed to isolate performance issues from days to minutes using graphical displays. Response time graphs show you the average and maximum end-to-end response time over time at your selected workload scope level (for example, per application or per user). Any response time changes or peaks can be detected easily. Additionally, a histogram shows you the distribution of the response times of all transactions.
- Improve collaboration between DBAs and other members of the IT organization by providing key performance indicators across the software stack. Key performance indicators include the average end-to-end response time of a transaction as well as the portions of the response time spent in the application, the driver, the network, and the data server. By looking at these time-spent metrics, you know immediately where in the software stack the time went, which helps you identify problems.
It’s sometimes difficult to explain the real value of what we call end-to-end database monitoring in words (although you can read about it in this article), so I wanted to use this as an opportunity to introduce you to a short demo that I think is worth more than a thousand words.
Thuan mentioned this video in Monday's posting. It's a live demo given by the lead architect of DB2 Performance Expert, Torsten Steinbach. In the demo you learn how easy it is to identify the SQL statements that are responsible for a bad response time, to identify who issued a statement, and to see where the statement spent its time (application, driver, network, or data server). Take a look at the video to get a first impression of this great new feature of DB2 Performance Expert.
Please let me know by adding a comment to this blog or sending an email to firstname.lastname@example.org if you have any specific questions about DB2 Performance Expert you would like me to address in future blog entries.
Just want to let you know that we recently posted another demo for Data Studio on DemoZone. This is a two-part, scenario-based demo focused on a DB2 for z/OS environment. In part 1, we wanted to show some cool features available in DB2 Optimization Expert for z/OS for tuning and optimizing query performance. The second part is an extension to the previous demo set, with details of the steps for setting up and using the client optimization feature in Data Studio Developer and pureQuery Runtime for performance stability. Let us know your comments on this video and what specific demos you’d like to see.
I found DB2 Optimization Expert for z/OS easy to use. Of course, when you’re trying to create a demo that shows off the features, it can be challenging. We wanted to come up with a query that would enable us to get tuning recommendations from all the advisors in the product to showcase its capabilities. That was a bit difficult, but we managed to do it. You can read more about the various advisors in Ray’s blog entry.
DB2 Optimization Expert is currently available only for z/OS. As query tuning capabilities are a key skill for all database professionals, you might expect to see more database platforms supported in the future.
In addition to the demos on DemoZone, we are posting other less formal, more feature-oriented demos on Channel DB2. In particular, there is a series of demos for new features in Data Studio 2.1, including What’s New for Data Studio Developer and DB2 Performance Expert Extended Insight Feature.
We’re looking forward to hearing from you – just either add a comment to this blog or send an email to email@example.com.
Howdy! In case you missed it, we just announced a new release of HPU (High Performance Unload) for DB2 for LUW... V4.1. In case you've never looked at our HPU products (for DB2 for z/OS and DB2 for LUW), they can be great productivity enhancers and possibly even save you some resources.
One of the great things about HPU is that it has both a utility-like interface and an SQL interface. The SQL interface is perfect for application developers, since they aren't used to invoking utilities. Once invoked, HPU can access the underlying table space or backup/image copy directly, performing many data type conversions and producing unload file formats suitable for almost any target data store. When extracting a high volume of data in this way, or when sampling the source, the elapsed time and CPU savings are humongous versus using SQL (or Export or DSNTIAUL).
HPU for DB2 for LUW is also partition-aware, allowing you to unload from multiple partitions with a single execution of HPU into a single output file/pipe or multiple files/pipes. It also provides a re-partitioning capability that unloads and re-partitions the output for new data distribution on the same or different system.
The hot new feature in the 4.1 release for DB2 for LUW is the ability to migrate data directly (unloading, transferring, and loading) from one database to another without the need for intermediate disk storage. This capability delivers the fastest way to migrate your data. The new release also has other usability improvements and now supports 64-bit Windows platforms.
For more information on HPU, visit http://www.ibm.com/software/data/studio/high-performance-unload/
-- Bryan Smith
Same product, same features, same organization, different name
I often say the above phrase when I have to explain the difference between 'RDA' and 'IDA'. We announced the renaming of IBM Rational® Data Architect to IBM InfoSphere™ Data Architect today, December 16, 2008. See the announcement letter. It really is the same product, still part of the Data Studio family, and even built on top of the same Eclipse level.
So why the name change now?
The name change reflects InfoSphere Data Architect's role in IBM InfoSphere Foundation Tools, an open set of tools that help prepare an organization to adopt an information agenda. Read more about the Foundation Tools in this executive brief.
There has always been integration between our data architect product and the InfoSphere Family and its predecessors. The very first release featured function to assist in data integration design, but at that time the DB2 Information Integrator branding was too limiting for our offering. Given the broad database support, we opted to give it a Rational brand featuring its integration with the Rational Software Development Platform. InfoSphere Data Architect still is and will continue to be fully integrated with the Rational portfolio, and in particular with the architectural components including Rational Software Architect, WebSphere Business Modeler and other Rational products.
What exactly is InfoSphere?
I was talking with one of the InfoSphere reps about how he explains InfoSphere to clients when I was at the IOD conference in Las Vegas this year. He started to explain it to me in terms of moving. When I say "moving" I mean like you've bought a new house and need to physically move your items from one location to another. So let's describe the process of what you do when you move.
- First you take stock of what you have. Look around your room and figure out what articles you actually own and what is junk.
- Then after you have taken stock, you start to figure out what items you need and what items you can throw out. Maybe that old can opener that Aunt Sally gave you for Christmas 5 years ago isn't so useful now that you have a new expensive electronic can opener that can open just about anything.
- Now you are ready to pack all your items into boxes and get them ready for the move.
- Finally, your items are delivered to your new home, and you start to unpack everything and put each item in its proper place.
The process of moving is very similar to how you use InfoSphere tools to realize your Information Agenda:
- Understand - Use tools like InfoSphere Information Analyzer, InfoSphere Data Architect, InfoSphere FastTrack, and InfoSphere Business Glossary to understand your data and take stock of what data you own.
- Cleanse - Use InfoSphere QualityStage to discard data that isn't relevant and throw out what is not needed. (You can also decide you want to fix it, which QualityStage also does for you.)
- Transform - Use InfoSphere DataStage and InfoSphere Change Data Capture to capture the data and package it for delivery.
- Deliver - Use InfoSphere DataStage and InfoSphere Federation Server to deliver the trusted information to the proper places where it belongs.
As I mentioned, InfoSphere Data Architect has always been integrated with InfoSphere tools:
- You can export InfoSphere Business Glossary into InfoSphere Data Architect and vice versa
- You can export the InfoSphere Data Architect physical data model into InfoSphere Metadata Workbench. The physical model is the basis for QualityStage and DataStage work.
- You can use InfoSphere Data Architect to infer or specify relationships between schemas and to generate DDL for federated views.
- And more…
We'll be looking to add more integration points as we build out the InfoSphere story, but for now I think I'd better add the following to my email signature to save me having to repeat this all the time: Same product, same features, same organization, different name
-- Anson Kokkat
http://www.ibm.com/software/data/studio/data-architect/