Managing the data lifecycle
IOD was a busy time for the Optim team. I hosted three sessions and also spent time in the pedestal area, where I had a chance to interact with several DB2 and IDS customers. While all the solutions certainly got customers' attention, two seemed to bubble up to the top in terms of discussion and questions.

Optim pureQuery was one of them, mainly from the DB2 for z/OS crowd. There is definitely a trend in the market of customers wanting to take advantage of the wealth of data stored on the DB2 for z/OS platform rather than transport that data to another platform. Java is the preferred language for developing these new applications; however, finding a data access layer that is acceptable both in terms of ease of use and performance has been a challenge. Many companies are looking at frameworks like Hibernate to address the ease-of-use challenge, but unfortunately Hibernate doesn't address the high-performance requirement. Several customers wanted to understand how pureQuery could help them in this situation. They were very excited once we talked about not only the benefits of static SQL, but also the client optimization features of pureQuery, and most importantly how we could make the SQL that Hibernate generates visible to the developer and DBA. It seems to be a perfect fit for providing the high performance that DB2 for z/OS customers expect from these new Java applications.
The second solution that seemed to get a lot of attention was our Performance Expert solution. When it comes to problem diagnosis, everyone is always looking for a better mousetrap, and solutions in this area tend to get hot and then run their course. Well, Performance Expert seems to be the up-and-coming hot product. I believe this is due to the sheer wealth of information the product collects. If, or should I say when, a problem occurs in production, the most difficult challenge is gathering the relevant information, especially because that moment in time is now past and you really need historical data to determine what went wrong. Performance Expert collects and stores this information in intervals, making it incredibly easy to get the information you need, all synchronized to the right point in time. You can see a short video of this capability here. The other thing I found interesting was all the various reports and consoles customers were building on top of the data in the Performance Warehouse. From capacity planning to SLA reporting, there is a lot of customization going on that would probably make a great birds-of-a-feather session at future conferences.
As I mentioned earlier, all the solutions got attention. I hosted a joint session with Randy Wilson from Blue Cross Blue Shield of Tennessee, where Randy described a very interesting outage situation (not fun!) and how Optim High Performance Unload saved the day for them by supplementing their recovery and providing the historical data they needed. It's interesting how customers really think outside the box and come up with creative uses for our solutions. After that session I had quite a few discussions with customers about how they could use High Performance Unload in situations like Randy's, where back-level DB2 versions or dropped tables complicate recovery.
Well it was a busy week and now comes all the customer follow-up that happens after IOD!
Let me start by introducing myself. My name is Manas Dadarkar and I am the technical lead for Data Studio (the complimentary tooling for DB2 and IDS) and for High Performance Unload (HPU) for DB2 for Linux, UNIX, and Windows. I took on this role recently and am right in the middle of planning a lot of exciting things for Data Studio and HPU.
However, I will leave Data Studio for another blog posting, so today let’s talk about HPU, including today's announcement of the new release, 4.1.2, under the new name, Optim High Performance Unload for DB2 for Linux, UNIX and Windows.
For people not familiar with HPU, it's a product that offers high-speed unload of DB2 data, allowing DBAs to work with very large quantities of data easily and efficiently. In limited lab testing using a single large table, we have seen a 6-10 times performance improvement using HPU as compared to the DB2 EXPORT utility. The usual disclaimers apply: your results will vary.
Since version 4.1, HPU has also supported automatic migrations. You can now migrate data directly from one database to another, including unloading, transferring, and loading of the data between the source database and target database without the need for intermediate storage. You can get more information about HPU here. It's also featured in our Day in the Life of a DBA video demo.
New in Version 4.1.2 is the ability to unload or restore using incremental backup images. If you use incremental backups, it’s likely that those backups will provide you with a more current version of a dropped table. Check out the full announcement here.
For those of you who have moved to DB2 LUW 9.7 (or will be soon), you don’t need to wait for the 4.1.2 release, which is planned to be electronically available on November 20. We’ll shortly be releasing the support for 9.7, including the ability to work with DB2’s industry leading XML data support. I’ll post a short blog with the link once that is available and we’ll also put the word out on Twitter.
Also, if you are going to IOD2009, you can hear directly from a customer about HPU by attending session 1499, "A day in the life of a DBA: How to keep your sanity."

Manas Dadarkar