I should start by introducing myself. My name is Vijay Bommireddipalli, and I am part of the Data Studio Enablement team. Our team works with customers and IBM Business Partners to help them get up and running on the latest and greatest stuff coming out of the development teams of Data Studio.
I just finished a couple of single-day customer events that are part of a multi-city U.S. roadshow about Data Studio and IBM Mashup Center. The theme of this roadshow is enabling a variety of enterprise data assets to end users in a service-oriented paradigm, allowing business users instant access to information and freeing up IT from application development overload. These roadshows are *free* events that anyone can attend, so I'd like to give you a little taste of what you can expect if you decide to come to one of the remaining shows in Chicago, Dallas, New York, or Boston. (See the front page of the Data Studio community space for links to registration or contact your favorite IBM sales rep.)
The morning session focuses on Mashups. Here my colleagues, Tom Deutch and Chris Gruber, show how Mashups and other Web 2.0 technologies are changing the role of IT in an enterprise. IBM introduced IBM Mashup Center, which enables a situational application environment inside the enterprise. Using IBM Mashup Center, IT can stop being a bottleneck for application requirements and become an application enabler. You can read more about IBM Mashup Center here. The afternoon session focuses on one aspect of Data Studio: creating the most efficient data access mechanism to backend data sources. This part focuses on architectural approaches to data access in general. The idea is that IT can enable data assets by taking best practices, efficiency, and security into account while relieving themselves of churning out end-user applications. This is a key aspect of eliminating the application backlog that most IT departments typically face. These data assets can then be made accessible to a variety of applications (including Mashups).
The way we present the material is to walk attendees through building Mashups on the screen, all the way to accessing and organizing data in their enterprise backend sources, showing a number of live demos of the technologies along the way. The attendees at a recent session in California seemed to "get" that picture. It was good to see the reaction of the attendees and the affirmation that some of the problems we are solving with IBM Data Studio and Mashup Center are actual problems they face on a daily basis.
On Thursday (Aug 21st), I will be in Chicago on the next leg of this roadshow. Hope to catch you at one of the future roadshows. I also hope to catch some live blues music while in Chicago :)
The International Technology Group (ITG) has recently published research on the Cost/Benefit Case for IBM DB2 Advanced Enterprise Server Edition and InfoSphere Optim Operational DBA Tools, comparing costs and capabilities with Oracle Database 11g. For the many companies considering what platform to choose for a new application or a re-engineering effort, and for customers considering a platform migration, ITG lays out the case for considering DB2. The paper features key DB2 differentiators like compression, automation, workload management, XML support, and pureScale.
While there have been many articles that feature the benefits of DB2 vis-à-vis Oracle, I am particularly excited about this paper because it includes considerations for the value of the surrounding tooling. ITG includes an assessment of the capability, value, and differentiation of the InfoSphere Optim development, administration, and performance management tools. ITG favors the InfoSphere Optim portfolio in the areas of business alignment, cross-disciplinary integration, and predictive analysis. For example, InfoSphere Optim Performance Manager gives DBAs the ability to monitor service level objectives, such as database transaction response time by workload, so the impact of emerging bottlenecks is apparent. InfoSphere Optim Query Workload Tuner can then provide actionable expert advice to improve the performance of workloads performing below service level objectives, capabilities that ITG characterizes as having superior breadth and sophistication compared with the corresponding Oracle offerings.
I hope you’ll download and read the paper at http://ibm.co/AESE_WP.
Holly Hayes, Optim Tools Product Manager
In my last blog I talked about the tools associated with InfoSphere Foundation Tools, including my product, InfoSphere Data Architect. However, I wanted to really show you that most of what I was talking about has substance, and that there is true integration among the tools – it’s not just marketing!
In Denis Vasconcelos's latest article, Understanding leads to Trust: Sharing a Common Vocabulary across InfoSphere Foundation tools, he really hits home the message about how a common understanding of business terms can help improve communication and enforce standards across IT and business organizations. His article shows you how to import your existing business concepts into a business glossary (InfoSphere Business Glossary with InfoSphere Metadata Workbench) and then use that glossary within InfoSphere Data Architect to do such things as enforce naming standards in data models, which of course means that applications built on the resulting database will also use correct terms that are meaningful to the business.
I like how the article shows how all of these products are interconnected, and how the various technologies have been designed to make sure that you are doing the most with your metadata.
Read the article and let me know your thoughts... I am especially interested to know if this set of tools meets your objective of managing metadata effectively. If there is something missing, let me know. I really think we have a unique offering with this set of tools, something that really stands out from the rest of the crowd.
IOD was a busy time for the Optim team. I hosted three sessions and also was in the pedestal area and had a chance to interact with several DB2 and IDS customers. While all the solutions certainly got their attention, two solutions seemed to bubble up to the top in terms of customer discussion and questions. Optim pureQuery
was one of them, mainly from the DB2 for z/OS crowd. There is definitely a trend in the market of customers wanting to take advantage of the wealth of data stored on the DB2 for z/OS platform rather than transport that data to another platform. Java is the preferred language for developing these new applications; however, finding an acceptable data access layer, both in terms of ease of use and high performance, has been a challenge. Many companies are looking at things like Hibernate to address their ease-of-use challenges, but unfortunately Hibernate doesn't address their high-performance requirement. Several customers wanted to understand how pureQuery could help them in this situation and were very excited once we talked to them about not only the benefits of Static SQL
, but also the client optimization
features of pureQuery, and most importantly how we could show the developer and DBA the SQL that Hibernate was generating. It seems to be a perfect fit for providing the high performance that DB2 for z/OS customers expect from these new Java applications.
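For readers who haven't tried it: client optimization requires no source changes at all; you point the pureQuery Runtime at the application through connection properties. Here is a minimal sketch (the pdqProperties syntax is written from memory, and the host, table, and file names are invented - verify the property names against your pureQuery Runtime release):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class CaptureSketch {
    public static void main(String[] args) throws Exception {
        // Requires the DB2 JCC driver and pureQuery Runtime on the classpath.
        Class.forName("com.ibm.db2.jcc.DB2Driver");

        // Phase 1 - capture: run the unchanged application (plain JDBC,
        // Hibernate, anything that ends up at the JCC driver) with capture
        // turned on. Every SQL statement the driver sees is recorded in the
        // pdqxml file, which is exactly what lets a DBA inspect the SQL that
        // Hibernate generates.
        String url = "jdbc:db2://dbhost:50000/SAMPLE:"
                + "pdqProperties=captureMode(ON),executionMode(DYNAMIC),"
                + "pureQueryXml(/tmp/myapp.pdqxml);";

        try (Connection con = DriverManager.getConnection(url, "user", "pwd");
             PreparedStatement ps = con.prepareStatement(
                     "SELECT LASTNAME FROM EMPLOYEE WHERE EMPNO = ?")) {
            ps.setString(1, "000010");
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getString(1));
                }
            }
        }

        // Phase 2 - bind: use the StaticBinder tooling to bind the captured
        // statements into DB2 packages, then rerun the application with
        //   pdqProperties=executionMode(STATIC),pureQueryXml(/tmp/myapp.pdqxml)
        // so the same JDBC calls now execute as static SQL.
    }
}
```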
The second solution that seemed to get a lot of attention was our Performance Expert
solution. When it comes to problem diagnosis, everyone is always looking for a better mousetrap, and it seems that solutions in this area get hot and then run their course. Well, Performance Expert seems to be the up-and-coming hot product. I believe this is due to the sheer wealth of information this product collects. If, or should I say when, a problem occurs in production, the most difficult challenge is gathering that information, especially because that moment in time is now past and you really need historical data to determine what went wrong. Performance Expert collects information and stores it in intervals, making it incredibly easy to get the information needed, all synchronized to the right time. You can see a short video of this capability here
. The other thing I found interesting was all the various reports and consoles customers were writing based on the data found in the Performance Warehouse. From capacity planning to SLA reporting, it seems to me that there is a lot of customization going on that would probably make a great birds-of-a-feather session in future conferences.
As I mentioned earlier, all the solutions got their attention. I hosted a joint session with Randy Wilson from Blue Cross Blue Shield of Tennessee where Randy described a very interesting outage situation (not fun!) and how Optim High Performance Unload
saved the day for them in terms of being able to supplement their recovery and provide the historical data they needed. Interesting how customers really think outside the box and come up with creative uses for our solution. After that session I had quite a few discussions with customers on how they could use High Performance Unload for situations like Randy's where back-level DB2 or dropped tables caused challenges in terms of recovery.
Well it was a busy week and now comes all the customer follow-up that happens after IOD!
Updated December 15 to correct a typo.
I am the software architect for InfoSphere Data Architect
, and I wanted to spend a few minutes telling you what we’ve been cooking at the Lab over the past few months to deliver Fix Pack 1
for IDA 7.5.2, which was made available on December 11.
In this Fix Pack, we’ve added new features and improvements in a number of key areas, which I’ll highlight here.

Diagramming improvements:
We are excited to have started incorporating the ILOG diagramming technology into InfoSphere Data Architect to provide an enhanced diagram layout. The new diagramming capability will offer choices of layouts as well as the option to specify the spacing of objects, all very important steps towards offering greater control and flexibility of visualization.

Import DB2 physical objects from other tools:
For years, InfoSphere Data Architect has offered the capability to import models from other tools. Significantly, in Fix Pack 1 we have provided a unique capability to import DB2-specific properties for physical database objects, such as index and storage properties, faithfully into IDA from other tools like CA ERwin and CA Gen. Whereas generic export/import capabilities may cause you to lose this information, this enhancement in IDA will enable you to preserve your existing data design efforts.

Import from COBOL source files and copybooks:
Although this capability was on a temporary leave, our z/OS friends will be happy to know that it is back with this Fix Pack. This "legacy data" often contains critical information, so it needs to be included in the data modeling process.

Filtering improvements for productivity and performance:
There are now two ways to specify filters for model comparison and synchronization: you can specify filter options at the workspace level to streamline and improve overall comparison performance, and you can specify filter options at individual comparison invocations to improve the ease of use of the comparison editor.

Integration with InfoSphere Discovery (formerly Exeros):
Those familiar with our history and approach know that we have a strong focus on building linkage points across IBM products. This ability to easily share and collaborate on metadata is crucial to the acceleration of projects. In IDA 7.5.2.1, we continue this focus with the introduction of integration with InfoSphere Discovery. With this new capability, you can import the discovered metadata from InfoSphere Discovery directly into InfoSphere Data Architect and share and use it in a wide range of scenarios, including the Optim Data Archiving and Data Privacy solutions, as well as InfoSphere Foundation Tools.
For more details on the contents of this Fix Pack, be sure to read the Release Notes.
Let me close by saying that the InfoSphere Data Architect team loves getting your input, so please keep it coming. Feel free to post your comments and questions on the IDA forum
. We are excited about what we are going to do in 2010!
It was just over a month ago that I posted the information about our new releases
under the Optim name. Today we announce the z/OS versions of some of those products.
Optim Query Tuner is designed for single-query tuning, and Optim Query Workload Tuner provides both single-query and workload tuning capability. Both offer seamless integration with Optim Development Studio
. Optim Query Workload Tuner is a renamed and enhanced version of IBM Optimization Expert for z/OS and is the upgrade path included with your subscription and support when you are ready to move to the next release. Note that future query tuning enhancements will be made to these products. OSC is still available but will not be enhanced; it will be replaced by similar capabilities in Data Studio under development today.
And a reminder: we did add new discussion forums, including the Optim Query Tuning solution discussion forum.
Optim pureQuery Runtime for z/OS is for deployment natively on z/OS systems (for example, with a WebSphere Application Server for z/OS deployment). Some of the capabilities we added were in response to requests from z/OS customers, including the ability to replace literals with parameter markers, making more statements eligible for static execution. You can find out more about this capability in Sonali’s article on developerWorks
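In case literal replacement is new to you: statements that differ only in their literal values collapse into one parameterized statement, which can then match a bound static package. A toy sketch of the transformation (this is emphatically not pureQuery's implementation, just the idea):

```java
import java.util.regex.Pattern;

public class LiteralReplacementSketch {
    // Toy illustration only: the pureQuery Runtime does this robustly
    // inside the driver, with full SQL awareness.
    private static final Pattern STRING_LITERAL = Pattern.compile("'[^']*'");
    private static final Pattern NUMBER_LITERAL =
            Pattern.compile("(?<=[=<>(,\\s])\\d+(\\.\\d+)?");

    static String parameterize(String sql) {
        // Replace quoted strings first so digits inside them are untouched.
        String s = STRING_LITERAL.matcher(sql).replaceAll("?");
        return NUMBER_LITERAL.matcher(s).replaceAll("?");
    }

    public static void main(String[] args) {
        String sql = "SELECT * FROM ORDERS WHERE STATUS = 'OPEN' AND QTY > 100";
        System.out.println(parameterize(sql));
        // Prints: SELECT * FROM ORDERS WHERE STATUS = ? AND QTY > ?
        // Both literal variations now map to one statically bindable statement.
    }
}
```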
Thanks for reading.
Have you ever worked with Backup and Recovery Tools for DB2 for z/OS and wondered what the equivalent is on DB2 for LUW?
With the announcement of DB2 Recovery Expert V3.1 on June 7, we now have a complete set of backup and recovery tools for DB2 for Linux, UNIX and Windows. DB2 Advanced Recovery Solution is the term we use for these three separate products: DB2 Merge Backup, DB2 Recovery Expert, and High Performance Unload.
DB2 Recovery Expert helps customers with more granular recovery needs in DB2 and also works as a log analysis tool that reads active and archive DB2 logs. Think about those times when you accidentally deleted a production table instead of a test table and you just wanted to get it back... Or think about a developer who ran his application last night and it deleted 1000 rows by accident... Use DB2 Recovery Expert to recover what you need and to pinpoint mistakes, generating UNDO SQL to put the database back the way it was before.
All this is pretty powerful stuff when you are a DBA and you need the 'advanced recovery' capability that is provided with these tools. If you want to find out more about DB2 Recovery Expert, refer to the main webpage:
Lately I've been feeling as if I haven't been talking to customers enough and those of you who know me know I love to talk. It's tougher and tougher to fly these days, so maybe I can use this blog to have some virtual meetings. I'm really interested in hearing from DBAs about your experiences using our existing data management tools and what you would like to see in terms of a strategic direction for tools. Send me your questions, comments, concerns, and I'll try to blog regularly with my answers and to further the discussion. You can either send via the comment box here or directly to me at email@example.com.
As I mentioned in my previous blog on the RDA 7.5 announcement, I promised to let you know when the trial code is ready. Well, it's ready now. You can download the 30-day trial from developerWorks.
For the highlights of the new release, check out the What's New
documentation and check out my earlier blog
on this topic that focuses on the data privacy and integration aspects of the new release, or even better, listen to my webcast.
I can show you all the new features in person. Come meet me at the IOD 2008
conference. I'll be busy at the conference - you can find me at either of the following sessions:
- Birds of a Feather Session on Data Architecture (3280) Mandalay Bay North Convention Center - Tropics B Wed, 29/Oct, 06:00 PM - 07:00 PM
- Data Modeling with Rational Data Architect (1325) Mandalay Bay South Convention Center - Breakers I Thu, 30/Oct, 08:30 AM - 09:30 AM
Get your hands dirty and join me for the following hands on labs:
- Database Modeling with Rational Data Architect (2559A/B)
Mandalay Bay South Convention Center - Breakers A Tue, 28/Oct, 10:00 AM - 01:00 PM
Mandalay Bay South Convention Center - Breakers C Wed, 29/Oct, 10:00 AM - 01:00 PM (This one looks full but come by anyway since sometimes people don't show up)
- Integrating IBM Rational Data Architect and IBM Optim (2712) Mandalay Bay South Convention Center - Breakers E Thu, 30/Oct, 10:00 AM - 01:00 PM
And you don't have to take my word about it. Don't miss this customer session:
- Constant Contact Manages Metadata: RDA, Cognos BI & Information Server (2083) Mandalay Bay South Convention Center - Jasmine F Thu, 30/Oct, 08:30 AM - 09:30 AM
See you there!
-- Anson Kokkat
My name is Thuan Bui and I am part of the Data Studio Enablement team. One of my jobs is to create demos that illustrate the capabilities and business values of the Data Studio portfolio. A key one we just recorded and put on the Web is a two-part demo
that uses a story to bring to life the things you can do with these products and how the integration can enhance teamwork.
The demos we build are not mocked-up screenshots (at least most of the time). Because people use these for live demos as well as recordings, we really have to come up with more than just a story. We need to create a relevant database schema, load it with data, build supporting applications, and so forth. And of course we use Data Studio to help us with all that :) . (And, yes, we occasionally find bugs and usability issues that we report back to the development team.)
This scenario-based demo shows how and why the Data Studio portfolio is used throughout the entire data lifecycle, including the design, development, deployment, and management stages. We start by showing how pureQuery client optimization is used to stabilize performance for an existing JDBC application, then show how to use Rational Data Architect for data design tasks, how to feed the model to Data Studio Administrator for data model changes and deployment, and how to use pureQuery Outline for impact analysis of a potential schema change. (By the way, if you have no idea what I mean by pureQuery Outline, see this article.) We use Data Studio Developer tools for SQL, Java application, and Web services development and deployment, and finally show how to use the web-based Data Studio Administration Console for database and system health monitoring.
One of the challenges in producing this demo is that we have lots of different components to highlight, with the right level of information, within a time limit. We try not to make it too long or give too many details, so that both technical and non-technical viewers can consume it and not lose interest.
Although the story is for a fictional enterprise, our goal is to try and show problems and resolutions that could apply to companies in the real world. Perhaps the most challenging thing for those of us inside IBM is to come up with scenarios that will resonate with you, the people who have to deal with the data management lifecycle every day. Let us know if you think we’re hitting the mark or if there’s something more or different we should be showing, maybe an example from your own experience.
Our next demo will focus on the story for z/OS environments.
We’re looking forward to hearing from you – just add a comment using the Add a comment link below or send an email to firstname.lastname@example.org.
Hi, everyone. It seems as if lately I am always on the road. Today, I am actually back here at SVL, where, fortunately or unfortunately, depending on your point of view, it is raining, and raining hard. Tomorrow I head off to the UK for regional user group meetings. I also have Rome to look forward to, and that’s why I’m writing today, to give you the highlights at IOD EMEA for products in my portfolio.
When I look at the sessions, one major theme I see is around performance. Regardless of how ‘cool’ new applications and new technologies are, the issue of performance never goes away and often becomes even more critical as users become even more intolerant of slow response times or unavailability. So, although performance may not have the “coolness” factor of some topics, it’s bread and butter for most of our customers. The key is to make performance management more streamlined, less costly people-wise, and more focused on prevention rather than constant reaction.
If you’re going to IOD EMEA, I invite you to make a point of joining us at the following sessions to learn more about what we’re doing to help realize those goals. The sessions here are listed in chronological order, but you will of course need to check for changes at the conference venue itself. Some of the speakers may change as well. I look forward to seeing you there. If you haven’t signed up for an executive one-on-one with me through your IBM sales rep and are interested in doing so, here is a link to where you can find more information on how to do that.
| Date and time | Session | Title | Speaker |
| Wednesday, May 19 | TSB-3181 | Java Developer Best Practices for DB2 Performance | Dave Beulke |
| Wednesday, May 19 | BLD-3375 | How British Petroleum Manages Enterprise Data Growth with IBM Optim | Jim Lee, David Sohl |
| Thursday, May 20 | TSB-3350 | End-to-End Monitoring and Problem Determination with Optim Performance Solution | Torsten Steinbach, Holger Karn |
| Thursday, May 20 | TSB-2947 | Query Optimization and Query Tuning with Optim Query Tuner | Gene Fuh |
| Thursday, May 20, 10:30 AM - 1:30 PM | 3259 | Hands on Lab! Optim Performance Manager – Live! | Ute Baumbach and Michael Skubowius |
| Thursday, May 20, 11:45 AM - 12:35 PM | TSB-3347 | Venedim: A first hand experience with Optim Performance Management solutions | Jean-Marc Blaise, Torsten Steinbach, Holger Karn |
| Thursday, May 20, 11:45 AM - 12:35 PM | TSB-3313 | Unipol Slashes Costs and Improves Application Performance with Optim (features Query Workload Tuner) | Client speaker and Bryan Smith |
| Friday, May 21 | TSB-2760 | Why are DB2 DBAs (z/OS & LUW) Interested in Data Studio and Optim Tooling? | Bryan Smith |
| Friday, May 21 | TSB-3341 | Optim and Data Studio Portfolio Strategy: Optimize performance and availability while lowering costs | Curt Cotner |
| Friday, May 21 | TSB-2890 | Improve Database Archive Performance: Optim Data Growth Best Practices | Pamela Hoffman |
| Wednesday through Friday (repeated session) | TSB-3870 | Usability lab: Optim Performance Manager User Experience for DB2 Performance Management | Dirk Willuhn, Ute Baumbach |
Hi, I’m the lead architect for Optim pureQuery Runtime, and I want to start using this blog to help address questions that I get from people as they learn about or use pureQuery capabilities. In this first entry, I’ll discuss the new SQL replacement capability.
By using client optimization, an administrator can modify the SQL captured from an application. The enhanced tooling that supports this capability is described in Sonali's article, What's new and cool in Optim Development Studio 2.2. The intended usage of this feature is to let a DBA make a change to an SQL statement without the need to edit and recompile an application. This could be useful, for example, in late-night or weekend emergencies when an application can't easily be changed. It is also useful in cases where a third-party application embeds or generates sub-optimal SQL and a change to the application is not possible without contacting the vendor. In any of these cases, you should aim to change the application directly to use the improved SQL at the first practical opportunity.
When I talk about this capability to people, there are two questions that frequently come up:
- What is the extent of the change I can make to the SQL?
- Isn't there a security risk to allow editing of the SQL? How can we control access?
Great questions. I'll discuss each of them separately.

What can I change in the captured SQL?
There are restrictions on what you can change when creating the replacement SQL. The Optim Development Studio pdqxml editor will prevent many of the restricted changes, which is why it is strongly recommended that you use this editor to create the replacement SQL. The primary restrictions on the replacement SQL are:
- You may not change the SQL statement type. For example, you can’t change a SELECT to an INSERT.
- The number and types of any input parameters or output result columns must be unchanged. For example, if your SELECT statement is expecting two columns of CHAR and INT as result, your changed SELECT statement must also expect a result of two columns of CHAR and INT.
At first glance, these may seem to greatly narrow the capability. But when you think about it, changes like those described above couldn't work very well anyway without corresponding application changes, at which point you would likely just change the SQL directly in the application.
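A minimal sketch of the rule in action (table, column, and statement text invented): the replacement must look identical to the application at the statement interface.

```java
public class ReplacementShapeSketch {
    public static void main(String[] args) {
        // Captured statement: one input parameter, two result columns (CHAR, INT).
        String original =
                "SELECT NAME, AGE FROM CUSTOMER WHERE ID = ?";

        // Valid replacement: still a SELECT, still one input parameter, still
        // the same two result columns. Only the access behavior changes.
        String valid =
                "SELECT NAME, AGE FROM CUSTOMER WHERE ID = ? "
                + "AND STATUS = 'ACTIVE' OPTIMIZE FOR 1 ROW";

        // Invalid replacements that the pdqxml editor would reject:
        String wrongType =                      // statement type changed
                "DELETE FROM CUSTOMER WHERE ID = ?";
        String wrongShape =                     // result now has three columns
                "SELECT NAME, AGE, CITY FROM CUSTOMER WHERE ID = ?";

        System.out.println(original + "\n  -> " + valid);
        System.out.println("rejected: " + wrongType);
        System.out.println("rejected: " + wrongShape);
    }
}
```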
Nevertheless, there are quite a number of useful changes that you could make that would not violate the restrictions. You can:
- Influence access path / index usage:
- Add an ORDER BY clause
- Add OPTIMIZE FOR 1 ROW
- Other "tricks" to influence the DB2 Optimizer (like adding OR 0 =1 to a predicate)
- For Oracle - add a comment hint to end of the statement
- Influence fetch size for distributed queries:
- Add FETCH FIRST n ROWS ONLY clause
- Add an OPTIMIZE FOR n ROWS clause
- Add FOR FETCH ONLY or FOR UPDATE clause
- Add a predicate that narrows the data returned - just be sure it uses literals and not parameters
- Change the locking behavior:
- Add WITH ISOLATION clause
- Add SKIP LOCKED DATA clause
- Add FOR UPDATE clause
- Directly manage the schema name for object references:
- Add or change the schema qualifier on a table or other object reference.
- Help manage EXPLAIN DATA

How can we control access?
Some people have expressed concern that there is a security risk involved with the ability to change captured SQL. While there is some potential for abuse, there are means for controlling changes. I’ll discuss some control points within the context of how this feature can be used with either static or dynamic SQL.
Client optimization is most frequently used to convert dynamic SQL execution to static execution. To use SQL replacement here, you must modify the SQL before performing the bind operation. Any changes made to the SQL in the capture file after the bind are ignored. So control over the capture file contents needs to be managed across the capture/configure/bind process. In many instances, this would be done by the same administrator. Ultimately, the bind is performed by an administrator with the authority to perform all the SQL contained in the file. This is not very different from a traditional 3GL program deployment. And any bound SQL is then visible for inspection in the DB2 catalog.
There are scenarios where client optimization is used (for example, to gather performance metrics) even when the eventual execution mode is dynamic SQL. In these cases, the capture file is examined at execution time for the existence of replacement SQL. If present, that SQL is used in the prepare and execute. But you can disable that execution-time replacement. A pureQuery configuration property, enableDynamicSQLReplacement, controls whether this is allowed. The default is false, so you have to do something to turn on execution-time replacement. This gives control at a data source or application level.
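Here's a minimal sketch of that control point (the enableDynamicSQLReplacement property is the one described above; the pdqProperties URL syntax and the names and paths are my assumptions, so check them against your release):

```java
import java.sql.Connection;
import java.sql.DriverManager;

public class ReplacementConfigSketch {
    public static void main(String[] args) throws Exception {
        // enableDynamicSQLReplacement defaults to false, so replacement SQL in
        // the capture file is ignored at prepare time unless you ask for it.
        String url = "jdbc:db2://dbhost:50000/SAMPLE:"
                + "pdqProperties=executionMode(DYNAMIC),"
                + "pureQueryXml(/apps/myapp/capture.pdqxml),"
                + "enableDynamicSQLReplacement(TRUE);";

        try (Connection con = DriverManager.getConnection(url, "user", "pwd")) {
            // The application's existing dynamic SQL now runs with the
            // DBA-supplied replacement statements wherever one is defined
            // in the capture file.
            System.out.println("Connected: " + con.getMetaData().getURL());
        }
    }
}
```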
An important point here is that all the basic building blocks of security remain in place.
That is, SQL privileges are necessary to execute the bound packages or the dynamic statements. But even with that being true, additional care must be taken. To prevent unexpected changes to the file, you must control write access to it. It can be locked down on the executing server by making it read-only. It is also important to control the updateability of execution-time properties that can affect the application’s execution. The capture file and any application properties files need to be thought of, along with any executables, as a collection of related resources, all of which need protection.
I hope you’ve found this useful. Let me know if there are other questions you have about pureQuery, and I’ll do my best to answer them.
-- Bill Bireley
We released several fixpacks on April 19th that I want to call your attention to. Not only do they provide defect fixes, but also some cool new function.

Data Studio Health Monitor
- This fixpack adds support for health monitoring for DB2 for z/OS as well as alert notification support. Customers that we previewed the function with really liked the easy setup and the ability to share it with team members in development and other operations staff. It monitors and alerts on application-blocking conditions like database down or log full, plus users can view system log messages and see what applications and utilities are running. Data Studio Health Monitor is available at no charge and supports DB2 for z/OS and DB2 for Linux, UNIX, and Windows databases. Check it out now! For more information, see the IBM Data Studio Health Monitor Version 2.2.1.1 Release Notes
and product overview
pages. You can download Data Studio Health Monitor V2.2.1.1 for an upgrade from version 2.2.1 or for a fresh install at IBM software downloads
.

Optim Query Tuner
- Query Tuner users will definitely want to pick this up! This fixpack for the Optim Query Tuner offerings provides a new Access Plan Explorer. Many customers who have very complex SQL wanted a textual access plan navigation method in addition to the visual explain. This new tool displays access plans as tables or hierarchical trees. There are also additional usability and SQL capture enhancements. See What's New in Optim Query Tuner
and the Release Notes
pages.

InfoSphere Data Architect
- This is a standard fixpack with defect fixes. For more information, see the IBM InfoSphere Data Architect Version 7.5.3.1 Release Notes
and product overview
pages. View the download document
to learn how to download InfoSphere Data Architect V7.5.3.1 to upgrade from version 7.5.3.
Let us know how you like these enhancements :-)
In one of my last blogs, I wrote about the untold story of data privacy, focusing on non-production systems. This past week, IBM made a significant acquisition, Guardium, to improve support for compliance and the protection of privacy across all systems. As discussed in the previous blog, we often forget about the systems not in production and think that the current security in place is enough, yet that is not the case.
The addition of Guardium to the mission of privacy, protection, and compliance continues IBM’s mission of helping organizations at the enterprise level. Just as with data masking, organizations often don’t think of monitoring use, access, and change to archives, yet it is still a risk. The combination of Guardium and Optim will provide the capability to monitor the SQL accesses and usage of data not only in production data sources, but also in archives and in development, test, and training environments.
With Guardium comes an enterprise solution supporting the most popular databases and application frameworks. As with the Princeton Softech and Ascential acquisitions of years ago, IBM plans to continue support for databases beyond those that are “Blue,” and with the proof of four years now past Ascential and two years past Princeton Softech, I believe they will keep that promise. Most enterprises contain multiple technologies, and enterprise solutions require support across them. Guardium continues IBM Software’s vision of meeting enterprise needs.
Hello everyone - first let me introduce myself. I joined the Data Studio development team more than 2 years ago, and since then I've been working on the Data Web Services capability. During that time I gained a lot of knowledge about connecting database assets to the Web, Java and J2EE development, as well as XML processing ... until one day a strange blue box named "DataPower" crossed my path. This blog is about the results of this encounter ...

Information as a Service
A lot has been said about making your data accessible to an SOA environment, the Web, the Cloud, and so on... and IBM, as well as other vendors, provides a wide variety of software tools and frameworks to make all that possible.
But let's look at a common case where data is represented as XML and stored in a relational database:
Very often, data is represented as XML messages which flow over the wire. The XML may contain data which comes from, or is going to be written to, a database. It may also contain data and metadata to perform remote procedure calls (RPC), as we see with SOAP. There might be complex business logic at the remote server associated with a message flow - for example, in the form of a stored procedure in DB2 - but we may also just have a simple mapping from XML into relational structures, or we may just store the XML "as-is" in the database - thanks to the pureXML capabilities in DB2.
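As a tiny illustration of the "store it as-is" option, here is a sketch assuming a hypothetical table created as CREATE TABLE MESSAGES (ID INT, BODY XML):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class PureXmlInsertSketch {
    public static void main(String[] args) throws Exception {
        String doc = "<order id=\"42\"><item sku=\"A-1\" qty=\"3\"/></order>";

        try (Connection con = DriverManager.getConnection(
                     "jdbc:db2://dbhost:50000/SAMPLE", "user", "pwd");
             PreparedStatement ps = con.prepareStatement(
                     "INSERT INTO MESSAGES (ID, BODY) VALUES (?, ?)")) {
            ps.setInt(1, 42);
            // DB2 parses the character input into its native XML storage
            // format on insert - no shredding or schema mapping required.
            // (Implicit XML parsing behavior may vary by DB2 release.)
            ps.setString(2, doc);
            ps.executeUpdate();
        }
    }
}
```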
The bottom line is that there are many cases where you need nothing else but a simple mapping definition from XML to an SQL statement or a stored procedure call, and another mapping definition to map result sets and output parameters to XML structures.

Make it fast and secure with WebSphere DataPower Appliance
And there are again many approaches to how such a mapping and network wiring can be done, but today I want to introduce you to a fairly new way of doing it, using the IBM WebSphere DataPower Integration Appliance XI50. DataPower has - among many other features - the capability of connecting to databases, which includes superior support for DB2 on all platforms.
DataPower allows you to define mappings from XML to database calls via XSL. Furthermore, you can define message flows, network protocol endpoints, security, quality-of-service properties, WS-* features and much, much more via the award-winning Web GUI. And all that without writing a line of code or even generating code! Everything is well contained and controlled inside the DataPower appliance, without the need to change your database server, your legacy data, or the application/business logic contained in stored procedures.

Scenario 1: Enrich messages with database content
In this case, messages passing through a DataPower appliance can be enriched with data stored in a database. It's also possible to write parts of the message, or the complete message, into the database - for example, using the database as an audit log.

Scenario 2: DataPower as an SOA Gateway
Here we declare the database as the target for the message flow. The message data can be stored into or retrieved from the database, or more complex business logic can be invoked in the form of a stored procedure.

Scenario 3: High-performance batch INSERT with DB2
DataPower supports the DB2 batch INSERT feature, which can speed up the insertion of multiple records into the database. This can be leveraged for XML shredding: you can easily break up a large XML document into multiple records (rows) by defining the shredding rules in your XSL script. The batch INSERT takes care of writing the individual records quickly into the database.
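DataPower expresses those shredding rules declaratively in XSL, but the underlying batch idea is the same one JDBC exposes through statement batching. A rough sketch of the concept in plain JDBC (table and element names invented):

```java
import java.io.StringReader;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;
import org.xml.sax.InputSource;

public class ShredAndBatchInsertSketch {
    public static void main(String[] args) throws Exception {
        // One incoming XML document that should become many relational rows.
        String xml = "<orders>"
                + "<order id=\"1\" sku=\"A-1\" qty=\"3\"/>"
                + "<order id=\"2\" sku=\"B-7\" qty=\"1\"/>"
                + "</orders>";
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new InputSource(new StringReader(xml)));
        NodeList orders = doc.getElementsByTagName("order");

        try (Connection con = DriverManager.getConnection(
                     "jdbc:db2://dbhost:50000/SAMPLE", "user", "pwd");
             PreparedStatement ps = con.prepareStatement(
                     "INSERT INTO ORDERS (ID, SKU, QTY) VALUES (?, ?, ?)")) {
            for (int i = 0; i < orders.getLength(); i++) {
                Element o = (Element) orders.item(i);
                ps.setInt(1, Integer.parseInt(o.getAttribute("id")));
                ps.setString(2, o.getAttribute("sku"));
                ps.setInt(3, Integer.parseInt(o.getAttribute("qty")));
                ps.addBatch();   // queue one shredded row
            }
            ps.executeBatch();   // write all rows in a single batch
        }
    }
}
```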
Find out more in this developerWorks article, "Using WebSphere DataPower SOA Appliances to enable the Information as a Service pattern".
The native DataPower XML processing stack provides wire-speed XML parsing, validation, and transformation. DataPower supports a wide variety of network protocols, which allows connectivity to many different back-end and front-end systems - and at the same time you can be sure that your data is kept safe and reliable with the many security capabilities provided with the appliance.
Last but not least, a well-designed management API, as well as connectivity to registries like the WebSphere Service Registry and Repository (WSRR), allows easy life-cycle management of your artifacts.

Make it even easier by using IBM Data Studio Developer
At this point you may say, "All that sounds great, but why in the world is he posting this in the Data Studio blog?"
Well, you may already be familiar with the Data Web Services feature in Data Studio Developer
(if not, see the links below), where you can easily expose stored procedure calls and SQL statements as Web service operations by creating the appropriate J2EE runtime artifacts. But did you know that you can now also create service runtime artifacts for DataPower? The Data Studio Developer tooling generates all artifacts for a Data Web service - including the WSDL file and, in the case of DataPower, the XSL scripts which perform the mapping from the XML message into database calls. There is no need to code the XSL by hand. All that's left to do is the deployment of the artifacts on the DataPower appliance.
I wrote a tutorial on this topic to show you how easy it is to use Data Studio Developer to create the Data Web service artifacts and how to configure DataPower to host the generated services. Check it out.

Additional References

DataPower Resources
To get some more ideas on using DataPower and DB2 pureXML I recommend the following two articles:
Even a Master's thesis evolved out of that work around XML standards in health care.

WebSphere DataPower SOA Appliances on developerWorks
Data Web services in Data Studio Developer
Hey, DB2 for Linux, UNIX, and Windows DBAs! We’ve enhanced the Optim Database Management Professional Edition with new editions of Optim Database Administrator and Optim High Performance Unload.
If you missed the summer announcement, this solution combines database administration, change management, performance monitoring, and high-speed unload capabilities into a conveniently packaged and attractively priced solution. Manas blogged
about the new release of Optim High Performance Unload 4.1.2, which includes extended offline capabilities to help you reduce the risk of business disruption. Check out the full announcement here
Optim Database Administrator V2.2.2 helps prevent errors and data loss when upgrading databases to support new applications. It helps you:
- Automate and script structural database changes
- Perform extended alters that require dropping and re-creating tables and managing data preservation
- Perform schema compare and synchronization with custom mapping features
- Migrate database objects, data, and privileges and generate maintenance utility commands
- Manage database objects, privileges, and utilities with embedded components of Data Studio software
New and enhanced capabilities in Version 2.2.2 improve overall administrative tasks by supporting federated objects, enable off-peak task scheduling by producing scripts that can run with the DB2 command line processor, add basic pureScale support, and improve analysis performance for large DB2 environments (think thousands of tables). The announcement letter is here
I wanted to let you know about a new two-part article series written by Alice Ma and me which is intended to help you use DB2 Performance Expert for Linux, UNIX, and Windows
to your best advantage.
- Part 1 focuses on health monitoring, describing how you can set up PE to notify you when problems occur even if you are not sitting in front of the PE Client, and how you can easily see what is going on in your database using System Health data views.
- Part 2 focuses on more advanced concepts, such as how to take performance baselines, how to monitor DB2 WLM, and how to customize PE to monitor partitioned environments effectively.
Performance baselines are useful if you plan a change in your environment, such as applying a new DB2 fixpack or changing a DB2 configuration parameter. Take a performance baseline before and after the change, and you can easily compare whether the change improved or harmed the performance of your DB2 system.
I hope that the concepts described in the articles will help you use DB2 Performance Expert more effectively. Note that although there is no downloadable trial version of Performance Expert, if you are interested you can contact your IBM Sales Representative to get it.
-- Ute Baumbach
Yesterday was an exciting day with some very important announcements. IBM announced DB2 for Linux, UNIX and Windows Version 10.5 with new column-organized tables that utilize BLU Acceleration, a unique and advanced columnar data store with advanced compression and hardware exploitation. IBM also announced updates to a number of database management solutions to help you develop, monitor, manage, and tune your DB2 LUW database. All of these solutions support the new column-organized tables.
With IBM InfoSphere Data Architect V9.1 you will be able to model column-organized tables.
IBM Data Studio V4.1 includes support for creating, altering, dropping, and converting tables to column organization. It also has enhanced support for multiple high availability disaster recovery (HADR) standbys, new checkpoint support in the script editor, and a wizard for creating federated objects.
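If you're curious what those wizards drive under the covers, here is a sketch of the kind of DDL involved (the table is invented; ORGANIZE BY COLUMN is the DB2 10.5 clause, and with the DB2_WORKLOAD=ANALYTICS registry setting it becomes the default organization):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class ColumnOrganizedSketch {
    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection(
                     "jdbc:db2://dbhost:50000/SAMPLE", "user", "pwd");
             Statement stmt = con.createStatement()) {
            // Request a BLU-accelerated, column-organized table explicitly.
            stmt.execute("CREATE TABLE SALES_FACT ("
                    + " SALE_DATE DATE,"
                    + " STORE_ID  INTEGER,"
                    + " AMOUNT    DECIMAL(12,2)"
                    + ") ORGANIZE BY COLUMN");
        }
    }
}
```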
A new advisor in IBM InfoSphere Optim Query Workload Tuner for DB2 for Linux, UNIX, and Windows V4.1 helps identify which row-organized tables could benefit from being converted to column-organization.
IBM InfoSphere Optim Performance Manager for DB2 for Linux, UNIX, and Windows V5.3 includes new statistics to support monitoring of column-organized tables. A significant usability enhancement has also been made to allow Optim Performance Manager users to tune queries using InfoSphere Query Workload Tuner within the Optim Performance Manager browser user interface.
IBM InfoSphere Optim Configuration Manager for DB2 for Linux, UNIX, and Windows V3.1 supports configuration management of column-organized tables as well as monitoring of the new DB2 pureScale rolling updates and application isolation for DB2’s pureScale clustering technology. Users can also use Optim Configuration Manager to determine how frequently database objects are accessed, making it easier to leverage the DB2 multi-temperature storage capabilities.
IBM InfoSphere Optim pureQuery Runtime for Linux, UNIX, and Windows V3.3 includes enhancements to SQL management as well as improved error reporting.
All these tools are now bundled with both the DB2 Advanced Enterprise Server Edition and the new Advanced Workgroup Edition.
A new DB2 Advanced Recovery feature includes these updated solutions:
IBM DB2 Merge Backup for Linux, UNIX, and Windows V2.1, which has been enhanced to support column-organized tables.
IBM DB2 Recovery Expert for Linux, UNIX, and Windows V4.1, which now supports DB2’s pureScale clustering technology, adaptive compression, multi-temperature storage, and remote log analysis.
IBM InfoSphere Optim High Performance Unload for DB2 for Linux, UNIX, and Windows V5.1 includes support for DB2 10.5 and integration with IBM InfoSphere Optim Data Masking Solution.
As we get closer to the release date, we will publish more detail about these exciting new enhancements. Stay tuned.
Today we announced a major enhancement to our performance monitoring and management solution for DB2, with the 4.1 release of Optim Performance Manager for DB2 for Linux, UNIX, and Windows (I’ll use ‘OPM’ in the rest of this blog entry). This major new version of OPM includes a significantly improved up-and-running experience and quicker problem resolution.
The biggest change you’ll see out of the box is the new Web-based user interface and redesigned problem resolution workflow. Our beta customers have given us great feedback during the development and refinement of this interface, and the result seems to be pretty well received. One of our beta clients states that “The browser interface is easy to use, with intuitive dashboard displays and easy to understand presentation of information.” Even better, since it is Web-based, you can monitor databases anywhere without having to install software on various PCs.
The repository server collects performance metrics from the monitored database and stores them in a DB2 database. You can navigate through the stored data by time and see reports or dashboard data from the chosen time period. This allows for post-mortem problem detection and resolution as well as proactive monitoring and trend analysis. There are also interactive reports, such as table space disk growth and top-n SQL statements, that you can generate from this stored information.
The team has done a lot of work on getting up and running with the monitoring solution much faster. There is an integrated installer, and there are predefined monitoring profiles for a variety of workloads, such as BI, OLTP, SAP, QA, and development. I’m really happy with the reports coming from the beta that installation and configuration is “easy.”
Finally, you can launch Optim Query Tuner from several of the dashboards, including the Active SQL and Extended Insight Dashboards, to do in-context query tuning on individual problem queries.
To realize the full power of the new integrations and lifecycle capabilities of this release, you should definitely check out the new package available in this release, Optim Performance Manager Extended Edition (OPM EE), which builds on the base capabilities in OPM by including Extended Insight (previously a separately orderable feature), integration with Tivoli monitoring solutions, and configuration tooling for DB2 Workload Manager.
If you like the value of Extended Insight, which provides key metrics and visualizations of SQL as it travels through the software stack for dynamic Java applications, you’ll really like that we’ve extended the capabilities in this release of OPM EE to include CLI applications. We also include out-of-the-box, customizable workload views for SAP, Cognos, DataStage, and InfoSphere SQL Warehouse to help get you going.
To round out our monitoring story and support the strong message we tell with static SQL, OPM EE now includes monitoring support for static SQL from Java applications. So if you want to take advantage of static SQL from Java, either by using the pureQuery API or by using client optimization for any JDBC application, you can get the Extended Insight information that you could previously only get for dynamic SQL.
We’ve also made it possible to import pureQuery application metadata into OPM so that detailed information about the application source (Java package name, method name, line number) can be displayed on the Extended Insight Dashboard for any individual SQL statement. This particular feature will require a pureQuery Runtime license.
Integration with Tivoli monitoring solutions smoothes the handoff between system operators and the detailed database performance analysis performed by DBAs. The integration enables the ability to drill into the deep database diagnostic capabilities of OPM EE directly from the Tivoli Enterprise Portal. One of our beta clients who does extensive work with clients using Tivoli found this integration very useful, and points out that “Outsourced operations will love the Tivoli integration as it allows them to monitor multiple WAS and DB2 instances from a single point of control.”
Finally, OPM EE provides new tooling to significantly ease the configuration of DB2 Workload Manager. Although the existing WLM configuration tooling is still shipped with InfoSphere SQL Warehouse, this new tooling is integrated into OPM EE. Key monitoring information vital to workload management is presented in context so that you can do related configuration and validation within a single tool.
There is really way more than I can possibly cover in a blog entry. Here are links where you can find more information and see the user interface in action.
Recently I finished conducting a day-long Proof of Technology session in New York on Data Studio Developer and pureQuery, and I thought I'd share my experience.
For those of you who have never attended an IBM Proof of Technology, it is usually a day long event at an IBM location and is a combination of presentations and hands-on exercises designed to help attendees learn and play with the technology. The computers at these sites are pre-loaded with the software and exercises that complement the presentations. Your IBM sales rep or tech sales contact is the one who would nominate you to attend one of these.
Back to the pureQuery PoT -
It walks attendees through some of the basics of Data Studio Developer, all the way to advanced pureQuery concepts.
Here are details of some of the modules:
- Basics of Data Studio Developer (including a primer on the Eclipse environment). This is especially useful if you are not familiar with the Eclipse environment.
- pureQuery concepts and exercises (a small code sketch follows this list)
- Tooling in Data Studio Developer for pureQuery
- Bottom-up code generation using pureQuery
- Deploying existing Java applications using Static SQL without changing a line of code (aka client optimization)
- Explain capabilities within Data Studio. Check the Explain plan as you develop Java programs or stored procedures, or from the Integrated Query editor.
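If you've never seen pureQuery code at all, here is a minimal sketch of the inline style covered in the concepts module (written from memory of the pureQuery 2.x API, with an invented Employee bean; treat the class and method names as assumptions to verify against the documentation):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.List;
import com.ibm.pdq.runtime.Data;
import com.ibm.pdq.runtime.factory.DataFactory;

public class PureQueryInlineSketch {
    // Simple bean; pureQuery maps result columns to matching properties.
    public static class Employee {
        private String empno;
        private String lastname;
        public String getEmpno() { return empno; }
        public void setEmpno(String empno) { this.empno = empno; }
        public String getLastname() { return lastname; }
        public void setLastname(String lastname) { this.lastname = lastname; }
    }

    public static void main(String[] args) throws Exception {
        Connection con = DriverManager.getConnection(
                "jdbc:db2://dbhost:50000/SAMPLE", "user", "pwd");
        Data data = DataFactory.getData(con);

        // One call replaces the usual prepare/bind/execute/iterate JDBC code.
        List<Employee> emps = data.queryList(
                "SELECT EMPNO, LASTNAME FROM EMPLOYEE WHERE WORKDEPT = ?",
                Employee.class, "A00");

        for (Employee e : emps) {
            System.out.println(e.getEmpno() + " " + e.getLastname());
        }
        con.close();
    }
}
```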
Judging from the questions and comments from the attendees, it seemed as if they found it worthwhile.
I always like the feedback and validation (and sometimes invalidation) of our ideas.
Things I learnt during this trip:
- Challenges associated with deploying applications when using SQLJ. This is significantly simplified using pureQuery.
- The PoT has way more material than one could cover in a day. Attendees cherry-picked some of the later exercises.
- NYC can get quite cold at night without a thick jacket! (Call me a California wimp.)