With the holidays just around the corner, this is the right time to add to the spirit by announcing the new release of Data Studio (the complimentary tooling for DB2 and IDS). Along with improved quality, here are the two significant pieces of functionality we added:
- Administration of DB2 pureScale: If you aren't already aware, DB2's new version provides continuous availability through the use of highly reliable IBM PowerHA technology on IBM Power systems and a cluster-based shared disk architecture. DB2 pureScale provides practically unlimited capacity for any transactional workload. Thanks to a proven, scalable architecture, you can grow your application to meet the most demanding business requirements. Using Data Studio, you can do the following administrative tasks on DB2 pureScale:
- Cluster Members: Start, Stop, Configure, Quiesce, Unquiesce
- PowerHA pureScale Server (CF): Start, Stop, Configure
- Basic single query tuning features: The following features have been incorporated from the deprecated Optimization Service Center for DB2 for z/OS:
- Capture queries from all data sources that Optimization Service Center for DB2 for z/OS supports and from XML files that are exported from DB2 Query Monitor for z/OS.
- View formatted queries.
- View access plan graphs.
- Capture information about the data server that queries run against, a feature that corresponds to Service SQL in Optimization Service Center for DB2 for z/OS.
- Generate reports on the performance of queries.
- Run the Query Statistics Advisor to analyze the statistics that are available for the data that a query accesses, check for inaccurate, outdated, or conflicting statistics, and look for additional statistics that you might capture to improve how the data server processes the query.
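To give a flavor of the kind of checks such an advisor performs, here is a toy sketch in Python. This is not the advisor's actual logic; the thresholds, field names, and the use of -1 for "no statistics" are assumptions made purely for illustration.

```python
# Toy illustration of a statistics health check: flag tables whose
# statistics are missing, outdated, or drifting far from reality.
from dataclasses import dataclass

@dataclass
class TableStats:
    name: str
    real_rowcount: int       # cardinality currently observed in the table
    stats_rowcount: int      # cardinality recorded when stats were last collected (-1 = never)
    days_since_runstats: int

def check_stats(tables, staleness_days=30, drift_ratio=0.5):
    findings = []
    for t in tables:
        if t.stats_rowcount < 0:
            findings.append((t.name, "missing statistics"))
            continue
        if t.days_since_runstats > staleness_days:
            findings.append((t.name, "outdated statistics"))
        base = max(t.real_rowcount, 1)
        if abs(t.real_rowcount - t.stats_rowcount) / base > drift_ratio:
            findings.append((t.name, "conflicting cardinality"))
    return findings
```

The real advisor, of course, inspects the catalog tables directly and reasons about column and index statistics as well, but the shape of the analysis is the same: compare what the optimizer believes against what is actually there.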
Note that even though the above features have their legacy in Optimization Service Center for DB2 for z/OS, you also get the same functionality for DB2 for Linux, UNIX, and Windows in Data Studio. For integration with Optim Development Studio, or for more advisors such as Query Advisor, Access Path Advisor, and Index Advisor, you need Optim Query Tuner.
The above new features are only available in the stand-alone package of Data Studio, which is being refreshed. Here is a link to the download document that includes links to both the Data Studio stand-alone refresh and the IDE Fix Pack.
Or, if you want to go ahead and download Data Studio stand-alone right now, here is a direct link. And don't forget the new free e-book, Getting Started with Data Studio for DB2, which you can download here.
We have a lot of exciting things planned for 2010. So stay tuned and happy holidays!!
Updated December 15 to correct a typo.
I am the software architect for InfoSphere Data Architect, and I wanted to spend a few minutes telling you what we've been cooking at the Lab over the past few months to deliver Fix Pack 1
for IDA 7.5.2, which was made available on December 11.
In this Fix Pack, we've added new features and improvements in a number of key areas, which I'll highlight here.
Diagramming improvements:
We are excited to have started incorporating the ILOG diagramming technology into InfoSphere Data Architect to provide an enhanced diagram layout. The new diagramming capability will offer a choice of layouts as well as the option to specify the spacing of objects, all very important steps toward offering greater control and flexibility of visualization.
Import DB2 physical objects from other tools:
For years, InfoSphere Data Architect has offered the capability to import models from other tools. Significantly, in Fix Pack 1 we have provided a unique capability to help import DB2-specific properties for physical database objects, such as index and storage properties, faithfully into IDA from other tools like CA ERwin and CA Gen. Whereas generic export/import capabilities may cause you to lose this information, this enhancement in IDA will enable you to preserve your existing data design efforts.
Import from COBOL source files and copybooks:
Although this capability was on a temporary leave, our z/OS friends will be happy to know that it is back in with this Fix Pack. This "legacy data" often contains critical information, so it needs to be included in the data modeling process.
Filtering improvements for productivity and performance:
There are now two different approaches to specifying filters for model comparison and synchronization: you can specify filter options at the workspace level to streamline and improve overall comparison performance, and you can specify filter options at each individual comparison invocation to improve the ease of use of the comparison editor.
Integration with InfoSphere Discovery (formerly Exeros):
Those familiar with our history and approach know that we have a strong focus on building linkage points across IBM products. This ability to easily share and collaborate on metadata is crucial to the acceleration of projects. In IDA 7.5.2 Fix Pack 1, we continue this focus with the introduction of integration with InfoSphere Discovery. With this new capability, you can import the discovered metadata from InfoSphere Discovery directly into InfoSphere Data Architect and share and use it in a wide range of scenarios, including the Optim Data Archiving
and Data Privacy
solutions, as well as InfoSphere Foundation Tools
For more details on the contents of this Fix Pack, be sure to read the Release Notes
Let me close by saying that the InfoSphere Data Architect team loves getting your input, so please keep it coming. Feel free to post your comments and questions on the IDA forum. We are excited about what we are going to do in 2010!
I was at the Lab in San Jose this week doing some planning and meeting with Business Partners. The folks here say the cold is pretty unreasonable, but I’m heading back to the upper Midwest, so I don’t think they have much room to complain.
Anyway, today we are shipping a set of Fix Packs and refreshes to our portfolio of database admin and tuning products. Some individual product architects will be blogging in more detail about what is in the Fix Packs, but I wanted to give you a head start on downloading by giving you a summary list of all the download documents that tell you how to find what you need. Reminder:
two product releases that Holly Hayes blogged
about in November also are electronically available today. Links to their download documents are here:
Have a great holiday.
I wanted to let you know that today we published a new e-book called Getting Started with Data Studio (for DB2).
It's part of the DB2 on Campus
effort, but we wrote it with the idea that anyone (not just college students) learning DB2 and Data Studio
could use this free book to get comfortable with the user interface, with performing everyday database administration tasks, and with routine and Data Web Services development. You can use this with any edition of DB2, including Express-C
, which you can also download for free. And although the book was written with DB2 for Linux, UNIX, and Windows in mind, those of you who use DB2 for z/OS can use this as well since there is significant overlap in capability.
Anyway, I hope you like the book. And, more importantly, I hope it encourages more people to get familiar with and try out the broader range of offerings for integrated data management. Several of these other offerings are also available for download on a limited trial basis. You can find links to all the Optim trials (and to Data Studio) on the Integrated Data Management community space
In one of my last blogs, I wrote about the untold story of data privacy, focusing on non-production systems. This past week, IBM made a significant acquisition of Guardium to improve support for compliance and the protection of privacy across all systems. As discussed in the previous blog, we often forget about the systems not in production and think that the current security in place is enough, yet that is not the case.
The addition of Guardium to the mission of privacy, protection, and compliance continues IBM's mission of helping organizations at the enterprise level. Just as with data masking, organizations often don't think of monitoring use of, access to, and changes to archives, yet it is still a risk. The combination of Guardium and Optim will provide the capability to monitor the SQL accesses and usage of data not only in production data sources, but also in archives and in development, test, and training environments.
With Guardium comes an enterprise solution supporting the most popular databases and application frameworks. As with the Princeton Softech and Ascential acquisitions of years ago, IBM plans to continue support for databases beyond those that are "Blue," and based on the track record (now 4 years past Ascential and 2 years past Princeton Softech), I believe they will keep that promise. Most enterprises contain multiple technologies, and enterprise solutions require support across them. Guardium continues IBM Software's vision of meeting the enterprise's needs.
Hey, DB2 for Linux, UNIX, and Windows DBAs! We’ve enhanced the Optim Database Management Professional Edition
with new editions of:
If you missed the summer announcement, this solution combines database administration, change management, performance monitoring, and high-speed unload capabilities into a conveniently packaged and attractively priced solution. Manas blogged
about the new release of Optim High Performance Unload 4.1.2, which includes extended offline capabilities to help you reduce the risk of business disruption. Check out the full announcement here
Optim Database Administrator V2.2.2 helps prevent errors and data loss when upgrading databases to support new applications. It helps you:
- Automate and script structural database changes
- Perform extended alters that require dropping and re-creating tables and managing data preservation
- Perform schema compare and synchronization with custom mapping features
- Migrate database objects, data, and privileges and generate maintenance utility commands
- Manage database objects, privileges, and utilities with embedded components of Data Studio software
New and enhanced capabilities in Version 2.2.2 improve overall administrative tasks by supporting federated objects, enable off-peak task scheduling by providing the ability to produce scripts that can run with the DB2 command line processor, add basic pureScale support, and improve analysis performance for large DB2 environments (think thousands of tables). The announcement letter is here
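At its core, schema compare and synchronization works by diffing two object models and generating the DDL needed to bring the target in line with the source. A minimal sketch of that idea (a hypothetical helper, not Optim Database Administrator's engine; it handles only missing tables and columns, ignoring drops, type changes, and extended alters):

```python
# Toy schema compare: each schema is a dict of table name -> {column: type}.
# Emits the DDL statements needed to make the target match the source.
def diff_schemas(source, target):
    changes = []
    for table, cols in source.items():
        if table not in target:
            # Table missing entirely: create it with all its columns.
            col_defs = ", ".join(f"{c} {t}" for c, t in cols.items())
            changes.append(f"CREATE TABLE {table} ({col_defs})")
            continue
        for col, typ in cols.items():
            if col not in target[table]:
                # Column missing: a simple additive alter suffices.
                changes.append(f"ALTER TABLE {table} ADD COLUMN {col} {typ}")
    return changes
```

The hard part the product handles for you is everything this sketch skips: changes that require dropping and re-creating tables while preserving data, dependent objects and privileges, and ordering the statements so they actually run.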
We recently returned from IOD, which was quite an intense experience, for those of you who have never been. We both gave lots of demos and talked to lots of customers about the Optim query tuning solutions. And you can imagine that any session on query tuning, with or without a tools focus, was really packed. It seems as if people can never hear enough about query tuning, because it's actually pretty interesting to do, and because it can have such an impact on the day-to-day life of a DBA (or whoever in your organization is tasked with reviewing and tuning queries and query workloads). Ray blogged earlier
about how important it is that developers and DBAs collaborate more in the query tuning process. Not only can developers build up their skills, they can hopefully come to the DBA with some of the basic stuff taken care of, or at least a better understanding of what the issues are. The earlier in the cycle that issues are discovered, the less expensive and labor-intensive the tuning process is.
Anyway, we wanted to share with everyone who could not make the conference some scenario-based demonstrations of how query tuning solutions can work together. We are co-presenting at an upcoming Virtual Tech briefing (complimentary!) on November 19th.
The focus of this is on z/OS, so we’ll talk about query tuning from both a development and DBA perspective, and discuss how to use Optim (such as Optim Development Studio
, Optim Query Tuner
and Optim Query Workload Tuner
) and other z/OS tools to work through some typical scenarios. The scenarios we are planning to cover include:
- A development time scenario in which query tuning capabilities and reporting are available directly from the development environment.
- Invoking query tuning capabilities from a monitoring environment such as OMEGAMON and Query Monitor
- How you can use Query Workload Tuning in a version to version migration scenario
- Workload analysis capabilities and an index advisor walkthrough.
We hope you can join us. Register today
Ray and Saghi
IOD was a busy time for the Optim team. I hosted three sessions and also was in the pedestal area and had a chance to interact with several DB2 and IDS customers. While all the solutions certainly got their attention, two solutions seemed to bubble up to the top in terms of customer discussion and questions. Optim pureQuery
was one of them, mainly from the DB2 for z/OS crowd. There is definitely a trend in the market of customers wanting to take advantage of the wealth of data stored on the DB2 for z/OS platform rather than transport that data to another platform. Along with this trend, Java is the preferred language for developing these new applications; however, finding an acceptable data access layer, both in terms of ease of use and high performance, has been a challenge. Many companies are looking at things like Hibernate to address their ease-of-use challenges, but unfortunately Hibernate doesn't address their high-performance requirement. Several customers wanted to understand how pureQuery could help them in this situation, and they were very excited once we talked to them about not only the benefits of Static SQL, but also the client optimization features of pureQuery, and, most importantly, how we could make the SQL that Hibernate was generating visible to the developer and DBA. It seems to be a perfect fit for providing the high performance that DB2 for z/OS customers expect for these new Java applications.
The second solution that seemed to get a lot of attention was our Performance Expert
solution. When it comes to problem diagnosis, everyone is always looking for a better mousetrap, and it seems that solutions in this area get hot and then run their course. Well, Performance Expert seems to be the up-and-coming hot product. I believe this is due to the sheer wealth of information this product collects. If, or should I say when, a problem occurs in production, the most difficult challenge is gathering that information, especially because that moment in time is now past and you really need historical data to determine what went wrong. Performance Expert collects information and stores it in intervals, making it incredibly easy to get the information needed, all synchronized to the right time. You can see a short video of this capability here. The other thing I found interesting was all the various reports and consoles customers were writing based on the data found in the Performance Warehouse. From capacity planning to SLA reporting, it seems to me that there is a lot of customization going on that would probably make a great birds-of-a-feather session at future conferences.
As I mentioned earlier, all the solutions got their attention. I hosted a joint session with Randy Wilson from Blue Cross Blue Shield of Tennessee where Randy described a very interesting outage situation (not fun!) and how Optim High Performance Unload
saved the day for them in terms of being able to supplement their recovery and provide the historical data they needed. Interesting how customers really think outside the box and come up with creative uses for our solution. After that session I had quite a few discussions with customers on how they could use High Performance Unload for situations like Randy's where back-level DB2 or dropped tables caused challenges in terms of recovery.
Well it was a busy week and now comes all the customer follow-up that happens after IOD!
Since all the action is in Las Vegas this week, the halls are quiet (or so I hear, since I am working from home today) and meetings are cancelled. This gives me time to provide you all with a little commercial for a little spot on the "internets" called the Integrated Data Management community space
. This space started out as the Data Studio community space, created by Grant Hutchinson. When I joined the team, I took over management of the space, including the move to IDM to be more inclusive of the whole Optim portfolio. If you've never worked with developerWorks spaces, they are kind of cool: anyone (internal or external) who gets approval can self-publish their own web space using the templates provided by developerWorks.
Anyway, I feel the IDM space needs a commercial because so many people still don't know about it (even people on my own team!), and I must send out the link several times a day. The intended purpose of the space is to put in one place links to all the blogs, discussion forums, articles, product documentation, classes, downloads, videos, demos, upcoming events, etc., related (mostly) to the Optim portfolio.
Admittedly, the site isn't perfect as I am limited by the dw templates, and I still have work to do to get more of the heritage Optim materials integrated. But most people who discover it do think it's very useful. There is even a place to post messages, but that has not been used thus far. I think using our existing discussion forums is probably the best way to get questions answered anyway. Not sure where the forums are? Well, the links to all the forums are on the space!
As of today, the space is divided into three tabs: Home, Articles and tutorials, and Downloads. Here is the Home tab:
As you can see in the screenshot above, you can add any of the portlets to your My Yahoo! or iGoogle page (or as an RSS feed) by clicking on the orange plus icon. The screenshot below shows my Yahoo! page with the 'latest news' and forums portlets added.
Here is the downloads tab. It also includes some additional related software such as DB2 Express-C and the Replication Dashboard.
Anyway, I hope you like the space. Let me know if there's anything you want to see added, changed, or removed. And spread the word, please?
Let me start by introducing myself. My name is Manas Dadarkar and I am the technical lead for Data Studio (the complimentary tooling for DB2 and IDS) and for High Performance Unload (HPU) for DB2 for Linux, UNIX, and Windows. I took on this role recently and am right in the middle of planning a lot of exciting things for Data Studio and HPU.
However, I will leave Data Studio for another day (blog posting), so today let’s talk about HPU, including the announcement today of the new release, 4.1.2, under the new name, Optim High Performance Unload for DB2 for Linux, UNIX and Windows.
For people not familiar with HPU, it's a product that offers high-speed unload of DB2 data, thereby allowing DBAs to work with very large quantities of data easily and efficiently. In limited lab testing using a single large table, we have seen many-fold (6-10 times) performance improvements using HPU as compared to the DB2 EXPORT utility. The usual disclaimers apply: your results will vary.
Since version 4.1, HPU has also supported automatic migrations. You can now migrate data directly from one database to another, including unloading, transferring, and loading of the data between the source database and target database without the need for intermediate storage. You can get more information about HPU here. It's also featured in our Day in the Life of a DBA video demo.
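The key idea behind migrating without intermediate storage is streaming: rows flow from source to target in batches rather than being exported to a file first. A toy sketch of that pattern (hypothetical helpers; HPU's actual implementation works at a much lower level than this):

```python
# Stream rows from a source iterator to a target writer in fixed-size
# batches, so the full data set is never materialized on disk or in memory.
def migrate(source_rows, write_batch, batch_size=1000):
    batch, moved = [], 0
    for row in source_rows:
        batch.append(row)
        if len(batch) >= batch_size:
            write_batch(batch)   # e.g. a bulk LOAD into the target database
            moved += len(batch)
            batch = []
    if batch:                    # flush the final partial batch
        write_batch(batch)
        moved += len(batch)
    return moved
```

Because only one batch is in flight at a time, the approach scales to tables far larger than available memory or scratch disk, which is exactly the scenario the unload/transfer/load pipeline targets.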
With Version 4.1.2, what's new is being able to unload or restore using incremental backup images. If you use incremental backups, it’s likely that those backups will provide you with a more current version of a dropped table. Check out the full announcement here.
For those of you who have moved to DB2 LUW 9.7 (or will be soon), you don’t need to wait for the 4.1.2 release, which is planned to be electronically available on November 20. We’ll shortly be releasing the support for 9.7, including the ability to work with DB2’s industry leading XML data support. I’ll post a short blog with the link once that is available and we’ll also put the word out on Twitter.
Also, if you are going to IOD2009, you can hear directly from a customer about HPU by attending session 1499, "A day in the life of a DBA: How to keep your sanity."
You’ve heard a lot here about pureQuery value
: better performance, faster development, improved manageability, and increased security. Now we’re making it even easier to purchase pureQuery. We’re announcing two new DB2 Connect
offerings that bundle Optim pureQuery Runtime
- DB2 Connect Application Server Advanced Edition and
- DB2 Connect Unlimited Advanced Edition for System z.
Since Optim pureQuery Runtime, in essence, extends and enhances your database drivers, this packaging makes perfect sense. You install pureQuery Runtime on the same system as your database driver, and pureQuery enables:
- SQL capture from executing applications
- Transparent literal consolidation and replacement
- Transparent conversion to static execution
- Heterogeneous batching of SQL requests
- SQL lockdown to an approved list to mitigate the risk of SQL injection
- SQL replacement for emergency application fixes
- Client execution metrics collection for hot spot identification
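To illustrate the idea behind literal consolidation: textually different statements that differ only in their literal values are rewritten with parameter markers, so they collapse into one cacheable, lockdown-friendly statement. The real pureQuery Runtime does this transparently in the driver layer and far more robustly than this naive regex sketch, which is only for illustration:

```python
import re

# Naive literal consolidation: replace string and numeric literals in a
# SQL statement with '?' parameter markers. Real SQL parsing must handle
# escapes, comments, and datatype nuances that this sketch ignores.
_STR = re.compile(r"'[^']*'")   # quoted string literals
_NUM = re.compile(r"\b\d+\b")   # standalone numeric literals

def consolidate(sql):
    sql = _STR.sub("?", sql)    # strings first, so digits inside them vanish too
    sql = _NUM.sub("?", sql)
    return sql
```

After consolidation, two statements such as `... WHERE ID = 42` and `... WHERE ID = 99` map to the same text, which is what lets the driver match them against a single prepared (and potentially static) statement.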
Purchasing is made easier because most DB2 for z/OS customers already have DB2 Connect on their approved products list, speeding up procurement. And you can easily upgrade from DB2 Connect Application Server Edition or from DB2 Connect Unlimited Edition for System z to the respective Advanced Edition offering. Check out our announcement letter here
And for those of you going to the Information On Demand Global Conference
, be sure to check out how State Farm (State Farm Optimizes App Performance with Optim Development Studio, pureQuery, TDM-1718) and Handelsbanken (Evaluating Java Data Access Technologies at Handelsbanken, TDM-2177) are realizing pureQuery value.
As I have now written a few blogs, some folks have asked, "Who are you, and why are you blogging?" So here goes :-)
My name is Eric Naiburg, and I am a program director at IBM responsible for product marketing and strategy of the Integrated Data Management products. I started my career working with databases as a product manager at a small company, Logic Works, where I was responsible for a product called ERwin. I am sure most of you have heard of it. Before getting into product management, though, I learned about architecture and modeling in a very different way than computer science: I built about 50 houses in central New Jersey with a few partners. After spending about 5 years with Logic Works, which later became Platinum Technologies and is now CA, I left to join Rational Software and build out Rational Rose Data Modeler, which was the precursor to InfoSphere Data Architect. Over 7 years with Rational and IBM Rational, I held several positions, including product management, solutions marketing, and product marketing. I also was privileged enough to work with a good friend and colleague, Robert Maksimchuk, with whom I wrote 2 books:
About 3 years after IBM acquired Rational, I decided to take some time and try my hand at working for a much smaller company, joining Dr. Ivar Jacobson and his consulting company as Vice President of Sales and Marketing. I spent more than 2 years helping to grow Ivar's business, while learning great things from him, but in the end I decided that IBM was the place for me, and last September I decided to come back. I had a great opportunity to join the Optim (Princeton Softech) team about a year after they had been acquired, and here I am, happy and enjoying working with a new group of people, new products, and a team with a ton of energy.
As I wrote about in my last blog, data privacy goes well beyond just protecting production environments, and in his latest article, Shawn Moe discusses how to bring privacy directly to the development environment: http://www.ibm.com/developerworks/data/library/techarticle/dm-0910optimprivatizeddata/index.html This article goes into great detail on defining privacy policies early and on ensuring they are followed throughout the software development lifecycle, starting with requirements and data models and ending with a masked non-production environment.
In the past few weeks, I have traveled around New England and met with several organizations that are suddenly making data masking a key priority in their data management activities. I asked them why it has taken so long, and the common answer was, "We had limited budget and had to start with production. We knew there was a need, but we had to prioritize. Now we have been told that it is mandatory and we need to reduce risk immediately." So, what has brought on this risk and the need for insurance? Well, states are starting to both regulate and enforce data privacy standards, PCI seems to be going from a paper process to the reality of not being able to do business without compliance, and SOX is starting to be enforced too. So, what does that mean for you? Start looking at what sensitive information is being passed all around your organization, limit the amount being passed, and mask anything that is sensitive.
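To make the masking idea concrete, here is a minimal sketch of deterministic, format-preserving masking. This is not Optim's actual algorithm; the key handling and SSN format are assumptions for illustration only. The point is that the same input always masks to the same fake value, so joins and referential integrity across non-production tables still work, while the real value cannot be recovered without the secret key.

```python
import hashlib
import hmac

# Assumption for this sketch: in practice the key would be managed in a
# secrets store, never hard-coded, and rotated per environment.
SECRET_KEY = b"rotate-me-outside-source-control"

def mask_ssn(ssn):
    """Deterministically mask an SSN while preserving its NNN-NN-NNNN format."""
    digest = hmac.new(SECRET_KEY, ssn.encode(), hashlib.sha256).hexdigest()
    # Map the first 9 hex digits of the keyed digest onto decimal digits.
    digits = "".join(str(int(c, 16) % 10) for c in digest[:9])
    return f"{digits[:3]}-{digits[3:5]}-{digits[5:9]}"
```

Because the mapping is keyed (HMAC) rather than a plain hash, an attacker cannot simply enumerate all possible SSNs and build a reverse lookup table without also having the key.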
Follow Shawn's process and see how your organization can protect the privacy of data from the start!
Goodbye for now and hope to see you at IOD
Today I was talking with a customer who has Java applications that call COBOL and SQL stored procedures to do some additional business logic. My customer asked me if IBM offers a tool that can assist programmers and DBAs to do seamless debugging between Java applications and stored procedures.
This isn't the first time that I have heard this request from customers, because it can be a real pain point. The scenario is this: during development, developers need to debug a Java application that invokes a COBOL stored procedure and a SQL procedure. To debug that, you might need multiple tools: one that debugs the Java application, one for debugging COBOL stored procedures, and another for the SQL. In fact, by co-installing (shell sharing) Optim Development Studio
with Rational Developer for System z, you actually can do this end-to-end debugging without switching contexts. ODS provides the SQL procedure debugging capability, and RDz provides Java application and COBOL stored procedure debugging capabilities.
Next week, please join me on a follow up to our previous session
on building and deploying SQL stored procedures for DB2 for z/OS. In next week's session
, Marichu Scanlon and I will go into more detail on debugging, including some hints and tips. And we have our Rational friends on board to demonstrate how to debug COBOL stored procedures.
May the demo gods be smiling upon us.
I'm part of the User Experience team that is focused on ensuring the usability of the Optim products and solutions. I wanted to tell you about the Optim Usability Sandbox at IOD. The usability sandbox sessions are a chance for you to give us usability feedback on products and solutions you use or are interested in. These sessions differ from others at IOD: they are held with just a few participants at a time, and they're interactive, so you get to tell us what you think about the latest release of a product or about future designs we're considering (after signing a non-disclosure agreement). It's a chance to impact a product and let us know what's important to you.
We have usability sandbox sessions lined up on the following products and topics within the Optim portfolio:
- Data Studio and Optim: Integrated Data Management
- Optim Development Studio
- Optim Database Administrator
- A New Experience for DB2 Performance Management
- Optim Query Tuner
- Optim Data Privacy Solution
- InfoSphere (Exeros) Discovery and Optim Integrated Data Management Solutions
- Optim Database Relationship Analyzer
Besides the Optim sessions, there are several other usability sandbox sessions for Content Management, DB2, IMS, and more. They're all listed here. Each usability sandbox session will be held at various times throughout the week. The Usability Sandbox is in room "Breakers L." We look forward to seeing you there!
While I was in Rome for IDUG EMEA, I had a chance to talk to a lot of customers about Data Studio and Optim products. One of the customers expressed some concern about the difficulty of maintaining the Optim software on each desktop that runs our Eclipse-based software offerings. I'm sure a lot of other customers are worried about this topic, and it caused me to realize that this is something we don't discuss much in our conference presentations or in our blog/Web content.
The good news is that we've actually done quite a lot in this area and are continuing to do more. Our install process for the Optim products allows you to operate in one of three modes:
- An install image can be uniquely installed on each desktop. This does require each user to install the product, but we support the Rational Installer update process, which allows users to easily check whether updates are available on the internet and automatically install them without having to manually download updates or feed CDs into their PCs.
- You can also choose to install the Optim products on a file server that acts as the production image of the product. A small launcher file is invoked on the end user's desktop, which will load the Optim product into the end user's PC from the file server. With this approach, you only have to maintain the copy of the product that is stored on the file server, and individual users are upgraded instantly each time you apply maintenance to the file server image.
- You can also choose to package the installer using tools like Microsoft's Systems Management Server (SMS) product to deploy the installation to multiple end user desktops. Here is a link to the documentation on this mode of installation in the Installation Manager Information Center. We’re working on getting this information into the Information Management installation documentation as well. We’re also working on creating sample scripts to aid in the packaging and deployment, reducing the development time that is typical when using these tools.
All three modes require less manual labor than the traditional technique of using CDs or DVDs to install and upgrade products. The last two modes fully automate the process for the end users, so that an organization only has to perform maintenance on the centralized file server image (mode 2) or maintenance occurs automatically as a deployable update (mode 3).
In my dual role as CTO and VP of Data Servers, I spend a fair amount of time on the road talking to people about what's going on both with database server technology and with Integrated Data Management, from both a business and technical perspective. Anyway, I like these road trips because they help me validate the business reasons behind the technical offerings we make. Because I meet with so many different people in different job roles (CTOs, CIOs, DBAs, architects, developers...), it helps me to calibrate the tradeoffs and priorities to get a product or new functionality out the door and to place that new capability in the context of real customer problems. In addition, no matter how many times I give the "IDM vision" pitch, I almost always get feedback that helps me make it just a little better the next time, such as finding out what part of the strategy might not be clear when people hear it for the first time. To be honest, these road trips also help me keep my technical chops, since I have to make sure I'm up to date on the latest functionality and be prepared to answer some tough questions from the audience.
For this reason, I'm really looking forward to my all-day appearance at the upcoming Tridex DB2 Users Group on October 15. I start off with the vision pitch, because I find that helps to place the more technical talks in the correct perspective. Then, I'll go into detail on the integration work we are doing with WebSphere. This work is critically important in making the whole database/application server layer more efficient and easier to manage, as well as providing DBAs with more control. Then I go into a deep dive on Query Workload Tuner, which is our follow-on product to DB2 Optimization Expert. Finally, I'll be closing with a talk on using pureQuery for high-volume applications, in which I discuss more about the performance aspects of using pureQuery, such as heterogeneous batching, static support, and more.
If you're in the New York area, I think you would get a lot of value from attending this free event. Note that they do require you to register ahead of time, so go to the web site, download the invitation, fill out the registration form attached to that invitation, and email it in.
Hope to see you there.
I want to start out by introducing myself, as this is my first of hopefully many entries to this blog. My name is Eric Naiburg, and I am responsible for product marketing strategy for the Optim solutions. I recently celebrated my one-year anniversary with the Optim group and with rejoining IBM. Prior to this role, I spent more than two years away from IBM, working for Dr. Ivar Jacobson as VP of Sales and Marketing for Ivar Jacobson Consulting, with a brief stint at CAST Software as Director of Product Marketing.
In my previous role at IBM, I worked for Rational Software, spending four years at Rational pre-acquisition and three years as part of IBM. I held many roles at Rational, including product manager for the modeling solutions and product marketing for solutions and desktop products, just to name a few. Before joining Rational, I was the product manager for the ERwin data modeling tool at Logic Works, which was acquired by Platinum Technologies and later CA.
I have also published two books with Addison-Wesley: UML for Mere Mortals and UML for Database Design. Both of these books were fun to write and provided great learning experiences for me and my co-author, Robert Maksimchuk.
So, now to the Blog….
Data Privacy - "The Untold Story"
Data protection and privacy continue to be a tremendous focus and risk for the IT community today. While organizations are making great strides to protect data privacy in production application environments, the "untold story" of implementing similar strategies in non-production (testing, development, and training) environments is often overlooked. When I talk to people at conferences and in meetings, I ask them the question, "How are you protecting the privacy of your data in development and test environments?" The scary result is that they often look at the floor, smile nervously, and say, "We know it is an issue and we know we need to do something, but it isn't at the top of our priority list at this time."
That is why I call it the "untold story." We know of the threat, but we don't do much about it until it is too late and a breach or loss happens. There is significantly more non-production data floating around organizations than there is data in production. It is used for testing, development, training, and more. Additionally, when the data is used, it may be copied into spreadsheets as input for automated testing tools or for manual testing, exposing the data outside of the database itself and creating an even bigger risk.
Because the data is being used and moved, it needs to be protected. Since testers, developers, and others can see the data, encryption just isn't enough; the data must be de-identified, or masked. Non-production data does not have to be real; it does, however, have to be realistic. The process of masking creates realistic data from your production data and, if done correctly, preserves referential integrity across a single database or an entire system. Masking does not prevent the loss or theft of data, but it makes the data worthless to a thief if that occurs.
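The referential-integrity point is worth making concrete. One common way to get it (a minimal sketch of the general technique, not a description of how Optim implements masking) is deterministic masking: derive each fake value from a keyed hash of the real one, so the same input always masks to the same output and join keys still line up across tables. The key, table rows, and ID format below are hypothetical.

```python
import hmac
import hashlib

# Hypothetical masking key; in practice this would be managed as a secret
SECRET_KEY = b"rotate-me-and-keep-me-out-of-source-control"

def mask_id(real_id: str) -> str:
    """Deterministically map a real identifier to a realistic fake one.

    The same input always yields the same output, so a customer ID masked
    in the customers table matches its masked copy in the orders table,
    preserving referential integrity across the whole test database.
    """
    digest = hmac.new(SECRET_KEY, real_id.encode(), hashlib.sha256).hexdigest()
    # Format as a realistic-looking 9-digit ID with a "C" prefix
    return "C" + str(int(digest[:12], 16) % 10**9).zfill(9)

# Hypothetical extracts from two related production tables
customers = [{"cust_id": "483-22-9177", "name": "Ann Lee"}]
orders = [{"order_no": 1, "cust_id": "483-22-9177"}]

# Mask the key in both tables; also replace the sensitive name field
masked_customers = [
    {**c, "cust_id": mask_id(c["cust_id"]), "name": "Customer"} for c in customers
]
masked_orders = [{**o, "cust_id": mask_id(o["cust_id"])} for o in orders]

# The join key still matches, but the real identifier is gone
assert masked_customers[0]["cust_id"] == masked_orders[0]["cust_id"]
assert masked_customers[0]["cust_id"] != "483-22-9177"
```

A real masking tool does far more (format-preserving rules, lookup tables of realistic names and addresses, handling of compound keys), but the core idea is the same: consistent, repeatable substitution so the masked data behaves like the original without revealing it.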
So, to keep your organization from being the next negative headline about data theft, mask your non-production data, making it realistic but not real.
My kids told me the other day that they can tell it's Fall because my calendar is full of travel for work. The good part about this travel is that there are lots of opportunities to meet with customers as well as other IBMers at two big conferences: IDUG Europe and Information on Demand (IOD).
IDUG Europe is in Rome this year, and the only bad part is that I can't stay in Italy longer! I have to leave Thursday morning so I can attend some Boy Scout leader training Fri-Sun. Hopefully, I can sneak out in the early morning and see some of the cool sights again, like the Forum. IOD is in... SURPRISE, beautiful Las Vegas! OK, I guess that's no surprise. It's interesting to hear people's reactions to Las Vegas. It seems to be a love-hate relationship. As artificial and overindulgent as Vegas is, I enjoy it. I enjoy the food (ask me for restaurant suggestions), the Fall weather there, the safety of walking along The Strip at any time of the day or night, and most of all, the people watching. There's nothing better than getting a good seat with my favorite beverage and watching the crowd parade past.
OK, back to the topic of technical conferences. I have a friend who works as a developer for a DBMS competitor (no need to ask ;-)), and he's always jealous when he hears how much my fellow developers and I get to go and present at these conferences. At his company, they mostly let only product managers present to customers and leave the geeky developers back at the lab. There is very little that can substitute for face-to-face contact with the people who are working on the products. I'm glad that IBM lets us geeky folks out of our cages for these events. Conferences seem to be suffering from lower attendance (no surprise, considering the economy), but that's really unfortunate when you consider the value that can be gained. The conference web sites usually have an ROI justification to help with this, and I really think that the focus, the proximity to knowledgeable experts, and the hands-on labs really do justify the costs.
I just plotted out the sessions that I plan to attend at IDUG. I always like how IDUG has a high number of customer presentations; I really enjoy listening to those. IOD also has a lot of customer talks this year. Curt mentioned some of them in his blog post.
Well, I hope to see you in Rome or in Vegas. In both places, I have a "new" session called "Why are DB2 for z/OS DBAs interested in Data Studio and Optim Tooling?" (IDUG Rome: Monday, Oct 5th; IOD: Monday, Oct 26th -- as always, check the monitors for last-minute gate changes when you arrive at the airport). I felt this tailoring of a session for DB2 for z/OS DBAs was needed because we have a lot of stuff labeled Optim, and not all of it is of interest to a DB2 for z/OS DBA as of today. I also take a no-glitz look at the free Data Studio offering and show exactly what it can be used for on a DB2 for z/OS system. Holger Karn will join me at the IOD session to talk about some performance monitoring futures that I know will get you excited. At IOD, I have another session with Jay Bruce called "Query Tuning on IBM DB2 for z/OS" that discusses that task and shows tooling that can help you do it.
Hope to see you there!
I know you all are probably getting bombarded with news about the upcoming Information on Demand NA conference in Las Vegas, but I wanted to make sure you were aware of some highlights from an Integrated Data Management perspective. I'll be joining my colleague Al Smith, originally from Princeton Softech, to present our strategy and vision (I need to earn my flight there, after all), but I think what's even more interesting is the lineup of customer speakers. There's broad representation across industries of companies who are focused on different aspects of our solutions. For example:
- Scotiabank Speeds Application Testing and Protects Data Privacy with IBM Optim (BFM-2345)
- State Farm Optimizes App Performance with Optim Development Studio, pureQuery (TDM-1718)
- Efficient Test Data Management: An ROI Success Story (Travelport) (TDM-2841)
- Evaluating Java Data Access Technologies at Handelsbanken (TDM-2177)
- A Day in the life of a DBA - how to keep your sanity (Blue Cross Blue Shield) (TDM-1499)
Also, if you want to get your hands dirty, please reserve your seat in some of the hands-on labs. I know our technical enablement team has some great labs lined up that include integration of some of the products into solutions, such as:
- Accelerating Java Applications (HOL-1276) which talks about using Optim Development Studio, Optim Query Tuner, Optim pureQuery Runtime, and Performance Expert Extended Insight together. This is a great use case that DBAs should be aware of. (It actually reflects pretty closely the Java Acceleration Solution demo.)
- Model-driven data governance using InfoSphere Data Architect and Optim (HOL-1277) includes the Optim Test Data Management and Data Privacy Solutions together. If any of you are responsible for data privacy or are looking for ways to help 'bake in' privacy safeguards starting at design time, then this is the lab for you.
If you want to talk to me, be sure to get your seat reserved at the Meet the Experts on Tuesday afternoon (you can enroll through the SmartSite link below). It will be a very busy week, but I think you'll find it worthwhile.
To help you with planning, here are the key links:
- Integrated Data Management roadmap to the conference (print it out and take it along with you)
- IOD Conference SmartSite, to help you plan your agenda and enroll
- IBM Optim on Twitter (this is already active, but we'll use it to keep you informed of updates at the conference)
- IOD 2009 on Twitter (for the bigger picture on conference activities)
Look forward to seeing you.