Hi, this is my first time blogging here. I'm an architect in the Data Studio development team, and I work on integrations, heterogeneous database access, and more. I wanted to use this opportunity to tell you about some work I did with pureQuery and Enterprise Generation Language. Enterprise Generation Language
(EGL) is a modern programming language specifically designed to help business-oriented developers quickly write full-function applications and services based on Java and modern Web technologies. Business-oriented developers write their business logic in EGL source code using the powerful development facilities of Rational Business Developer Extension, Rational Developer for System z with EGL, or Rational Developer for i for SOA Construction. From there, the tools generate Java or COBOL code, along with all the runtime artifacts you need to deploy the application to the desired execution platform.
Data access is one of the key components of EGL. You can access your database data using EGL SQL Records, which provide a very high level of abstraction and let you work with the data using simple verbs, or you can write your own data access logic. Below are simple examples showing both scenarios.

Figure 1. SQL Records

Figure 2. A basic data access program
If you are a regular reader of this forum, you probably already know that pureQuery
is IBM's high-performance data access platform focused on simplifying the development, management, security, and optimization of applications that access data. You may have read about the benefits of using pureQuery client optimization with Hibernate, JPA, and even .NET applications. You can also use pureQuery technology with the Java code generated from EGL to:
- Optimize applications that access DB2 on any platform by capturing the statements generated by your EGL application, binding the statements to database packages, and then executing the application in static mode (see the configuration sketch after this list).
- Gain insight into your EGL application using Data Studio Developer (which shell-shares with Rational Business Developer) to see a list of SQL statements originating from your EGL application, with details on the number of executions and execution times. And you can of course use the outline to jump between an SQL statement and the originating line of Java source code.
- Replace SQL in the program without having to change the application code (if you don't have access to the source code, for example).
- Prevent SQL injection by allowing only SQL statements that have been captured and approved to run against the database.
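To make the capture, bind, and static execution flow from the first item above concrete, here is a minimal sketch of the pureQuery client optimization settings involved. This is illustrative only - the file paths and connection values are placeholders, and the property and tool options follow the client optimization documentation as I recall it for the 2.1-era runtime, so verify them against your release.

    # pdq.properties - run the EGL-generated Java application with capture on
    pdq.captureMode=ON
    pdq.executionMode=DYNAMIC
    pdq.pureQueryXml=captured.pdqxml

After a capture run, the statements collected in the pdqxml file can be bound into DB2 packages with the StaticBinder tool, and the application switched to static execution:

    # Bind the captured statements into DB2 packages (connection values are placeholders)
    java com.ibm.pdq.tools.StaticBinder -pureQueryXml captured.pdqxml \
         -url jdbc:db2://myhost:50000/MYDB -username dbuser -password dbpass

    # Then run the application again with:
    #   pdq.captureMode=OFF
    #   pdq.executionMode=STATIC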
Kathy Zeidenstein and I have put together a tutorial
on Rational Cafe that shows how this integration works and how the technologies can be used by EGL customers writing applications with DB2 data servers.
Enjoy, and let me know if you have any questions.
Hi, I haven't written a blog entry for a couple of months, so if you're new to our blog, I'm the team lead and architect for Data Studio Administrator.
One of the features in the 2.1 release is a quick and easy way to copy objects from one database to another. You can read all about it in this developerWorks article
that members of my team recently published. But let me summarize here.
Suppose I've just made some changes to my development database and application, and I've completed my testing. Now I want to copy the database changes to a test environment for further validation. Let's see how I can do that using Data Studio Administrator 2.1
with Fix Pack 1.
I connect to my source database in the Data Source Explorer (what used to be called the Database Explorer). I like to drill down using the flat folder view (new in 2.1, which presents objects in folders by type) versus the hierarchical mode, so I toggle to that view using the icon in the Data Source Explorer tool bar. I expand the database and select the Tables folder. In the Object List View I copy the tables that I've changed and want to move to the test system.
Now I'm ready to paste these objects into my target.
To paste the objects into a new database, I connect to the target database, expand the database, and select the Schemas folder. I select the schema I'd like these objects pasted into, right-click, and choose Paste.
When I do this, I have the option to also paste dependent objects and data. The wizard helps me to create a change management script to implement these changes. Optionally I can customize the change further. Finally I can deploy them along with my application changes.
Copy and paste is a quick and convenient way to move objects between environments like development and test.
We often talk about static SQL execution as a key benefit of using Data Studio Developer and pureQuery. And those benefits are great. But if you're an Informix developer or DBA, you may still be wondering if Data Studio Developer can help you improve application performance, since IDS doesn't currently support DB2's static execution model.
When we announced Data Studio Developer 2.1, we added many features that benefit Informix database developers and DBAs. Guy Bowerman
covered some of the key 'base' features for both developers and DBAs, including support for UPDATE STATISTICS and improved support for triggers and table fragmentation. But if you're a Java database developer or DBA who cares about performance, take a look at pureQuery capabilities for Informix
as well, including:
- Heterogeneous batching: You can now develop pureQuery code that gains performance benefits by reducing network operations through heterogeneous batching of operations (see the sketch after this list).
- Query response time optimization: DBAs might appreciate this one. If you have existing JDBC-based applications, you can see the response times for each SQL statement executed in your application and replace the most expensive queries with better ones, without accessing or changing the application source code.
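To give a feel for why heterogeneous batching helps, here is a rough plain-JDBC sketch (the table names and connection values are invented). Plain JDBC can batch different statements through java.sql.Statement, but only without parameter markers; pureQuery's heterogeneous batching extends the same round-trip-saving idea to parameterized operations against different tables.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class BatchSketch {
        public static void main(String[] args) throws Exception {
            // Connection values are placeholders
            try (Connection con = DriverManager.getConnection(
                    "jdbc:db2://myhost:9088/MYDB", "dbuser", "dbpass")) {
                con.setAutoCommit(false);
                try (Statement stmt = con.createStatement()) {
                    // Two different statements against two different tables,
                    // sent to the server in one batch instead of two round trips
                    stmt.addBatch("INSERT INTO orders (id, status) VALUES (1, 'NEW')");
                    stmt.addBatch("UPDATE customers SET last_order = 1 WHERE id = 42");
                    stmt.executeBatch();
                }
                con.commit();
            }
        }
    }

The fewer times the driver has to cross the network, the better the throughput - which is exactly the saving pureQuery automates for you.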
I highly recommend downloading Data Studio Developer
, and taking a look at the IDS tutorial for Data Studio Developer
that was recently updated to include some of the new pureQuery features. And stay tuned. I will soon be talking more on developerWorks about “What's new and cool” in our upcoming release.
I was looking at Scott Ambler’s surveys
on IT project success rates. It is very interesting that project success as seen through Scott's surveys presents a more hopeful picture than the Standish Group's Chaos Report
, which in its 2006 refresh reported a 35% success rate and a 46% “challenged” rate. (Nice blog entry summarizing a variety of research on the topic in Dan Galorath’s blog
and 2006 Standish numbers from an SD Times article
.) Standish defined success as “on time, on budget, meeting the spec”, while challenged means they had cost or time overruns or didn’t fully meet the user’s needs. But I digress…
Scott’s data indicates that projects that use evolutionary development methodologies, e.g. Agile
or Rational Unified Process
, fare better than those using traditional waterfall or ad-hoc processes. That’s not surprising given the emphasis on tight collaboration among stakeholders and continuous evolution and validation. Really, it’s pretty intuitive. So I was thinking about key characteristics of iterative methodologies and how they relate to database and data access development. (I know, Scott has already thought about this too.
See his Agile Data
site. And Rafael did a Webcast
on it earlier in the year.) But more specifically, I wanted to look at how our Data Studio portfolio supports evolutionary development methodologies. Yes, there's more to do, but I think what we offer goes a long way towards accelerating solution delivery with high-quality results. Vijay and I are going to do a Webcast on this topic on April 28th, titled Accelerating Solution Delivery for Data-Driven Applications
. Hope you’ll join us.
In some ways, this is also the companion Webcast to Rafael’s Performance Optimization
webcast. In his blog
, he talked about how, from a lifecycle perspective, performance optimization can be broken down into doing it right the first time or fixing it after the fact. His Webcast focused on the latter and this one on the former.
What are your stories about evolutionary methodologies and database development? Have you used Data Studio software in this context?
I’ve been working on DBA solutions lately -- in particular, performance optimization topics. You’ve heard some cool things about our offerings already if you follow the Data Studio blog, such as Alice’s blog
on performance diagnostics or Jeff’s blog
on pureQuery from a systems programmer perspective. One of the things I want to do is bring our different offerings together in a more cohesive context.
Performance optimization can be broken down into two big categories: doing it right the first time, and finding and fixing problems after the fact.
Regarding the latter, first you have to recognize that you have a problem (or a potential problem) and where the source of the problem is – that's what DB2 Performance Expert and its Extended Insight Feature are designed to help you do.
Then you have to fix the problem. Of course, there's always adding more resources – more CPU, more memory, more storage. But if that's not an option, for database performance problems you're going to want to dig in and tune the offending SQL and database objects.
So I’m pulling this together into a scenario for a Webcast on Tuesday, April 21st entitled IBM Integrated Data Management Solutions for Performance Optimization
. You can register here
. I know, a little late notice. But hey, there will be a replay too.
-- Rafael Coss
Hello everyone - first let me introduce myself. I joined the Data Studio development team more than 2 years ago, and since then I've been working on the Data Web Services capability. During that time I gained a lot of knowledge about connecting database assets to the Web, Java and J2EE development, as well as XML processing ... until one day a strange blue box named "DataPower" crossed my path. This blog is about the results of this encounter ...

Information as a Service
A lot has been said about making your data accessible to an SOA environment, the Web, the Cloud, and so on... and IBM, as well as other vendors, provides a wide variety of software tools and frameworks to make all that possible.
But let's look at a common case where data is represented as XML and stored in a relational database:
Very often data is represented as XML messages which flow over the wire. The XML may contain data which comes from, or is going to be written to, a database. It may also contain data and metadata to perform remote procedure calls (RPC), as we see with SOAP. There might be complex business logic at the remote server associated with a message flow - for example, in the form of a stored procedure in DB2 - but we may also just have a simple mapping from XML into relational structures, or we may just store the XML "as-is" in the database - thanks to the pureXML capabilities in DB2.
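To make the "as-is" option a bit more tangible, here is a hedged Java sketch against a hypothetical DB2 pureXML table ORDERS with an XML column DOC; the query side uses DB2 9's XMLEXISTS predicate. The table, column, and connection values are all invented for the example.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class PureXmlSketch {
        public static void main(String[] args) throws Exception {
            try (Connection con = DriverManager.getConnection(
                    "jdbc:db2://myhost:50000/MYDB", "dbuser", "dbpass")) {
                // Store the XML message as-is in the XML column - no shredding
                try (PreparedStatement ins = con.prepareStatement(
                        "INSERT INTO orders (id, doc) VALUES (?, ?)")) {
                    ins.setInt(1, 1);
                    ins.setString(2, "<order id=\"1\"><item price=\"150\"/></order>");
                    ins.executeUpdate();
                }
                // Query it later with an XPath predicate over the stored document
                try (PreparedStatement sel = con.prepareStatement(
                        "SELECT id FROM orders WHERE XMLEXISTS("
                        + "'$d/order/item[@price > 100]' PASSING doc AS \"d\")");
                     ResultSet rs = sel.executeQuery()) {
                    while (rs.next()) {
                        System.out.println("matching order: " + rs.getInt(1));
                    }
                }
            }
        }
    }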
The bottom line is that there are many cases where you need nothing more than a simple mapping definition from XML to an SQL statement or a stored procedure call, and another mapping definition to map result sets and output parameters to XML structures.

Make it fast and secure with the WebSphere DataPower Appliance
And there are again many approaches on how such a mapping and network wiring can be done, but today I want to introduce you to a fairly new way of doing that by using the IBM WebSphere DataPower Integration Appliance XI50
. That's because DataPower has - among many other features - the capability of connecting to databases, including superior support for DB2 on all platforms.
DataPower allows you to define mappings from XML to database calls via XSL. Furthermore, you can define message flows, network protocol endpoints, security, quality-of-service properties, WS-* features, and much, much more via the award-winning Web GUI. And all that without writing a line of code or even generating code! Everything is well contained and controlled inside the DataPower appliance, without the need to change your database server, your legacy data, or your application/business logic contained in stored procedures.

Scenario 1: Enrich messages with database content
In this case, messages passing through a DataPower appliance can be enriched with data stored in a database. It's also possible to write parts of the message, or the complete message, into the database - for example, using the database as an audit log.

Scenario 2: DataPower as an SOA Gateway
Here we declare the database as the target for the message flow. The message data can be stored into or retrieved from the database, or more complex business logic can be invoked in the form of a stored procedure.

Scenario 3: High-performance batch INSERT with DB2
DataPower supports the DB2 batch INSERT feature which can speed up the insertion of multiple records into the database. This can be leveraged for XML shredding. You can easily break up a large XML document into multiple records (rows) by defining the shredding rules in your XSL script. The batch INSERT takes care of writing the individual records quickly into the database.
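As a rough Java analogy for what the appliance is doing on your behalf, shredding plus batch INSERT looks like this at the JDBC level (the table, element, and attribute names are invented for the example; on DataPower the equivalent rules live in your XSL):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import javax.xml.parsers.DocumentBuilderFactory;
    import org.w3c.dom.Document;
    import org.w3c.dom.Element;
    import org.w3c.dom.NodeList;

    public class ShredAndBatchInsert {
        // Break one large XML document into rows and write them in a single batch
        static void shred(Connection con, java.io.InputStream xml) throws Exception {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder().parse(xml);
            NodeList items = doc.getElementsByTagName("item");
            try (PreparedStatement ps = con.prepareStatement(
                    "INSERT INTO order_items (sku, qty) VALUES (?, ?)")) {
                for (int i = 0; i < items.getLength(); i++) {
                    Element item = (Element) items.item(i);
                    ps.setString(1, item.getAttribute("sku"));
                    ps.setInt(2, Integer.parseInt(item.getAttribute("qty")));
                    ps.addBatch();          // queue the row locally...
                }
                ps.executeBatch();          // ...and write all rows in one operation
            }
        }
    }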
Find out more in the developerWorks article "Using WebSphere DataPower SOA Appliances to enable the Information as a Service pattern".
The native DataPower XML processing stack provides wire-speed XML parsing, validation, and transformation. DataPower supports a wide variety of network protocols, which allows connectivity to many different back-end and front-end systems - and at the same time you can be sure that your data is kept safe and reliable with the many security capabilities provided by the appliance.
Last but not least, a well-designed management API, as well as connectivity to registries like the WebSphere Service Registry and Repository (WSRR), allows easy life-cycle management of your artifacts.

Make it even easier by using IBM Data Studio Developer
At this point you may say - "All that sounds great, but why in the world is he posting this in the Data Studio blog?".
Well, you may already be familiar with the Data Web Services feature in Data Studio Developer
(if not, see links below) where you can easily expose stored procedure calls and SQL statements as Web service operations by creating the appropriate J2EE runtime artifacts. But did you know that you can now also create service runtime artifacts for DataPower? The Data Studio Developer tooling generates all the artifacts for a Data Web service - including the WSDL file and, in the case of DataPower, the XSL scripts that perform the mapping from the XML message into database calls. There is no need to code the XSL by hand. All that's left to do is deploy the artifacts on the DataPower appliance.
I wrote a tutorial on this topic to show you how easy it is to use Data Studio Developer to create the Data Web service artifacts and how to configure DataPower to host the generated services. Check it out.

Additional References

DataPower Resources
To get some more ideas on using DataPower and DB2 pureXML I recommend the following two articles:
Even a Masters thesis
evolved out of that work around XML standards in health care.

WebSphere DataPower SOA Appliances on developerWorks

Data Web Services in Data Studio Developer
After 5 years of use on my existing cell phone I decided to upgrade to one of the more feature-rich models that was available on the market. Once I purchased my phone, I really didn't know where to get started. I had all this data from my old cell phone that I wanted to transfer to my new one and start using right away. I was happy to find a seamless way to migrate the data over and even more surprised at how simple and quick it was. Now that the data was migrated over, I learned about new features that really helped to make my life more productive and started wondering how I did things before.
The new ERwin to InfoSphere Data Architect (IDA) migration guide
gives customers a cookbook approach to transferring assets from ERwin to IDA so they can start being productive right away. ERwin has been around for about 20 years, so it's difficult to suddenly pick up a new tool and expect things to be the same. This guide helps by mapping terminology and concepts between the products and by offering best-practice guidelines, including:
- Utilizing packages and subpackages as a concept similar to subject areas in ERwin
- How to properly split your model so that multiple people can work on it
- How to use diagrams effectively in IDA for navigational purposes
This guide was written by Norma Mullin, who has lots of experience using both tools. I strongly recommend that you download this guide from developerWorks today. And our video team put together a great video
that was designed to be a friendly introduction to InfoSphere Data Architect for those who might be using "other tools" today. You should really check it out, too.
-- Anson Kokkat
P.S. IDA doesn't come with GPS or a 5 MP camera, but it does come with tools that would turn a data architect's head.
I totally love performance, and I like the whole process of trying to find the issues. There is nothing cooler than being the hero-of-the-hour by finding the extreme I/O usage, the I/O hot spots, the file system contention, the logical control unit bottlenecks, the horrible buffer pool management, the poor SQL, or any combination of the above.
But what makes it so interesting is the tooling surrounding performance. To be more specific, the fact that I can now tune DB2 for LUW in the same fashion as I would tune DB2 for z/OS. Heresy, you say? Please hear me out.
All through the 1990's and on through today, I've been using Tivoli OMEGAMON XE
for tuning DB2 for z/OS. This tool is great for finding just about any issue or bottleneck, including looking at historical data. In 2003, I had to support DB2 for LUW, and I felt like I was stepping back in time. My biggest complaint was the fact that I had to set up scripts to pull down activity. And finding historical information on DB2 for LUW activity? Forget about it.
But that was then.
What we have now for DB2 for LUW is, quite simply, amazing. DB2 Performance Expert
is a monitoring tool any DB2 person would just love. I have my main console, which is strikingly similar to the OMEGAMON classic view. There is drill-down capability in DB2 Performance Expert that I use when troubleshooting issues. But what do I really like? The fact that I can now look at historical information without a script or agent running on the monitored server!
The Monday morning tasks - you on-call DBAs know what I'm talking about here - are made so much easier with DB2 Performance Expert. I can look at what happened over the weekend so that neither I nor my manager gets blindsided by applications or by my end users. In addition, I can find problems in the here-and-now without physically issuing snapshots, either at the command line processor or within a script. I can find the long-running SQL and easily find the tables it's going after. By knowing the tables, I can go to the next level - is this a one-off SQL situation, or is this a table that needs an index, or one that needs to be isolated either in a buffer pool or through file system placement? DB2 for LUW will no longer be the product that you "poke a stick" at for tuning.
Will DB2 Performance Expert give you the desire to hang on to your turn in the on-call rotation for another week? I can't answer that. But finding the issues simply and easily with this tool will make you the hero-of-the-hour.
The fix packs for the 2.1 release of products in the Data Studio family have arrived!
The fix packs include enhancements and fixes to the Version 2.1 release of IBM Data Studio Developer, IBM Data Studio Administrator, and IBM Data Studio pureQuery Runtime, and to InfoSphere Data Architect 7.5.1. These fix packs are intended to fix problems you may have experienced in the 2.1 release. For additional information or detail on the included fixes, please check out these links:

Data Studio Administrator Fix Pack 1
InfoSphere Data Architect Fix Pack 1
Data Studio Developer Fix Pack 1
For those who already have these products installed on Windows, you will use IBM Installation Manager to apply the fix pack:
- Go to the link to download the compressed file for Fix Pack 1 and then extract it to a temporary directory. For example, C:\temp
- Start IBM Installation Manager by going to Start > All Programs > IBM Installation Manager > IBM Installation Manager
- In IBM Installation Manager, click File > Preferences to launch the Preferences wizard. Click "Add Repository..." and enter the path to the extracted fix pack, for example, C:\temp. Click OK after you have entered the path, then click Apply on the Preferences page and OK.
- Back in the IBM Installation Manager start page, click Update Packages.
- Select IBM Data Studio (for Data Studio Administrator or Developer) or IBM Software Delivery Platform (for InfoSphere Data Architect) as the package you want to update and click Next.
- On the licensing page, read the license agreement and select "I accept the terms in the license agreement" and click Next.
- On the summary page, verify the installation information and click Update. This will begin the installation of the fix pack on your system.
- When the installation is complete you can click Finish, and close IBM Installation Manager.
For those who don't have the products installed, go ahead and download the trial versions of the products from the following links:

Data Studio Developer
Data Studio Administrator
InfoSphere Data Architect
(By the way, you can find all these links together on the Data Studio Community Space.)
- Unzip the Data Studio package you just downloaded and click setup.exe. This will launch/install IBM Installation Manager.
- When prompted to select packages to install, click Check for Other Versions and Extensions. This will show you both the newest IBM Installation Manager, plus the fix pack!
Now you are ready to go!
We also have fix packs for DB2 Performance Expert and the DB2 Performance Expert Extended Insight Feature:

DB2 Performance Expert Version 3.2 Fix Pack 1
DB2 Performance Expert Extended Insight Feature Version 3.2 Fix Pack 1
-- Tina Chen
As you may recall, when we announced Data Studio pureQuery Runtime
2.1 for LUW, one of the new features in the release was the ability to use pureQuery with .NET applications. This support is available for all of the DB2 servers; however, I want to focus a bit on this from a z/OS perspective, since it is primarily these customers who let us know that they had heard and liked the pureQuery for Java story we were telling, but that they needed something like this for .NET as well. They wanted the advantages of static SQL - for security, manageability, and performance reasons.
So we did add that support for .NET in the 2.1 release of the Data Studio pureQuery Runtime and in the latest IBM ADO .NET provider. Although it doesn't have all the rich tools support that Java does, it provides many of the key benefits that Java shops can get - static SQL performance and consistency, static SQL authorization model, and the ability to create uniquely named packages that can help DBAs and system programmers isolate performance problems to a particular application and particular SQL statements. And, since it's using client optimization, that means your applications can get these benefits without having to change source code.
To validate the performance benefit, I'm very happy to announce that we've published the results of our performance study
(using the IRWW benchmark) of the pureQuery support for .NET. I don't want to spoil the surprise, but the numbers are very impressive with huge increases in throughput and dramatic reductions in CPU per transaction.
Also, be sure to see this developerWorks tutorial
. It's a good step-by-step guide to the process of enabling .NET applications to use pureQuery.
It's been a while since I've blogged. I've been spending a lot of time talking to customers about what they want and need from our integrated data management portfolio, and now the whole team is working on some great new capabilities and offerings to help address some of the key pain points I've been hearing about. More on that later.
Right now, I just wanted to draw your attention to our recent announcement of the pureQuery Runtime 2.1 on z/OS. This version of the product has been available on LUW since December and we are happy to announce its availability on z/OS for those shops who run their apps natively on z/OS. There are some excellent capabilities in the 2.1 Data Studio pureQuery Runtime and Data Studio Developer releases -- you can read this What's New article for a good overview, and these videos also show you many of these enhancements through Data Studio Developer. And don't forget Jeff Sullivan's blog entry which gives a lot of good reasons from a z/OS system programmer perspective why he likes pureQuery.
Use pureQuery Runtime for z/OS with stand-alone applications, applications deployed on WebSphere Application Server or other application servers, or with DB2 stored procedures. Data Studio pureQuery Runtime supports both type 2 and type 4 database drivers for DB2 for z/OS V8 and V9.
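For orientation, the driver type mostly shows up in how your application (or stored procedure) obtains its connection. Here's a hedged sketch - the host, port, and DB2 location name are placeholders:

    import java.sql.Connection;
    import java.sql.DriverManager;

    public class ConnectionSketch {
        public static void main(String[] args) throws Exception {
            // Type 4 (network) driver: connect by host, port, and DB2 location
            Connection t4 = DriverManager.getConnection(
                    "jdbc:db2://zhost.example.com:446/LOCDB2A", "user", "pass");
            t4.close();

            // Type 2 (local) driver: connect by location name on the same z/OS image
            Connection t2 = DriverManager.getConnection("jdbc:db2:LOCDB2A");
            t2.close();

            // Inside a Java stored procedure, you would typically reuse the
            // caller's connection via the URL "jdbc:default:connection".
        }
    }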
Hi, all. It's been a while since I've written. I've been really, really busy doing pureQuery POCs, training our technical sales guys and whatnot. My job is to focus on data access development through many different architectures and technologies. Last year I was spending a lot of my time on SOA -- and how data plays such a key role in it. I just noticed that there's an upcoming webcast
that talks about SOA and information management, focusing on strategies for returning business value from an SOA project.
As more and more companies go from exploring and evaluating SOA projects to actual implementation, I think it's really important to learn from these experiences so that you can be more likely to deliver on a successful project. Check out the webcast - it's being sponsored by the System z team and includes insights from David Linthicum
, an independent consultant who has lots of experience in this area.
While SOA as a whole is a much larger topic, I like to think that during that "tiny phase" of implementation, you can use all the help you can get to develop the most efficient data access. Web Services support in Data Studio is one of the many components that can help deliver on the SOA promise. Take a look
at this video that shows how tooling has become sophisticated enough to jumpstart some of this implementation.
This tooling makes it easy for you to re-use existing DML operations that you know are efficient and well-tested (including COBOL-based stored procedures).
If you're interested in doing some exploration with the Data Web Services technologies yourself, I recommend this IBM Redpaper
which is a condensed version of the DB2 for z/OS V9 SOA Redbook
. It's z/OS-focused, uses WebSphere Community Edition for the app server (however, this can be WebSphere Application Server
if needed), and Data Studio Developer
for tooling. Try it out and let me know what you think.
--- Vijay Bommireddipalli
Happy Friday the 13th, everyone! Hope your day was lucky and your weekend is good.
Today I want to dig down a little bit on how data modeling can help DBAs be better at their jobs and become more valuable to the organization. You have probably heard about the key benefits of the IBM Integrated Data Management portfolio, including improving productivity, saving time, reducing costs, and improving collaboration among roles. I’d like to propose that as much as any data development or performance monitoring tool, using InfoSphere Data Architect for data design and discovery can provide tangible (and not so tangible) benefits to DBAs and the organization both in the short term (rapid response to changes) and long term (improved skills, better quality data models, etc.).
Let’s look at skills and productivity in a heterogeneous environment first. Data Architect is a heterogeneous tool. If your shop is like many I see, you have to deal with multiple databases, such as Oracle and DB2 for z/OS. Or DB2 and SQL Server. Or Informix Dynamic Server and DB2 and MySQL. Whatever combination you may have, what you really want is a single tool that works with all these different data servers.
Having a single tool that generates the DDL for several different database vendors means you don't need to be an expert in all of them. It allows DBAs to become more skilled across more products and thus do more with less, and as their skills grow they become more productive and valuable to the enterprise.
Even if you don't have multiple database systems to deal with, InfoSphere Data Architect can help DBAs be more responsive to 'agile' development scenarios (or any scenario in which requirements come in late – all of them?). Designing databases from scratch is always difficult, but having a data modeling tool can really ease the pain. So, if a developer says "I just remembered, we forgot to take the modules from Sally's group into account," it's pretty easy to manage that kind of change with InfoSphere Data Architect. My recommendation is to manage all design changes through the logical data model, and then transform as appropriate to the physical model required for the target database, which, by the way, might very well be a different vendor's between development, test, and production. You don't have to follow this approach, but it is a best practice.
Another scenario: Let's say that your company has recently acquired another company, and it's your job to integrate their database into the main corporate database. And by the way, they have little or no documentation on what is in their databases. No problem. Use the Data Source Explorer in InfoSphere Data Architect to reverse engineer the existing database and get a nice diagram of the physical data model, so you can see what they already have. Then you can compare what they have with what you have (for example, do we really need another customer table? Or can we just migrate their data to ours by adding or modifying a column?). If you need to make changes to accommodate their data model, you can generate the necessary DDL to do so and can even deploy directly from the tool if you like.
Speaking of inheriting databases, another DBA might pass you a database model that has cryptic table names or cryptic schema names. Just looking at a table name, you can't tell what it means, and it doesn't match anything else you have seen before. InfoSphere Data Architect can help you "decipher" the cryptic database model against a glossary that defines your enterprise standards. The glossary is a way for you to enforce naming conventions or decipher a table you created in the past. If there are any discrepancies, or a newly created table doesn't match the glossary, the tool reports them back to you so you can fix them.
The nice thing about Data Architect, and what sets it apart from other modeling tools, is that it includes some cool capabilities in the Data Source Explorer, such as being able to sample live data. And because it integrates seamlessly with products like Rational Software Architect for WebSphere Software
, it’s that much easier to collaborate with the application architects and exchange models with them.
Anyway, this is a lot of words, I know. I recommend that you check out this great new 'how to' demo
on developerWorks that goes through some basic scenarios so you can see the tool in action. If you want more details on the capabilities, go to the web site
, post a question using the comment link below, or send me a mail at ansonk at ca.ibm.com.
-- Anson Kokkat
Presenting software products can sometimes be downright hard.
I recently participated in a customer lunch-and-learn seminar called "Recession Busting Data Management Software". The seminar was absolutely brilliant. We talked to potential customers about how to save money, right now in this down economy, with IBM software. The presentations were brief, 20-minute talks on IBM Optim Data Growth Solution
and Test Data Management Solution
, Data Studio Administrator
, Data Studio pureQuery Runtime
, and DB2 Storage Optimization Feature.
These are all particularly well suited to saving money, and they are really cool tools. My part was on Data Studio and pureQuery. It started nicely, describing pureQuery Runtime and my reasons why it is such a money saver. The ability to have WLM schedule the DDF work into other service classes, with statically bound "named" packages that are easily managed with Data Studio Developer, resonates with DB2 for z/OS customers.
All went well until I ran into slides that were not coherent in the aggregate. Simply put, it was not a smooth flow.
But, quoting a very famous radio legend, here is the "rest of the story".
I had created the presentation from several other presentations and combined the slides very quickly while in a hotel room in Austin. Then, I proceeded to give a different
presentation later that afternoon at SHARE in Austin. But that wasn't the end of it. The next morning, I went to the airport at 8 AM thinking my flight was at 10 AM but in reality it was boarding as I arrived at baggage check-in. Needless to say, I did not arrive home until 7 PM instead of 11 AM. In my original plan, the day I lost waiting for the next flight was the day I was going to get to know my slides and practice them.
In retrospect, there is a bigger lesson behind the poorly delivered presentation and the missed flight - have a plan B, but ALWAYS be prepared.
Until next time,
In my last blog
I talked about the tools associated with InfoSphere Foundation Tools, including my product, InfoSphere Data Architect
. However, I wanted to show you that most of what I was talking about has substance, and that there is true integration among the tools – it's not just marketing!
In Denis Vasconcelos's latest article, Understanding leads to Trust: Sharing a Common Vocabulary across InfoSphere Foundation tools
, he really drives home the message that a common understanding of business terms can help improve communication and enforce standards across IT and business organizations. His article shows you how to import your existing business concepts into a business glossary (InfoSphere Business Glossary
with InfoSphere Metadata Workbench
) and then use that glossary within InfoSphere Data Architect to do such things as enforce naming standards in data models, which of course means that applications built on the resulting database will also use correct terms that are meaningful to the business.
I like how the article shows how all of these products are interconnected, and how the various technologies have been designed to make sure that you are doing the most with your metadata.
Read the article and let me know your thoughts... I am especially interested to know whether this set of tools meets your objective of managing metadata effectively. If there is something missing, let me know. I really think we have a unique offering with this set of tools, something that really stands out from the rest of the crowd.