Announced today: New pricing options for DB2 for z/OS running new workloads! All you data center folks who lament to us that pricing for "other" databases can't be compared to DB2 for z/OS - rejoice!!
Announced today, and already posted here, is this gem of a news item:
IBM is also announcing the immediate availability of DB2 for z/OS Value Unit Edition, which provides a new one-time-charge offering that enables the deployment of new application workloads. This offering strengthens the role of System z as a cornerstone for key business initiatives such as SOA, Data Warehousing, Business Intelligence and packaged applications such as SAP. DB2 for z/OS Value Unit Edition and IBM Information Server enable System z clients to further deliver trusted information for their dynamic warehousing requirements.
Just updated: Here is where you can find the gory details.
Is this cool or what? Doesn't this just remove the last and final objection that the application architects have for leaving DB2 for z/OS out of the running for those new applications?
Now, lest you think I am somehow reflecting a non-developer perspective: I have spent most of my efforts in DB2 for z/OS developing exactly the kinds of new technologies designed to attract new workloads, and even I have heard the pricing objection, so it's perfectly fair for me to mention this in my DW space. And heck, since I am a developer, not a pricing person by any stretch of the imagination, if this has gotten my attention, you know it's big news!
Bring on those new workloads! And then come to us in development and tell us what you need to bring more work onto z, OK?
As many of you know, I changed jobs early in 2008 to switch my developer focus from DB2 for z/OS over to unstructured text technologies. Since many DB2 folks are heads-down in structured data, that whole "content" side is a bit of a mystery. Sure, you know about search, and you likely have a vague understanding of what it means to have a text index supporting keyword search. But really, there's so much more...
I've been learning a ton about all of this, which is to be expected after over a year (!) in this job. I have some terrific colleagues in text analytics, in research, development and services, who continually amaze me with their breadth and depth of knowledge, as well as their passion for the topic and their eagerness to help customers.
I happen to love linguistics, so this job is a great fit for me. I love to read, and the turn of a phrase in a book or a song lyric brings me joy. I like to think about the best way to phrase things and ways to interpret sentences. The more I interact with non-native English speakers, the more I appreciate both the beauty and the limitations of language, and the inherent difficulties in both generating and understanding sentences. I truly enjoyed all my study of French in school, too. It has always seemed a bit of intellectual snobbery to say something like "the French have a word for that", but anyone who knows more than one language knows it to be true -- language translation is never exact, and concepts cannot always be expressed well, even in one's native language. The bottom line is that it's so interesting for me to dig into and help shape the technology and rules around extracting meaning from unstructured text.
Last month I was talking with a long-time friend and colleague who was here with her company at the IBM Silicon Valley Lab for a technology briefing. She and I have had several conversations at conferences over the years, on topics like O-O databases, Java, and XML as they were emerging toward the mainstream. In the briefing, we talked a bit about unstructured data in the context of the Information Agenda, and one of their company's thought leaders said that unstructured data inclusion is implied. Cool, but, um, how exactly? Their (very reasonable) response when I probed a little further was that they needed to hook up with the business guys on that. YES! That's where I think we all absolutely have to start -- what is the unstructured data, and what questions do you need answered from it for business value? Then we go into more of the logistics around that.
Specifically, just this week I've worked on a couple of items that can help me bring some meaning to what text analytics is all about for folks who haven't been exposed to it deeply. The first was working on a report for a well-known analyst group, where we describe our information access technologies and offerings for unstructured data. And the second is a new offering that I hope will become available soon, to help quantify needs and the specific business value that can be derived from unstructured data. If you are curious about any specifics, a great place to start digging into and even playing with some text analytics technology is the LanguageWare capabilities, SystemT, and UIMA.
If you're interested in this kind of stuff, please let me know, or contact your local IBM rep and ask them (and tell them to ask me if they want a starting contact!). I'm passionate and eager to help! :-)
I think that most of you reading this work for large companies, and our U.S. large companies tend to have pretty active legal departments. One of the hot topics these days around litigation is the investigation of email to answer legal requirements for evidence. Yep, they're likely keeping all of your email, and are required to comply when asked to provide the relevant ones as part of a lawsuit. Getting that set right is a big deal.
Now, I'm not a lawyer. I do happen to come from a family of lawyers, but that's neither here nor there for this discussion. The group where I work in IBM's Information Management has just produced a pretty cool part of the eDiscovery puzzle. It's called eDiscovery Analyzer. As you can see in the announcement letter, it works in conjunction with other IBM products to analyze email content in a repository.
The cool part is what's under the hood here. Based on search and text analytics built on the open Unstructured Information Management Architecture (known as UIMA to those who know and love it), this product processes the text inside the emails as well as the associated information about them. This processing in turn allows a legal analyst to work with and filter on entities extracted from the email, such as people and company names, along with metadata like sender, recipient, and date. Combine that with powerful free-text search and you really have some amazing capability to categorize, gather, and flag... this really helps a legal staff when they're asked to provide exactly what's needed and no more.
Now... what if you had this kind of capability on other information in your enterprise, besides legal email repositories? What would you do with it? What other business problems could this kind of technology solve for you?
I heard an interesting story on the news last week, about how the individual states of the U.S. were graded on how they use information. The state I live in, California, got a C+. How can this be, with our advanced technology centers in Silicon Valley?
I found the article online here; it makes some interesting points, although nothing specific about California.
The article says:
When all is said and done, a state’s skill with information is found at the intersection of three distinct operations: the willingness to share data, the capacity to generate good information, and the ability to get those who should use the data to do so.
Well, that sounds a lot like stuff that I have talked about when describing IBM's Information on Demand strategy. Is your organization good at doing this? I particularly noted the last point in the article, because some of the states complain that their legislators just aren't interested in using the data! Maybe we information professionals have to make that easy (and fun?) to do.
What about the highest-graded states? The article had this to say about one of them:
In Washington State, Governor Christine Gregoire held a series of town hall meetings on the budget to communicate results to citizens and follow up on the budgetary priorities she had previously established with much citizen input. “We want to give concrete information about whether a difference has been made or hasn’t.”
Yep... this is what everyone wants to know. What did we say we'd do? Did it make a difference? In fact, I've been trying to get this type of information from my financial analyst for some time!
What about states that were graded worse than California?
Some state employees in Rhode Island are still operating with typewriters—electric, of course, but still a far cry from the ability to share information in a database. New Hampshire has such weak data-sharing systems that it doesn’t know how much it spends each month—kind of like an average Joe who’s lost his checkbook.
At the opposite end of the spectrum, there’s Wyoming. Its transportation department has linked geographic information systems to financial systems and now knows with exact specificity how money is being spent, down to the cost of the salt used between each mile marker on the state’s snowy roads.
OK, well, perhaps that is an example of too much information! :-)
Some of my IBM colleagues have created a pretty cool idea - that we, the community of folks with an interest in IBM's information management technology, should designate a day to connect virtually online. This means not just reading content, but actually taking a step further and participating.
I've always seen online social networking tools as extensions of what is done better in person, and a pretty good substitute for when it's just not practical to be in person. This goes back years and years, to online forums, Prodigy (remember that?), and so on. If you treat your participation online as much like an in-person event as you possibly can, you'll get the most out of it.
Say, for example, you attended a talk at a conference and gained a lot of useful knowledge from it, and then found yourself face-to-face with the speaker right afterwards. You'd say "thanks, I learned a lot from your talk". And if you were sitting at lunch and someone said "Do you know anyone here who can help me with an SQL issue?", you'd point them across the room to where your favorite SQL expert sat. Or you'd do your best to answer the SQL issue yourself.
What we're thinking is that perhaps if we picked a day and asked everyone to speak up in just one small way, we might get some folks more comfortable with participating online, and everyone would benefit - make some contacts, get some questions answered, reconnect with someone you met in person, etc.
So, this Wednesday October 1, get out there to your favorite Information Management online sites and find a way to speak up. There are more ideas and links mentioned here.
We now have the capability in DB2 9 for z/OS to search text data that is stored in DB2 for z/OS using SQL statements. Wahoo!
You mean you missed the announcement?
And you just followed that link and still couldn't find it? It's under "utilities", no, it's not that kind of utility, but still, that's where it is.
What's added are built-in functions, CONTAINS() and SCORE(), along with a text search server that runs on a separate, non-z/OS server. For more details, see the announcements!
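To give you a feel for it, here's a minimal sketch of a query using both functions. The EMP_RESUME table and RESUME column are hypothetical, and a text search index must already exist on the column:

SELECT EMPNO,
       SCORE(RESUME, 'database administration') AS RELEVANCE
FROM EMP_RESUME
WHERE CONTAINS(RESUME, 'database administration') = 1
ORDER BY RELEVANCE DESC;

The CONTAINS() predicate filters to the matching rows, and SCORE() lets you rank them by relevance.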
One prerequisite for this is to have a WLM application environment set up to run a Java user-defined function. This is the part the early customers I've been working with have stumbled over the most. The good news is that it's something you can set up even if you are not quite to DB2 9 for z/OS yet. I'll post some more on the setup steps for that.
So, what kind of data are you going to search, and what kinds of searches are you going to do?
Also among the 'recommended practices' that I often present on DB2 for z/OS stored procedures is this one:
- Don't call the metadata stored procedures
Many invocations of DB2 for z/OS stored procedures come from a Java(TM) or a CLI application. These programs access DB2 for z/OS through a "driver" program. These driver programs have SQL packages bound to DB2 for z/OS, and when the application invokes a stored procedure, a fair amount of code is executed in the driver program.
A CLI program (the term CLI is often used interchangeably with ODBC) is usually something running from a Microsoft(TM) application accessing DB2 for z/OS. The DB2 Connect software that includes the driver for DB2 for z/OS has some smarts in it: if the application is coded using incorrect data types for the stored procedure being invoked, the driver recovers, invokes the SQLPROCEDURECOLS metadata stored procedure on DB2 for z/OS to find out what the data types are, and then re-sends the stored procedure call to DB2 for z/OS. Yes, you got it right: this means that a poorly coded application can make three stored procedure calls for every SQL CALL it's trying to do -- one to the original SP, one to SYSIBM.SQLPROCEDURECOLS, and then one more to the original SP with the correct parm types!

How do you recognize this? Well, you could run a client-side DRDA trace and it will show up there. Or you can look at statistics at the server. Or you can set DESCRIBEPARAM=0 in the db2cli.ini file on the client and let the applications get the error SQLCODE -301, because now the driver won't make the metadata SP call and will instead let the application fail for using the wrong data type. You get the same result if you issue a -STOP PROCEDURE(SYSIBM.SQLPROCEDURECOLS) ACTION(REJECT) command on the DB2 for z/OS server.
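For reference, here's a minimal sketch of what that db2cli.ini setting looks like; the [MYDB2] data source section name is hypothetical:

[MYDB2]
DESCRIBEPARAM=0

With this in place, a mismatched data type surfaces as SQLCODE -301 in the application, rather than as a hidden extra round trip to SYSIBM.SQLPROCEDURECOLS.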
For a Java(TM) program, the current driver is the DB2 Universal Java Driver, and it will not invoke the metadata stored procedure. So this is an excellent reason to switch to the current driver, because the older version of the driver went through the CLI code path and had the same problem as described above.
Note that if you invoke a stored procedure from the command line (the CLP), that code will always invoke the SQLPROCEDURECOLS stored procedure, since the command line doesn't give you any way to say what data types the arguments are.
Now, if you are stuck with a CLI program that you can't modify, what can you do to improve the performance of SQLPROCEDURECOLS? Well, APAR PK57017 just shipped; it reduces the size of the package for this stored procedure, so you can free up some EDM pool usage and get a small CPU usage improvement. You can also be sure you run RUNSTATS so that the data access for this SP is as efficient as it can be. I have also heard rumors of some customers creating additional indexes on the tables used by SQLPROCEDURECOLS, but I don't have any specifics on that, sorry.
Last week in Athens I attended a presentation by Julian Stuhler of Triton Consulting on the spatial support in DB2 for z/OS. I was of course aware of the support, but more from a DB2 internals point of view. It was great to get an external perspective on it from someone who has been working with it.
The key information is that spatial data can be points, lines, or polygons (including multi-part polygons). If you think about it, this is really powerful. One example that Julian used is that an address is a point, and a flood zone is a polygon. So now you can ask "is the house in a flood zone?", which is really "is this point inside the polygon?" Cool stuff! I can really imagine situational applications using this to combine data in DB2 with other data.
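To make that concrete, here's a minimal sketch of the flood zone question as SQL. The ADDRESSES and FLOOD_ZONES tables are hypothetical, and I'm assuming the DB2GSE.ST_Within spatial function, which returns 1 when the first geometry lies inside the second:

SELECT A.ADDRESS_ID
FROM ADDRESSES A, FLOOD_ZONES F
WHERE DB2GSE.ST_Within(A.LOCATION, F.ZONE_BOUNDARY) = 1;

Here LOCATION would hold the point for each address, and ZONE_BOUNDARY the polygon for each flood zone.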
Complete documentation on the spatial features can be found in this book.
I thought I'd share a nice little AQL output example that might be of help in your BigInsights text analytics programming.
Consider that you've created a very complex AQL view, one which extracts a very detailed concept from text. This might take into account many different text constructs and match a large variety of different text in documents.
If you want to take that view and use it to simply identify which documents have an occurrence of this advanced concept, you can do it like this. In this case, I've assumed the complex view is named "Division".
create view DivisionCount as
select Count(*) as dc from Division D;
create view DivisionBoolean as
select case when Equals(DC.dc, 0) then 'no'
       else 'yes'
       as hasDivision
from Document R, DivisionCount DC;
To test this using the Text Analytics tutorial example, I created a new document called text.txt which doesn't have any matches for Division in it. When I run it and select to see DivisionBoolean in the output, the result is 'no'.
We made a change in DB2 9 for z/OS in order to better package Java(TM) code. We now ship DB2 Java code, such as that required for our XML schema registration and text search password encryption, to be installed in an HFS/zFS directory like /usr/lpp/db2/db2910_base.
If your installation lets SMP/E default to that directory, then the same set up you use for Java stored procedures in DB2 for z/OS V8 will continue to work. But if you change that, then you need to set a new ENVAR in your JAVAENV dataset such as "DB2_BASE=/usr/lpp/db9a/db2910_base" so we can find our code. Otherwise, you'll see this error when the WLM-SPAS tries to start up: java.lang.NoClassDefFoundError: com.ibm.db2.dsnx9.JARLoader
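To show where that ENVAR lives, here's a minimal sketch of JAVAENV contents; all of the paths here are hypothetical and will differ in your shop:

ENVAR("DB2_BASE=/usr/lpp/db9a/db2910_base",
      "JCC_HOME=/usr/lpp/db9a/db2910_jdbc",
      "JAVA_HOME=/usr/lpp/java/J5.0"),
MSGFILE(JSPDEBUG)

The key point is that DB2_BASE must point at wherever SMP/E actually installed the DB2 base Java code.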
I know, there are an awful lot of "moving parts" to setting up for running Java stored routines. You need the DB2 Universal Java Driver, the z/OS JVM, and JCL and a JAVAENV dataset. The stored procedures redbook has a good chapter on setup. It's a complex environment, but a very powerful one, too!
We are often asked where to find the sample files for our text analytics tutorial. It's a small set of IBM quarterly reports. I've uploaded them here, just click on that to download them. Happy coding!!
I found this article online today, which highlights the importance of enterprise search.
Company networks contain mountains of structured and unstructured data archived in numerous formats, some of them decades old and stored in secure servers.
IBM also is building a portfolio of enterprise search tools and services, under the OmniFind brand.
Of course you know that DB2 for z/OS data contains mountains of information! This is what our just-released text search support addresses for DB2 for z/OS data - character, binary, and XML. And it's built on OmniFind technology. With this support, you can do text search queries using the built-in CONTAINS() function. It's provided with DB2 9 for z/OS and the no-charge accessories suite.
Now, I know that this is just one piece of enterprise search. In fact, I joke with my colleagues that all of the work that we've put into this is "just an SQL statement". :-) But hey, it's an important piece - it can keep the DB2 for z/OS data where it is and "let the searches come to us".
When I describe native SQL procedures in DB2 9 for z/OS, I often hear variations of these types of questions:
- Doesn't the external WLM-managed infrastructure provide some throttling of stored procedures? What's going to happen when this is gone?
- Can DBM1 handle the same amount of concurrent stored procedures as multiple WLM-SPAS?
- User routines only use below the bar storage, so how much below the bar storage is available in DBM1 for these native SQL procedures?
In order to answer this, I have to explain a little bit about how DB2 handles native SQL procedures. They are simply packages, with "runtime structures" for the SQL statements to be executed. So, when you invoke a native SQL procedure, DB2 finds and loads the package and executes the statements.
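If you haven't seen one yet, here's a minimal sketch of a native SQL procedure; the EMP table and all the names are hypothetical. The point is that there is no external program at all, just SQL:

CREATE PROCEDURE RAISE_SALARY
  (IN P_EMPNO CHAR(6), IN P_PCT DECIMAL(5,2))
  VERSION V1
  LANGUAGE SQL
BEGIN
  -- All the logic is SQL; DB2 runs it straight from the package
  UPDATE EMP
     SET SALARY = SALARY * (1 + P_PCT / 100)
   WHERE EMPNO = P_EMPNO;
END

When an application issues CALL RAISE_SALARY(...), DB2 just loads the package for version V1 and executes those statements on the caller's thread.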
In contrast, an external stored procedure with SQL needs a complete language environment for the user program, and then that external program comes back to DBM1 to get its package loaded and SQL statements executed. That's what needs to be "throttled" - the external program execution environments and their associated TCBs. When an incoming stored procedure request is queued for WLM, the DB2 thread is suspended in DBM1. Many customers have experienced delays and DBM1 storage problems when their WLM goals weren't adjusted properly and the queued requests built up. The solution is to either adjust the WLM goals, or else adjust the limit on DB2 threads (local and/or distributed).
With native SQL procedures, the thread will just switch packages when the call statement is processed and run the procedure - no queuing. The storage used for the local variables is above the bar and managed with efficient algorithms. The maximum concurrent first-level native SQL procedures is effectively the same as your setting for maximum DB2 threads. (What I mean by first-level is that a native SQL procedure may have a nested call to another native SQL procedure, so the actual number of concurrent native SQL procedures may be even higher).
So, I guess the way I'd answer the questions is:
- Yep. When it's gone, SPs will run much more efficiently
- Yep - in fact likely more
- n/a - SQL procedures aren't "really" user routines - they are a pre-defined set of SQL statements, and they don't use below the bar storage
Of course I recommend that you test your native SQL procedures in your environment and measure for yourself, and do capacity planning based on the results of your testing. Native SQL procedures will use some DBM1 storage, after all, and how much depends on what statements and what variables are used in the program.
Oh, and if you didn't recognize it, the "What, me worry?" is a reference to the signature quote from Alfred E. Neuman. It's more than a little tongue-in-cheek.
Among the 'recommended practices' that I often present on DB2 for z/OS stored procedures is this one:
- No more than 512 SP's in a WLM
Let me explain why I recommend this. It's actually at the bottom of the list, and that's because it doesn't come up that often. But it has, and when it does, it can cost in I/O. DB2 has a Language Environment table of load modules in each stored procedures address space. For stored procedures defined STAY RESIDENT YES, we only have room for 512 load modules in that table. A load module has to be in the table in order for DB2 to invoke it. So, starting with the 512th, we'll delete it from the table after we call it, even if it's STAY RESIDENT YES. And come to think of it, we have separate tables for TYPE MAIN and TYPE SUB.
So to be completely accurate, the recommendation could actually say something like this:
- No more than 512 different load modules for STAY RESIDENT YES SP's in a WLM application environment, that are all either PROGRAM TYPE MAIN or PROGRAM TYPE SUB and invoked during the lifetime of a single instance of a WLM-SPAS.
For that last bit, remember that different invokers of a stored procedure that end up classified in different WLM enclaves will not have their SPs run in the same instance of a WLM-SPAS.
What's a WLM-SPAS? It's what I use to abbreviate a "WLM-established stored procedures address space".
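If you want a rough idea of how close you are to the limit, here's a minimal sketch of a catalog query; I'm assuming the SYSIBM.SYSROUTINES columns WLM_ENVIRONMENT, STAYRESIDENT, PROGRAM_TYPE, and EXTERNAL_NAME:

-- Count candidate STAY RESIDENT YES load modules per environment and type
SELECT WLM_ENVIRONMENT, PROGRAM_TYPE,
       COUNT(DISTINCT EXTERNAL_NAME) AS RESIDENT_MODULES
FROM SYSIBM.SYSROUTINES
WHERE STAYRESIDENT = 'Y'
GROUP BY WLM_ENVIRONMENT, PROGRAM_TYPE;

This counts distinct load modules, since several procedure definitions can share one load module. It's only an upper bound, of course; what matters is what one instance of a WLM-SPAS actually invokes during its lifetime.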
And this post has motivated me to get a more recent copy of my stored procedures recommended practices presentation out online!
I was pointed to this interesting article from the New York Times, about a new technology invented by two software engineers, Jonathan Lindo and Jeffrey Daudel, to "replay" the events that led up to a system crash. Not that I really want to see my "blue screen of death" from yesterday again, but if it would help identify the problem and get a fix, I could probably live through it a couple more times.
Reading the article, I was struck by a couple of points. They quote Lindo as saying that the inspiration came to them as "Wouldn't it be great if we could just TiVo this and replay it?" And then it says this:
Innovation by analogy is a powerful concept, says Giovanni Gavetti, an associate professor at the Harvard Business School who, with his colleague Jan W. Rivkin, has published research on how businesses can use analogic reasoning as a strategic tool. Human beings are analogy machines, he notes, dealing with new information by comparing it to things they already know something about.
That's true, I often try out analogies when I'm trying to understand or explain something. And I can really see how that could lead to innovations, as well as to some odd product evolutions. For a consumer example, I love how the iPhone lets me listen to my voicemail messages in any order, instead of sequentially, which must have been a leftover paradigm from when messages were stored on an analog tape. I can picture someone saying - "why can't I access my messages like I read my email?" - and voila - innovation.
Then I started wondering just how much you could tinker with the crash replay. Could you start eliminating concurrently-running applications, for example, to see if any of them contributed to the crash? And could you test a fix with the replay to see if it fixes the crash?
I also wonder whether IBM's customers would voluntarily seek out software like this to help them narrow down problems. It's not from IBM, and I really don't know any more about it than is in the article above. It's from a company called Replay Solutions, and it runs on several versions of the Microsoft Windows operating system. So, no mainframe support yet (grin). But you could ask them about it!