This blog is for the open exchange of ideas relating to IBM Systems, storage and storage networking hardware, software and services.
(Short URL for this blog: ibm.co/Pearson )
Tony Pearson is a Master Inventor, Senior IT Architect and Event Content Manager for [IBM Systems Technical University] events. With over 30 years at IBM, Tony is a frequent traveler, speaking to clients at events throughout the world.
Lloyd Dean is an IBM Senior Certified Executive IT Architect in Infrastructure Architecture. Lloyd has held numerous senior technical roles during his 19-plus years at IBM. Most recently, Lloyd has been leading efforts across the Communication/CSI Market as a senior Storage Solution Architect/CTS covering the Kansas City territory. In prior years, Lloyd supported industry accounts as a Storage Solution Architect, and before that as a Storage Software Solutions specialist in the ATS organization.
Lloyd currently supports North America storage sales teams in his Storage Software Solution Architecture SME role on the Washington Systems Center team. His current focus is IBM Cloud Private, and he will be delivering and supporting sessions at Think 2019 and Storage Technical University on the value of IBM storage in this solution, a key part of the IBM Cloud strategy. Lloyd maintains Subject Matter Expert status across the IBM Spectrum Storage software solutions. You can follow Lloyd on Twitter @ldean0558 and on LinkedIn as Lloyd Dean.
Tony Pearson's books are available on Lulu.com! Order your copies today!
Safe Harbor Statement: The information on IBM products is intended to outline IBM's general product direction and it should not be relied on in making a purchasing decision. The information on the new products is for informational purposes only and may not be incorporated into any contract. The information on IBM products is not a commitment, promise, or legal obligation to deliver any material, code, or functionality. The development, release, and timing of any features or functionality described for IBM products remains at IBM's sole discretion.
Tony Pearson is an active participant in local, regional, and industry-specific interests, and does not receive any special payments to mention them on this blog.
Tony Pearson receives part of the revenue proceeds from sales of books he has authored listed in the side panel.
Tony Pearson is not a medical doctor, and this blog does not reference any IBM product or service that is intended for use in the diagnosis, treatment, cure, prevention or monitoring of a disease or medical condition, unless otherwise specified on individual posts.
On Tuesday, I covered much of the Feb 26 announcements, but left the IBM System Storage DS8000 for today so that it can have its own special focus.
Many of the enhancements relate to z/OS Global Mirror, which we formerly called eXtended Remote Copy or "XRC", not to be confused with our "regular" Global Mirror that applies to all data. For those not familiar with z/OS Global Mirror, here is how it works. The production mainframe writes updates to the DS8000, and the DS8000 keeps track of these in cache until a "reader" can pull them over to the secondary location. The "reader" is called System Data Mover (SDM), which runs in its own address space under the z/OS operating system. Thanks to some work my team did several years ago, z/OS Global Mirror was able to extend beyond z/OS volumes and include Linux on System z data. Linux on System z can use a "Compatible Disk Layout" (CDL) format (now the default) that meets all the requirements to be included in the copy session.
IBM has over 300 deployments of z/OS Global Mirror, mostly banks, brokerages and insurance companies. The feature can keep tens of thousands of volumes in one big "consistency group" and asynchronously mirror them to any distance on the planet, with the secondary copy recovery point objective (RPO) only a few seconds behind the primary.
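The write-track-drain flow above can be reduced to a toy model (illustrative Python only, not the actual SDM implementation; all class and variable names here are my own invention):

```python
from collections import deque

class PrimaryDS8000:
    """Primary disk system: applies writes and tracks updates in cache."""
    def __init__(self):
        self.volumes = {}
        self.pending = deque()   # timestamped updates awaiting the reader

    def write(self, vol, data, ts):
        self.volumes[vol] = data
        self.pending.append((ts, vol, data))

class SystemDataMover:
    """The SDM "reader" at the secondary site: pulls updates and
    applies them in timestamp order, forming a consistency group."""
    def __init__(self, primary):
        self.primary = primary
        self.secondary = {}      # secondary-site copy of the volumes

    def drain(self):
        batch = sorted(self.primary.pending)   # order by timestamp
        self.primary.pending.clear()
        for _, vol, data in batch:
            self.secondary[vol] = data

primary = PrimaryDS8000()
sdm = SystemDataMover(primary)
primary.write("VOL001", "record-A", ts=1)
primary.write("VOL002", "record-B", ts=2)
sdm.drain()   # secondary now trails the primary by only the drain interval
```

Because the reader applies updates in timestamp order across all volumes in the session, the secondary copy is always a consistent point-in-time image, just a few seconds behind.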
Extended Distance FICON
Extended Distance FICON is an enhancement to the industry-standard FICON architecture (FC-SB-3) that can help avoid degradation of performance at extended distances by implementing a new protocol for "persistent" Information Unit (IU) pacing. This deals with the number of packets in flight between servers and storage separated by long distances, and can keep a link fully utilized at 4Gbps FICON up to 50 kilometers. This is particularly important for the z/OS Global Mirror "reader", the System Data Mover (SDM). By having many "reads" in flight, this enhancement can help reduce the need for spoofing or channel-extender equipment, or allow you to choose lower-cost channel extenders based on "frame-forwarding" technology. All of this helps reduce your total cost of ownership (TCO) for a complete end-to-end solution.
This feature will be available in March as a no-charge update to the DS8000 microcode. For more details, see the [IBM Press Release].
z/OS Global Mirror process offload to zIIP processors
To understand this one, you need to understand the different "specialty engines" available on the System z.
On distributed systems where you run a single application on a single piece of server hardware, you might pay "per server", "per processor" or lately "per core" for dual-core and quad-core processors. Software vendors were looking for a way to charge smaller companies less, and larger companies more. However, you might end up paying the same whether you use a 1GHz Intel or a 4GHz Intel processor, even though the latter can do four times more work per unit time.
The mainframe has a few processors for hundreds or thousands of business applications. In the beginning, all engines on a mainframe were general-purpose "Central Processor" or CP engines. Based on their cycle rate, IBM was able to publish the number of Million Instructions per Second (MIPS) that a machine with a given number of CP engines can do. With the introduction of side co-processors, this was changed to "Millions of Service Units" or MSU. Software licensing can charge per MSU, and this allows applications running in as little as one percent of a processor to get appropriately charged.
One of the first specialty engines was the IFL, the "Integrated Facility for Linux". This was a CP designated to only run z/VM and Linux on the mainframe. You could "buy" an IFL on your mainframe much cheaper than a CP, and none of your z/OS application software would count it in the MSU calculations because z/OS can't run on the IFL. This made it very practical to run new Linux workloads.
In 2004, IBM introduced "z Application Assist Processor" (zAAP) engines to run Java, and in 2006, the "z Integrated Information Processor" (zIIP) engines to run database and background data movement activities. By not having these counted in the MSU number for business applications, it greatly reduced the cost for mainframe software.
Tuesday's announcement is that the SDM "reader" will now run on a zIIP engine, reducing the costs for applications that run on that machine. Note that the CP, IFL, zAAP and zIIP engines are all identical cores. The z10 EC has up to 64 of these (16 quad-core processors) and you can designate any core as any of these engine types.
Faster z/OS Global Mirror Incremental Resync
One way to set up three-site disaster recovery protection is to have your production synchronously mirrored to a second site nearby, and at the same time asynchronously mirrored to a remote location. On the System z, you can have site "A" using synchronous IBM System Storage Metro Mirror over to nearby site "B", and also have site "A" sending data over to site "C" using z/OS Global Mirror. This is called "Metro z/OS Global Mirror", or "MzGM" for short.
In the past, if the disk in site A failed, you would switch over to site B, and then send all the data all over again. This is because site B was not tracking what the SDM reader had or had not yet processed. With Tuesday's announcement, IBM has developed an "incremental resync" where site B figures out what the incremental delta is to connect to the z/OS Global Mirror at site "C", and this is 95% faster than sending all the data over.
IBM Basic HyperSwap for z/OS
What if you are sending all of your data from one location to another, and one disk system fails? Do you declare a disaster and switch over entirely? With HyperSwap, you only switch over the disk systems, but leave the rest of the servers alone. In the past, this involved hiring IBM Global Technology Services to implement a Geographically Dispersed Parallel Sysplex (GDPS) with software that monitors the situation and updates the z/OS operating system when a HyperSwap has occurred. All application I/Os that were writing to the primary location are automatically re-routed to the disks at the secondary location. HyperSwap can do this for all the disk systems involved, allowing applications at the primary location to continue running uninterrupted.
HyperSwap is a very popular feature, but not everyone has implemented the advanced GDPS capabilities. To address this, IBM now offers "Basic HyperSwap", which will actually be shipped as IBM TotalStorage Productivity Center for Replication Basic Edition for System z. This will run in a z/OS address space, and use either the DB2 RDBMS you already have, or provide an Apache Derby database for those few out there who don't have DB2 on their mainframe already.
Update: There has been some confusion on this last point, so let me explain the key differences between the different levels of service:
Basic HyperSwap: single-site high availability for the disk systems only
GDPS/PPRC HyperSwap Manager: single- or multi-site high availability for the disk systems, plus some entry-level disaster recovery capability
GDPS/PPRC: highly automated end-to-end disaster recovery solution for servers, storage and networks
I apologize to all my colleagues who thought I implied that Basic HyperSwap was a full replacement for the more full-function GDPS service offerings.
Extended Address Volumes (EAV)
Up until now, the largest volume you could have was only 54 GB in size, and many customers are still using 3 GB and 9 GB volume sizes. Now, IBM will introduce 223 GB volumes. You can have any kind of data set on these volumes, but only VSAM data sets can reside on cylinders beyond the first 65,280. That is because many applications still think that 65,280 is the largest cylinder number you can have.
This is important because a mainframe, or a set of mainframes clustered together, can only have about 60,000 disk volumes total. The 60,000 is actually the Unit Control Block (UCB) limit, and besides disk volumes, you can have "virtual" PAVs that serve as aliases to existing volumes to provide concurrent access.
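The cylinder arithmetic behind these sizes can be checked against standard 3390 geometry (15 tracks per cylinder, 56,664 bytes per track; the EAV cylinder count below is my own reading of the announced 223 GB size, so treat it as an assumption):

```python
# 3390 DASD geometry (publicly documented figures).
BYTES_PER_TRACK = 56_664
TRACKS_PER_CYL = 15
BYTES_PER_CYL = BYTES_PER_TRACK * TRACKS_PER_CYL   # 849,960 bytes

def volume_gb(cylinders: int) -> float:
    """Decimal gigabytes for a 3390 volume of the given cylinder count."""
    return cylinders * BYTES_PER_CYL / 1e9

print(f"65,520 cylinders:  {volume_gb(65_520):.1f} GB")   # largest pre-EAV volume
print(f"262,668 cylinders: {volume_gb(262_668):.1f} GB")  # the new 223 GB EAV
```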
Aside from the first item, the Extended Distance FICON, the other enhancements are "preview announcements" which means that IBM has not yet worked out the final details of price, packaging or delivery date. In many cases, the work is done, has been tested in our labs, or running beta in select client locations, but for completeness I am required to make the following disclaimer:
All statements regarding IBM's plans, directions, and intent are subject to change or withdrawal without notice. Availability, prices, ordering information, and terms and conditions will be provided when the product is announced for general availability.
Yesterday, I asked if you were prepared for the future. The future is now. Today, IBM announced its ["New Enterprise Data Center"] vision and strategy, which spans software, hardware and services in dealing with the latest challenges that our clients face today, or will face sooner or later this century.
Here's an excerpt:
Align IT with business goals
These changes demand that IT improve cost and service delivery, manage escalating complexity, and better secure the enterprise. And aligning IT more closely with the business becomes a primary goal. The new enterprise data center is an evolutionary new model for efficient IT delivery that helps provide the freedom to drive business innovation. Through a service oriented model, IT will be able to better manage costs, improve operational performance and resiliency, and more quickly respond to business needs. This approach will deliver dynamic and seamless access to IT services and resources, improving both productivity and satisfaction.
IBM's Vision for the New Enterprise Data Center
The new enterprise data center can improve the integration of people, process, and technology in your business to help you improve efficiency and effectiveness. As you implement a new enterprise data center strategy, your infrastructure becomes open, efficient, and easy to manage. And your IT staff can move from a focus on fixing IT problems to solving business challenges. Ultimately your processes become standardized and efficient, focused on business needs rather than technology.
A lot was announced today, so I will give a quick recap now, and cover specific areas over the rest of the week.
IBM System z10 Enterprise Class
IBM introduces its most powerful mainframe. Before you think "Wait, that's a mainframe, that doesn't apply to me", stop to consider all that IBM has done to make the mainframe an "open system" without sacrificing security or availability:
Open standard connectivity, including TCP/IP, and now 6Gbps InfiniBand and 10Gb Ethernet.
Unix System Services. Yes, z/OS is certified to provide UNIX interfaces for today's applications.
HFS and zFS file systems that can be mounted, shared, and used by traditional z/OS applications and JCL.
Linux and Java. Many of today's largest websites are run on mainframes behind the scenes.
Extreme bandwidth. The z10 EC handles up to 336 FICON channels (4Gbps) for large data processing workloads.
The z10 EC is as powerful as 1,500 x86 (such as Intel or AMD) servers, but consumes 85 percent less floor space and 85 percent less energy. (They should put a "green" stripe down the front of this box just to remind everyone how energy efficient this server really is!) For more on the z10 EC, see the [Press Release].
Enhanced IBM System Storage DS8000
With the XIV acquisition taking the role as the best place to put unstructured files for Web 2.0 applications, the IBM DS8000 can focus on its core strength, managing databases and online transactions for the mainframe. There's enough here to justify its own post, so I will cover this later.
IT Service Management Center for z (ITSMCz)
Trust me, I don't make up these acronyms. IT Service Management covers the policies and procedures for managing an IT environment, such as following the best practices documented in the IT Infrastructure Library (ITIL). In the past, IBM tools have focused on Linux, UNIX and Windows on distributed servers, but today ITSMCz brings all of that to the mainframe! (Or perhaps it is more correct to say "brings the mainframe to all that"!)
IT Transformation & Optimization - Infrastructure Strategy and Planning services
I don't make up the names of our service offerings either. However, one thing is clear: it is time for people to re-evaluate their current data center and come up with a new plan. The average data center is 15 years old. According to Gartner Group, more than 70 percent of the world's "Global 1000" organizations will have to make significant modifications to their data centers in the next five years. IBM can help, and is rolling out a new set of services specifically to help clients make this transition, to better align their IT to their business strategies.
Economic Stimulus Package
IBM borrowed this idea from the U.S. government. IBM Global Financing is offering special terms and ratesfor new equipment installed by December 31 this year.
Want to learn more? Read this 15-page [IBM's Vision] white paper.
IBM developerWorks, which hosts this blog, suggests posting once per day. General blogging guidelines I have found suggest 300 to 500 words per post. Most magazine and newspaper articles range around 700 words. In my book, [Inside System Storage: Volume I], I had 165 posts covering twelve months, with an average of 636 words per post.
The alternative is longer posts, perhaps once a week or less.
I've seen several executives adopt this approach. When they have something to say, out comes a long speech, in written form, when the occasion deems it necessary. Some of the more technical blogs adopt this approach also, going into great detail on product specifications and supporting material to make their case.
Either way, it comes out to perhaps 2000 words per week, which can be 20 posts of 100 words each, four posts of 500 words each, or one long post for the week. Currently, I post about 2-5 times per week, with posts 500-700 words long. I can try to mix short posts with long ones, to give you readers some variety. Post a comment below on whether you prefer more/shorter or fewer/longer posts.
As for the future of IT...
In a recent post by fellow blogger (and author) Nick Carr titled [Alan Turing, cloud computing and IT's future], he mentions he has a free download of a 7-page PDF called "IT in 2018: from Turing's machine to the computing cloud." It's a quick read, covering many of the points in his most recent book, The Big Switch. Here's an excerpt:
As for computer professionals, the coming of the World Wide Computer means a realignment of the IT workforce, with some jobs disappearing, some shifting from users to suppliers, and others becoming more prominent. On the supplier side, we'll likely see booming demand for the skills required to design and run reliable, large-scale computing plants. Expertise in parallel processing, virtualization, artificial intelligence, energy management and cooling, encryption, high-speed networking, and related fields will be coveted and rewarded. Much software will also need to be written or rewritten to run efficiently on the new infrastructure. In a clear sign of the new labor requirements, Google and IBM have teamed up to spearhead a major education initiative aimed at training university students to write programs for massively parallel systems.
Some interesting insights from Google can be read in the New York Times' Freakonomics blog, where Steve Dubner interviews Google's chief economist: [Hal Varian Answers Your Questions]. Hal comes up with some clever answers to some rather tough questions. It's worth a read.
It is good to have futurists like this. However, as we caution in IBM, those who seek a life through a crystal ball... must often settle for a diet of broken glass. I will close with one of my favorite quotes.
"As I've said many times, the future is already here. It's just not very evenly distributed." --- William Gibson (science-fiction author)
So, yes, I may sometimes look at the rear-view mirror. However, there is a common theme from Nick Carr to Steve Dubner to William Gibson. They also look back to the past to give insights on how things might unfold in the future.
My view is that for some, the future is already here. IBM already offers the product, service or solution that might be just what you need, but you just haven't gotten it yet. Future for you, but past for us. For others, the future is repeating a pattern we have already seen in the past. Understanding what happened back then helps us better understand what is happening now, and the directions and trends we forecast moving forward.
EMC Corporation (NYSE:EMC) today announced it has been positioned as a leader in the Forrester Wave™: Enterprise Open Systems Virtual Tape Library (VTL), Q1 2008 by Forrester Research, Inc. (January 31, 2008), an independent market and technology research firm. EMC achieved a position as a leader in the Forrester Wave report on virtual tape libraries based on the largest installed base of the EMC® Disk Library family of systems and its broad ecosystem interoperability. Virtual tape libraries emulate tape drives and work in conjunction with existing backup software applications, enabling fast backup and restoration of data by using high-capacity, low-cost disk drives.
EMC was the first major vendor in the open systems virtual tape library market as it introduced the EMC Disk Library in April 2004 and today is a leading provider of open systems virtual tape solutions, with systems that are designed for businesses and organizations of all sizes.
While the press release implies that "EDL equals VTL", Chuck tries to explain they are in fact very different. Here is an excerpt from his blog post:
Virtual Tape Libraries vs. Disk Libraries
As many of you know, VTLs have been around for a while. They use disk as a cache -- they buffer the incoming backup streams, do some housekeeping and stacking, then turn around and write tape efficiently. When you go to restore, you're usually coming back off of tape, unless the backup image in question is sitting in the disk cache.
Now, there is nothing wrong with the VTL approach, but it was conceived in a time when disks were horribly expensive. It was also pretty clear to many of us that disks were going to be a whole lot cheaper in the near future, and this fundamental assumption wouldn't be valid for much longer.
I kept thinking in terms of disk as a direct target for a backup application. No modifications to the backup application. Native speed of sequential disks for both backup and restore. Tape positioned as a backup to the backup. Use the strengths of the underlying array (e.g. CLARiiON) for performance, availability, management, etc.
We ended up calling the concept a "disk library" to differentiate from the VTLs that had come before it. It was a different value proposition and offering, based on the emergence of lower-cost disk media.
... It's nice to see we're at 1,100+ customers, and still going strong.
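The distinction Chuck draws can be reduced to a toy model (illustrative Python only; no vendor's actual implementation looks like this, and every name below is hypothetical):

```python
class VirtualTapeLibrary:
    """VTL: disk acts as a cache; data is destaged ("stacked") to tape,
    and restores come from tape unless the image is still cache-resident."""
    def __init__(self):
        self.cache, self.tape = {}, {}

    def backup(self, name, data):
        self.cache[name] = data          # backup stream lands in disk cache

    def destage(self):
        self.tape.update(self.cache)     # housekeeping: write tape efficiently
        self.cache.clear()

    def restore(self, name):
        # fast path from cache if resident, otherwise back off of tape
        return self.cache.get(name, self.tape.get(name))

class DiskLibrary:
    """Disk library: disk IS the backup target; tape, if used at all,
    is a backup of the backup."""
    def __init__(self):
        self.disk = {}

    def backup(self, name, data):
        self.disk[name] = data

    def restore(self, name):
        return self.disk[name]           # always native disk speed
```

The backup application sees the same tape-like interface either way; the difference is where the data lives when you ask for it back.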
For those new to the blogosphere, there is a difference between "Press Releases", which are formal corporate communications, and "Blog Posts", which are informal opinions of the individual blogger, and may or may not match exactly the views of their respective employer. As we've learned many times before, one should not treat terms like "first" or "leader" in corporate press releases literally! Let's explore each.
Was EDL the first "open systems" Virtual Tape Library?
This is implied by the Forrester report. Chuck mentions the "VTLs that had come before it" in his blog, and many people are aware that IBM and StorageTek had introduced mainframe-attached VTLs in the 1990s. But what about VTLs for "open systems"?
(Hold aside for the moment that the IBM System z mainframe is an open system itself, with z/OS certified as a bona fide UNIX operating system by [the Open Group] standards body. Most analysts and research firms usually refer only to the non-mainframe versions of UNIX and Windows. Alternative definitions for "open systems" can be found in [Web definitions or Wikipedia]. I will assume Forrester meant non-mainframe servers.)
IBM announced AIX non-mainframe attachment via SCSI connectivity to the IBM 3494 Virtual Tape Server (VTS) on Feb 16, 1999, with general availability on May 28, 1999. That's nearly FIVE YEARS before the April 2004 introduction of EDL. IBM VTS support for Sun Solaris and Microsoft Windows came shortly thereafter in November 2000, and support for HP-UX a bit later in June 2001. One of my 17 patents is for the software inside the IBM 3494 VTS, so like Chuck, I can take some pride in the success of that product.
(I don't remember if StorageTek, which was subsequently acquired by Sun, had ever supported non-mainframe operating systems with their Virtual Storage Manager[VSM] offering, but if they did, I am sure it was also before EMC.)
Last week, another EMC blogger, BarryB (aka [the Storage Anarchist]), took me to task in comments on my post [IBM now supports 1TB SATA drives]. He felt that IBM should not claim support, given that the software inside the IBM System Storage N series is developed by NetApp. He compared this to the situation of HP and Sun re-badging the HDS USP-V disk system. If someone else wrote the software, BarryB opines, IBM should not claim credit for it. I tried to explain how IBM provides added value and has full-time employees dedicated to N series development and support, but I doubt I have changed his mind.
Why do I bring that up? Because the EMC Disk Library runs OEM software from FalconStor. Basically, EMC is assembling a hardware/software solution with components provided by OEM suppliers. Hmmm? Sound familiar? Who is the pot calling the kettle black?
If there is a clear winner here, it is FalconStor itself. Perhaps one of the worst-kept industry secrets is that FalconStor software is also used in VTL offerings from Sun, Copan, and IBM, the latter embodied as the [IBM TS7520 Virtualization Engine] offering. If you like the concept of an EDL, but prefer instead one-stop shopping from an "information infrastructure" vendor, IBM can offer the TS7520 along with servers, software and services for a complete end-to-end solution.
Can EMC claim to be "a leader" in Virtual Tape Libraries?
During the measured quarter, IBM shipped its 10 millionth LTO-4 tape drive cartridge to Getty Images, the world's leading creator and distributor of still imagery, footage and multi-media products, as well as a recognized provider of other forms of premium digital content, including music. Getty Images is using the LTO-4 drives as part of a tiered infrastructure of IBM disk and tape solutions that help support the backup needs of their digital imagery;
IBM shipped more than 1,500 Petabytes of tape storage in Q3'07 alone;
During Q3'07, IBM shipped the 10,000th IBM System Storage TS3500 Tape Library. The TS3500 is a highly scalable tape library with support from 1 to 192 tape drives and up to 6,400 cartridge slots for open system, mainframe and virtual tape system attachment.
Let's take a look at the numbers. IBM has sold over 5,400 virtual tape libraries. Sun/STK has sold over 4,000 virtual tape libraries. Both are drastically more than the 1,100 mentioned in Chuck's post. Does IDC recognize EMC in third place? No. EMC chooses instead to declare EDL units as disk arrays (probably to prop up their IDC "Disk Tracker" numbers), so they don't even earn an honorable mention under the virtual tape library category. These counts, of course, include the mainframe-attached models from IBM and Sun/STK. So, if EMC did call these tape systems instead, they might show up in third place, and as such EMC could claim to be "a leader" in much the same way an athlete can claim to be an "Olympic medalist" after winning the bronze for third place. (If you limit the count to just the FalconStor-based models from IBM, EMC, Sun and Copan, then EMC moves up to first or second, but then press release titles like "EMC a Leader in FalconStor-based non-mainframe Virtual Tape Libraries" can get too confusing.)
Chuck, if you are reading this, I feel you have every right to celebrate your involvement with the EDL. Despite having common software and hardware components, both IBM and EMC can rightfully declare their own unique value-add through their respective VTL offerings. Like the IBM N series, the EMC Disk Library is not diminished by the fact the software was written by someone else. BarryB might disagree.
Last year, in my post [Inaugural Brand Impact 2007 Awards], I mentioned how IBM beat out other major storage vendors for the best brand, "IBM System Storage". I am proud of this, and highlighted it as one of my team's key accomplishments during my brief 20-month career in marketing, which I recapped in my post [Switching Over from What and Why] when I switched over to consulting.
This year, IBM did it again. For a second consecutive year, IBM System Storage was recognized by [Liquid Agency] as the leading brand for enterprise storage. Here is an excerpt from the [IBM Press Release]:
"IBM System Storage is the most trusted storage portfolio in the world, providing our clients leading disk, tape and storage software solutions and services. This award reflects IBM's priority in delivering information infrastructure solutions to solve our client's most critical storage challenges," said Barry Rudolph, Vice President, IBM System Storage. "We are helping clients -- from large corporations to small businesses -- intelligently manage information as a strategic business asset. We are proud to be recognized as the clear market leader in delivering solutions that help our clients manage and extract value from their information."
Liquid Agency reviewed over 250 technology brands to make this assessment.
The Business/IT Alignment category is critical for many companies; getting these two key divisions in sync provides a huge competitive advantage. This year’s winner – by a landslide – is IBM's [Innov8].
This Big Blue product has a touch of the sci-fi to it: it’s an interactive, 3-D business simulator intended to close the divide between IT staff and business executives. In other words, it’s…a video game. I guarantee you that in all the decades that Datamation has done its Product of the Year awards, never has a video game won. The times they are a-changin’.
Whether a server is the “best” server is, in truth, based on your company’s individual needs and budget. In the server world, with its myriad options and add-ons, one size definitely does not fit all. That said, the IBM p 570 server must fit plenty of needs; the box easily won the Enterprise Server category. IBM claims this workhorse doubles the speed of its predecessor without requiring a larger energy footprint.
IBM Lotus Symphony
When it comes to total numbers of users, there’s no question that Microsoft Office is the 800-pound gorilla of this category. The deeply entrenched Office makes the corporate world go ‘round. Given Office’s status, it’s a major eyebrow raiser that this category was won by relative newcomer IBM Lotus Symphony. Perhaps it’s because Big Blue’s product is free (that always helps), or because IBM is itself such an established vendor. Whatever the case, consider this vote as a huge upset.
(Note: IBM Lotus Symphony is available for [free download] for Windows and Linux. When my friend purchased a new laptop that came pre-installed with Windows Vista, he was surprised to see that Microsoft Office was not included. I pointed him to Lotus Symphony, and he is running great with his existing Word, PowerPoint and Excel documents! I use Lotus Symphony on both Windows and Linux, and IBM plans to make a version available for Mac OS X -- when that happens, I have my Mac Mini G4 waiting to try it out.)
IBM Wireless Software for Business Intelligence (BI) on the go
For most of 2007, IBM Cognos 8 Go! Mobile software supported only Blackberry units. At the end of last year, Cognos upgraded its wireless business intelligence software – which delivers business reports to on-the-go staffers – to support handhelds that run Windows Mobile OS. Naturally, this expanded the company’s user base, and likely helped Cognos 8 Go! Mobile win the Wireless Software category.
(If you have a RIM BlackBerry handheld device, you can try out this [actual demo].)
Wow! That's a lot of awards. Congratulations to all my IBM colleagues who made this happen!