This blog is for the open exchange of ideas relating to IBM Systems, storage and storage networking hardware, software and services.
(Short URL for this blog: ibm.co/Pearson )
Tony Pearson is a Master Inventor, Senior IT Architect and Event Content Manager for [IBM Systems Technical University] events. With over 30 years at IBM, Tony is a frequent traveler, speaking to clients at events throughout the world.
Lloyd Dean is an IBM Senior Certified Executive IT Architect in Infrastructure Architecture. Lloyd has held numerous senior technical roles during his 19-plus years at IBM. Most recently, he has been leading efforts across the Communication/CSI Market as a senior Storage Solution Architect/CTS covering the Kansas City territory. In prior years, Lloyd supported industry accounts as a Storage Solution Architect, and before that as a Storage Software Solutions specialist during his time in the ATS organization.
Lloyd currently supports North America storage sales teams in his Storage Software Solution Architecture SME role on the Washington Systems Center team. His current focus is IBM Cloud Private, and he will be delivering and supporting sessions at Think 2019 and Storage Technical University on the value of IBM storage in this high-value solution, part of the IBM Cloud strategy. Lloyd maintains Subject Matter Expert status across the IBM Spectrum Storage software solutions. You can follow Lloyd on Twitter (@ldean0558) and on LinkedIn (Lloyd Dean).
Tony Pearson's books are available on Lulu.com! Order your copies today!
Safe Harbor Statement: The information on IBM products is intended to outline IBM's general product direction and it should not be relied on in making a purchasing decision. The information on the new products is for informational purposes only and may not be incorporated into any contract. The information on IBM products is not a commitment, promise, or legal obligation to deliver any material, code, or functionality. The development, release, and timing of any features or functionality described for IBM products remains at IBM's sole discretion.
Tony Pearson is an active participant in local, regional, and industry-specific interests, and does not receive any special payments to mention them on this blog.
Tony Pearson receives part of the revenue proceeds from sales of books he has authored listed in the side panel.
Tony Pearson is not a medical doctor, and this blog does not reference any IBM product or service that is intended for use in the diagnosis, treatment, cure, prevention or monitoring of a disease or medical condition, unless otherwise specified on individual posts.
Optimizing Storage Infrastructure for Growth and Innovation
This session started off with my former boss, Brian Truskowski, IBM General Manager of System Storage and Networking.
We've come a long way in storage. In 1973, the "Winchester" drive was named after the famous Winchester 30-30 rifle: the disk drive was planned to have two 30MB platters, hence the name. When it finally launched, it had two 35MB platters, for a total raw capacity of 70MB.
Today, IBM announced version 6.2 of SAN Volume Controller with support for 10GbE iSCSI. Since 2003, IBM has sold over 30,000 SAN Volume Controllers. An SVC cluster can now manage up to 32PB of disk storage.
IBM also announced a new 4TB tape drive (TS1140), LTFS Library Edition, the TS3500 Library Connector, improved TS7600 and TS7700 virtual tape libraries, enhanced Information Archive for email, files and eDiscovery, new Storwize V7000 hardware, new Storwize Rapid Application bundles, new firmware for SONAS and DS8000 disk systems, and Real-Time Compression support for EMC disk systems. I plan to cover each of these in follow-on posts, but if you can't wait, here are [links to all the announcements].
Customer Testimonial - CenterPoint Energy
"CenterPoint is transforming its business from being an energy distribution company that uses technology, to a technology company that distributes energy."
-- Dr. Steve Pratt, CTO of CenterPoint Energy
The next speaker, Dr. Steve Pratt, is CTO of [CenterPoint Energy]. CenterPoint is a 110-year-old energy company (older than IBM!) involved in electricity, natural gas distribution, and natural gas pipelines. CenterPoint serves Houston, Texas (the fourth largest city in the USA) and the surrounding area.
CenterPoint is transforming to a Smart Grid involving smart meters, and this requires the best IT infrastructure you can buy, including IBM DS8000, XIV and SAN Volume Controller disk systems, IBM Smart Analytics System, Stream Analytics, IBM Virtual Tape Library, IBM Tivoli Storage Manager, and IBM Tivoli Storage Productivity Center.
Dr. Pratt has seen the transition of information over the years:
Data Structure, deciding how to code data to record it in a structured manner
Information Reporting, reporting to upper management what happened
Intelligence Aggregation, finding patterns and insight from the data
Predictive Analytics, monitoring real-time data to take pro-active steps
Autonomics, where automation and predictive analysis allows the system to manage itself
What does the transition to a Smart Grid mean for their storage environment? They will go from 80,000 meter reads per day to 230,400,000. Ingestion will go from MB/day to GB/sec. Reporting will transition to real-time analytics.
Dr. Pratt prefers to avoid trade-offs: don't lose something to get something else. He also feels that the language of the IT department can help. For example, he uses a "factor," like 25x, rather than a percent reduction (96 percent reduced). He feels this communicates the actual results more effectively.
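The "factor" framing and the percent-reduction framing describe the same outcome and are interchangeable. A small sketch of the conversion (my own illustration, not from the session):

```python
def reduction_to_factor(percent_reduced: float) -> float:
    """A 96 percent reduction means you keep 4 percent: a 25x factor."""
    return 100.0 / (100.0 - percent_reduced)

def factor_to_reduction(factor: float) -> float:
    """A 25x factor means keeping 1/25th of the original: 96 percent reduced."""
    return 100.0 * (1.0 - 1.0 / factor)
```

The same result sounds far more impressive stated as "25x" than as "96 percent reduced", which is exactly Dr. Pratt's point.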
Today's smarter consumers are driving the need for smarter technologies. Individual consumers and small businesses can make use of intelligent meters to help reduce their energy costs. Everything from smart cars to smart grids will need real-time analytics to deal with the millions of events that occur every day.
IBM's Data Protection and Retention Story
Brian Truskowski came back to provide the latest IBM messaging for Data Protection and Retention (DP&R). The key themes were:
Stop storing so much
Store more with what's on the floor
Move data to the right place
IBM announced today that the IBM Real-Time Compression Appliances now support EMC gear, such as EMC Celerra. While some EMC equipment has built-in compression features, these often come at the cost of performance degradation. Instead, IBM Real-Time Compression can offer improved performance as well as a 3x to 5x reduction in storage capacity.
Over 70 percent of data on disk has not been accessed in the last 90 days. IBM Easy Tier on the DS8700 and DS8800 now supports FC-to-SATA automated tiering.
IBM is projecting that backup and archive storage will grow at over 50 percent per year. To help address this, IBM is launching a new "Storage Infrastructure Optimization" assessment. All attendees at today's summit are eligible for a free assessment.
Analytics are increasing the value of information, and making it more accessible to the average knowledge worker. The cost of losing data, as well as the effort spent searching for information, has skyrocketed. Users have grown to expect 100 percent uptime availability.
An analysis of IT environments found that only 55 percent of spending went to revenue-producing workloads. The remaining 45 percent was spent on Data Protection and Retention. That means that for every IT dollar spent on projects to generate revenue, you are spending roughly another 80 cents to protect it. Imagine spending 80 percent of your house payments on homeowners' insurance, or 80 percent of your car's purchase price on car insurance.
IBM has organized its solutions into three categories:
Hyper-Efficient Backup and Recovery
Continuous Data Availability
What would it mean to your business if you could shift some of the money spent on DP&R over to revenue-producing projects instead? That was the teaser question posed at the end of these morning sessions for us to discuss during lunch.
Normally, IBM has its announcements on Tuesdays, but this week it was on Monday!
I am here in New York City, at the Kaufmann Theater of the American Museum of Natural History, for the [IBM Storage Innovation Executive Summit]. We have about 250 clients here, as well as many bloggers and storage analysts.
My day started out being interviewed by Lynda from Stratecast, a division of [Frost & Sullivan]. This interview will be part of a video series that Stratecast is doing about the storage industry.
(About the venue: American Museum of Natural History was built in 1869. It was featured in the film "Night at the Museum". In keeping with IBM's focus on scalability and preservation, the museum here boasts skeletons of the largest dinosaurs. The five-story building takes up several city blocks, and the Kaufmann theater is buried deep in the bottom level, well shielded from cell phone or Wi-Fi signals allowing me to focus on taking notes the traditional way, with pen and paper.)
Deon Newman, IBM VP of Marketing for North America, was our Master of Ceremonies. Today would be filled with market insight, best practices, thought leadership, and testimonials of powerful results.
This is my first in a series of blog posts on this event.
Information Explosion on a Smarter Planet
Bridget van Kralingen, IBM General Manager for North America, indicated that storage is finally having its day in the sun, moving from the "back office" to the "front office". According to Google's Eric Schmidt, we now create, capture and replicate more data in two days than all of the information recorded from the dawn of time to the year 2003.
1928: IBM's innovative 80-column punch card stored nearly twice as much as its 50-column predecessor.
1947: Bing Crosby decided to do his radio show by recording it at his convenience on magnetic tape, rather than doing it live. This was the motivation for IBM researchers to investigate tape media, delivering the first commercial tape drive in 1952. One tape reel could hold the equivalent of 30,000 punch cards.
1956: the IBM RAMAC mainframe was the first computer to access data randomly with an externally-attached disk system, the "350 Disk Unit", which stored 5 million 7-bit characters (about 5MB) and weighed over 500 pounds. Compare that to today's cell phones, which can store several GB of data in a handheld device.
1978: IBM invented Redundant Array of Independent Disks (RAID) through a collaboration with the University of California, Berkeley.
1993: IBM introduces the [IBM 9337 Disk Storage Array], the first external disk storage system for distributed operating systems. This was based on the Serial Storage Architecture [SSA] protocol.
1995: IBM launches products that support Storage Area Networks (SAN), based on the Fibre Channel Protocol. IBM's internal codenames for disk products were all names of sharks, and so our internal mantra was that a healthy storage diet was comprised of "Plenty of Fish and Fibre".
2010: IBM ships Easy Tier, the world's easiest-to-use sub-LUN automated tiering capability, for the IBM System Storage DS8700 disk system.
Storage is growing (in capacity) at 40 percent per year, but IT budgets are only growing (in dollars) by a measly 1 to 5 percent. She cited the success at [Sprint], presented at the October 2010 launch. By combining IBM SAN Volume Controller with a three-tier storage architecture, Sprint lowered their raw capacity from 10PB to 8.4PB, increasing utilization from 35 to 78 percent. This involved shrinking from six storage vendors to three, and reducing total number of disk arrays from 166 down to 96. The resulting system has only 38 percent of their data on their most expensive Tier-1 storage, the rest is now living on less expensive Tier-2 and Tier-3 storage.
Companies are entering the era of Big Data with an insatiable appetite for collecting and analyzing data for marketplace insights. IBM [InfoSphere BigInsights], based on Apache Hadoop, has helped customers make sense of it all. Innovative technology, expertise and marketplace insight will provide the competitive path forward in the coming decade.
Storage Challenges and Opportunities in 2011 and Beyond
I always enjoy hearing Stan Zaffos, Gartner Research VP, present at the annual [Data Center Conference] in Las Vegas every December. His analysis and research focuses on storage systems and emerging storage technologies.
Stan provided his perspective on the storage industry. He suggested a top-down approach, based on the market trends that Gartner is closely monitoring. He suggests focusing heavily on managing data growth, using SLAs to improve efficiency, and following Gartner's recommended actions. His statement, "If something is not sustainable, then it is unsustainable," resonated well with the audience. His three key points:
Design to meet but not exceed Service Level Agreements (SLAs)
Re-evaluate your ratio of SAN versus NAS based on growth of unstructured data content
Explore the variety of Cloud options available.
Those of us who have been in this business a long time recognize that the problems haven't changed, just the dimensions. When in the past three decades were IT budgets generous and plentiful? When was there more than enough IT staff to handle all the requests in a timely manner? When hasn't there been a period of information growth? Gartner's analysis shows external controller-based disk (RAID-protected disk systems) revenue growing at 8.7 percent. Raw TBs of disk capacity are growing at 55 percent per year, expected to reach 100 Exabytes by 2015.
SAN has four times more revenue than NAS today, but NAS is growing faster. NAS had only 9 percent market share in 2010, but is projected to grow to 32 percent by 2015. SAN can offer higher price/performance for traditional OLTP and database workloads, but NAS is better suited for unstructured data, backups and archives, assisted by storage efficiency features like real-time compression and data deduplication. Which industries create the most unstructured data? The ones involved in filling out forms! This includes government, insurance agencies, manufacturing, mining and pharmaceuticals.
The phrase "good enough" should no longer be considered an insult. Too often IT departments design solutions that far exceed negotiated Service Level Agreements (SLAs), and they should instead focus on just meeting them. Modular storage systems are often sufficient for most workloads. Slower 7200RPM SATA disks can be one third the price of faster 15K RPM Fibre Channel drives, and often provide sufficient performance for the tasks required. Unified storage, such as IBM N series, can help simplify capacity planning, as storage can be re-purposed if different workloads grow at different rates. The key is to focus on meeting SLAs based on the price-vs-risk factor. Take a minimalist approach with fewer SLAs, fewer management classes, and fewer storage vendors.
Stan suggests a two-pronged approach: capacity management through content analytics and classification, and efficient utilization through Thin Provisioning, storage virtualization, Quality of Service (QoS), compression and deduplication capabilities. These features will be ubiquitous by 2013. If you are worried that these technologies mean more information packed onto fewer devices, Stan's response was "If it's not there, it can't break." Storing data on fewer disks or tape cartridges means less chance something will fail.
Stan feels IT shops using Thin Provisioning should continue to charge their end-users on what they ask for (the full allocation request) rather than the thin-provisioned amount actually consumed on the storage devices themselves. For example, if someone asks for a 100GB LUN to be allocated to their system, but this only takes up 30GB of actual data space, charge back the full 100GB!
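Stan's chargeback advice can be sketched in a few lines. The function name and per-GB rate below are my own illustration, not from any IBM or Gartner tool:

```python
def chargeback(allocations, rate_per_gb):
    """Bill each user on the capacity they requested (allocated),
    not on the smaller thin-provisioned footprint actually consumed.
    allocations maps user -> (requested_gb, actually_used_gb)."""
    return {user: requested * rate_per_gb           # bill the full allocation,
            for user, (requested, used) in allocations.items()}  # ignore 'used'

# The user who asked for a 100GB LUN holding only 30GB of data
# still pays for all 100GB:
bills = chargeback({"payroll": (100, 30)}, rate_per_gb=0.50)
```

Billing on the request rather than actual consumption discourages over-asking, and it means the savings from thin provisioning accrue to the storage team rather than silently disappearing.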
It can take five years for a new technology to reach 50 percent adoption. The Romans took eight years to build the [Colosseum]. His research on "network convergence" found that 42 percent planned to use iSCSI, 32 percent Fibre Channel over Ethernet (FCoE) or other Top-of-Rack (ToR) converged switches, and 16 percent were looking for full convergence of servers, switches and storage. Features like IBM Easy Tier automatic sub-LUN tiering were introduced later, and so have not been adopted as widely as features like Thin Provisioning, which has been around since the IBM RAMAC Virtual Array in the 1990s.
Stan felt that Public and Private clouds are two different approaches. Public clouds offer reservation-less provisioning. Private clouds offer improved agility, but can be more complex to set up, and carry the risk of idle capacity similar to traditional IT datacenter deployments. Storage and file virtualization should be considered a pre-req for adopting Cloud technologies.
Storage IT teams need to adopt more than just technical skills. They need to learn about legal and government regulatory compliance issues, financial considerations, and would even benefit from doing some "marketing". Why marketing? Because often IT departments need end-users to change their attitudes and behaviours, and this can be accomplished through internal marketing campaigns.
IBM introduced the Linear Tape File System last year, which I explained in my post [IBM Celebrates 10 Year Anniversary for LTO tape], and released it as open source to the rest of the Linear Tape Open [LTO] Consortium so that the entire planet can benefit from IBM's innovation. IBM presented a technology demonstration of its Linear Tape File System - Library Edition at the NAB conference, showing how this new IBM library offering of the file system can put mass archives of rich media video content at users' fingertips with the ease of library automation.
From left to right, here are Atsushi Nagaishi (Toshiba) and Shinobu Fujihara (IBM). Fujihara-san is from IBM's Yamato lab in Japan, where some of the LTFS development was done. The Yamato Lab was not damaged by the [Earthquakes in Japan].
With the capabilities of LTFS, IBM has introduced an entirely new role for tape as an attractive high-capacity, easy-to-use, low-cost and shareable storage medium. LTFS can make tape usable in a fashion like removable external disk: a giant alternative to floppy diskettes, DVD-RW and USB memory sticks, with directory tree access and file-level drag-and-drop capability. LTFS allows information to be passed around from one system or employee to another. And as for high video storage capacity, a 1.5TB LTO-5 cartridge can hold about 50 hours of XDCAM HD video!
A group photo of the global IBM LTFS team, from left to right, David Pease from IBM Almaden Research Center, Ed Childers from IBM Tucson, Shinobu Fujihara and Hironobu Nagura from IBM Japan.
IBM was once again #1 leader in Tape worldwide for the year 2010. With this exciting new win, tape is not just for backup and archive anymore!
"All work and no play makes Jack a dull boy!"
Often I get feedback from my readers that I focus too much on storage products in this blog, and have been asked to break out of the work world for a change. Fair enough! The first Sunday of May is designated "World Laughter Day". I am proud to be one of the founding members of the [Tucson Laughter Club] back in 2004, and we held our first World Laughter Day event on May 1, 2005 at Udall Park.
Over the past seven years, we have had four other clubs "spin off" from the main group to form their own clubs. However, for World Laughter Day, the five sister clubs (or five warring tribes, as some call them) set aside our differences and got together for a day of fun. In keeping with the tradition of having these events outside, we were granted permission to hold our event at the University of Arizona mall.
While the Tucson Laughter Club is recognized as one of the oldest laughter clubs in the United States, there are actually over 6000 clubs worldwide in over 60 countries. Laughter clubs started in India, when Dr. Madan Kataria (a medical doctor) and his wife Madhuri (a yoga instructor) gathered people in a park to try laughter as a form of healing and exercise. Today, [Laughter Yoga] is practiced outside in parks or indoors.
Up until now, all of the World Laughter Day events in Tucson have been organized by the original Tucson Laughter Club, but this time we decided to pass the baton over to Gita Fendelman of [Curve's Laughter YogHA club] to take the lead. Here she is standing next to a large yellow sign with facts and figures about the history of World Laughter Day.
We had about 45 people join us in a large circle, and we proceeded with a series of breathing, stretching and laughter exercises. Many of the laughter exercises involve moving around to look at each of the other participants eye-to-eye, and with 45 people, this can be quite challenging.
The weather could not have been nicer. Clear blue skies, a slight breeze, and an unusually cool 75 degrees F. Last week we were in the nineties approaching summer, so we were delighted that the weather cooled down for this event.
As a certified Laughter Yoga instructor myself, I offered to help lead the events for World Laughter Day. We had plenty of other certified laughter leaders on hand, and so both Gita and Emily (from Laughter Yoga with Emily) served as co-chairs.
I brought my [CRATE battery-powered amp] and microphones so that we could project our voices loud enough to the entire group. There was no electricity anywhere near our location, so battery-powered amps are the way to go for these situations.
After two hours of laughing, we all lay down for some relaxing meditation. Some people use this time to pray for World Peace. In a delightful coincidence, later that day, US President Barack Obama announced that the [world was a better place] having eliminated one of the world's most dangerous terrorists.
I would like to thank Jeff from our local NBC News Channel 4 affiliate [KVOA] who came to interview Gita and video us while we did our laughter exercises.
Back in February, my blog post [A Box Full of Floppies] mentioned that I uncovered some diskettes compressed with OS/2 Stacker. Jokingly, I suggested that I may have to stand up an OS/2 machine just to check out what is actually on those floppies. Each floppy contains only three files: README.STC, STACKER.EXE and a hidden STACKVOL.DSK file. The README.STC explains that the disk is compressed by Stacker, a program developed by [Stac Electronics, Inc.]. The STACKER.EXE would not run on Windows XP, Vista or Windows 7. The STACKVOL.DSK is just a huge binary file, like a ZIP file, compressed with the [Lempel-Ziv-Stac] algorithm, which combines Lempel-Ziv with Huffman coding.
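For the curious, the Lempel-Ziv half of that algorithm replaces repeated byte runs with back-references into a sliding history window (Stac's LZS uses a 2KB window and then Huffman-codes the tokens). Here is a toy Python sketch of just the sliding-window stage, to illustrate the idea; it is not the actual LZS codec:

```python
WINDOW = 2048    # LZS uses a 2KB history window
MIN_MATCH = 2    # shorter matches aren't worth a back-reference

def compress(data: bytes) -> list:
    """Emit a mix of literal bytes and (offset, length) back-references."""
    i, out = 0, []
    while i < len(data):
        best_len, best_off = 0, 0
        for j in range(max(0, i - WINDOW), i):
            k = 0
            # Matches may run past position i (overlapping copies are legal)
            while i + k < len(data) and data[j + k] == data[i + k]:
                k += 1
            if k > best_len:
                best_len, best_off = k, i - j
        if best_len >= MIN_MATCH:
            out.append((best_off, best_len))   # back-reference token
            i += best_len
        else:
            out.append(data[i])                # literal byte
            i += 1
    return out

def decompress(tokens: list) -> bytes:
    out = bytearray()
    for t in tokens:
        if isinstance(t, tuple):
            off, length = t
            for _ in range(length):            # byte-by-byte copy handles
                out.append(out[-off])          # overlapping matches
        else:
            out.append(t)
    return bytes(out)
```

Real LZS then entropy-codes these tokens with Huffman coding, which is where much of the additional compression comes from.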
In my follow-up post [Like Sands in an Hourglass], I explained that there were many ways I could have tackled this project. I could either use the Emulation approach and try to build an OS/2 guest image under a hypervisor like VMware, KVM or VirtualBox, or take the Museum approach: take one of my half-dozen old machines, wipe it clean, and stand up OS/2 on bare metal. This turned out to be more challenging than I expected. The systems I have that are modern and powerful enough to run hypervisors don't have floppy drives, so I opted for the Museum approach.
(A quick [history of OS/2] might be helpful. IBM and Microsoft jointly developed OS/2 back in 1985. By 1990, Microsoft decided its own Windows operating system was more popular with the ladies, and decided to break off with IBM. In 1992, IBM released OS/2 version 2.0, touted as "a better DOS than DOS and a better Windows than Windows!" Both parties maintained ownership rights; Microsoft renamed its version of OS/2 to Windows NT. The "NT" stood for New Technology, the basis for all of the enterprise-class Windows servers used today. IBM branded OS/2 versions 3 and 4 "Warp", with the last version, 4.52, released in 2001. In its heyday, OS/2 ran the majority of Automated Teller Machines (ATMs), was used for hardware management consoles (HMC), and was used worldwide to run various railway systems. After 2001, IBM encouraged people to transition from Windows or OS/2 over to Java and Linux. For those that can't or won't leave OS/2, IBM partnered with Serenity Systems to continue OS/2 under the brand [eComStation].)
Working with an IBM [ThinkCentre 8195-E2U Pentium 4 machine] with 640MB RAM, an 80GB hard disk, a CD-ROM and one 3.5-inch floppy drive, I first discovered that OS/2 is limited to very small amounts of hard disk. There are limits on [file systems and partition sizes] as well as the infamous [1024-cylinder limit] for bootable operating systems. Having a completely empty drive didn't work, as the size of the disk was too big. Carving out a big partition also failed, as it exceeded the various limits. Each time, OS/2 behaved as if the partition table were corrupted because the values were so huge. Even modern disk partitioning tools ([SysRescueCD] or [PartedMagic]) didn't work, as these create partitions not recognizable to OS/2.
The next obstacle I knew I would encounter was device drivers. OS/2 comes as a set of three floppy diskettes and a CD-ROM. The bootable installation disk is referred to affectionately as "Disk 0", then Disk 1, then Disk 2. Once all the drivers have been loaded into memory, the installer can start reading the CD-ROM and continue with the installation. In searching for updated drivers, I came across [Updated OS/2 Warp 4 Installation Diskettes] to address problems with newer display monitors. It also addresses the 8.4GB volume limit.
The updates were in the form of EXE files that only execute in a running DOS or OS/2 environment, expanding onto a floppy diskette. It seemed like [Catch-22]: I needed a working DOS or OS/2 system to run the update programs to create the diskettes, but needed the diskettes to build a working system.
To get around this, I decided to take a "scaffolding" approach. Using a DOS 6 bootable floppy, I was able to re-partition the drive with FDISK into two small 1.9GB partitions. Since I have the full five-floppy IBM DOS 6 set, I hid the first partition for OS/2 and installed the DOS 6 GUI on the second partition. I went ahead and added a few new subdirectories: BOOT to hold Grub2, PERSONAL to hold the data I decompress from the floppies, and UTILS to hold additional utilities. This little DOS system worked, and I now have new OS/2 "Disk 1" and "Disk 2" for the installation process.
(If you don't have a full set of DOS installation diskettes, you can make do with "FORMAT C: /S" from a [DOS boot disk], and then just copy over all the files from the boot disk to your C: drive. You won't have a nice DOS GUI, but the command line prompt will be enough to proceed.)
Like DOS, OS/2 expects to be installed on the C: drive. I hid the second partition (DOS), and marked the first partition installable and bootable. The OS/2 installation involves a lot of reboots, and the hard drive is not natively bootable in the intermediate stages. This means having to boot from Disk 0, then putting in Disk 1, then disk 2, before continuing the next phase of the installation. I tried to keep the installation as "Plain Vanilla" as possible.
I had to figure out what to include, and what to exclude, and this involved a lot of trial and error. For example, one of the choices was for "external diskette support". Since I had an "internal diskette drive", I didn't think I needed it. But after a full install, I discovered that it would not read or write floppy diskettes, so it appears that I do indeed need this support.
OS/2 supports two different file systems, FAT16 and the High Performance File System (HPFS). Since my partition was only 1.9GB in size, I chose just to use FAT16. HPFS supported larger disk partitions, longer file names, and faster performance, none of which I need for these purposes.
I thought it would be nice to get TCP/IP networking to work with my Ethernet card. However, after many attempts, I decided against this. I needed to focus on my mission, which was to decompress floppy diskettes. It was amusing to see that OS/2 supported all kinds of networking, including Token Ring, System Management, Remote Access, Mobile Access Services, File and Print.
Once all the options are chosen, OS/2 installation then proceeds to unpack and copy all the programs to the C: drive. During this process, IBM had informational splash screens. Here's one that caught my eye, titled "IBM Means Three Things" that listed three reasons to partner with IBM:
Providing global solutions for a small planet
Creating and applying advanced technologies to improve the ease with which customers run their businesses
Constantly improving customer service with the products and services we provide
You might wonder how these OS/2 splash screens, written over 10 years ago, can appear almost identical to IBM's current [Smarter Planet] campaign. Actually, it is not that odd. IBM has been keeping to these same core principles since 1911, only the words to describe and promote these core values have changed.
To access both OS/2 and DOS partitions, I installed Grand Unified Bootloader [Grub2] on the DOS partition under C:/BOOT/GRUB directory. However, when I boot OS/2, I cannot see the DOS partition. And when I boot DOS, I cannot see the OS/2 partition. Each operating system thinks its C: drive is the only partition on the system.
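For reference, that partition hiding can be expressed directly in Grub2 menu entries using the parttool command. The menu titles and device names below are assumptions for my particular disk layout (OS/2 on the first primary partition, DOS on the second), not a copy of my actual config:

```
menuentry "OS/2 Warp 4" {
    parttool (hd0,msdos2) hidden+          # hide the DOS partition
    parttool (hd0,msdos1) hidden- boot+    # unhide OS/2 and mark it active
    chainloader (hd0,msdos1)+1
}
menuentry "IBM PC DOS 6" {
    parttool (hd0,msdos1) hidden+          # hide the OS/2 partition
    parttool (hd0,msdos2) hidden- boot+    # unhide DOS and mark it active
    chainloader (hd0,msdos2)+1
}
```

Because each entry hides the other primary partition before chainloading, each operating system sees only its own partition as the C: drive, which matches the behavior described above.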
Now that I had OS/2 running, I was then able to install Stacker from two floppy diskettes. With this installed, I can compress and decompress data on either the hard disk, or on floppy diskettes. Most of the files were flat text documents and digital photos. After copying the data off the compressed disks onto my hard drive, I now can copy them off to a safe place.
To finish this project, I installed Ubuntu Linux on the remaining 76GB of disk space, which can access both the OS/2 and DOS FAT16 file systems natively. This allows me to copy files from OS/2 to DOS or vice versa.
Now that I know what data types are on the diskettes, I determined that I could have decompressed the data in just a few steps:
Set up a DOS partition on C: drive
Insert one of the compressed diskettes into the floppy drive
Copy the STACKER.EXE program from the floppy to the C: drive
Run "STACKER A:" to decompress the floppy diskette
However, now that I have a working DOS and OS/2 system, I can possibly review the rest of my floppy diskettes, some of which may require running programs natively on OS/2 or DOS. This brings me to an important lesson. If you are going to keep archive data for long-term retention, you need to choose file formats that can be read by current operating systems and programs. Installing older operating systems and programs to access proprietary formats can be quite time-consuming, and may not always be possible or desirable.
Yes, it's Tuesday, and that means more IBM Announcements! A lot was announced today, so I have selected an eclectic mix for your enjoyment.
Microsoft Windows support on IBM Mainframes
Last year's announcement of the new IBM zEnterprise included the zEnterprise BladeCenter Extension (zBX), which could run POWER7 and x86 operating systems, managed by the mainframe's overall Unified Resource Manager. Initially, this was intended for AIX and Linux-x86, but today IBM [announced a statement of general direction to support Microsoft Windows] on the zBX extension by the end of this year. Of course, the standard disclaimer applies: All statements regarding IBM's plans, directions, and intent are subject to change or withdrawal without notice. Any reliance on these statements of general direction is at the relying party's sole risk and will not create liability or obligation for IBM.
New 15K RPM drives for IBM Storwize V7000
Last October, when IBM introduced the Storwize V7000, it offered both large (3.5 inch) and small form factor (2.5 inch) drives. Unfortunately, a few people were upset that there were no 15K RPM drives for the small form factor models. There were SSD and 10K RPM drives, but nothing in between. Today, IBM [announced new 15K RPM drives of 146GB capacity] that have been qualified for both the controller and expansion drawers.
New RVU licensing for IBM Tivoli products
IBM [announced it is changing over to this new RVU licensing model] from the previous PVU license, which was based on processor value units. What is an RVU? A Resource Value Unit (RVU) is a unit of measure by which the program can be licensed. RVU Proofs of Entitlement (PoE) are based on the number of units of a specific resource used or managed by the program. This makes sense: resource management software should be charged by the amount of resources you manage, not by the size of the server the software runs on. This change also accommodates server virtualization and live movement of VM guest images from one type of host machine to another.
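To see why this distinction matters, here is a minimal sketch contrasting the two models. All numbers, rates, and function names are hypothetical illustrations, not IBM's actual price list or PVU tables.

```python
# Hypothetical illustration of PVU vs. RVU licensing.
# All rates and PVU-per-core figures below are made-up examples.

def pvu_cost(cores: int, pvu_per_core: int, price_per_pvu: float) -> float:
    """PVU: charge scales with the processor capacity of the host server."""
    return cores * pvu_per_core * price_per_pvu

def rvu_cost(managed_resources: int, price_per_rvu: float) -> float:
    """RVU: charge scales with the number of resources the program manages."""
    return managed_resources * price_per_rvu

# Under PVU, the same software costs more on a bigger server:
small_host = pvu_cost(cores=4, pvu_per_core=70, price_per_pvu=1.0)    # 280.0
large_host = pvu_cost(cores=32, pvu_per_core=120, price_per_pvu=1.0)  # 3840.0

# Under RVU, both hosts managing 100 resources pay the same,
# which is why VM guests can move freely between host types:
either_host = rvu_cost(managed_resources=100, price_per_rvu=2.0)      # 200.0
```

The point of the sketch: the RVU charge is invariant to which physical machine the software lands on, so live migration of guests does not change the bill.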
If you are contemplating a visit to an IBM [Executive Briefing Center], then April and May is a great time to come to Tucson. The weather is ideal here. The cold snap appears to be over, and spring is in the air!
The IBM Storwize V7000 was introduced last October, and has proven to be wildly successful. I saw two awesome reviews recently of the IBM Storwize V7000 disk system that I thought I would bring to your attention.
The first review is [IBM Storwize V7000] from Roger Howorth of ZDNet UK. Here are some quotes:
"Under the hood, the Storwize V7000 is built from technologies originally developed for IBM's enterprise-class storage systems, so the V7000 benefits from a comprehensive set of high-end features that have been scaled down for mid-range buyers."
"Initial configuration couldn't be simpler."
"We really liked the layout and functionality of the GUI."
"Storwize V7000 is virtual storage that offers efficiency and flexibility through built-in SSD optimization and "thin provisioning" technologies while enabling users to virtualize and re-use existing disk systems..."
"Storwize V7000 advanced functionality also enables non-disruptive migration of data from existing storage, simplifying implementation and minimizing disruption to users."
"The Storwize V7000 graphical user interface is a browser-based, easy to navigate intuitive GUI."
"ESG Lab found that getting started with the Storwize V7000 disk system was intuitive and straightforward."
"Easy Tier increases the efficiency and simplicity of deploying SSD drives."
I started attending the Arizona International Film Festival eight years ago. I took a week off work to see the films, and came back to tell people how enjoyable it was to just sit and watch thirty movies. "Dirty movies?" they would ask. "No, not dirty, thirty!" To avoid further confusion, I quickly switched to saying "I spent the week watching 25 to 35 independent films."
A few weeks ago, the Arizona International Film Festival notified me that my 2007 System Storage video has been recognized for a technical award under the category of "Innovative use of Technology for Animated Short Film." I will receive the award in person this evening, April 1, at the opening ceremony, which starts at 6:00pm, at the [Fox Theater] in downtown Tucson, Arizona.
As is the case with the Oscars and Grammys, technical awards are handed out in smaller ceremonies in advance of the primary award ceremony that recognizes the best actors, directors and films, which will be held April 9.
In this virtual world, avatars representing IBM executives and marketing managers would present our latest products to avatars of the IBM Business Partners, Industry Analysts and the Press. A short "highlights" video that stitched together bits and pieces of the 90-minute event was used by executives at conferences and road shows. I submitted this shortened version to the Arizona International Film Festival back in 2008, so I am glad the judges finally got around to reviewing it. Here it is uploaded as a [YouTube video]:
During the event, I captured the real-time video from my laptop screen using a tool called [FRAPS]. I also had some of my colleagues capture video from different angles in case we needed these in post-production. The technique of capturing computer-generated 3D video from a computer screen is known as Machinima.
I was in Bogota, Colombia that week teaching a Top Gun class. I got to the IBM building only to discover the firewall would not let me get through to the Second Life website, so I took a taxi back to the hotel and ran the event from their business center. Then the unthinkable happened, and I got to experience [Colombia's worst power outage in 22 years], in which 98 percent of the country lost power. Luckily, I had enough battery charge on my laptop and was still connected to the Internet to continue with the rest of the event.
Voice-over-IP but it did not have that feature back then. The other $91 was for virtual items in Second Life. I learned how to make virtual objects and use the GNU Image Manipulation Program [GIMP] to create avatar clothing, giveaway items, and demo equipment.
Instead of hiring voice actors, I had IBMers Andy Monshaw, Eric Buckley, Funda Eceral, David Tareen, and Kristie Bell all provide their voice talents directly.
I asked the coordinator of the film festival if there was going to be a "practice session" for the technical award ceremony. She laughed, and said basically that I would just walk across the stage, receive the award in my left hand as I shook hands with my right, and then turn slightly clockwise to pose while my picture was taken. If I had ever gotten a diploma from high school or college, she said, then I already knew what to expect. "Don't worry," she assured me, "you won't have to give a speech!"
(In lieu of a speech, I would like to thank Christine Heinisch, my video editor, and Katrina Smith, my cinematographer. I could not have won this technical award without their assistance.)
After the ceremony tonight, the film festival (celebrating its 20th anniversary this year) will kick off with the first of 110 films, "Journey from Zanskar", a 90-minute documentary directed by Frederick Marx and narrated by Richard Gere, starting at 8:00pm. The Arizona International Film Festival will continue through April 20, with films being shown in the evenings and on weekends so that I won't have to take time off from work.
If you are in the Tucson area, come out and join me tonight at the Fox Theater!
(Update: Yes, this was an April Fools joke! I did not win any awards for this video. I apologize to my friends and family who showed up to see me receive the award that I didn't get.)
Next week, April 6, IBM will host the [Smarter Computing Virtual Event] to cover IBM's Smarter Computing initiative, with key themes of Smarter Computing - Big Data, Optimized Systems, and Cloud. Smarter Computing is a new and innovative approach to computing based on the evolving role of IT in your business and an intrinsic understanding of the economics of IT.
(I found it amusing that EMC has chosen two of IBM's themes, "Big Data" and "Cloud", for their upcoming EMC World 2011 conference. I was tempted to include their graphic, but people might have accused me of using Photoshop or GIMP to make EMC look bad. Instead, you can look at the graphic on this blog post titled [When Cloud Meets Big Data: Information Logistics Revisited] by fellow blogger Chuck Hollis from EMC. IBM has been a leader in IT for decades, so we are used to having other companies follow in our footsteps. As an [IBM wannabee], EMC is no different.)
For many on tight travel budgets, this event REQUIRES NO TRAVEL! This is a virtual event; you can participate from your desk. You will hear from key IBM executives, all of whom I have heard speak myself, so I can vouch that this should be a good event.
Steve Mills - IBM Senior Vice President and Group Executive, Software and Systems (my seventh-line manager)
Tom Rosamilia - IBM General Manager, Power and Mainframe Systems, IBM Systems and Technology Group.
Robert LeBlanc - IBM Senior Vice President, Middleware Software
Helene Armitage - IBM General Manager, Systems Software, Systems and Technology Group.
This event is targeted to CIOs, IT Directors and Managers, Business Analysts, Systems and Storage Administrators, and DBAs. However, we don't check what your actual title is, so feel free to attend even if you have different job responsibilities.
I am giving you one week's notice for this event. If this is the first time you have heard of this event, then I hope that is enough time to plan for this event in your busy schedule. If you had heard of it already, perhaps this serves as a useful reminder to [Register Now!] Is a week ahead the right amount of time? For virtual events, do we need more or less advance notice? What about for events that involve travel? Feel free to enter your thoughts on this in the comments section below.
Last night, I presented an E-Talk to the Engineering Student Council (ESC) of the University of Arizona (UofA).
The ESC is the student governing body of The University of Arizona’s College of Engineering. The organization works with scholastic honorary societies, professional organizations, and project clubs to aid and encourage the professional and social development of students. This year, ESC launched a new program, Engineering Talks (E-Talks), consisting of workshops and lectures, which will focus on teaching students what it takes to work within a company, before they enter the workforce. To make this program successful, career advice from professionals working at established companies is essential.
The audience was a mix of undergraduate and graduate engineering students from a variety of disciplines, such as Petroleum, Hydrology, Mining, Biomedical, Electrical and Computer Engineering. Only a few were graduating this May. There were roughly equal numbers of men and women, which was encouraging. When I was an engineering student at the UofA, women engineers were very rare.
A little about myself, my academic and professional career over the past 30 years, and some background of IBM as a company, how it is organized, and its 100 year Centennial celebration.
An overview of IBM's corporate strategy for Smarter Computing, explaining how IBM is solving the world's toughest challenges for analyzing Big Data, developing Optimized Systems for particular workloads, and new delivery and deployment models for Cloud Computing.
Some career advice, based on my decades of work experience at IBM and elsewhere
After the Q&A, several students stayed around to ask more questions. This seems to happen every time I give a presentation to a mixed audience. I handed out plenty of business cards, and offered to make the charts available to all the students via the IBM Expert Network on the Slideshare.net website.
Before we started, we asked the first survey question: "How is storage planning conducted in your shop?" Of the various responses, nearly four out of ten responded "Part of an overall IT infrastructure strategy".
Jon Toigo went first, and spent 20 minutes or so laying out the problem as he sees it. Jon travels all over visiting customers struggling with their storage infrastructures, so he gets to hear a lot of this first hand.
I then spent 20 minutes or so presenting IBM's vision, strategy and offerings to help solve these problems. I could speak for hours on this topic, but we kept it short for this one-hour webcast. To learn more, request a visit to the Tucson Executive Briefing Center.
At the end of my talk, we put out the second survey, asking the audience "What is your number one priority with respect to storage operations today?" Over one fourth of the attendees were focused on reducing storage infrastructure cost of ownership by any means possible.
I am glad we saved the last 15 minutes for Q&A, as there were a lot of questions.
The replay is now available. If you attended the event and want to hear it again, or want to share it with your colleagues, or you missed it and want to hear it, then [Register for the Replay].
IBM announced that it will offer [three free months of IBM Smart Business Cloud] computing and storage services to government agencies, charitable non-profit organizations, and other organizations involved with reconstruction resulting from the earthquakes and tsunami in Japan and the northern Pacific region.
With traditional communications down, and many data centers incapacitated, Cloud Computing can be a great way to resume operations. According to the announcement, organizations can submit their requests now until April 30, and the program will run until July 31, 2011. Options include:
Virtual machine images running 32-bit and 64-bit versions of Linux or Windows.
60 GB to 2 TB of disk storage per instance.
Options for various IBM middleware (DB2, Informix, Lotus, and WebSphere)
Rational Application Lifecycle Management and Tivoli Monitoring software
The offer also includes [LotusLive] Software-as-a-Service (SaaS) for email and online collaboration. For more about LotusLive, see this [Red Paper].
IBM and the Austin Chamber of Commerce are inviting registered SXSW Interactive attendees to the networking reception hosted by the IBM Innovation Center and the IBM Venture Capital Group. Power Systems and Watson will be featured prominently at this SXSW event, to be held on March 14, 2011.
While I won't be there personally at the SXSW conference, I strongly recommend you to attend this event.
Innovators and Entrepreneurs Networking Reception
Four Seasons Hotel
March 14, 2011
Hosted by IBM Venture Capital Group, Austin Chamber of Commerce, and the IBM Innovation Center.
This reception will provide a rare opportunity to network and collaborate with your professional community of industry leaders, entrepreneurs, developers, academics, venture capitalists, and members of the Austin Chamber of Commerce.
Webcast: How to Diagnose and Cure What Ails Your Storage Infrastructure
Wednesday, March 23, 2011 at 11:00 AM PDT / 11:00 AM Arizona MST / 2:00 PM EDT
Storage is the most poorly utilized infrastructure element -- and the most costly part of hardware budgets -- in most IT shops today. And it's getting worse. Storage management typically involves a nightmarish mash-up of tools for capacity management, performance management and data protection management, unique to each array deployed in heterogeneous fabrics. Server and desktop virtualization seem to have made management issues worse, and coming on the heels of changing workloads and data proliferation is the requirement to add data management to the set of responsibilities shouldered by fewer and fewer storage professionals. Forecast for storage in 2012: more pain, as a long-delayed storage infrastructure refresh becomes mandatory.
In this webcast, fellow blogger Jon Toigo, CEO of Toigo Partners International, of [DrunkenData] fame, and I will take turns assessing the challenges and suggesting real-world solutions to the many issues that confound storage efficiency in contemporary IT. Integrating real-world case studies and technology insights, we will deliver a must-see webcast that sets down a strategy for fixing storage...before it fixes you.
Don't miss this event, unless you like the stress of knowing that your next disaster may be a data disaster.
Wrapping up my week's coverage of the IBM Pulse 2011 conference, I have had several people ask me to explain IBM's latest initiative, Smarter Computing, which IBM launched this week at this conference. Having led the IT industry through the Centralized Computing era and the Distributed Computing era, IBM is now well-positioned to help companies, governments and non-profit organizations to enter the new Smarter Computing era, focused on insight and discovery.
Centralized Computing era (1952 to 1980): thousands of IT professionals; mainframes; efficient, but only the largest companies and governments had them.
Distributed Computing era (1981 to 2010): millions of office workers; personal computers (PC); innovative, extending the reach to small and medium-sized businesses, but resulted in server sprawl and increased TCO.
Smarter Computing era (2011 and beyond): billions of people; smart phones and other handheld devices; efficient and innovative, combining the best of centralized and distributed computing.
To help clients with this transition, IBM's Smarter Computing initiative has three main components. This is a corporate-wide strategy, with systems, software and services all working together to realize results.
The first component is Big Data. This combines three different sources of data:
Traditional structured data in OLTP databases and OLAP data warehouses, using data management solutions like DB2 and IBM Netezza.
Unstructured data, including text documents, images, audio, and video, processed with massive parallelism using IBM BigInsights and Apache Hadoop.
Real-Time Analytics Processing (RTAP) of incoming data, including video surveillance, social media, RFID chips, smart meters, and traffic control systems, processed with IBM InfoSphere Streams
Of course, Big Data will bring new opportunities on the storage front, which I will save for a future post!
Rather than general purpose IT equipment, we now have the scale and scope to specialize with systems optimized for particular workloads, the second component of the Smarter Computing initiative. Of course, IBM has been delivering integrated stacks of systems, software and services for decades now, but it is important to remind people of this, as IBM now has a spate of competitors all trying to follow IBM's lead in this arena.
As with Big Data, the focus on Optimized Systems has impacted IBM's strategy on storage as well. I'll save that discussion for a future post as well!
I am glad that nearly all of the storage vendors have standardized on a common definition for Cloud, the third component of Smarter Computing, which shows that this concept has matured:
Cloud computing is a pay-per-use model for enabling network access to a pool of computing resources that can be provisioned and released rapidly with minimal management effort or service provider interaction. -- U.S. National Institute of Standards and Technology [nist.gov]
Of course, Cloud is just an evolution of IBM's Service Bureau business of the 1960s and 1970s, renting out time-sharing on mainframe systems, Grid Computing of the 1980s, and Application Service Providers that popped up in the 1990s. While the [butchers, bakers and candlestick makers] that IBM competes against might focus their efforts on just private cloud or just public cloud, IBM recognizes the reality is that different clients will need different solutions. Rather than rip-and-replace, IBM will help clients transition to cloud via inclusive solutions that adopt a hybrid approach:
Traditional enterprise with private cloud deployments, using solutions like IBM CloudBurst, SONAS and Information Archive
Traditional enterprise with public cloud services to handle seasonal peaks, provide offsite resiliency, and support a mobile workforce
Hybrid clouds that blend private and public cloud services to handle seasonal peak workloads and remote and branch offices
IBM's emphasis on IT Infrastructure Library (ITIL), Tivoli and Maximo products will play well in this space to provide integrated service management across traditional and cloud deployments. This is why IBM decided to launch the Smarter Computing initiative at the Pulse 2011 conference, the industry's premier conference on integrated service management.
The IBM Watson that competed on Jeopardy! is an excellent example of all three components of Smarter Computing at work.
IBM Watson was able to respond to Jeopardy! clues within three seconds, processing a combination of database searches with DB2 and text-mining analytics of unstructured data with IBM BigInsights.
IBM Watson combined servers, software and storage into an integrated supercomputer that was optimized for one particular workload: playing Jeopardy!
IBM Watson used many technologies prevalent in private and public cloud computing systems, storing its data on a modified version of SONAS for storage, using xCat administration tools, networking across 10GbE Ethernet, and massive parallel processing through lots of PowerVM guest images.
This week was the IBM Pulse 2011 conference in Las Vegas, Nevada, with over 7,000 attendees. I wasn't there, and my on-the-scene correspondent was too busy running the hands-on lab to get out and attend sessions. Fortunately, I was able to watch some of the [IBM Software live stream], and here are my thoughts and observations.
Fellow inventor [Dean Kamen] was the keynote speaker. His inventions help people, making the world a better place. Here are three examples I found interesting during his talk:
Helping third world countries
Dean started out with his favorite quote:
"A problem well defined is a problem half-solved." - John Dewey
Dean mentioned that we are fortunate to have both potable drinking water and a reliable supply of electricity, but 2 to 4 billion people on the planet do not. Sponsored by Coca-Cola, Dean and his team of innovators came up with small units that can be placed in a village or town. One unit takes in wet liquid and produces potable drinking water. The other unit takes combustible materials, like cow dung, and produces electricity. Each unit is roughly the size of half a standard server rack. What does Coca-Cola get out of this? New "vending machines"! By combining drinking water with flavored syrups, they can create soft drinks on demand.
Dean's opinion was that if you want something done, you need to work with large corporations, as governments are mired in bureaucracy and rules. I agree. When I first joined IBM, I was introduced to [TRIZ], a systematic method for solving problems. IBM's best and brightest are working to solve some of the toughest computer science challenges. For more on TRIZ, see this blog post about [TRIZ in BusinessWeek].
Helping injured veterans
Dean Kamen is well known for inventing the two-wheeled [Segway Personal Transporter], but his company, [DEKA], makes all kinds of things, mostly medical equipment. To help wounded soldiers returning from Iraq or Afghanistan without one or both arms, Dean and his team developed a robotic arm that has enough motor dexterity to pick up a raisin or grape off the table without dropping or squashing it. Dean has appeared several times on the Colbert Report, and here is a video of the robotic arm:
I have myself enjoyed riding a Segway. A local place in Tucson uses them to lead tourists through downtown Tucson and the University of Arizona campus.
Helping young students to learn science and technology
Dean wrapped up his talk by discussing his passion, "For Inspiration and Recognition of Science and Technology" or [FIRST]. Modeled after sports competitions, FIRST encourages teams of kids to build robots that perform specific tasks. Every year, companies and universities sponsor teams by purchasing robot kits from FIRST. Teams compete in regional competitions, and the best of those go on to compete in a stadium in Atlanta, Georgia, hosting 76,000 people cheering for their teams.
Unlike other school sports (football, basketball, baseball, etc.), where a student is more likely to win the lottery than to have a successful career as a professional athlete, every student involved in FIRST competitions can "go pro". A study tracked students who participated in FIRST competitions and found a substantial improvement in the percentage of those students attending college and working as science and engineering professionals.
I am a big fan of encouraging kids of all ages to learn more about science, technology, engineering and math [STEM]. Back in 2009, I blogged about my involvement with [One Laptop Per Child] and [Junior FIRST Lego League]. I've gotten a great reaction to my latest challenge, to build a Watson Jr. in your own basement, based on my [step-by-step] instructions.
If you attended IBM Pulse this week, please comment on your thoughts and observations!
It's Tuesday again, and that means one thing.... IBM Announcements! On the heels of [last week's announcements], IBM announced some additional products of interest to storage administrators.
IBM Information Archive
Back in 2008, IBM [unveiled the Information Archive]. This storage solution provides automated policy-based tiering between disk and tape, with non-erasable non-rewriteable enforcement to protect against unethical tampering of data. The initial release supported [both files and object storage], with support for different collections, each with its own set of policies for management. However, it only supported NFS initially for the file protocol. Today, IBM announces the addition of CIFS protocol support, which will be especially helpful in healthcare and life sciences, as much of the medical equipment is designed for CIFS protocol storage.
Also, Information Archive will now provide a full index and search feature capability to help with e-Discovery. Searches and retrievals can be done in the background without disrupting applications or the archiving operations.
IBM Tivoli Storage Manager for Virtual Environments V6.2 extends capabilities that currently exist in IBM Tivoli Storage Manager. TSM backup/archive clients run fine on guest operating systems, but now this new extension improves backup for VMware environments. TSM provides incremental block-level backups utilizing VMware's vStorage APIs for Data Protection and Changed Block Tracking features.
To minimize impact to the VMware host, TSM for VE makes use of non-disruptive snapshots and offloads the backup processing to a vStorage backup server. This supports file-level recovery, volume-level recovery, and full VM recovery. Of course, since it is based on TSM v6, you get advanced storage efficiency features such as compression and deduplication to minimize consumption of disk storage pools.
IBM Tivoli Monitoring has been extended to support virtual servers, including VMware, Linux KVM, and Citrix XenServer. This can help with capacity planning, performance monitoring, and availability. Tivoli Monitoring will help you understand the relationships between physical and virtual resources, helping to isolate problems to the correct resource and reducing the time it takes to debug issues between servers and storage.
Next week is [IBM Pulse2011 Conference] in Las Vegas, February 27 to March 2. Sorry, I don't plan to be there this year. It is looking to be a great conference, with fellow inventor Dean Kamen as the keynote speaker. For a blast from the past, read my blog posts from Pulse2008 [Main Tent sessions] and [Breakout sessions].
The IBM Challenge was a big success. One of the contestants, Ken Jennings, [welcomes our new computer overlords]. Congratulations are in order to the IBM Research team who pulled off this Herculean effort!
Some folks have poked fun at some of the odd responses and wager amounts from the IBM Watson computer during the three-day tournament. Others were as surprised as I was that the impressive feat was done with less than 1TB of stored data. Here is what John Webster wrote in CNET yesterday, in his article [What IBM's Watson says to storage systems developers]:
"All well and good. But here's what I find most interesting as a result of what IBM has done in response to the Grand Challenge that motivated Watson's creators. We know, from Tony Pearson's blog, that the foundation of Watson's data storage system is a modified IBM SONAS cluster with a total of 21.6TB of raw capacity. But Pearson also reveals another very significant, and to me, surprising data point: "When Watson is booted up, the 15TB of total RAM are loaded up, and thereafter the DeepQA processing is all done from memory. According to IBM Research, the actual size of the data (analyzed and indexed text, knowledge bases, etc.) used for candidate answer generation and evidence evaluation is under 1 Terabyte."
What Pearson just said is that the data set Watson actually uses to reach his push-the-button decision would fit on a 1TB drive. So much for big data?"
To better appreciate how difficult the challenge was, and how a small amount of data can answer a billion different questions, I thought I would cover Business Intelligence, Data Retrieval and Text Mining concepts.
"In this paper, business is a collection of activities carried on for whatever purpose, be it science, technology, commerce, industry, law, government, defense, et cetera. The communication facility serving the conduct of a business (in the broad sense) may be referred to as an intelligence system. The notion of intelligence is also defined here, in a more general sense, as the ability to apprehend the interrelationships of presented facts in such a way as to guide action towards a desired goal."
Ideally, when you need "Business Intelligence" to help you make a better decision, you perform data retrieval from a structured database for the specific information you are looking for. In other cases, you might be looking for insight, patterns or trends. In that case, you go "data mining" against your structured databases.
Here's a simple example. John runs a fruit stand. One day, he kept track of how many apples and oranges were bought by men and women. How many questions can we ask against this small set of data? Let's count them:
How many apples were sold to men?
How many apples were sold to women?
How many oranges were sold to men?
How many oranges were sold to women?
But wait! For each row and column, we can combine them into totals.
How many apples were sold in total?
How many oranges were sold in total?
How many fruit in total were sold to men?
How many fruit in total were sold to women?
How many fruit in total were sold?
But wait, there's more! Each row and column can be evaluated for relative percentages, as well as percentages of each cell compared to the total. You could make five relevant pie-charts from this data. This results in 16 more questions, such as:
Of the fruit purchased by men, what percentage for apples?
Of all the apples purchased, what percentage by women?
And that's not including more ethereal questions, such as:
Are there gender-specific preferences for different types of fruit?
What type of fruit do men prefer?
This is just for a small set, two market segments (by gender) and two products (apples and oranges). However, if you have many market segments (perhaps by age group, zip code, etc.) and many products, the number of queries that can be supported is huge. For small sets of data, you can easily do this with a spreadsheet program like IBM Lotus Symphony or Microsoft Excel.
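The tallying described above is easy to sketch in a few lines of code. The counts below are hypothetical, since the post does not give John's actual sales figures:

```python
# Hypothetical counts for John's fruit stand (the post gives no numbers).
sales = {
    "men":   {"apples": 12, "oranges": 8},
    "women": {"apples": 9,  "oranges": 15},
}

# Row totals: total fruit bought by each group.
fruit_by_group = {group: sum(counts.values()) for group, counts in sales.items()}

# Column totals: each fruit summed across all groups.
fruit_totals = {}
for counts in sales.values():
    for fruit, n in counts.items():
        fruit_totals[fruit] = fruit_totals.get(fruit, 0) + n

grand_total = sum(fruit_totals.values())

# Two of the percentage questions from the post:
# "Of the fruit purchased by men, what percentage for apples?"
pct_apples_for_men = 100 * sales["men"]["apples"] / fruit_by_group["men"]
# "Of all the apples purchased, what percentage by women?"
pct_apples_by_women = 100 * sales["women"]["apples"] / fruit_totals["apples"]
```

Every cell, row total, column total, and ratio answers a different question, which is how four raw numbers support dozens of queries.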
But why limit yourself to two dimensions? The above example was just for one day's worth of activity. If John captures this data every day for historical and seasonal trending, it can be represented as a three-dimensional cube. The number of queries becomes astronomical. This is the basis for Online Analytical Processing (OLAP), and three-dimensional tables are often referred to as [OLAP cubes].
Back in 1970, IBM invented the Structured Query Language [SQL], and today nearly all modern relational databases support it, including IBM DB2, Informix, Microsoft SQL Server, and Oracle DB. SQL poses two challenges. First, you have to structure the data in advance to match the way you expect to perform your ad-hoc queries. Deciding the groups and categories in advance can limit the way information is recorded and captured.
Second, you have to be skilled at SQL to phrase your queries correctly to retrieve the data you are after. What ended up happening was that skilled SQL programmers would develop "canned reports" with fixed SQL parameters, so that less-skilled business decision makers could base their decisions on these reports.
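The "canned report" pattern is easy to illustrate with Python's built-in sqlite3 module: a skilled SQL author writes the parameterized query once, and a decision maker only supplies the parameter. The table, column names, and data here are illustrative, not from any real system:

```python
import sqlite3

# In-memory database with a tiny, made-up sales table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (fruit TEXT, buyer TEXT, qty INTEGER)")
conn.executemany("INSERT INTO sales VALUES (?, ?, ?)", [
    ("apple", "men", 12), ("apple", "women", 9),
    ("orange", "men", 8), ("orange", "women", 15),
])

# The "canned report": SQL is fixed, only the parameter varies.
CANNED_REPORT = (
    "SELECT buyer, SUM(qty) FROM sales "
    "WHERE fruit = ? GROUP BY buyer ORDER BY buyer"
)

def run_report(fruit: str):
    """Run the canned report for one fruit; no SQL skill required."""
    return conn.execute(CANNED_REPORT, (fruit,)).fetchall()

print(run_report("apple"))   # [('men', 12), ('women', 9)]
```

The placeholder (`?`) is what separates the SQL author's job from the report consumer's job: the query shape is frozen, so the consumer cannot phrase it incorrectly.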
IBM has fully integrated stacks to help process structured data, combining servers, storage, and advanced analytics software into a complete appliance. IBM offers the [Smart Analytics System] for robust, customized deployments, and recently acquired [Netezza] for pre-configured, and more rapid deployments.
However, the bigger problem is that more than 80 percent of information is not structured!
Semi-structured data like email provides some searchable fields, such as From and Subject. The rest of the information is unstructured, such as text files, photographs, video and audio. Looking for specific information in unstructured sources can be like looking for a needle in a haystack, and extracting insight, patterns or trends from it involves text mining.
This, in effect, is what IBM Watson performed so well this week. Finding the needle in the haystacks of unstructured data from 200 million pages of text stored in its system, combined with the ability to grasp interrelationships of meaning and subtle nuance, made for an impressive technology demonstration. Certainly, this new technology will be powerful for a variety of use cases across a broad set of industries!
The Tucson Executive Briefing Center hosted 20 dignitaries from local companies and academia.
This is a historic competition, an exhibition match pitting a computer against the top two celebrated Jeopardy champions:
Brad Rutter, who won $3.2 million USD on Jeopardy!, winning five days on the show and then three later tournaments.
Ken Jennings, who won $2.5 million in a 74-day winning streak on Jeopardy!
One of the members of the audience had never seen an episode of Jeopardy! in his life.
(Note: there are NO SPOILERS in this blog post. If you have not yet watched the show, you are safe to continue reading the rest of this post. I will not disclose the correct responses to any of the clues nor how well each contestant scored.)
Calline Sanchez, IBM Director, Systems Storage Development for Data Protection and Retention, kicked off today's ceremonies.
The IBM Watson computer, named after IBM founder Thomas J. Watson, has been developed over the past 4 years by a team of IBM scientists who set out to accomplish a grand challenge - build a computing system that rivals a human's ability to answer questions posed in natural language with speed, accuracy and confidence. IBM Research labs in the United States, Japan, China and Israel [collaborated with Artificial Intelligence (AI) experts at eight universities], including Massachusetts Institute of Technology (MIT), University of Texas (UT) at Austin, University of Southern California (USC), Rensselaer Polytechnic Institute (RPI), University at Albany (UAlbany), University of Trento (Italy), University of Massachusetts Amherst, and Carnegie Mellon University.
(Disclaimer: I attended the University of Texas at Austin. My father attended Carnegie Mellon University.)
Last week, NOVA on PBS had a special episode on the making of IBM Watson, you can [watch it online] on their website. Delaney Turner, IBM Social Media Communications Manager for Business Analytics Software, has posted [his observations of Nova].
Since IBM Watson is the size of 10 refrigerators and weighs over 14,000 pounds, it was easier to design the Jeopardy! set at the TJ Watson Research lab in Yorktown Heights, NY, than to ship it over to California where the show is normally recorded. Two of the visual designers that worked on this set, as well as on the visual appearance of Watson, live in Tucson and were part of our audience today.
The IBM Challenge consists of a two-game tournament, where the scores of both games will be added to determine winner rankings. The producers of Jeopardy! will give $1 million USD to first place, $300,000 to second place, and $200,000 to third place. Regardless of outcome, [IBM will donate all of its winnings to charity]. The two human contestants plan to donate half of their earnings to their favorite charities as well.
Jeopardy! The IBM Challenge
Alex Trebek introduces IBM Watson, explaining that it can neither hear nor see. It will receive all information electronically. Categories and clues will be sent as text files via TCP/IP over Ethernet at the same time the two human contestants see them so that all have the same time to think about the right answer.
Watson has two rows of five racks, back to back. This was done so that cold air could rise up from holes in the tile floors around the unit, and all the hot air would be forced into the center and up to the ceiling return. This technique is known as "hot aisle/cold aisle" design. Alex Trebek opens one of the rack doors to show a series of 4U-high IBM Power 750 servers.
The avatar is a representation of Watson, as the machine itself is too big to fit behind the podium. The avatar is IBM's "Smarter Planet" logo with orbiting streaks and circles. It glows green when Watson has high confidence, and orange when it gets an answer wrong. When busy thinking, the streaks and circles speed up, the closest we will see to "watching a computer sweat."
During the show, an "Answer panel" shows Watson's top three candidate responses, with confidence level compared to its current "buzz threshold".
Watson knows what it knows, and knows what it doesn't know. Here is an [Interactive Watson Game] on New York Times website to give you an idea of how the answer panel works. I was impressed with how close all three candidate answers were. In a question about Olympic swimmers, all three candidates are Olympic swimmers. In a question about the novel "Les Miserables", all three candidates were characters of that novel.
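The answer-panel mechanic described above boils down to a simple decision rule: buzz only when the top candidate's confidence clears the buzz threshold. Here is a hedged illustration of that rule; the candidate answers, confidence scores, and threshold value are all invented, and Watson's real scoring pipeline is of course far more sophisticated:

```python
# Toy model of a confidence-vs-threshold buzz decision.
# Candidates, scores, and the threshold are invented for illustration.
def should_buzz(candidates, buzz_threshold=0.50):
    """candidates: list of (answer, confidence) pairs, unsorted.
    Returns the best candidate if it clears the threshold, else None."""
    best = max(candidates, key=lambda c: c[1])
    return best if best[1] >= buzz_threshold else None

# Mirrors the Olympic-swimmers example: all three candidates are plausible,
# but only one is confident enough to trigger a buzz.
panel = [("Michael Phelps", 0.78), ("Mark Spitz", 0.12), ("Ian Thorpe", 0.05)]
print(should_buzz(panel))        # buzzes with the top candidate
print(should_buzz(panel, 0.90))  # stays silent: confidence below threshold
```

"Knowing what it doesn't know" is exactly the None branch: when no candidate clears the threshold, the system declines to answer rather than guess.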
Well, IBM Watson did well overall, but answered some questions incorrectly. This [parody Slate video] pokes fun at that. Here were some discussions we had after the show ended:
Watson did not do well in categories that required [abductive reasoning]. For example, to identify two or three things that happened in different years, and then postulate that what they all have in common is a specific decade (such as the 1950s), is difficult.
Watson does not hear the wrong answers from the two human contestants. For one clue, Ken buzzes in first and guesses wrong, then Watson buzzes in with the exact same response. Alex Trebek rebukes Watson with "No, Ken just said that!" Brad could learn from their mistakes and guess correctly for the score.
Watson is provided the correct answer after a contestant guesses it correctly, or if nobody does, when Alex provides the correct response. This is sent as a text message to Watson immediately, so that it can use this information to adjust its algorithms and machine-learning for future clues in that same category. This was evident in the "Answer panel" on the fourth and fifth attempts on the category of "Decades".
With this demonstration, IBM Research has advanced science by leaps and bounds for the Artificial Intelligence community. IBM is a leader in Business Analytics, and this technology will find uses in a variety of industries. The average knowledge worker spends 30 percent of her time looking for information in corporate data repositories. By demonstrating a computer that can provide answers quickly, employees will be more productive, make stronger business decisions, and have greater insight.
Day 1 was only able to cover the first round of Game 1. This allowed more time to talk about the history and technology of IBM Watson. Tomorrow, the contestants will finish Game 1 and head into Game 2.