This blog is for the open exchange of ideas relating to IBM Systems, storage and storage networking hardware, software and services.
(Short URL for this blog: ibm.co/Pearson )
Tony Pearson is a Master Inventor, Senior IT Architect and Event Content Manager for [IBM Systems Technical University] events. With over 30 years at IBM, Tony is a frequent traveler, speaking to clients at events throughout the world.
Lloyd Dean is an IBM Senior Certified Executive IT Architect in Infrastructure Architecture. Lloyd has held numerous senior technical roles during his 19-plus years at IBM. Most recently, he has led efforts across the Communication/CSI market as a senior Storage Solution Architect/CTS covering the Kansas City territory. In prior years, Lloyd supported industry accounts as a Storage Solution Architect, and before that as a Storage Software Solutions specialist in the ATS organization.
Lloyd currently supports North America storage sales teams in his Storage Software Solution Architecture SME role on the Washington Systems Center team. His current focus is IBM Cloud Private; he will deliver and support sessions at Think 2019 and Storage Technical University on the value of IBM storage in this solution, a key part of the IBM Cloud strategy. Lloyd maintains Subject Matter Expert status across the IBM Spectrum Storage software solutions. You can follow Lloyd on Twitter @ldean0558 and on LinkedIn as Lloyd Dean.
Tony Pearson's books are available on Lulu.com! Order your copies today!
Safe Harbor Statement: The information on IBM products is intended to outline IBM's general product direction and it should not be relied on in making a purchasing decision. The information on the new products is for informational purposes only and may not be incorporated into any contract. The information on IBM products is not a commitment, promise, or legal obligation to deliver any material, code, or functionality. The development, release, and timing of any features or functionality described for IBM products remains at IBM's sole discretion.
Tony Pearson is an active participant in local, regional, and industry-specific interests, and does not receive any special payments to mention them on this blog.
Tony Pearson receives part of the revenue proceeds from sales of books he has authored listed in the side panel.
Tony Pearson is not a medical doctor, and this blog does not reference any IBM product or service that is intended for use in the diagnosis, treatment, cure, prevention or monitoring of a disease or medical condition, unless otherwise specified on individual posts.
Wrapping up my week's coverage of the IBM Pulse 2011 conference, I have had several people ask me to explain IBM's latest initiative, Smarter Computing, which IBM launched this week at this conference. Having led the IT industry through the Centralized Computing era and the Distributed Computing era, IBM is now well-positioned to help companies, governments and non-profit organizations to enter the new Smarter Computing era, focused on insight and discovery.
Centralized Computing era (1952 to 1980): Thousands of IT professionals; mainframes; efficient, but only the largest companies and governments had them.
Distributed Computing era (1981 to 2010): Millions of office workers; personal computers (PC); innovative, extending the reach to small and medium-sized businesses, but resulting in server sprawl and increased TCO.
Smarter Computing era (2011 and beyond): Billions of people; smart phones and other handheld devices; efficient and innovative, combining the best of centralized and distributed computing.
To help clients with this transition, IBM's Smarter Computing initiative has three main components. This is a corporate-wide strategy, with systems, software and services all working together to realize results.
The first component is Big Data. This combines three different sources of data:
Traditional structured data in OLTP databases and OLAP data warehouses, using data management solutions like DB2 and IBM Netezza.
Unstructured data, including text documents, images, audio, and video, processed with massive parallelism using IBM BigInsights and Apache Hadoop.
Real-Time Analytics Processing (RTAP) of incoming data, including video surveillance, social media, RFID chips, smart meters, and traffic control systems, processed with IBM InfoSphere Streams.
Of course, Big Data will bring new opportunities on the storage front, which I will save for a future post!
Rather than general purpose IT equipment, we now have the scale and scope to specialize with systems optimized for particular workloads, the second component of the Smarter Computing initiative. Of course, IBM has been delivering integrated stacks of systems, software and services for decades now, but it is important to remind people of this, as IBM now has a spate of competitors all trying to follow IBM's lead in this arena.
As with Big Data, the focus on Optimized Systems has impacted IBM's strategy on storage as well. I'll save that discussion for a future post as well!
I am glad that nearly all of the storage vendors have standardized to a common definition for Cloud, the third component of Smarter Computing, which shows that this concept has matured:
Cloud computing is a pay-per-use model for enabling network access to a pool of computing resources that can be provisioned and released rapidly with minimal management effort or service provider interaction. -- U.S. National Institute of Standards and Technology [nist.gov]
Of course, Cloud is just an evolution of IBM's Service Bureau business of the 1960s and 1970s, renting out time-sharing on mainframe systems, Grid Computing of the 1980s, and Application Service Providers that popped up in the 1990s. While the [butchers, bakers and candlestick makers] that IBM competes against might focus their efforts on just private cloud or just public cloud, IBM recognizes the reality is that different clients will need different solutions. Rather than rip-and-replace, IBM will help clients transition to cloud via inclusive solutions that adopt a hybrid approach:
Traditional enterprise with private cloud deployments, using solutions like IBM CloudBurst, SONAS and Information Archive
Traditional enterprise with public cloud services to handle seasonal peaks, provide offsite resiliency, and deliver solutions for a mobile workforce
Hybrid clouds that blend private and public cloud services, to handle seasonal peak workloads, remote and branch offices
IBM's emphasis on the IT Infrastructure Library (ITIL), Tivoli and Maximo products will play well in this space to provide integrated service management across traditional and cloud deployments. This is why IBM decided to launch the Smarter Computing initiative at the Pulse 2011 conference, the industry's premier conference on integrated service management.
The IBM Watson that competed on Jeopardy! is an excellent example of all three components of Smarter Computing at work.
IBM Watson was able to respond to Jeopardy! clues within three seconds, processing a combination of database searches with DB2 and text-mining analytics of unstructured data with IBM BigInsights.
IBM Watson combined servers, software and storage into an integrated supercomputer that was optimized for one particular workload: playing Jeopardy!
IBM Watson used many technologies prevalent in private and public cloud computing systems, storing its data on a modified version of SONAS, using xCAT administration tools, networking across 10 Gigabit Ethernet (10GbE), and massive parallel processing through lots of PowerVM guest images.
This week was the IBM Pulse 2011 conference in Las Vegas, Nevada, with over 7,000 attendees. I wasn't there, and my on-the-scene correspondent was too busy running the hands-on lab to get out and attend sessions. Fortunately, I was able to watch some of the [IBM Software live stream], and here are my thoughts and observations.
Fellow inventor [Dean Kamen] was the keynote speaker. His inventions help people, making the world a better place. Here are three examples I found interesting during his talk:
Helping third world countries
Dean started out with his favorite quote:
"A problem well defined is a problem half-solved." - John Dewey
Dean mentioned that we are fortunate, having both potable drinking water and a reliable supply of electricity, but 2 to 4 billion people on the planet do not. Sponsored by Coca-Cola, Dean and his team of innovators were able to come up with small units that can be placed in a village or town. One unit takes in wet liquid and produces potable drinking water. The other takes combustible materials, like cow dung, and produces electricity. Each unit is roughly the size of half a standard server rack. What does Coca-Cola get out of this? New "vending machines"! By combining drinking water with flavored syrups, they can create soft drinks on demand.
Dean's opinion was that if you want something done, you need to work with large corporations, as governments are mired in bureaucracy and rules. I agree. When I first joined IBM, I was introduced to [TRIZ], a systematic method for solving problems. IBM's best and brightest are working to solve some of the toughest computer science challenges. For more on TRIZ, see this blog post about [TRIZ in BusinessWeek].
Helping injured veterans
Dean Kamen is well known for inventing the two-wheeled [Segway Personal Transporter], but his company, [DEKA], makes all kinds of things, mostly medical equipment. To help wounded soldiers returning from Iraq or Afghanistan without one or both arms, Dean and his team developed a robotic arm that has enough motor dexterity to pick up a raisin or grape off the table without dropping or squashing it. Dean has appeared several times on the Colbert Report, and here is a video of the robotic arm:
I have myself enjoyed riding a Segway. A local place in Tucson uses them to lead tourists through downtown Tucson and the University of Arizona campus.
Helping young students to learn science and technology
Dean wrapped up his talk by discussing his passion for "For Inspiration and Recognition of Science and Technology", or [FIRST]. Modeled after sports competitions, FIRST encourages teams of kids to build robots that perform specific tasks. Every year, companies and universities sponsor teams by purchasing robot kits from FIRST. Teams compete in regional competitions, and then the best of those go on to compete in a stadium in Atlanta, Georgia, with 76,000 people cheering for their teams.
Unlike other school sports (football, basketball, baseball, etc.), where a student is more likely to win the lottery than to have a successful career as a professional athlete, every student involved in FIRST competitions can "go pro". A study of FIRST outcomes tracked students who participated in competitions and found a substantial improvement in the percentage of those students attending college and working as science and engineering professionals.
I am a big fan of encouraging kids of all ages to learn more about science, technology, engineering and math [STEM]. Back in 2009, I blogged about my involvement with [One Laptop Per Child] and [Junior FIRST Lego League]. I've gotten a great reaction to my latest challenge, to build a Watson Jr. in your own basement, based on my [step-by-step] instructions.
If you attended IBM Pulse this week, please comment on your thoughts and observations!
Guest Post: The following post was written by Tom Rauchut, IBM Infrastructure Architect and Advanced Technical Sales Specialist for Tivoli Automation. Tom is at IBM Pulse 2011 in Las Vegas this week, and has offered to send his observations.
The expo opened last night. There are so many fantastic demos and product experts. Las Vegas has a Tivoli buzz on right now.
My series last week on IBM Watson (which you can read [here], [here], [here], and [here]) brought attention to IBM's Scale-Out Network Attached Storage [SONAS]. IBM Watson used a customized version of SONAS technology for its internal storage, and like most of the components of IBM Watson, IBM SONAS is commercially available as a stand-alone product.
Like many IBM products, SONAS has gone through various name changes. First introduced by Linda Sanford at an IBM SHARE conference in 2000 under the IBM Research codename Storage Tank, it was then delivered as a software-only offering SAN File System, then as a services offering Scale-out File Services (SoFS), and now as an integrated system appliance, SONAS, in IBM's Cloud Services and Systems portfolio.
If you are not familiar with SONAS, here are a few of my previous posts that go into more detail:
This week, IBM announces that SONAS has set a world record benchmark for performance, [a whopping 403,326 IOPS for a single file system]. The results are based on comparisons of publicly available information from Standard Performance Evaluation Corporation [SPEC], a prominent performance standardization organization with more than 60 member companies. SPEC publishes hundreds of different performance results each quarter covering a wide range of system performance disciplines (CPU, memory, power, and many more). SPECsfs2008_nfs.v3 is the industry-standard benchmark for NAS systems using the NFS protocol.
(Disclaimer: Your mileage may vary. As with any performance benchmark, the SPECsfs benchmark does not replicate any single workload or particular application. Rather, it encapsulates scores of typical activities on a NAS storage system. SPECsfs is based on a compilation of workload data submitted to the SPEC organization, aggregated from tens of thousands of fileservers, using a wide variety of environments and applications. As a result, it is comprised of typical workloads and with typical proportions of data and metadata use as seen in real production environments.)
The configuration tested involves SONAS Release 1.2 on 10 Interface Nodes and 8 Storage Pods, resulting in a single file system with over 900TB of usable capacity.
10 Interface Nodes; each with:
Maximum 144 GB of memory
One active 10GbE port
8 Storage Pods; each with:
2 Storage nodes and 240 drives
Drive type: 15K RPM SAS hard drives
Data Protection using RAID-5 (8+P) ranks
Six spare drives per Storage Pod
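As a sanity check, the "over 900TB usable" figure can be reproduced from the drive counts above. The drive capacity is not stated in the announcement, so the 600GB size in this sketch is my assumption:

```shell
# Reproduce the usable-capacity claim from the tested configuration.
# Assumption: 600GB 15K RPM SAS drives (the post does not state drive size).
pods=8
drives_per_pod=240
spares_per_pod=6
total_drives=$((pods * drives_per_pod))                  # 1920 drives
usable_drives=$((total_drives - pods * spares_per_pod))  # 1872 after spares
ranks=$((usable_drives / 9))                             # RAID-5 (8+P): 9 drives per rank
data_drives=$((ranks * 8))                               # 8 data drives per rank
usable_tb=$((data_drives * 600 / 1000))
echo "${usable_tb} TB usable"
```

With 600GB drives this works out to roughly 998TB, comfortably over the 900TB stated.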
IBM wanted a realistic "no compromises" configuration to be tested, by choosing:
Regular 15K RPM SAS drives, rather than a silly configuration full of super-expensive Solid State Drives (SSD) to plump up the results.
Moderate size, typical of what clients are asking for today. The Goldilocks rule applies. This SONAS is not a small configuration under 100TB, and nowhere close to the maximum supported configuration of 7,200 disks across 30 Interface Nodes and 30 Storage Pods.
Single file system, often referred to as a global name space, rather than an aggregate of smaller file systems added together that would be more complicated to manage. Having multiple file systems often requires changes to applications to take advantage of the aggregate performance. It is also more difficult to load-balance your performance and capacity across multiple file systems. Of course, SONAS can support up to 256 separate file systems if you have a business need for this complexity.
The results are stunning. IBM SONAS handled three times more workload for a single file system than the next leading contender. All of the major players are there as well, including NetApp, EMC and HP.
It's Tuesday again, and that means one thing.... IBM Announcements! On the heels of [last week's announcements], IBM announced some additional products of interest to storage administrators.
IBM Information Archive
Back in 2008, IBM [unveiled the Information Archive]. This storage solution provides automated policy-based tiering between disk and tape, with non-erasable non-rewriteable enforcement to protect against unethical tampering of data. The initial release supported [both files and object storage], with support for different collections, each with its own set of policies for management. However, it only supported NFS initially for the file protocol. Today, IBM announces the addition of CIFS protocol support, which will be especially helpful in healthcare and life sciences, as much of the medical equipment is designed for CIFS protocol storage.
Also, Information Archive will now provide a full index and search feature capability to help with e-Discovery. Searches and retrievals can be done in the background without disrupting applications or the archiving operations.
IBM Tivoli Storage Manager for Virtual Environments V6.2 extends capabilities that currently exist in IBM Tivoli Storage Manager. TSM backup/archive clients run fine on guest operating systems, but now this new extension improves backup for VMware environments. TSM provides incremental block-level backups utilizing VMware's vStorage APIs for Data Protection and Changed Block Tracking features.
To minimize impact to the VMware host, TSM for VE make use of non-disruptive snapshots and offload the backup processing to a vStorage backup server. This supports file-level recovery, volume-level recovery, and full VM recovery. Of course, since it is based on TSM v6, you get advanced storage efficiency features such as compression and deduplication to minimize consumption of disk storage pools.
IBM Tivoli Monitoring has been extended to support virtual servers, including VMware, Linux KVM, and Citrix XenServer. This can help with capacity planning, performance monitoring, and availability. Tivoli Monitoring will help you understand the relationships between physical and virtual resources, helping isolate problems to the correct resource and reducing the time it takes to debug issues between servers and storage.
Next week is [IBM Pulse2011 Conference] in Las Vegas, February 27 to March 2. Sorry, I don't plan to be there this year. It is looking to be a great conference, with fellow inventor Dean Kamen as the keynote speaker. For a blast from the past, read my blog posts from Pulse2008 [Main Tent sessions] and [Breakout sessions].
For the longest time, people thought that humans could not run a mile in less than four minutes. Then, in 1954, [Sir Roger Bannister] beat that perception, and shortly thereafter, once he showed it was possible, many other runners were able to achieve this also. The same is being said now about the IBM Watson computer which appeared this week against two human contestants on Jeopardy!
(2014 Update: A lot has happened since I originally wrote this blog post! I intended this as a fun project for college students to work on during their summer break. However, IBM is concerned that some businesses might be led to believe they could simply stand up their own systems based entirely on open source and internally developed code for business use. IBM recommends instead the [IBM InfoSphere BigInsights] which packages much of the software described below. IBM has also launched a new "Watson Group" that has [Watson-as-a-Service] capabilities in the Cloud. To raise awareness to these developments, IBM has asked me to rename this post from IBM Watson - How to build your own "Watson Jr." in your basement to the new title IBM Watson -- How to replicate Watson hardware and systems design for your own use in your basement. I also took this opportunity to improve the formatting layout.)
Often, when a company demonstrates new technology, it is a prototype that is not ready for commercial deployment until several years later. IBM Watson, however, was made mostly from commercially available hardware, software and information resources. As several have noted, the 1TB of data used to search for answers could fit on a single USB drive that you buy at your local computer store.
Take a look at the [IBM Research Team] to determine how the project was organized. Let's decide what we need, and what we don't in our version for personal use:
Do we need it for personal use?
Yes. That's you. Assuming this is a one-person project, you will act as Team Lead.
Yes, I hope you know computer programming!
No, since this version for personal use won't be appearing on Jeopardy, we won't need strategy on wager amounts for the Daily Double, or what clues to pick next. Let's focus merely on a computer that can accept a question in text, and provide an answer back, in text.
Yes, this team focused on how to wire all the hardware together. We need to do that, although this version for personal use will have fewer components.
Optional. For now, let's have this version for personal use just return its answer in plain text. Consider this Extra Credit after you get the rest of the system working. Consider using [eSpeak], [FreeTTS], or the Modular Architecture for Research on speech sYnthesis [MARY] Text-to-Speech synthesizers.
Yes, I will explain what this is, and why you need it.
Yes, we will need to get information for personal use to process
Yes, this team developed a system for parsing the question being asked, and to attach meaning to the different words involved.
No, this team focused on making IBM Watson optimized to answer in 3 seconds or less. We can accept a slower response, so we can skip this.
(Disclaimer: As with any Do-It-Yourself (DIY) project, I am not responsible if you are not happy with your version for personal use. I am basing the approach on what I read from publicly available sources, and my work in Linux, supercomputers, XIV, and SONAS. For our purposes, this version for personal use is based entirely on commodity hardware, open source software, and publicly available sources of information. Your implementation will certainly not be as fast or as clever as the IBM Watson you saw on television.)
Step 1: Buy the Hardware
Supercomputers are built as a cluster of identical compute servers lashed together by a network. You will be installing Linux on them, so if you can avoid paying extra for Microsoft Windows, that would save you some money. Here is your shopping list:
Three x86 hosts, with the following:
64-bit quad-core processor, either Intel-VT or AMD-V capable,
8GB of DRAM, or larger
300GB of hard disk, or larger
CD or DVD Read/Write drive
Computer Monitor, mouse and keyboard
Ethernet 1GbE 4-port hub, and appropriate RJ45 cables
Surge protector and Power strip
Local Console Monitor (LCM) 4-port switch (formerly known as a KVM switch) and appropriate cables. This is optional, but will make it easier during the development. Once your implementation is operational, you will only need the monitor and keyboard attached to one machine. The other two machines can remain "headless" servers.
Step 2: Establish Networking
IBM Watson used Juniper switches running at 10Gbps Ethernet (10GbE) speeds, but was not connected to the Internet while playing Jeopardy! Instead, these Ethernet links were for the POWER7 servers to talk to each other, and to access files over the Network File System (NFS) protocol to the internal customized SONAS storage I/O nodes.
The implementation will be able to run "disconnected from the Internet" as well. However, you will need Internet access to download the code and information sources. For our purposes, 1GbE should be sufficient. Connect your Ethernet switch to your DSL or cable modem. Connect all three hosts to the Ethernet switch. Connect your keyboard, video monitor and mouse to the LCM, and connect the LCM to the three hosts.
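To save yourself from remembering IP addresses later, give the three hosts stable names. Here is a minimal sketch; the 192.168.1.x addresses and hostnames are placeholders for whatever your router actually hands out:

```shell
# Generate /etc/hosts entries for the three-machine cluster.
# Addresses and names below are placeholders -- adjust to your LAN.
cat > hosts.snippet <<'EOF'
192.168.1.101   host1   # Presentation Server
192.168.1.102   host2   # Business Logic Server
192.168.1.103   host3   # File and Database Server
EOF
# Append to /etc/hosts on each machine (needs root):
#   sudo sh -c 'cat hosts.snippet >> /etc/hosts'
cat hosts.snippet
```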
Step 3: Install Linux and Middleware
To say I use Linux on a daily basis is an understatement. Linux runs on my Android-based cell phone, my laptop at work, my personal computers at home, most of our IBM storage devices from SAN Volume Controller to XIV to SONAS, and even on my Tivo at home which recorded my televised episodes of Jeopardy!
For this project, you can use any modern Linux distribution that supports KVM. IBM Watson used Novell SUSE Linux Enterprise Server [SLES 11]. Alternatively, I can also recommend either Red Hat Enterprise Linux [RHEL 6] or Canonical [Ubuntu v10]. Each distribution of Linux comes in different orientations. Download the 64-bit "ISO" files for each version, and burn them to CDs.
Graphical User Interface (GUI) oriented, often referred to as "Desktop" or "HPC-Head"
Command Line Interface (CLI) oriented, often referred to as "Server" or "HPC-Compute"
Guest OS oriented, to run in a Hypervisor such as KVM, Xen, or VMware. Novell calls theirs "Just Enough Operating System" [JeOS].
For this version for personal use, I have chosen a [multitier architecture], sometimes referred to as an "n-tier" or "client/server" architecture.
Host 1 - Presentation Server
For the Human-Computer Interface [HCI], the IBM Watson received categories and clues as text files via TCP/IP, had a [beautiful avatar] representing a planet with 42 circles streaking across in orbit, and text-to-speech synthesizer to respond in a computerized voice. Your implementation will not be this sophisticated. Instead, we will have a simple text-based Query Panel web interface accessible from a browser like Mozilla Firefox.
Host 1 will be your Presentation Server, the connection to your keyboard, video monitor and mouse. Install the "Desktop" or "HPC Head Node" version of Linux. Install [Apache Web Server and Tomcat] to run the Query Panel. Host 1 will also be your "programming" host. Install the [Java SDK] and the [Eclipse IDE for Java Developers]. If you always wanted to learn Java, now is your chance. There are plenty of books on Java if that is not the language you normally write code.
While three little systems don't constitute an "Extreme Cloud" environment, you might like to try out the "Extreme Cloud Administration Tool", called [xCAT], which was used to manage the many servers in IBM Watson.
Host 2 - Business Logic Server
Host 2 will be driving most of the "thinking". Install the "Server" or "HPC Compute Node" version of Linux. This will be running a server virtualization Hypervisor. I recommend KVM, but you can probably run Xen or VMware instead if you like.
Host 3 - File and Database Server
Host 3 will hold your information sources, indices, and databases. Install the "Server" or "HPC Compute Node" version of Linux. This will be your NFS server, which might come up as a question during the installation process.
Technically, you could run different Linux distributions on different machines. For example, you could run "Ubuntu Desktop" for host 1, "RHEL 6 Server" for host 2, and "SLES 11" for host 3. In general, Red Hat tries to be the best "Server" platform, and Novell tries to make SLES be the best "Guest OS".
My advice is to pick a single distribution and use it for everything, Desktop, Server, and Guest OS. If you are new to Linux, choose Ubuntu. There are plenty of books on Linux in general, and Ubuntu in particular, and Ubuntu has a helpful community of volunteers to answer your questions.
Step 4: Download Information Sources
You will need some documents for your implementation to process.
IBM Watson used a modified SONAS to provide a highly-available clustered NFS server. For this version, we won't need that level of sophistication. Configure Host 3 as the NFS server, and Hosts 1 and 2 as NFS clients. See the [Linux-NFS-HOWTO] for details. To optimize performance, host 3 will be the "official master copy", but we will use a Linux utility called rsync to copy the information sources over to hosts 1 and 2. This allows the task engines on those hosts to access local disk resources during question-answer processing.
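The export-then-sync arrangement looks something like the sketch below. The /export/watson path and the 192.168.1.0/24 subnet are placeholders, and the privileged steps are shown as comments so nothing here requires root:

```shell
# On host 3 (the NFS server): export the master copy of the corpus.
# Path and subnet are placeholders -- adjust to your environment.
cat > exports.snippet <<'EOF'
/export/watson  192.168.1.0/24(ro,sync,no_subtree_check)
EOF
# Append exports.snippet to /etc/exports, then reload:
#   sudo exportfs -ra
#
# On hosts 1 and 2: mount the export, then pull a local working copy with
# rsync so task engines read from local disk during question processing:
#   sudo mount -t nfs host3:/export/watson /mnt/watson
#   rsync -av --delete /mnt/watson/ /var/local/watson/
cat exports.snippet
```

The read-only (ro) export keeps hosts 1 and 2 from accidentally modifying the master copy; all updates happen on host 3 and flow outward via rsync.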
We will also need a relational database. You won't need a high-powered IBM DB2. Your implementation can do fine with something like [Apache Derby] which is the open source version of IBM CloudScape from its Informix acquisition. Set up Host 3 as the Derby Network Server, and Hosts 1 and 2 as Derby Network Clients. For more about structured content in relational databases, see my post [IBM Watson - Business Intelligence, Data Retrieval and Text Mining].
Linux includes a utility called wget which allows you to download content from the Internet to your system. Which documents you decide to download is up to you, based on what types of questions you want answered. For example, if you like literature, check out the vast resources at [FullBooks.com]. You can automate the download by writing a shell script or program that invokes wget for all the places you want to fetch data from. Rename the downloaded files to something unique, as often they are just "index.html". For more on the wget utility, see [IBM developerWorks].
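A download loop along these lines handles the renaming automatically. The URLs are placeholders for whatever sources you choose, and the wget call itself is commented out so the script can be dry-run without network access:

```shell
# Fetch a list of source pages, giving each file a unique name so that
# multiple "index.html" downloads don't clobber each other.
# URLs are examples only; substitute your own sources.
mkdir -p corpus
i=0
for url in "http://www.fullbooks.com/" "http://www.gutenberg.org/"; do
    i=$((i + 1))
    out="corpus/doc$(printf '%04d' "$i").html"
    # wget -q -O "$out" "$url"      # uncomment when you are online
    echo "$url" > "$out"            # dry-run stub: record the source URL
done
ls corpus
```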
Step 5: The Query Panel - Parsing the Question
Next, we need to parse the question and have some sense of what is being asked for. For this we will use [OpenNLP] for Natural Language Processing, and [OpenCyc] for the conceptual logic reasoning. See Doug Lenat presenting this 75-minute video [Computers versus Common Sense]. To learn more, see the [CYC 101 Tutorial].
Unlike Jeopardy! where Alex Trebek provides the answer and contestants must respond with the correct question, we will do normal Question-and-Answer processing. To keep things simple, we will limit questions to the following formats:
Who is ...?
Where is ...?
When did ... happen?
What is ...?
Host 1 will have a simple Query Panel web interface. At the top, a place to enter your question and a "submit" button, with a place at the bottom for the answer to be shown. When "submit" is pressed, this will pass the question to "main.jsp", the Java servlet program that will start the question-answering analysis. Limiting the types of questions that can be posed simplifies hypothesis generation and reduces the candidate set and evidence evaluation, allowing the analytics processing to complete in a reasonable time.
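The first routing decision, mapping a question's leading words to a coarse answer type, can be sketched in a few lines. A real system (OpenNLP plus OpenCyc) does far more than this; the sketch only mirrors the four question formats listed above:

```shell
# Route a question to a coarse answer type based on its leading words.
# Only the four supported formats are recognized; anything else is rejected.
classify() {
    case "$1" in
        "Who is "*|"who is "*)     echo PERSON ;;
        "Where is "*|"where is "*) echo PLACE ;;
        "When did "*|"when did "*) echo DATE ;;
        "What is "*|"what is "*)   echo THING ;;
        *)                         echo UNSUPPORTED ;;
    esac
}
classify "Who is Dean Kamen?"        # PERSON
classify "What is a supercomputer?"  # THING
```

Knowing the expected answer type up front is what shrinks the candidate set: a PERSON question never needs to score place names or dates as hypotheses.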
Step 6: Unstructured Information Management Architecture
The "heart and soul" of IBM Watson is Unstructured Information Management Architecture [UIMA]. IBM developed this, then made it available to the world as open source. It is maintained by the [Apache Software Foundation], and overseen by the Organization for the Advancement of Structured Information Standards [OASIS].
Basically, UIMA lets you scan unstructured documents, glean the important points, and put them into a database for later retrieval. In the graph above, DBs means 'databases' and KBs means 'knowledge bases'. See the 4-minute YouTube video of [IBM Content Analytics], the commercial version of UIMA.
Starting from the left, the Collection Reader selects each document to process, and creates an empty Common Analysis Structure (CAS) which serves as a standardized container for information. This CAS is passed to Analysis Engines, composed of one or more Annotators which analyze the text and fill the CAS with the information found. Each CAS is then passed to CAS Consumers which do something with the information found, such as enter an entry into a database, update an index, or update a vote count.
(Note: This point requires what we in the industry call a "small matter of programming", or [SMOP]. If you've always wanted to learn Java programming, XML, and JDBC, you will get to do plenty here.)
If you are not familiar with UIMA, consider this [UIMA Tutorial].
Step 7: Parallel Processing
People have asked me why IBM Watson is so big. Did we really need 2,880 cores of processing power? As a supercomputer, the 80 TeraFLOPs of IBM Watson would place it only 94th on the [Top 500 Supercomputers] list. While IBM Watson may be the [Smartest Machine on Earth], the most powerful supercomputer at this time is the Tianhe-1A, with more than 186,000 cores, capable of 2,566 TeraFLOPs.
To determine how big IBM Watson needed to be, the IBM Research team ran the DeepQA algorithm on a single core. It took 2 hours to answer a single Jeopardy question! Let's look at the performance data:
Configuration: Time to answer one Jeopardy! question
Single IBM Power750 server: < 4 minutes
Single rack (10 servers): < 30 seconds
IBM Watson (90 servers): < 3 seconds
The old adage applies: [many hands make light work]. The idea is to divide and conquer. For example, if you wanted to find a particular street address in the Manhattan phone book, you could hand fifty pages to each friend, and they could all scan their pages at the same time. This is known as "Parallel Processing" and is how supercomputers are able to work so well. However, not all algorithms lend themselves well to parallel processing, and the phrase [nine women can't have a baby in one month] is often used to remind us of this.
Fortunately, UIMA is designed for parallel processing. You need to install UIMA-AS for Asynchronous Scale-out processing, an add-on to the base UIMA Java framework that supports a very flexible scale-out capability based on JMS (Java Message Service) and ActiveMQ. We will also need Apache Hadoop, the open source implementation used by the Yahoo! search engine. Hadoop has a "MapReduce" engine that allows you to divide the work, dispatch pieces to different "task engines", and then combine the results afterwards.
Host 2 will run Hadoop and drive the MapReduce process. Plan to have three KVM guests on Host 1, four on Host 2, and three on Host 3. That gives you 10 task engines to work with. These task engines can be deployed as Collection Readers, Analysis Engines, and CAS Consumers. When all processing is done, the resulting votes will be tabulated and the top answer displayed on the Query Panel on Host 1.
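The MapReduce idea itself is simple. Here is a toy word count, with Python's multiprocessing pool standing in for Hadoop's task engines. This is illustrative only; a real deployment would use the Hadoop MapReduce APIs in Java:

```python
from multiprocessing import Pool
from collections import Counter

def map_task(chunk):
    """Map step: each "task engine" counts words in its own slice of the text."""
    return Counter(chunk.split())

def reduce_counts(partials):
    """Reduce step: combine the partial counts into one final tally."""
    total = Counter()
    for partial in partials:
        total += partial
    return total

if __name__ == "__main__":
    chunks = ["to be or not to be", "be all that you can be"]
    with Pool(2) as pool:                     # two parallel "task engines"
        partials = pool.map(map_task, chunks)
    print(reduce_counts(partials).most_common(1))  # [('be', 4)]
```

The point is that the map step runs on all engines at once, so doubling the engines roughly halves the elapsed time, which is exactly the scaling shown in the table above.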
Step 8: Testing
To simplify testing, use a batch processing approach. Rather than entering questions by hand in the Query Panel, generate a long list of questions in a file, and submit for processing. This will allow you to fine-tune the environment, optimize for performance, and validate the answers returned.
There you have it. By the time you get your implementation fully operational, you will have learned a lot of useful skills, including Linux administration, Ethernet networking, NFS file system configuration, Java programming, UIMA text mining analysis, and MapReduce parallel processing. Hopefully, you will also gain an appreciation for how difficult it was for the IBM Research team to accomplish the Grand Challenge on Jeopardy! Not surprisingly, IBM Watson is making IBM [as sexy to work for as Apple, Google or Facebook], all of which started their business in a garage or a basement with a system as small as this version for personal use.
The IBM Challenge was a big success. One of the contestants, Ken Jennings, [welcomes our new computer overlords]. Congratulations are in order to the IBM Research team who pulled off this Herculean effort!
Some folks have poked fun at some of the odd responses and wager amounts from the IBM Watson computer during the three-day tournament. Others were as surprised as I was that the impressive feat was done with less than 1TB of stored data. Here is what John Webster wrote in CNET yesterday, in his article [What IBM's Watson says to storage systems developers]:
"All well and good. But here's what I find most interesting as a result of what IBM has done in response to the Grand Challenge that motivated Watson's creators. We know, from Tony Pearson's blog, that the foundation of Watson's data storage system is a modified IBM SONAS cluster with a total of 21.6TB of raw capacity. But Pearson also reveals another very significant, and to me, surprising data point: "When Watson is booted up, the 15TB of total RAM are loaded up, and thereafter the DeepQA processing is all done from memory. According to IBM Research, the actual size of the data (analyzed and indexed text, knowledge bases, etc.) used for candidate answer generation and evidence evaluation is under 1 Terabyte."
What Pearson just said is that the data set Watson actually uses to reach his push-the-button decision would fit on a 1TB drive. So much for big data?"
To better appreciate how difficult the challenge was, and how a small amount of data can answer a billion different questions, I thought I would cover Business Intelligence, Data Retrieval and Text Mining concepts.
"In this paper, business is a collection of activities carried on for whatever purpose, be it science, technology, commerce, industry, law, government, defense, et cetera. The communication facility serving the conduct of a business (in the broad sense) may be referred to as an intelligence system. The notion of intelligence is also defined here, in a more general sense, as the ability to apprehend the interrelationships of presented facts in such a way as to guide action towards a desired goal."
Ideally, when you need "Business Intelligence" to help you make a better decision, you perform data retrieval from a structured database for the specific information you are looking for. In other cases, you might be looking for insight, patterns or trends. In that case, you go "data mining" against your structured databases.
Here's a simple example. John runs a fruit stand. One day, he kept track of how many apples and oranges were bought by men and women. How many questions can we ask against this small set of data? Let's count them:
How many apples were sold to men?
How many apples were sold to women?
How many oranges were sold to men?
How many oranges were sold to women?
But wait! For each row and column, we can combine them into totals.
How many apples were sold in total?
How many oranges were sold in total?
How many fruit in total were sold to men?
How many fruit in total were sold to women?
How many fruit in total were sold?
But wait, there's more! Each row and column can be evaluated for relative percentages, as well as percentages of each cell compared to the total. You could make five relevant pie-charts from this data. This results in 16 more questions, such as:
Of the fruit purchased by men, what percentage were apples?
Of all the apples purchased, what percentage were bought by women?
And that's not including more ethereal questions, such as:
Are there gender-specific preferences for different types of fruit?
What type of fruit do men prefer?
This is just for a small set, two market segments (by gender) and two products (apples and oranges). However, if you have many market segments (perhaps by age group, zip code, etc.) and many products, the number of queries that can be supported is huge. For small sets of data, you can easily do this with a spreadsheet program like IBM Lotus Symphony or Microsoft Excel.
But why limit yourself to two dimensions? The above example was just for one day's worth of activity. If John captures this data every day for historical and seasonal trending, it can be represented as a three-dimensional cube. The number of possible queries becomes astronomical. This is the basis for Online Analytical Processing (OLAP), and three-dimensional tables are often referred to as [OLAP cubes].
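To see how one small table supports so many questions, here is a sketch in which all of the roll-up questions above reduce to one parameterized aggregation. The quantities are invented for illustration:

```python
# John's fruit stand, one day's tally (quantities invented for illustration).
sales = {("men", "apples"): 7, ("men", "oranges"): 3,
         ("women", "apples"): 5, ("women", "oranges"): 10}

def total(segment=None, product=None):
    """Answer any roll-up question: filter by row, by column, or by neither."""
    return sum(qty for (seg, prod), qty in sales.items()
               if (segment is None or seg == segment)
               and (product is None or prod == product))

print(total(product="apples"))    # 12 apples sold in total
print(total(segment="men"))       # 10 fruit sold to men
print(total())                    # 25 fruit sold overall
# Percentage questions follow directly, e.g. the share of men's fruit that was apples:
print(100 * total("men", "apples") / total("men"))    # 70.0
```

Adding a third dimension, such as the date, just means a third optional filter, which is all an OLAP cube is doing at much larger scale.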
Back in the early 1970s, IBM invented the Structured Query Language [SQL], and today nearly all modern relational databases support it, including IBM DB2, Informix, Microsoft SQL Server, and Oracle DB. SQL poses two challenges. First, you have to structure the data in advance to match the way you expect to perform your ad-hoc queries. Deciding the groups and categories in advance can limit the way information is recorded and captured.
Second, you have to be skilled at SQL to phrase your queries correctly and retrieve the data you are after. What ended up happening was that skilled SQL programmers would develop "canned reports" with fixed SQL parameters, so that less-skilled business decision makers could base their decisions on these reports.
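As a concrete illustration of both points, here is one of the fruit-stand questions phrased as an ad-hoc SQL query, using Python's built-in sqlite3. The schema and figures are invented for illustration:

```python
import sqlite3

# A minimal structured version of the fruit-stand data (schema invented here).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (gender TEXT, fruit TEXT, qty INTEGER)")
con.executemany("INSERT INTO sales VALUES (?, ?, ?)",
                [("men", "apples", 7), ("men", "oranges", 3),
                 ("women", "apples", 5), ("women", "oranges", 10)])

# "How many apples were sold in total?" phrased as an ad-hoc query:
(apples,) = con.execute(
    "SELECT SUM(qty) FROM sales WHERE fruit = 'apples'").fetchone()
print(apples)   # 12
```

Note that the CREATE TABLE statement had to be decided before any question could be asked, which is exactly the first challenge: the structure is fixed in advance of the queries.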
IBM has fully integrated stacks to help process structured data, combining servers, storage, and advanced analytics software into a complete appliance. IBM offers the [Smart Analytics System] for robust, customized deployments, and recently acquired [Netezza] for pre-configured, and more rapid deployments.
However, the bigger problem is that more than 80 percent of information is not structured!
Semi-structured data like email provides some searchable fields like From and Subject. The rest of the information is unstructured, such as text files, photographs, video and audio. To look for specific information in unstructured sources can be like looking for a needle in a haystack, and trying to get insight, patterns or trends involves text mining.
This, in effect, is what IBM Watson was able to perform so well this week. Finding the needle in the haystacks of unstructured data from 200 million pages of text stored in its system, combined with the ability to apprehend the interrelationships of meaning and subtle nuance, resulted in an impressive technology demonstration. Certainly, this new technology will be powerful for a variety of use cases across a broad set of industries!
Full VMware vStorage API for Array Integration (VAAI). Back in 2008, VMware announced new vStorage APIs for its vSphere ESX hypervisor: vStorage API for Site Recovery Manager, vStorage API for Data Protection, and vStorage API for Multipathing. Last July, VMware added a new API called vStorage API for Array Integration [VAAI], which offers three primitives:
Hardware-assisted block zeroing. Sometimes referred to as "Write Same", this SCSI command will zero out a large section of blocks, presumably as part of a VMDK file. This can then be used to reclaim space on thin-provisioned LUNs on the XIV.
Hardware-assisted Copy. Make an XIV snapshot of data without any I/O on the server hardware.
Hardware-assisted locking. On mainframes, this is called Parallel Access Volumes (PAV). Instead of locking an entire LUN using standard SCSI reserve commands, this primitive allows an ESX host to lock just an individual block, so as not to interfere with other hosts accessing other blocks on that same LUN.
Quality of Service (QoS) Performance Classes.
When XIV was first released, it treated all hosts and all data the same, even when deployed for a variety of different applications. This worked for some clients, such as [Medicare y Mucho Más]. They migrated their databases, file servers and email system from EMC CLARiiON to an IBM XIV Storage System. In conjunction with VMware, the XIV provides a highly flexible and scalable virtualized architecture, which enhances the company's business agility.
However, other clients were skeptical, and felt they needed additional "knobs" to prioritize different workloads. The new 10.2.4 microcode allows you to define four different "performance classes". This is like the door of a nightclub: all the regular people are waiting in a long line, but when a celebrity in a limo arrives, the bouncer unclips the cord and lets the celebrity in. For each class, you provide IOPS and/or MB/sec targets, and the XIV manages to those goals. Performance classes are assigned to each host based on its value to the business.
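The nightclub analogy can be sketched as a toy admission-control loop that holds a class of hosts to an IOPS ceiling. The one-second window, the class names, and the limits here are all invented for illustration; this is not XIV's actual QoS algorithm:

```python
import time

class PerformanceClass:
    """Toy admission control: hold hosts in a class to an IOPS ceiling.
    (Invented for illustration; this is not XIV's actual QoS algorithm.)"""
    def __init__(self, iops_limit):
        self.iops_limit = iops_limit
        self.window_start = time.monotonic()
        self.count = 0

    def admit(self):
        """Return True if another I/O may proceed in the current one-second window."""
        now = time.monotonic()
        if now - self.window_start >= 1.0:    # a new one-second window begins
            self.window_start, self.count = now, 0
        if self.count < self.iops_limit:
            self.count += 1
            return True
        return False                          # over the ceiling: the I/O must wait

celebrity = PerformanceClass(iops_limit=500)  # high-value workload, higher target
regular = PerformanceClass(iops_limit=100)

# 1,000 I/Os arrive in a burst; only the first 100 get through this window.
admitted = sum(1 for _ in range(1000) if regular.admit())
print(admitted)   # 100
```

The "celebrity" class simply carries a higher ceiling, so its I/Os keep flowing while the "regular" line waits for the next window.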
Offline Initialization for Asynchronous Mirror.
Internally, we called this Truck Mode. Normally, when a customer decides to start using Asynchronous Mirror, they already have a lot of data at the primary location, and so there is a lot of data to send over to the new XIV box at the secondary location. This new feature allows the data to be dumped to tape at the primary location. Those tapes are shipped to the secondary location and restored on the empty XIV. The two XIV boxes are then connected for Asynchronous Mirroring, and checksums of each 64KB block are compared to determine what has changed at the primary during this "tape delivery time". This greatly reduces the time it takes for the two boxes to get past the initial synchronization phase.
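The checksum comparison step can be sketched as follows. MD5 from Python's hashlib is just an illustrative choice of checksum, and the in-memory byte strings stand in for the two volumes:

```python
import hashlib

BLOCK = 64 * 1024   # checksums are compared per 64KB block

def checksums(volume):
    """Checksum every 64KB block of a volume (a bytes object stands in here)."""
    return [hashlib.md5(volume[i:i + BLOCK]).hexdigest()
            for i in range(0, len(volume), BLOCK)]

# The secondary was restored from tape; the primary kept changing in the meantime.
primary = bytearray(b"a" * BLOCK * 4)
secondary = bytes(primary)
primary[BLOCK * 2] = ord("b")    # one block modified during "tape delivery time"

changed = [i for i, (p, s) in enumerate(zip(checksums(primary), checksums(secondary)))
           if p != s]
print(changed)   # [2]: only that block needs to be re-sent over the mirror link
```

Only the small checksums cross the wire for the comparison, and only the blocks that actually changed are re-sent, which is why the initial synchronization shrinks so dramatically.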
IP-based Replication. When IBM first launched the Storwize V7000 last October, people commented that the one feature they felt was missing was IP-based replication. Sure, we offered FCP-based replication, as most other Enterprise-class disk systems do today, but many midrange systems also offer IP-based replication to reduce the need for expensive FCIP routers. [IBM Tivoli Storage FastBack for Storwize V7000] provides IP-based replication for Storwize V7000 systems.
Network Attached Storage
IBM announced two new models of the IBM System Storage N series. The midrange N6240 supports up to 600 drives, replacing the N6040 system. The entry-level N6210 supports up to 240 drives, and replaces the N3600 system. Details for both are available on the latest [data sheet].
IBM Real-Time Compression appliances work with all N series models to provide additional storage efficiency. Last October, I provided the [Product Name Decoder Ring] for the STN6500 and STN6800 models. The STN6500 supports 1 GbE ports, and the STN6800 supports 10GbE ports (or a mix of 10GbE and 1GbE, if you prefer). The IBM versions of these models were announced last December, but some people were on vacation and might have missed it. For more details of this, read the [Resources page], the [landing page], or [watch this video].
IBM System Storage DS3000 series
IBM System Storage [DS3524 Express DC and EXP3524 Express DC] models are powered with direct current (DC) rather than alternating current (AC). The DS3524 packs dual controllers and two dozen small-form-factor (2.5 inch) drives in a compact 2U-high rack-optimized module. The EXP3524 provides additional disk capacity that can be attached to the DS3524 for expansion.
Large data centers, especially those in the telecommunications industry, receive AC from their power company, convert it to DC, and store it in a large battery system called an Uninterruptible Power Supply (UPS). DC-powered equipment can run directly off this battery source, but for AC-powered equipment, the DC has to be converted back to AC, and some energy is lost in the conversion. Thus, DC-powered equipment is more energy efficient, or "green", for the IT data center.
Whether you get the DC-powered or AC-powered models, both are NEBS-compliant and ETSI-compliant.
New Tape Drive Options for Autoloaders and Libraries
IBM System Storage [TS2900 Autoloader] is a compact 1U-high tape system that supports one LTO drive and up to 9 tape cartridges. The TS2900 can support either an LTO-3, LTO-4 or LTO-5 half-height drive.
IBM System Storage [TS3100 and TS3200 Tape Libraries] were also enhanced. The TS3100 can accommodate one full-height LTO drive, or two half-height drives, and hold up to 24 cartridges. The TS3200 offers twice as many drives and space for cartridges.
The Tucson Executive Briefing Center hosted 20 dignitaries from local companies and academia.
This is a historic competition, an exhibition match pitting a computer against the top two celebrated Jeopardy champions:
Brad Rutter, who won $3.2 million USD on Jeopardy!, from a five-day winning streak on the show and three later tournaments.
Ken Jennings, who won $2.5 million during a 74-game winning streak on Jeopardy!
One of the members of the audience had never seen an episode of Jeopardy! in his life.
(Note: there are NO SPOILERS in this blog post. If you have not yet watched the show, you are safe to continue reading the rest of this post. I will not disclose the correct responses to any of the clues nor how well each contestant scored.)
Calline Sanchez, IBM Director, Systems Storage Development for Data Protection and Retention, kicked off today's ceremonies.
The IBM Watson computer, named after IBM founder Thomas J. Watson, has been developed over the past 4 years by a team of IBM scientists who set out to accomplish a grand challenge - build a computing system that rivals a human's ability to answer questions posed in natural language with speed, accuracy and confidence. IBM Research labs in the United States, Japan, China and Israel [collaborated with Artificial Intelligence (AI) experts at eight universities], including Massachusetts Institute of Technology (MIT), University of Texas (UT) at Austin, University of Southern California (USC), Rensselaer Polytechnic Institute (RPI), University at Albany (UAlbany), University of Trento (Italy), University of Massachusetts Amherst, and Carnegie Mellon University.
(Disclaimer: I attended the University of Texas at Austin. My father attended Carnegie Mellon University.)
Last week, NOVA on PBS had a special episode on the making of IBM Watson; you can [watch it online] on their website. Delaney Turner, IBM Social Media Communications Manager for Business Analytics Software, has posted [his observations of Nova].
Since IBM Watson is the size of 10 refrigerators and weighs over 14,000 pounds, it was easier to design the Jeopardy! set at the TJ Watson Research lab in Yorktown Heights, NY, than to ship it over to California where the show is normally recorded. Two of the visual designers that worked on this set, as well as on the visual appearance of Watson, live in Tucson and were part of our audience today.
The IBM Challenge consists of a two-game tournament, where the scores of both games will be added to determine winner rankings. The producers of Jeopardy! will give $1 million USD to first place, $300,000 to second place, and $200,000 to third place. Regardless of outcome, [IBM will donate all of its winnings to charity]. The two human contestants plan to donate half of their earnings to their favorite charities as well.
Jeopardy! The IBM Challenge
Alex Trebek introduces IBM Watson, explaining that it can neither hear nor see. It will receive all information electronically. Categories and clues will be sent as text files via TCP/IP over Ethernet at the same time the two human contestants see them so that all have the same time to think about the right answer.
Watson has two rows of five racks, back to back. This was done so that cold air could rise up from holes in the tile floors around the unit, and all the hot air would be forced into the center and up to the ceiling return. This technique is known as "hot aisle/cold aisle" design. Alex Trebek opens one of the rack doors to show a series of 4U-high IBM Power 750 servers.
The avatar is a representation of Watson, as the machine itself is too big to fit behind the podium. The avatar is IBM's "Smarter Planet" logo with orbiting streaks and circles. It glows green when Watson has high confidence, and orange when it gets an answer wrong. When Watson is busy thinking, the streaks and circles speed up, the closest we will get to "watching a computer sweat."
During the show, an "Answer panel" shows Watson's top three candidate responses, with confidence level compared to its current "buzz threshold".
Watson knows what it knows, and knows what it doesn't know. Here is an [Interactive Watson Game] on New York Times website to give you an idea of how the answer panel works. I was impressed with how close all three candidate answers were. In a question about Olympic swimmers, all three candidates are Olympic swimmers. In a question about the novel "Les Miserables", all three candidates were characters of that novel.
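The answer-panel decision can be sketched as a simple threshold check. This is illustrative only; Watson's actual decision logic is far more involved, and the candidate names and confidence values below are invented:

```python
def should_buzz(candidates, buzz_threshold):
    """Buzz only if the top candidate's confidence clears the threshold.
    (Illustrative only; Watson's actual decision logic is far more involved.)"""
    best = max(candidates, key=lambda c: c[1])
    return best if best[1] >= buzz_threshold else None

# Three close candidates, as on the answer panel (names/confidences invented):
panel = [("Michael Phelps", 0.71), ("Mark Spitz", 0.64), ("Ian Thorpe", 0.55)]
print(should_buzz(panel, buzz_threshold=0.50))   # ('Michael Phelps', 0.71)
print(should_buzz(panel, buzz_threshold=0.80))   # None: not confident enough to buzz
```

This is what it means to say Watson "knows what it doesn't know": when no candidate clears the buzz threshold, it simply stays silent rather than guess.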
Well, IBM Watson did well, but answered some questions incorrectly. This [parody Slate video] pokes fun at that. Here are some discussions we had after the show ended:
Watson did not do well in categories that required [abductive reasoning]. For example, identifying two or three things that happened in different years, and then postulating that what they all have in common is a specific decade (such as the 1950s), is difficult.
Watson does not hear the wrong answers from the two human contestants. For one question, Ken buzzes in first, guesses wrong, then Watson buzzes in with the same exact response. Alex Trebek rebukes Watson with "No, Ken just said that!" Brad would learn from their mistakes and guess correctly for the score.
Watson is provided the correct answer after a contestant guesses it correctly, or if nobody does, when Alex provides the correct response. This is sent as a text message to Watson immediately, so that it can use this information to adjust its algorithms and machine-learning for future clues in that same category. This was evident in the "Answer panel" on the fourth and fifth attempts on the category of "Decades".
With this demonstration, IBM Research has advanced science by leaps and bounds for the Artificial Intelligence community. IBM is a leader in Business Analytics, and this technology will find uses in a variety of industries. The average knowledge worker spends 30 percent of her time looking for information in corporate data repositories. By demonstrating a computer that can provide answers quickly, employees will be more productive, make stronger business decisions, and have greater insight.
Day 1 was only able to cover the first round of Game 1. This allowed more time to talk about the history and technology of IBM Watson. Tomorrow, the contestants will finish Game 1 and head into Game 2.
"When Watson is booted up, the 15TB of total RAM are loaded up, and thereafter the DeepQA processing is all done from memory. According to IBM Research, the actual size of the data (analyzed and indexed text, knowledge bases, etc.) used for candidate answer generation and evidence evaluation is under 1 Terabyte (TB). For performance reasons, various subsets of the data are replicated in RAM on different functional groups of cluster nodes. The entire system is self-contained, Watson is NOT going to the internet searching for answers."
I had several readers ask me to explain the significance of the "Terabyte". I'll work my way up.
A bit is simply a zero (0) or one (1). This could answer a Yes/No or True/False question.
Most computers have standardized on a byte as a collection of 8 bits. There are 256 unique combinations of ones and zeros possible, so a byte can be used to store a 2-digit integer, or a single upper or lower case character in the English alphabet. In practical terms, a byte could store your age in years, or your middle initial.
The Kilobyte is a thousand bytes, enough to hold a few paragraphs of text. A typical written page could be held in 4 KB, for example.
The IBM Challenge to play on Jeopardy! is being compared to the historic 1969 moon landing. To land on the moon, Apollo 11 had the "Apollo Guidance Computer" (AGC) which had 74KB of fixed read-only memory, and 2KB of re-writeable memory. Over [3500 IBM employees were involved] to get the astronauts to the moon and safely back to earth again.
The importance of this computer was highlighted in a [lecture by astronaut David Scott] who said: "If you have a basketball and a baseball 14 feet apart, where the baseball represents the moon and the basketball represents the Earth, and you take a piece of paper sideways, the thinness of the paper would be the corridor you have to hit when you come back."
The Megabyte is a thousand KB, or a million bytes. The 3.5-inch floppy diskette, mentioned in my post [A Boxfull of Floppies] could hold 1.44MB, or about 360 pages of text.
In the article [Wikipedia as a printed book], the printing of a select 400 articles resulted in a book 29 inches thick. Those 5,000 pages would consume about 20 MB of space.
One of my favorite resources I use to search is the Internet Movie Data Base [IMDB]. Leaving out the photos and videos, the [text-only portion of the IMDB database is just over 600 MB], representing nearly all of the actors, awards, nominations, television shows and movies. A standard CD-ROM can hold 700MB, so the text portion of the IMDB could easily fit on a single CD.
The Gigabyte is a thousand MB, or a billion bytes. My Thinkpad T410 laptop has 4GB of RAM and 320GB of hard disk space. My laptop comes with a DVD burner, and each DVD can hold up to 4.7GB of information.
The popular Wikipedia now has some 17 million articles, of which 3.5 million are in the English language. It would take only [14GB of space to hold the entire English portion] of Wikipedia. That is small enough to fit on twenty CDs, three DVDs, an Apple iPad or my cellphone (a Samsung Galaxy S Vibrant).
Perhaps you are thinking, "Someone should offer Wikipedia pre-installed on a small handheld!" Too late. [The Humane Reader] is able to offer 5,000 books and Wikipedia in a small device that connects to your television. This would be great for people who do not have access to the internet, or for parents who want their kids to do their homework, but not be online while they are doing it.
In the latest 2009 report of [How Much Information?] from the University of California, San Diego, the average American consumes 34 GB of information daily. This includes all the information from radio, television, newspapers, magazines, books and the internet that a person might look at or listen to throughout the day. This project is sponsored by IBM and others to help people understand the nature of our information-consumption habits.
Back in 1992, I visited a client in Germany. Their 90 GB of disk storage attached to their mainframe was the size of three refrigerators, and took five full-time storage administrators to manage.
The Terabyte is a thousand GB, or a trillion bytes. It is now possible to buy an external USB drive for your laptop or personal computer that holds 1TB or more. However, at the 40MB/sec speed that USB 2.0 is capable of, it would take seven hours to do a bulk transfer in or out of the device.
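The seven-hour figure follows from simple arithmetic:

```python
terabyte = 10**12           # 1 TB in decimal bytes
usb2_rate = 40 * 10**6      # roughly 40 MB/sec sustained over USB 2.0
hours = terabyte / usb2_rate / 3600
print(round(hours, 1))      # 6.9, i.e. about seven hours
```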
IBM offers 1TB and 2TB disk drives in many of our disk systems. In 2008, IBM was preparing to announce the first 1TB tape drive. However, Sun Microsystems announced their own 1TB drive the day before our big announcement, so IBM had to rephrase the TS1130 announcement to [The World's Fastest 1TB tape drive!]
A typical academic research library holds about 2TB of information. The [US Library of Congress] print collection is considered to be about 10TB, and their web capture team has collected 160TB of digital data. If you are ever in Washington DC, I strongly recommend a visit to the Library of Congress. It is truly stunning!
Full-length computer animated movies, like [Happy Feet], consume about 100TB of disk storage during production. IBM offers disk systems that can hold this much data. For example, the IBM XIV can hold up to 151 TB of usable disk space in the size of one refrigerator.
A Key Performance Indicator (KPI) for some larger companies is the number of TB that can be managed by a full-time employee, referred to as TB/FTE. Discussions about TB/FTE are available from IT analysts including [Forrester Research] and [The Info Pro].
The website [Ancestry.com] claims to have over 540 million names in its genealogical database, stored on 600TB of capacity, including [US census data from 1790 to 1930]. The US government took nine years to process the 1880 census, so for the 1890 census, it rented equipment from Herman Hollerith's Tabulating Machine Company. This company would later merge with two others in 1911 to form what is now called IBM.
A Petabyte is a thousand TB, or a quadrillion bytes. It is estimated that all printed materials on Earth represent approximately 200 PB of information.
IBM's largest disk system, the Scale-Out Network Attached Storage (SONAS), comprises up to 7,200 disk drives and can hold over 11 PB of information. A smaller 10-frame model, the same size as IBM Watson, with six interface nodes and 19 storage pods, could hold over 7 PB of information.
For those of us in the IT industry, 1TB is small potatoes. I, for one, was expecting it to be much bigger. But for everyone else, the equivalent of 200 million pages of text that IBM Watson has loaded inside is an incredibly large repository of information. I suspect IBM Watson probably contains the complete works of Shakespeare as well as other fiction writers, the IMDB database, all 3.5 million English articles of Wikipedia, religious texts like the Bible and the Quran, famous documents like the Magna Carta and the US Constitution, and reference books like a dictionary, a thesaurus, and "Gray's Anatomy". And, of course, lots and lots of lists.
For those on Twitter, follow [@ibmwatson] these next three days during the challenge.
We are only days away from the big IBM Challenge of Watson computer against two human contestants on the show Jeopardy!
I watched two episodes of Jeopardy! on my Tivo, pausing it to follow the [homework assignment] I suggested in my last post. Here are my own results and observations.
The first episode involved a web programmer, a customer service representative, and a bank teller.
Of the first six categories in Round 1, I guessed four of the six themes. For the category "Diamonds are Forever", I wrote down "All answers are some kind of gem or mineral", but the reality was that all the answers were some physical characteristic of diamonds specifically. For the category "...Fame is not", I wrote down "All answers are TV or movie celebrities". I was close, but actually it was famous celebrities, rock bands and pop culture of the 1980s. (The movie "Fame" came out in 1980.)
In this round, 27 of the 30 answers were given before they ran out of time. Of these, I was able to get 24 of 27 correct by searching the Internet. That is 88 percent correct. Here are the ones that eluded me:
Answer related to a "multi-chambered mollusk". I could not find anything definitive on the Internet, so I abstained from wagering. The correct question was "What is Nautilus?"
Answer was the Irish variant of "Kathryne". I found Kathleen as a variant, but did not investigate if it had Irish origins. The correct question was "What is Caitlin?"
Answer was this Norse name for "ruler" whether you had red hair or not. I found "Roy" and "Rory" so guessed "What is Rory?" The correct question was "What is Eric?"
In the second round, I guessed three of the six themes for the categories. For the category "Musical Titles Letter Drop" I wrote down "All the answers are titles of musical songs", but it was actually "Musicals" as in the Broadway shows. For the category "Place called Carson", I wrote down "All the answers are places" and was way off on that one, with answers that were people, places and names of corporations. And for "State University Alums", I wrote down "All the answers are college graduates", but instead they were all state universities, such as the University of Arizona.
In this second round, only 26 answers were posed. I got 80 percent correct with Internet searching. I missed three in "Musical Titles", one in "Pope-pourri" and one State University (sorry, SMU). The "Musical Titles Letter Drop" category was especially difficult: for each title of a Musical, you had to drop a single letter from it to form the correct response.
For the answer "Good luck when you ask the singers "What I Did For Love"; they never tell the truth", you would need to take "Chorus Line" the musical, where the song "What I did for Love" appears, and ask "What is Chorus Lie?" Note that "line" changed to "lie" and the letter "n" was dropped out.
For the answer "Embrace the atoms as Simba and company lose and gain electrons en masse in this production", you would need to recognize that Simba was the main character of "The Lion King" and change it to "What is The Ion King?"
I think these play-on-words are the questions that would stump the IBM Watson computer.
In the final round, the category was "Ancient Quotes". I thought the answer would be a famous adage or quotation, but it was instead famous people who uttered those phrases. The answer was "He said, to leave this stream uncrossed will breed manifold distress for me; to cross it, for all mankind". I was able to determine the correct response readily from searching the Internet: The river was the Rubicon, the border of the Gaul region governed by an ambitious general. The correct response "Who was Julius Caesar?"
Total time for the entire exercise: 87 minutes.
The following night's episode brought back Paul Wampler, the returning champion web programmer, against two new contestants: an actor and a high school principal.
Of the first six categories in Round 1, I guessed five of the six themes. For the category "Nonce Words", I wrote that all the answers would be nonsense words. I was close: the clues contained words invented for a particular occasion, but the correct responses did not.
I was able to get 29 of 30 correct by searching the Internet. That is 96 percent correct. The one I missed was in the category "Nonce Words", where the answer was "In an arithmocracy, this portion of the population rules, not trigonometry teachers." My response was "What is math?" but the correct response was "What are the majority?" It did not occur to me to even look up [Arithmocracy] as a legitimate word, but it is real.
In the second round, I guessed five of the six themes for the categories. For the category "Hawk" Eyes, the "Hawk" was in quotation marks, so I wrote "All answers would start with the word Hawk or end with the word eyes". I was close; the correct theme was that the word "hawk" would appear at the front, middle or end of the correct response.
In this second round, I got 28 of 30 correct, or 93 percent, with Internet searching. Ironically, it was the category "German Foods" that caught me off guard.
One answer was "Pichelsteiner Fleisch, a favorite of Otto von Bismarck, is this one-pot concoction, made with beef & pork". I know that "Fleisch" is the German word for meat, so I guessed "What is sausage?" but the correct response was "What is stew?" I should have paid more attention to the "one-pot concoction" part of the answer.
For the answer was "Mimi Sheraton says German stuffed hard-boiled eggs are always made with a great deal of this creamy product". I didn't realize that "stuffed eggs" was German for "deviled eggs". Instead, I found Mimi Sheraton's "The German Cookbook" on Google Books, and jumped to the page for "Stuffed Eggs" The ingredients I read included whippedc cream, cognac, and worcestershire sauce. Taking the "creamiest" ingredient of these, I wrote down "What is whipped cream?" However, it turned out I was actually reading the ingredients for "Crabmeat Cocktail" that was coninuing from the previous page. I thought it was gross to put whipped cream with eggs, and should have known better. The correct response was "What is mayonnaise?"
In the final round, the category was "Political Parties". This could either be political organizations like Republicans and Democrats, or festivities like the Whitehouse Correspondents Dinner. The answer was "Only one U.S. president represented this party, and he said, I dread...a division of the republic into two great parties." So, we can figure out the answer refers to political organizations, but both Democrat and Republican are ruled out because each has had multiple presidents. So, looking at a [List of Political Parties of each US President], I found that there were four presidents in the Whig party, four in the Democrat-Republic party, but only one president in the Federalist party (John Adams), and one in the War Union party (Andrew Johnson). Looking at [famous quotes from John Adams] first, I found the quote, it matched, and so I wrote down "What is the Federalist party?". I got it right, as did two of the three contestants. Ironically, the one contestant who got it wrong, the returning champion web programmer, wagered a small amount, so he still had more money after the round and won the game overall.
Total time for the entire exercise: 75 minutes. I was able to do this faster because I skipped searching the Internet for the responses I was confident about.
To find out when Jeopardy is playing in your town, consult the [Interactive Map].
With all the excitement of the [IBM Challenge], where the [IBM Watson computer] will compete against humans on [Jeopardy!], I thought it would be good to provide the following homework exercise to help you appreciate how challenging the game is and the strategies required.
Overview of the game of Jeopardy!
If you are familiar with the show, you can safely skip this section.
Known as "America's Favorite Quiz Show", the Jeopardy pits three contestants against each other. The board is divided into six columns and five rows of answers. Each column indicates the category for that column of answers. The rows are ranked from easiest to most difficult, with more difficult answers being worth more money to wager.
The contestants take turns. The returning champion gets to select a spot on the board by indicating the category (column) and dollar amount (row), such as "I will take Animals for 800 dollars!" Contestants must then press a button to "buzz in", be recognized by the host, and respond correctly. If a contestant responds incorrectly, the other two contestants have the opportunity to respond. The contestant with the correct response gets to choose the next answer.
For each turn, the host, Alex Trebek, shows the answer on the board, and spends three seconds reading it aloud to give everyone a chance to come up with a corresponding question. This is perhaps what Jeopardy is most famous for. On a traditional quiz show, the host asks questions, and the contestants answer them. On Jeopardy, however, the host poses "answers", and the contestants respond in the form of a "question" that best fits the category and answer clues. For example, if the category were "Large Corporations" and the answer was "Sam Palmisano", the contestant would respond "Who is the CEO of IBM Corporation?" Both the categories and the answers are filled with puns, slang and humor to make the game more challenging. Often, the answer itself is not a sufficient clue; you have to factor in the category as well to have a complete set of information.
The game is played in three rounds:
In the first round, there are six categories, and the rows are worth $200, $400, $600, $800 and $1000. If you respond correctly on all five answers in a category column, you would win $3,000. If you respond to all thirty answers correctly, you would earn $18,000.
In the second round, there are six different categories, and the rows are worth twice as much.
The final round has a single category and a single question. Each player can decide to wager up to the full amount of their score in this game. This wager is done after they see the category, but before they see the answer.
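The board arithmetic behind those round totals is easy to verify. A quick sketch (function names are mine, just for illustration):

```python
# Board values for round 1; round 2 doubles every row.
ROUND1_ROWS = [200, 400, 600, 800, 1000]

def column_total(rows):
    """Winnings for answering one full category column correctly."""
    return sum(rows)

def round_total(rows, categories=6):
    """Winnings for a perfect round across all six categories."""
    return categories * column_total(rows)

round2_rows = [2 * v for v in ROUND1_ROWS]

print(column_total(ROUND1_ROWS))   # 3000  -- one perfect column
print(round_total(ROUND1_ROWS))    # 18000 -- a perfect first round
print(round_total(round2_rows))    # 36000 -- a perfect second round
```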
After the host finishes reading the answer aloud, the buzzers are lit so that the contestants can buzz in. A contestant who responds with the correct question earns the corresponding money for that row. A contestant who responds incorrectly has that money subtracted from his score. If the first contestant fails, the buzzers are re-lit so the other two contestants can buzz in with their responses, learning from the previous failed attempt.
To provide added challenge, some of the answers are surprise "Daily Doubles". Instead of the dollar amount for the row, the contestant can wager any amount, up to the total score he has won so far in that game or the largest dollar amount for that round, whichever is higher, based on his confidence in that category. There is one "Daily Double" surprise in the first round, and two in the second round.
In the final round, each contestant wagers an amount up to their total score, based on their confidence on the final category. A common strategy for the leading contestant with the highest score is to wager a low amount, so that if he fails to guess the response correctly, he will still have a large dollar amount. For example, if the leader has $2000 and the second place is $900, the leader can wager only $100 dollars, and the second place might wager his full $900. If the leader loses the round, he still has $1900, beating the second place regardless of how well he does.
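The arithmetic behind this leader's strategy can be sketched in a few lines. This is a minimal model (the function name is mine, and it ignores ties): the leader stays safe as long as his score after losing the wager still beats double the runner-up's score.

```python
def max_safe_wager(leader, runner_up):
    """Largest final-round wager the leader can lose and still win,
    assuming the runner-up wagers everything and responds correctly.
    Ties are ignored for simplicity."""
    return max(0, leader - 2 * runner_up - 1)

# The example from the text: leader at $2000, runner-up at $900.
wager = max_safe_wager(2000, 900)    # anything up to $199 is safe
assert 2000 - wager > 2 * 900        # leader keeps $1801 vs. at most $1800
```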
Whoever has the most money at the end of all three rounds wins that amount of cash, and gets to return to the show for another game the next day to continue his winning streak. The other two contestants are given consolation prizes and a nominal appearance fee for being on the show, and are never heard from again.
The show is only 30 minutes long, so the folks at Sony Pictures who produce the show can film a full week's worth of television shows in just two real-life days, Tuesday and Wednesday, allowing the host Alex Trebek and his "Clue Crew" time to research new categories and answers.
So, here is your homework assignment. Record a full episode of Jeopardy on your VCR or Digital Video Recorder (DVR) and have your thumb ready to press the pause button. For each round, listen to each category, pause, and try to guess what all the answers in that column will have in common. For each category, write down a statement like "All the responses in this category are ...".
The answers could be people, places or things. Suppose the category is "Chicks Dig Me". In English, "chicks" can be slang for women, or refer to young chickens. The term "dig" can be slang for admires or adores, so this could be "Male Celebrities" that women find attractive, objects of desire that women fancy (diamonds, puppies, etc.), or places that women like to go. As it turns out, the "dig" referred to archaeology, and the responses were all famous female archaeologists.
Once you have all your statements written down, press the play button again.
Next, as each answer is shown, you have three seconds to hit pause again, so that the answer is on the screen but no contestant has responded yet. Go to your favorite search engine, like Google or Bing, and try to determine the correct response based on the category and answer. Consider these [tips for being an Internet Search ninja]. Once you think you have figured out your response, write it down along with the dollar amount you wager, or decide not to respond for that answer if you are unsure of your findings.
Even if you think you already know the correct response, you may decide to gain more confidence of your response by finding confirming or supporting evidence on the Internet.
Press play. Either one of the contestants will get it right, or the host will provide the question that was expected as the correct response.
How well did you do? Were you able to find the correct response online, or at least confirm that what you knew was correct? If you got it correct, add the dollar amount to your score. If you got it wrong, subtract the amount.
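That add-or-subtract bookkeeping is simple enough to sketch as a small helper (the function and outcome labels are mine, just to make the scoring rule concrete):

```python
def update_score(score, value, outcome):
    """Apply one clue to your running homework score.
    outcome is "correct", "wrong", or "pass" (no response)."""
    if outcome == "correct":
        return score + value
    if outcome == "wrong":
        return score - value
    return score

score = 0
score = update_score(score, 800, "correct")   # now 800
score = update_score(score, 400, "wrong")     # now 400
score = update_score(score, 1000, "pass")     # still 400
```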
At the end of each round, look back at your statements for each category. Did you guess correctly the common theme for each category column of answers? Did you misinterpret the slang, pun or humor intended?
At the end of the game, you might have done better than the contestant who won the game. However, check how much added time you took to do those Internet searches. The average winner buzzes in on only half of the answers and gets only 80 percent of those correct.
If you are really brave, take the [Jeopardy Online Test]. If you do this homework assignment, feel free to post your insights in the comments below.
Tonight PBS plans to air Season 38, Episode 6 of NOVA, titled [Smartest Machine On Earth]. Here is an excerpt from the station listing:
"What's so special about human intelligence and will scientists ever build a computer that rivals the flexibility and power of a human brain? In "Artificial Intelligence," NOVA takes viewers inside an IBM lab where a crack team has been working for nearly three years to perfect a machine that can answer any question. The scientists hope their machine will be able to beat expert contestants in one of the USA's most challenging TV quiz shows -- Jeopardy, which has entertained viewers for over four decades. "Artificial Intelligence" presents the exclusive inside story of how the IBM team developed the world's smartest computer from scratch. Now they're racing to finish it for a special Jeopardy airdate in February 2011. They've built an exact replica of the studio at its research lab near New York and invited past champions to compete against the machine, a big black box code -- named Watson after IBM's founder, Thomas J. Watson. But will Watson be able to beat out its human competition?"
Like most supercomputers, Watson runs the Linux operating system. The system has 2,880 cores (90 IBM Power 750 servers, four sockets each, eight cores per socket) to achieve 80 [TeraFlops]. TeraFlops is a unit of measure for supercomputers, representing a trillion floating point operations per second. By comparison, Hans Moravec, principal research scientist at the Robotics Institute of Carnegie Mellon University (CMU), estimates that the [human brain is about 100 TeraFlops]. So, in the three seconds that Watson gets to calculate its response, it can process 240 trillion operations.
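The figures above can be checked with a few lines of arithmetic (all numbers come from the text):

```python
# Core count: 90 servers x 4 sockets x 8 cores per socket.
servers, sockets, cores_per_socket = 90, 4, 8
cores = servers * sockets * cores_per_socket          # 2880 cores

# 80 TeraFlops sustained over a 3-second response window.
teraflops = 80          # trillions of operations per second
seconds = 3             # time Watson has to respond
operations = teraflops * seconds                      # 240 trillion operations
```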
Several readers of my blog have asked for details on the storage aspects of Watson. Basically, it is a modified version of IBM Scale-Out NAS [SONAS] that IBM offers commercially, but running Linux on POWER instead of Linux-x86. System p expansion drawers of SAS 15K RPM 450GB drives, 12 drives each, are dual-connected to two storage nodes, for a total of 21.6TB of raw disk capacity. The storage nodes use IBM's General Parallel File System (GPFS) to provide clustered NFS access to the rest of the system. Each Power 750 has minimal internal storage mostly to hold the Linux operating system and programs.
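A back-of-envelope check on the raw capacity quoted above. Note the drawer count is inferred from the totals here, not stated in the text:

```python
# Each System p expansion drawer holds twelve 450GB SAS drives.
drive_gb = 450
drives_per_drawer = 12
drawer_tb = drive_gb * drives_per_drawer / 1000.0     # 5.4 TB per drawer

# 21.6TB raw implies four such drawers (an inference, not from the text).
drawers = 21.6 / drawer_tb                            # 4 drawers
```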
When Watson is booted up, the 15TB of total RAM is loaded, and thereafter the DeepQA processing is all done from memory. According to IBM Research, "The actual size of the data (analyzed and indexed text, knowledge bases, etc.) used for candidate answer generation and evidence evaluation is under 1TB." For performance reasons, various subsets of the data are replicated in RAM on different functional groups of cluster nodes. The entire system is self-contained; Watson is NOT going to the Internet searching for answers.
As time progresses, things change: sometimes for the better, sometimes a step backwards, and sometimes just different enough to be annoying. I wrote my blog post about [A Box Full of Floppies] a week ago, and posted it on Monday. Let's take a look at how time and change impacted that one post.
The weather has warmed up here in Tucson so I started my Spring Cleaning early this year...
If there is ever a good time to brag about how beautiful the weather is here in Tucson, it would be when everyone else in the country is digging out of piles of snow. When my friends on Twitter were complaining how cold it was in Scotland, Ireland, Canada, or the East Coast of the United States, I would remind them that I was wearing a T-shirt and shorts. I played golf for a week last December!
Sadly, a few days after my post, Tucson had its coldest days of February, breaking records set back in 1899. Water pipes froze, outdoor plants suffered, and over 14,000 homes and businesses were cut off from natural gas. The 1,400-plus employees at the IBM Tucson facility have been asked to telecommute until restroom facilities can be restored to working order.
While we should all pay more attention to [climate change], this latest chill is probably just a seasonal fluctuation, thanks to [La Niña], that happens every 10-15 years.
Here is a YouTube video of an astronaut ejecting a floppy disk...
Back in 2009, YouTube decided to [stop supporting Internet Explorer 6 (IE6)] for viewing its videos. However, that is what most IBMers were on, and this posed a problem when I embedded a video on my blog. To get around it, my friends at Microsoft provided special "conditional HTML tags" that allowed me to suppress YouTube videos when viewed from Internet Explorer. The video showed up for those using Chrome, Opera, Firefox or other browsers, but was suppressed for IE users, and that allowed IBM employees to at least read the text.
Fortunately, last July, IBM decided to switch from IE6 over to Mozilla Firefox as the standard browser, so I thought this would no longer be an issue.
Unfortunately, my friends at YouTube have done it again. They changed the generated embed code from "object" tags to "iframe" tags, which messes up blogs written in various blogging systems, including the Lotus Connections that I have here on developerWorks, as well as WordPress. The new method is intended either to promote the new HTML5 standard, or to piss off [iPhone users]. In any case, several readers found they could not read my entire post about floppies because the "iframe" prevented the rest of the post from being shown. I have since reverted to the old "object" tags and re-posted for everyone's benefit.
I may have to stand up an OS/2 machine just to check out what is actually on those floppies...
For any data that you keep for long term retention, it is important that you be able to access the data in a meaningful way when you need it. IBM has identified five ways that this can be done:
Museum approach -- keep old servers, storage and applications around. In my case, I have computers that can handle 3.5-inch floppy diskettes, but no hardware to read my Zip cartridges or 5.25-inch floppies.
Emulation approach -- emulating old systems with new systems. I remember the first CD players had "tape cassette" attachments so they could be used in car stereos.
Migration approach -- migrating data and applications to new technology. This is what most businesses do. For example, if you keep archives through IBM Tivoli Storage Manager or DFSMShsm, the software will migrate data from old tapes to new tapes as part of its tape reclamation process.
Descriptive approach -- including sufficiently descriptive metadata, such as with HTML or XML tags, that would enable future rendering.
Encapsulation approach -- encapsulate the data, metadata and related application logic for future processing. While the "descriptive" approach might help display the contents of proprietary formats, the encapsulation approach would include application logic, perhaps written in Java, that could be used to actually operate built-in macros, pivot tables, or other active features of a document or database.
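As a concrete illustration of the descriptive approach, the metadata travels with the content so a future reader can make sense of it. A minimal sketch; the element names here are invented for illustration, not from any standard:

```python
import xml.etree.ElementTree as ET

# A self-describing archive: the tags record what the content is,
# how it is encoded, and where it came from.
doc = ET.Element("document", {"format": "text/plain", "created": "1989-06-01"})
meta = ET.SubElement(doc, "metadata")
ET.SubElement(meta, "application").text = "BASIC ledger program"
ET.SubElement(meta, "encoding").text = "ASCII"
ET.SubElement(doc, "content").text = "Q1 expenses ..."

print(ET.tostring(doc, encoding="unicode"))
```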
IBM Research is working closely with industry standards groups, like the Organization for the Advancement of Structured Information Standards [OASIS], to help promote the use of open standards for long-term retention.
For my readers who follow American Football, enjoy the [SuperBowl]!
The weather has warmed up here in Tucson so I started my Spring Cleaning early this year and unearthed from my garage a [Bankers Box] full of floppy diskettes.
IBM invented the floppy disk back in 1971, and continued to make improvements and enhancements through the 1980s and 1990s. It will be one of the many inventions celebrated as part of IBM's Centennial (100-year) anniversary. Here is an example [T-shirt].
IBM needed a way to send out small updates and patches for the microcode of devices out at client locations. IBM had drives that could write information, and sent out "read-only" drives to customer locations to receive these updates. The floppies were flexible plastic circles with a magnetic coating, placed inside a square paper sleeve. Imagine a floppy disk the size of a piece of standard paper: the 8-inch floppy fit conveniently in a manila envelope, sendable by standard mail, and could hold nearly 80KB of data.
I've been using floppies for the past thirty years. Here are some of my fondest memories:
While still in high school, my friend Franz Kurath and I formed "Pearson Kurath Systems", a software development firm. We wrote computer programs to run on UNIX and personal computers for small businesses here in Tucson. Whenever we developed a clever piece of code, a subroutine or procedure, we would save it on a floppy disk and re-use it for our next project. We wrote in the BASIC language, and our databases were simple Comma-Separated Values (CSV) flat files.
The 5.25-inch floppies we used could hold 360KB, and were flexible like the 8-inch models. Later versions of these 5.25-inch floppies would be able to hold as much as 1.2MB of data. We would convert single-sided floppies into double-sided ones by cutting out a notch in the outer sleeve. Covering up the notches would mark them as read-only.
The 3.5-inch floppies were introduced with a hard plastic shell, with the selling point that you could slap on a mailing label and postage and send one "as is" without the need for a separate envelope. These new 3.5-inch floppies held 720KB in double-density form, and later "HD" high-density versions could hold 1.44MB of data. The term "diskette" was used to associate these new floppies with [hard-shelled tape cassettes]. Sliding a plastic tab would mark a floppy as "read-only". IBM has the patent on this clever invention.
Continuing our computer programming business in college, Franz and I took out a bank loan to buy our first personal computer, for over $5,000 USD. Until then, we had to use equipment belonging to each client. The banks we went to didn't understand why we needed a computer, and suggested we just track our expenses on traditional green-and-white ledger paper. Back then, personal computers were for balancing your checkbook, playing games and organizing your collection of cooking recipes. But for us, it was a production machine. A computer with both 5.25-inch and 3.5-inch drives could copy files from one format to the other as needed. The boost in productivity paid for itself within months.
Apple launched its Macintosh computer in 1984, with a built-in 3.5-inch disk drive as standard equipment. Here is a YouTube video of an [astronaut ejecting a floppy disk] from an Apple computer in space.
In my senior year at the University of Arizona, my roommate Dave borrowed my backpack to hold his lunch for a bike ride. He thought he had taken everything out, but forgot to remove my 3.5-inch floppy diskette containing files for my senior project. By the time he got back, the diskette was covered in banana pulp. I was able to rescue my data by cracking open the plastic outer shell, cleaning the flexible magnetic media in soapy water, placing it into the plastic shell of a second diskette, and then copying the data off to a third diskette.
After graduating from college, Franz and I went our separate ways. I went to work for IBM, and Franz went to work for [Chiat/Day], the advertising agency famous for the 1984 Macintosh commercial. We still keep in touch through Facebook.
At IBM, I was given a 3270 terminal to do my job, and would not be assigned a personal computer until years later. Once I had a personal computer at home and at work, the floppy diskette became my "briefcase". I could download a file or document at work, take it home, work on it until the wee hours of the morning, and then come back the next day with the updated effort.
To help prepare me for client visits and public speaking at conferences, IBM loaned me out to local schools to teach. This included teaching Computer Science 101 at Pima Community College. When asked by a student whether to use "disc" or "disk", I wrote a big letter "C" on the left side of the chalkboard, and a big letter "K" on the right side. If it is round, I told the students while pointing at the letter "C", like a CD-ROM or DVD, use "disc". If it has corners, pointing to corners of the letter "K", like a floppy diskette or hard disk drive, use "disk".
On one of my business trips to visit a client, we discovered the client had experienced a problem that we had just recently fixed. Normally, this would have meant cutting a Program Temporary Fix (PTF) to a 3480 tape cartridge at an IBM facility and sending it to the client by mail. Unwilling to wait, I offered to download the PTF onto a floppy diskette on my laptop, upload it from a PC connected to their systems, and apply it there. This involved a bit of REXX programming to deal with the differences between the ASCII and EBCDIC character sets, but it worked, and a few hours later they were able to confirm the fix.
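The original REXX is long gone, but in a modern language the same ASCII/EBCDIC translation amounts to a code-page conversion. A minimal sketch; code page 037 is one common EBCDIC variant, and the client's mainframe may well have used a different one:

```python
# Convert a string between ASCII/Unicode and EBCDIC bytes using
# Python's built-in cp037 codec (EBCDIC, US/Canada variant).
text = "APPLY PTF"
ebcdic = text.encode("cp037")            # Unicode/ASCII -> EBCDIC bytes
assert ebcdic != text.encode("ascii")    # the byte values really do differ
assert ebcdic.decode("cp037") == text    # and the round trip is lossless
```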
In 1998, Apple signaled the beginning of the end of the floppy disk era, announcing that its latest "iMac" would not come with a built-in floppy drive. David Adams has a great article on this titled [The iMac and the Floppy Drive: A Conspiracy Theory]. You can get external floppy drives that connect via USB, so not having an internal drive is no longer a big deal.
While teaching a Top Gun class to a mix of software and hardware sales reps, one of the students asked what a "U" was. He had noticed "2U" and "3U" next to various products and wondered what that was referring to. The "U" represents the [standard unit of measure for height of IT equipment in standard racks]. To help them visualize, I explained that a 5.25-inch floppy disk was "3U" in size, and a 3.5-inch floppy diskette was "2U". Thus, a "U" is 1.75 inches, the thinnest dimension on a two-by-four piece of lumber. Servers that were only 1U tall would be referred to as "pizza boxes" for having similar dimensions.
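The conversion described above is trivial to express in code (the function name is mine):

```python
U_INCHES = 1.75   # one rack unit ("U") in inches

def units_to_inches(u):
    """Height of rack equipment measured in rack units."""
    return u * U_INCHES

print(units_to_inches(2))   # 3.5  -- same as a 3.5-inch floppy, so "2U"
print(units_to_inches(3))   # 5.25 -- same as a 5.25-inch floppy, so "3U"
```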
Every year, right around November or so, my friends and family bring me their old computers for me to wipe clean. Either I re-load them with the latest Ubuntu Linux so their kids can use them for homework, or I donate them to charity. Last November, I got a computer that could not boot from a CD-ROM, forcing me to build a bootable floppy. This gave me a chance to check out the various 1-disk and 2-disk versions of Linux and other rescue disks. I also have a 3-disk set of floppies for booting OS/2 in command-line mode.
So while this unexpected box of nostalgia derailed my efforts to clean out my garage this weekend, it did inspire me to try to get some of the old files off them and onto my PC hard drive. I have already retrieved some low-res photographs, some emails I sent out, and trip reports I wrote. While floppy diskettes were notorious for being unreliable, and this box of floppies has been in the heat and cold for many Arizonan summers and winters, I am amazed that I was able to read the data off most of them so far, all the way back to data written in 1989. While the data is readable, in most cases I can't render it into useful information. This brings up a few valuable lessons:
Backups are not Archives
Some of the files are in proprietary formats, such as my backups for TurboTax software. I would need a PC running a correct level of Windows operating system, and that particular software, just to restore the data. TurboTax shipped new software every year, and I don't know how forward or backward-compatible each new release was.
Another set of floppies are labeled as being in "FDBACK" format. I have no idea what these are. Each floppy has just two files, "backup.001" and "control.001", for example.
Backups are intended solely to protect against unexpected loss from broken hardware or corrupted data. If you plan to keep data as archives for long-term retention, use archive formats that will last a long time, so that you can make sense of them later.
Operating System Compatibility
Windows 7 and all of my favorite flavors of Linux are able to recognize the standard "FAT" file system that nearly all of my floppies are written in. Sadly, I have some files that were compressed under the OS/2 operating system using software called "Stacker". I may have to stand up an OS/2 machine just to find out what is actually on those floppies.
You can't judge a book by its cover
Floppies were a convenient form of data interchange. Sometimes, I reused commercially-labeled floppies to hold personal files. So, just because a floppy says "America On-Line (AOL) version 2.5 Installation", I can't just toss it away. It might actually contain something else entirely. This means I need to mount each floppy to check on its actual contents.
So what will I do with the floppies I can't read, can't write, and can't format? I think I will convert them into a [retro set of coasters], to protect my new living room furniture from hot and cold beverages.
In keeping with the spirit of a kinder, gentler 2011, I decided last week to refrain from raining on someone else's parade immediately before, during or after a competitor's announcement or annual conference, and let EMC have its few moments in the spotlight. This of course allows me more time to learn about the announcements and reflect on marketplace reactions. Here's a quick look at the [EMC Press Release]:
A new VNXe disk system
Of the 41 new storage technologies and products EMC announced last week, the VNXe is EMC's "me-too" product to compete against other low-end disk systems like the IBM System Storage DS3524 and N3000 series. It looks truly new, developed organically from the ground up, with a new architecture and a new OS. It comes in either the 2U-high VNXe3100 or the 3U-high VNXe3300. These employ 3.5-inch SAS drives to provide Ethernet-based NFS, CIFS and iSCSI host attachment. The $10K USD price tag appears to be for the hardware only. As is typical for EMC, software features are charged for in bundles or "suites", so the actual TCO will be much higher. I have not seen any announcement on whether Dell plans to resell either the VNXe or the VNX models, now that it has acquired Compellent.
A new VNX disk system
Despite having a similar name to the VNXe, the VNX appears to be a re-hash of the Celerra/CLARiiON mess that EMC has been selling already, based on the old FLARE and DART operating systems of those older disk systems. It scales from 75 to 1,000 SAS drives. While EMC calls the VNX "unified", it is currently available only in block-only and file-only models, with a promise from EMC that a combined block-and-file version will be offered sometime in the future. EMC claims that the VNX will be faster than its predecessors, so hopefully that means EMC has joined the rest of the planet and will publish SPC-1 and SPC-2 benchmarks to back up that claim. They can compare against the SPC-1 benchmarks that our friends at NetApp ran against the EMC CLARiiON.
New software for the VMAX
A long time ago, EMC announced it would provide non-disruptive automated tiering. Its first delivery, "FAST V1", handled entire LUNs at a time. EMC now finally has "FAST VP", which we expected was going to be called "FAST V2", providing sub-LUN automated tiering between solid-state and spinning disk drives. Meanwhile, IBM has been delivering "Easy Tier" on the IBM System Storage DS8000 series, SAN Volume Controller, and Storwize V7000 disk systems.
Data Domain Archiver
Competing against IBM, HP and Oracle in the tape arena, EMC's latest addition to the Data Domain family is designed for the long-term retention of backups? Archives of backups? Backups are short-lived, protecting against the unexpected loss from hardware failure or data corruption. Keeping backups as "archives" is generally a bad mistake, as it makes it hard to e-Discover the data you need when you need it, and you may not have the appropriate hardware to restore these old backups when you do find them.
I will have to dig deeper into all of these different technologies in separate posts in the future.
If we have learned anything from last decade's Y2K crisis, it is that we should not wait until the last minute to take action. Now is the time to start thinking about weaning ourselves off Windows XP. IBM has 400,000 employees, so this is not a trivial matter.
Already, IBM has taken some bold steps:
Last July, IBM announced that it was switching from Internet Explorer (IE6) to [Mozilla Firefox as its standard browser]. IBM has been contributing to this open source project for years, including support for open standards, and to make it [more accessible to handicapped employees with visual and motor impairments]. I use Firefox already on Windows, Mac and Linux, so there was no learning curve for me. Before this announcement, if some web-based application did not work on Firefox, our Helpdesk told us to switch back to Internet Explorer. Those days are over. Now, if a web-based application doesn't work on Firefox, we either stop using it, or it gets fixed.
IBM also announced the latest [IBM Lotus Symphony 3] software, which replaces the Microsoft Office PowerPoint, Excel and Word applications. Symphony also works across Mac, Windows and Linux. It is based on the OpenOffice open source project, and handles the open-standard OpenDocument Format (ODF). Support for Microsoft Office 2003 will also run out in 2014, so moving off proprietary formats to open standards makes sense.
I am not going to wait for IBM to decide how to proceed next, so I am starting my own migrations. In my case, I need to do it twice, on my IBM-provided laptop as well as my personal PC at home.
Last summer, IBM sent me a new laptop; we get a new one every 3-4 years. It was pre-installed with Windows XP, but powerful enough to run a 64-bit operating system in the future. Here is my series of blog posts on that:
I decided to try out Red Hat Enterprise Linux 6.1 with its KVM-based Red Hat Enterprise Virtualization to run Windows XP as a guest OS. I will try to run as much as I can on native Linux, but will have Windows XP guest as a next option, and if that still doesn't work, reboot the system in native Windows XP mode.
So far, I am pleased that I can do nearly everything my job requires natively in Red Hat Linux, including accessing Lotus Notes for email and databases, editing and presenting documents with Lotus Symphony, and so on. I have made RHEL 6.1 my default when I boot up. Setting up Windows XP under KVM was relatively simple, involving an 8-line shell script and a 54-line XML file. Here is what I have encountered:
We use a wonderful tool called "iSpring Pro", which merges Powerpoint slides with voice recordings for each page into a Shockwave Flash video. I have not found a Linux equivalent for this yet.
To avoid having to duplicate files between systems, I use symbolic links instead. For example, my Lotus Notes local email repository sits on the D: drive, but I can access it directly with a link from /home/tpearson/notes/data.
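A minimal sketch of that setup, using throwaway paths under /tmp as stand-ins rather than my real drive layout:

```shell
# Illustrative paths only: pretend /tmp/demo/mnt_d is the mounted D: drive.
mkdir -p /tmp/demo/mnt_d/notes_data          # stand-in for D:\notes\data
mkdir -p /tmp/demo/home                      # stand-in for the home directory
ln -sfn /tmp/demo/mnt_d/notes_data /tmp/demo/home/notes-data
readlink /tmp/demo/home/notes-data           # shows where the link points
```

Applications on the Linux side then open /tmp/demo/home/notes-data as if it were a local directory, while the files live only once, on the NTFS partition.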
While my native Ubuntu and RHEL Linux can access my C:, D: and E: drives in native NTFS file system format, the irony is that my Windows XP guest OS under KVM cannot. This means moving something from NTFS over to Ext4, just so that I can access it from the Windows XP guest application.
For whatever reason, "Password Safe" did not run on the Windows XP guest. It would launch, but took forever to load and never brought up the GUI. Fortunately, there is a Linux version [MyPasswordSafe] that seems to work just fine to keep track of all my passwords.
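For readers curious what the KVM guest definition mentioned above looks like, here is a heavily abbreviated sketch of a libvirt domain XML file for a Windows XP guest. The names, paths and sizes are illustrative stand-ins, not the actual 54-line file:

```xml
<domain type='kvm'>
  <name>winxp</name>
  <memory>1048576</memory>  <!-- guest RAM in KiB (1 GB here) -->
  <vcpu>1</vcpu>
  <os>
    <type arch='i686'>hvm</type>
    <boot dev='hd'/>
  </os>
  <devices>
    <disk type='file' device='disk'>
      <source file='/var/lib/libvirt/images/winxp.img'/>
      <target dev='hda' bus='ide'/>
    </disk>
    <interface type='network'>
      <source network='default'/>
    </interface>
    <graphics type='vnc'/>
  </devices>
</domain>
```

A short shell script can then register and boot the guest with `virsh define winxp.xml` followed by `virsh start winxp`.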
Personal home PC
My Windows XP system at home gave up the ghost last month, so I bought a new system with Windows 7 Professional, quad-core Intel processor and 6GB of memory. There are [various editions of Windows 7], but I chose Windows 7 Professional to support running Windows XP as a guest image.
Here is how I have configured my personal computer:
I actually found it more time-consuming to implement the "Virtual PC" feature of Windows 7 to get Windows XP mode working than KVM on Red Hat Linux. I am amazed how many of my Windows XP programs DO NOT RUN AT ALL natively on Windows 7. I now have native 64-bit versions of Lotus Notes and Symphony 3, which will do well enough for me for now.
I went ahead and put Red Hat Linux on my home system as well, but since I have Windows XP running as a guest under Windows 7, there was no need to duplicate the KVM setup there. At least if I have problems with Windows 7, I can reboot into RHEL6 Linux at home and use that for Linux-native applications.
Hopefully, this will position me well in case IBM decides to either go with Windows 7 or Linux as the replacement OS for Windows XP.
Actually, if the title confuses you, it is because it has a double meaning.
Meaning 1: IBM earned almost 100 Billion dollars (USD)
IBM's 2010 [earnings report is now available], for the full year 2010 and the fourth quarter. IBM had $99.9 billion (USD) in revenue, just shy of the $100 billion it had set out as a vision in the 1980s. IBM Storage contributed with 8 percent growth, not bad for a year Dave Barry considers [one of the worst years ever].
IBM President and CEO Sam Palmisano granted me a chunk of IBM stock in appreciation of my efforts towards the 2010 success! Actually, he gave stock to a whole bunch of IBMers, not just me, and they all deserve it also. Woo hoo!
Meaning 2: IBM is almost 100 years old
That's right, this upcoming June 16, 2011, IBM turns 100 years old. This Centennial date also happens to be my 25th anniversary working in IBM Storage, which IBM calls joining the Quarter Century Club, or QCC for short. So, I am looking forward to plenty of cake and fireworks on that day!
I am looking forward to a year-long celebration on both counts!
Every January, we look back into the past as well as look into the future for trends to watch for the upcoming year. Ray Lucchesi of Silverton Consulting has a great post looking back at the [Top 10 storage technologies over the last decade]. I am glad to see that IBM has been involved with and instrumental in all ten technologies.
Looking into the future, Mark Cox of eChannel has an article [Storage Trends to Watch in 2011], based on his interviews with two fellow IBM executives: Steve Wojtowecz, VP of storage software development, and Clod Barrera, distinguished engineer and CTO for storage. Let's review the four key trends:
Cloud Storage and Cloud Computing
No question: Cloud Computing will be the battleground of the IT industry this decade. I am amused by the latest spate of Microsoft commercials where problems are solved with someone saying "...to the cloud". Riding on the coat tails of this is "Cloud Storage", the ability to store data across an Internet Protocol (IP) network, such as 10GbE Ethernet, in support of Cloud Computing applications. Cloud Storage protocols in the running include NFS, CIFS, iSCSI and FCoE.
Mark writes "..vendors who aren't investing in cloud storage solutions will fall behind the curve."
Economic Downturn forces Innovation
The old adage applies: "Necessity is the mother of invention." The status quo won't do. In these difficult economic times, IT departments are running on constrained budgets and staff. This forces people to evaluate innovative technologies for storage efficiency, like real-time compression and data deduplication, to make better use of what they currently have. It also forces people to take a "good enough" attitude, instead of paying premium prices for best-of-breed they don't really need and can't really afford.
IT Service Management
Companies are getting away from managing individual pieces of IT kit, and are focusing instead on the delivery of information, from the magnetic surface of disk and tape media, to the eyes and ears of the end users. The deployment mix of private, hybrid and public clouds makes this even more important to measure and manage IT as a set of services that are delivered to the business. IT Service Management software can be the glue, helping companies implement ITIL v3 best practices and management disciplines.
Smarter Data Placement
A recent survey by "The Info Pro" analysts indicates that "managing storage growth" is considered more critical than "managing storage costs" or "managing storage complexity".
This tells me that companies are willing to spend a bit extra to deploy a tiered information infrastructure if it will help them manage storage growth, which typically runs around 40 to 60 percent per year. While I have discussed the concept of "Information Lifecycle Management" (ILM) on this blog for the past four years, I am glad to see it has gone mainstream, helped in part by automated storage tiering features like the IBM System Storage Easy Tier feature on the IBM DS8000, SAN Volume Controller and Storwize V7000 disk systems. Not all data is created equal, so the smart placement of data, based on the business value of the information it contains, makes a lot of sense.
These trends shape what solutions the various vendors will offer, and will influence what companies purchase and deploy.
The "Basic" offering includes a single IBM Storwize V7000 controller enclosure, and three year warranty package that includes software licenses for IBM Tivoli Storage FlashCopy Manager (FCM) and IBM Tivoli Storage Productivity Center for Disk - Midrange Edition (MRE). Planning, configuration and testing services for the software are included and can be performed by either IBM or an IBM Business Partner.
The "Standard" offering allows for multiple IBM Storwize V7000 enclosures, provides three year warranty package for the FCM and MRE software, and includes implementation services for both the hardware and the software components. These services can be performed by IBM or an IBM Business Partner.
Why bundle? Here are the key advantages for these offerings:
Increased storage utilization! First introduced in 2003, IBM SAN Volume Controller is able to improve storage utilization by 30 percent through virtualization and thin provisioning. IBM Storwize V7000 carries on this tradition. Space-efficient FlashCopy is included in this bundle at no additional charge and can reduce the amount of storage normally required for snapshots by 75 percent or more. IBM Tivoli Storage FlashCopy Manager can manage these FlashCopy targets easily.
Improved storage administrator productivity! The new IBM Storwize V7000 Graphical User Interface can help improve administrator productivity up to 2 times compared to other midrange disk solutions. The IBM Tivoli Storage Productivity Center for Disk - Midrange Edition provides real-time performance monitoring for faster analysis time.
Increased application performance! This bundle includes the "Easy Tier" feature at no additional charge. Easy Tier is IBM's implementation of sub-LUN automated tiering between Solid-State Drives (SSD) and spinning disk. Easy Tier can help improve application throughput up to 3 times, and improve response time up to 60 percent. Easy Tier can help meet or exceed application performance levels with its internal "hot spot" analytics.
Increased application availability! IBM Tivoli Storage FlashCopy Manager provides easy integration with existing applications like SAP, Microsoft Exchange, IBM DB2, Oracle, and Microsoft SQL Server. Reduce application downtime to just seconds with backups and restores using FlashCopy. The built-in online migration feature, included at no additional charge, allows you to seamlessly migrate data from your old disk to the new IBM Storwize V7000.
Significantly reduced implementation time! This bundle will help you cut implementation time in half, with little or no impact to storage administrator staff. This will help you realize your return on investment (ROI) much sooner.
Regardless of what you do, it is important to keep your finger on the pulse of what is going on around you. Let me recap the different jobs I have had within IBM:
I started as a Software Engineer on DFHSM, which was later renamed to DFSMShsm, and worked my way up to lead architect for the entire DFSMS product. I attended user group conferences like SHARE and GUIDE to formally present the latest releases of the product, and to collect requirements for improvements and additions desired by the CIOs, IT directors and Storage Admins that attended. Each requirement was proposed to the group, who then voted on a scale from -3 to +3, with zero considered abstention. Six months later, I would come back to present which requirements were implemented, which ones were in consideration for future releases, and which ones were rejected because they were not strategic. Not everyone was happy with these decisions, and I took a lot of abuse on this. However, the process of gathering requirements was important, and the products are better for it.
I switched over to Marketing, starting out as a Marketing Manager for various products, and working my way up to lead Marketing Strategist for the IBM System Storage product line. I continued to attend conferences to understand client requirements, but I also attended meetings with IBM sales reps and Business Partners. For those who lump "Marketing and Sales" into a single category, there is a difference: Marketing is the transfer of awareness and enthusiasm, whereas Sales is the transfer of ownership. When Marketing does their job well, prospects are lining up to buy your product. When they don't, the Sales team has to pick up the slack, and provide the awareness and enthusiasm that Marketing failed to deliver. I traveled all over the world to present our Marketing Strategy. Not everyone was happy with some of our decisions, and I took a lot of abuse on this. However, the process of "socializing" the marketing message and hearing feedback from those who faced clients every day was important, and the marketing strategy was better for it.
Three years ago, I switched again, this time to be a Storage Consultant at the Tucson Executive Briefing Center. While I still travel to clients and conferences, in most cases the clients come to me, here in Tucson, Arizona. I get to present our strategy, solutions and products. Not everyone is happy with some of our decisions, and I take a lot of abuse on this. However, the process of helping customers make tough business and IT purchase decisions is important, and both IBM and our clients are better for it.
It was in this same spirit that US Representative Gabrielle ("Gabby") Giffords launched a series of "Congress on your Corner" meetings. These were open air townhall meetings that allowed her to present her priorities and plans for the future, and to get feedback from her constituents. Last Saturday, at one such event here in Tucson, she was shot in the head. The shooter then proceeded to shoot another 20 rounds at others before being tackled to the ground by two volunteers. He had another 70 bullets left, so it could have been much worse.
Congresswoman Giffords survived, but six died, including a US Federal Judge, a pastor at a local church, and a 9-year-old girl who, ironically, was born on September 11, 2001, the date of another US tragedy. The girl had just been elected to her student council, and came out to learn what government was all about. Another dozen people were wounded.
The last time I saw Gabby in person was in October 2010, at a charity auction to benefit the local Boys and Girls Club of America. She was shaking hands with everyone. I wished her good luck on her re-election campaign, which she won a few weeks later by a slim margin of some 4,000 votes.
(People have asked me if I knew her in high school. Gabby and I both attended University High in Tucson, rated one of the top 25 high schools in the USA. She would have started her freshman year months after I graduated, so I don't remember ever crossing paths.)
Having spent much of my childhood in Central and South America, I have witnessed my fair share of gun violence, military coups, and government take-overs. Of course, in a democratic government, there is a more peaceful way to resolve your differences. In my younger days, I was a lobbyist for local and state government here in Arizona for various causes and issues. I have met and dealt with many politicians. While many people are still in shock and awe over Saturday's tragedy, consider the following:
Tucson is part of the Wild, Wild West. We are not far from the infamous town of Tombstone, where a famous shoot-out happened at the O.K. Corral. A popular activity here is to shoot rounds at a shooting range, where you can either rent a gun or bring your own. Gun ownership is high, and hunting is a popular sport. Tucson hosts "Gun Shows" that allow people to buy guns without the mandatory 5-day waiting period. Every year, Tucson celebrates "Dillinger Days" to commemorate the capture of bank robber John Dillinger at the Hotel Congress in downtown Tucson.
Tucson is close to Mexico. Authorities have reported as many as 30,000 people have been killed on the other side of the US-Mexico border in the past five years by rival drug cartels. An estimated 30 percent of the Tucson economy comes from human and drug trafficking. Those killed in Mexico include government officials, law enforcement and journalists. Last year, US President Barack Obama [ordered 1200 troops to protect the US-Mexico border], of which half were deployed here in Arizona. The district Congresswoman Giffords represents, where I live, borders Mexico.
Tucson has high schools, colleges and Universities. We have had our share of shootings by frustrated students.
While everyone immediately was quick to blame this tragedy on everyone from [Sarah Palin] to Mexican drug lords, it appears the shooter was merely a frustrated college student, acting alone, and is now in custody awaiting trial. He was attending Pima Community College and had his run-ins with the college police there as well. He had applied to join the US Army, but his application was rejected.
In the early 1990s, to help me prepare to become a public speaker, IBM loaned me out to teach at the local schools. I did four semesters of high school, and then taught a year of Computer Science 101 at Pima Community College. (Yes, I have all the teaching credentials to do this.) I found this experience to be great training for me to practice my speaking skills. However, I took a lot of abuse. I had disruptive students, angry students, frustrated students, and students that would threaten me if they did not pass the class. One by one, they would drop out of my class, leaving me with only nine students finishing my class with a passing grade.
Sadly, community colleges across the country carry a stigma that they are not as good as a full four-year University. The students I met at Pima Community College were here because they could not find decent employment with just a high school diploma, weren't smart enough or rich enough to attend the University of Arizona, and just didn't know what to do with their lives. Some who graduate manage to get jobs as technicians and medical assistants, while others use this as a stepping stone to transfer over to the University of Arizona or other specialized training program.
I am sure there is much more to learn about this incident. Politicians can expect to take some abuse for their decisions, their actions, or their lack of action on various issues, but nobody deserves to be shot. Congresswoman Giffords was just trying to put her finger on the pulse of her district, to understand the concerns of her constituents so that she could represent us properly in her third term in office. Instead, we have doctors at the University Medical Center keeping their fingers on her pulse. So far, things are hopeful; she is able to respond to commands such as "wiggle your toes" or "hold up two fingers".
The latest update to the IBM Storage channel on YouTube is fellow IBMer Bob Dalton presenting IBM Scale-Out Network Attached Storage (SONAS) at the NAB 2010 conference. Here is the quick [2-minute YouTube video].
Last year, I took a different approach. I decided to NOT publicize my resolutions to see if that allowed me to stick to them better. Here is what I had resolved for 2010:
The recession took quite a toll on my investments and retirement plans in 2009, so in 2010 I decided to increase my savings rate and diversify my portfolio. I consider this one a success, with special thanks to the financial planners at Fidelity Investments for their assistance in this area. This was not a matter of sticking to a strict budget as much as not wasting money on so many expensive, frivolous things.
Publish "Inside System Storage: Volume II"
Yes, I finally got my latest book published last October, a follow-on to my 2007 hit "Inside System Storage: Volume I". I have already begun working on Volume III, so I consider this one a success.
Quit Exercising at the Gym
From 2003 to 2009, I worked out consistently at my gym three times a week for an hour, under the supervision of a personal trainer, with 20 minutes of cardio on a treadmill followed by 40 minutes of weight lifting with kettlebells, free weights and resistance machines. During that time, I did not gain muscle mass nor lose body fat. Rather than admit their failure, my personal trainers insisted this was merely a "plateau". Armed with the Time article [Why Exercise Won't Make You Thin], I decided to save thousands of dollars in 2010 by discontinuing my gym membership and letting my contract with my personal trainer expire. End result? I was two pounds lighter after 12 months, so I consider this one a success.
Re-Decorate my Living Room
With all the money I saved, I resolved to re-decorate my living room. I hired a professional interior decorator, bought new furniture, and had the entire room re-painted to new colors. This also means keeping the room uncluttered, which I have managed to do so far. So, this one is also a success.
Learn Cloud Computing
This last one was work-related. Every year, IBM asks its employees to document their "Personal Business Commitments", or PBCs, which then forms the basis of your year-end appraisal. IBM is what the industry calls a "Results-Oriented Work Environment" [ROWE]. These PBCs are an opportunity to identify areas to stretch and grow, broaden your skills and strengthen your expertise. I was able to get access to IBM's cloud computing offerings to get hands-on experience, as well as research this topic on various fronts so that I could provide advice to clients and make presentations at various briefings and events. While there is still much more to learn, I consider this a success.
So, while I seem to have been more successful keeping my resolutions by not making them public up front, I think the more important pattern is that when I made many resolutions, I had only a 60 to 80 percent success rate, but when I had fewer, I was more likely to keep them all and be less stressed about it. This could also be psychological, in that feeling that you have completed 60 to 80 percent allows you to forgive yourself for not keeping some of the more difficult resolutions. Therefore, this year, I have decided to focus on a single resolution, to reduce my body fat percentage.
Rather than make you wait 12 months for my results, I plan to provide periodic updates on my progress in this blog. Over the vacation break, I bought and read Tim Ferriss' book [Four Hour Body]. Mo and I are in this together, and we started Tim's [Slow-Carb diet] last Sunday. My doctor has advised me on which vitamins and supplements to take. Rather than go back to the gym, I will just focus on walking at least 20,000 steps per week, which works out roughly to 10 kilometers.
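As a back-of-the-envelope check of that steps-to-distance conversion, assuming a fairly short average stride of half a metre (my assumption, not a measured figure):

```shell
# Convert a weekly step goal to distance; the 0.5 m stride is an assumption.
awk 'BEGIN {
    steps    = 20000      # weekly step goal
    stride_m = 0.5        # assumed average stride length, in metres
    printf "%.1f km per week\n", steps * stride_m / 1000
}'
```

This prints 10.0 km per week, which matches the rough figure above; a longer stride would put the weekly distance correspondingly higher.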
Wrapping up my post-week coverage of the [Data Center 2010 conference], I stuck it out to the end to get my money's worth. As the morning went on, it became obvious that many people had booked flights or started their weekends before the official 3:15pm end of the last day.
Strategies for Data Life Cycle Management
I prefer the term "Information Lifecycle Management", but the two analysts presenting decided to use DLM instead. Let's start with the biggest challenge faced by the audience.
The problem is not meeting Service Level Agreements (SLAs) but Service Level Expectations. When looking at the real business value of IT, you should link IT strategy to business outcomes and directives, align with your CIO's pet initiatives, and position storage as a technology supporting IT Directors' goals. Here were the top five goals:
Curtailing Storage Sprawl
Compliance and e-Discovery
Improving Service Levels for Data Availability and Protection
Moving to Cloud Computing
The analysts reviewed both a "Tops Down" and a "Bottoms Up" approach. They recommend what they call an "Enterprise Information Archive" (what IBM calls Smart Archive, by the way) that provides a better understanding of all data.
No greater lie has been told than "Storage is Cheap". Currently, only 10 percent of companies have a formal "deletion policy", but the analysts predict this will rise to 50 percent by 2013.
The "Bottoms Up" approach is focused on modernizing the data center at the storage technology level. There has been a resurgence in interest in ILM solutions, implementing storage tiers, and storage efficiency features like thin provisioning, data deduplication and real-time compression. Cloud Computing can help off-load this effort to someone else.
ILM provides real business value, such as reducing costs, improving quality of service, and mitigating risks. The analysts felt that if you are not partnering with a storage vendor that offers five essential technologies, you should probably change vendors. What are those five essential technologies? I am glad you asked. Watch this [YouTube video] to find out.
Getting the Most From Your Storage Vendor Relationships
The analyst mentioned there are two kinds of storage vendors: suppliers, who sell you solutions, and partners, who work with you to develop unique functionality. He offered some advice:
Allow vendors to analyze and profile your workloads, such as IOPS, MB/sec bandwidth, average blocksize, and so on.
Review your Service level agreements (SLAs), procedures and asset management strategies
Identify upgrade risks, conversion costs, and unintended consequences
Take advantage of vendor engineers and technical staff for skills transfer, best practices, industry trends, and competitive comparisons
Explore different solutions and approaches
Avoid big pitfalls by negotiating and locking in upgrade and maintenance costs, scheduling conversions, and getting any guarantees in writing.
The analyst also polled the audience on how they currently interact with their storage vendors.
The analyst's "Do's and Don'ts" were good advice for nearly any kind of business negotiation:
Keep language simple and enforceable
Limit diagnostic time
Be reasonable with rolling time-lines
Design remedies that keep you whole and are implementable in your environment
Make remedies punitive
Use qualitative measures
Rely on vendor's metrics only
Set terms that expire during life of system
Let the vendor provide best practices after installation, set reasonable expectations, schedule regular reviews, insist on cross-vendor cooperation, and have zero tolerance for finger-pointing between vendors. Depreciate storage equipment quickly.
This was the last session of the conference, a workshop to deal with irrational behavior during unexpected events that could disrupt or impact business operations. In the exercise, each table was a fictitious company, and the 7-8 people sitting at each table represented different department heads who had to make recommendations to upper management on how to deal with each disastrous situation presented to us. Decisions had to be made with limited and incomplete information. Each table had to come to a consensus on each action, and a single spokesperson from each table would present the recommendations. Winners of each round got prizes.
Plenty of coffee, not enough juice. Power and cooling were top of mind. The rooms were cold, designed for people wearing suits, I imagine. I enjoyed plenty of hot coffee throughout the event. Everyone complained that their smartphones and iPads were running out of electricity. The conference had "recharge" stations with plugs for all kinds of phones, but the Micro-USB plugs that I needed for my Samsung Vibrant, and the Apple connectors needed by everyone else's iPhones and iPads, were always taken. I remember when you could charge your cell phone once a week because you hardly used it to make calls; now that phones are used to follow Twitter feeds, surf websites, and more between sessions, the power runs out quickly.
Information Overload. I was one of those following tweets on the HootSuite app on my Android-based smartphone. I was able to meet some of the people I have exchanged blog comments and tweets with. One told me that his tweets were his way of taking notes, so that his trip report would be done by the time he got back to the office. I used to write trip reports too, before blogging and tweeting.
The mood was positive. Overall, all the rival competitors got along well. I had friendly chats with people from Oracle, HP, Cisco, EMC, VCE, and others. People are overall optimistic that the IT industry is set for economic growth in 2011.
The only people who look forward to change are babies in soiled diapers. My impression is that people who were threatened by Cloud Computing now have a better understanding on what they need to do going forward. Yes, this means learning new skills, re-evaluating your backup/recovery procedures, reviewing your BC/DR contingency plans, and a variety of other changes. Those who don't like frequent change should consider getting out of the IT industry. Just sayin'
I suspect this will be my last post of 2010. I will be taking a much-needed break, celebrating the Winter Solstice. To all my readers, I wish you good times over the next few weeks, and a Happy New Year!
Continuing my post-week coverage of the [Data Center 2010 conference], Thursday morning had some interesting sessions for those that did not leave town last night.
Interactive Session Results
In addition to the [Profile of Data Center 2010] that identifies the demographics of this year's registrants, the morning started with highlights of the interactive polls during the week.
External or Heterogeneous Storage Virtualization
The analyst presented his views on the overall External/Heterogeneous Storage Virtualization marketplace. He started with the key selling points.
Avoid vendor lock-in. Unlike the IBM SAN Volume Controller, many of the other storage virtualization products result in vendor lock-in.
Leverage existing back-end capacity. Limited to what back-end storage devices are supported.
Simplify and unify management of storage. Yes, mostly.
Lower storage costs. Unlike the IBM SAN Volume Controller, many using other storage virtualization discover an increase in total storage costs.
Migration tools. Yes, as advertised.
Consolidation/Transition. Yes, over time.
Better functionality. Potentially.
Shortly after several vendors started selling external/heterogeneous storage virtualization solutions, either as software or as pre-installed appliances, major storage vendors that were caught with their pants down immediately started calling everything internal "storage virtualization" as well, to buy some time and increase confusion.
While the analyst agreed that storage virtualization simplifies the view of storage from the host server side, it can complicate the management of storage on the storage end. This often comes up at the Tucson Briefing Center. I explain this as the difference between manual and automatic transmission cars. My father was a car mechanic, and since he was the sole driver and sole mechanic of his cars, he preferred manual transmissions, which are easier to work on. However, rental car companies, such as Hertz or Avis, prefer automatic transmission cars. This might require more skill on the part of their mechanics, but greatly simplifies the experience for those driving.
The analyst offered his views on specific use cases:
Data Migration. The analyst feels that external virtualization serves as one of the best tools for data migration. But what about tech refresh of the storage virtualization devices themselves? Unlike IBM SAN Volume Controller, which allows non-disruptive upgrades of the nodes themselves, some of the other solutions might make such upgrades difficult.
Consolidation/Transition. External virtualization can also be helpful, depending on how aggressive the schedule for consolidation/transition is performed.
Improved Functionality/Usability. IBM SAN Volume Controller is a good example, an unexpected benefit. Features like thin provisioning, automated storage tiering, and so on, can be added to existing storage equipment.
The analyst mentioned that there were different types of solutions. The first category were those that support both internal storage and external storage virtualization, like the HDS USP-V or IBM Storwize V7000. He indicated that roughly 40 percent of HDS USP-V are licensed for virtualization. The second category were those that support external virtualization only, such as IBM SAN Volume Controller, HP Lefthand and SVSP, and so on. The third category were software-only Virtual Guest images that could provide storage virtualization capabilities.
The analyst mentioned EMC's failed product Invista, which sold fewer than 500 units over the past five years. The low penetration for external virtualization, estimated at 2 to 5 percent, could be explained by the bad taste Invista left with everyone considering their options. However, the analyst predicts that by 2015, external virtualization will reach double-digit marketshare.
Having a feel for the demographics of the registrants, and specific interactive polling in each meeting, provides a great view on who is interested in what topic, and some insight into their fears and motivations.
Continuing my post-week coverage of the [Data Center 2010 conference], Wednesday evening we had six hospitality suites. These are fun informal get-togethers sponsored by various companies. I present them in the order that I attended them.
Intel - The Silver Lining
Intel called their suite "The Silver Lining". Magician Joel Bauer wowed the crowds with amazing tricks.
Intel handed out branded "Snuggies". I had to explain to this guy that he was wearing his backwards.
i/o - Wrestling with your Data Center?
Newcomer "i/o" named their suite "Wrestling with your Data Center?" They invited attendees frustrated with their data centers to don inflated Sumo Wrestling suits.
APC by Schneider Electric - Margaritaville
This will be the last year for Margaritaville, a theme that APC has used now for several years at this conference.
Cisco - Fire and Ice
Cisco had "Fire and Ice" with half the room decorated in Red for fire, and White for ice.
This is Ivana, welcoming people to the "Ice" side.
This is Peter, on the "Fire" side. Cisco tried to have opposites on both sides, savory food on one side, sweets on the other.
CA Technologies - Can you Change the Game?
CA Technologies offered various "sports games", with a DJ named "Coach".
Compellent - Get "Refreshed" at the Fluid Data Hospitality Suite
Compellent chose a low-key format, "lights out" approach with a live guitarist. They had hourly raffles for prizes, but it was too dark to read the raffle ticket numbers.
Of the six, my favorite was Intel. The food was awesome, the Snuggies were hilarious, and the magician was incredibly good. I would like to thank Intel for providing me super-secret inside access to their Cloud Computing training resources and for the Snuggie!
Continuing my post-week coverage of the [Data Center 2010 conference], Wednesday afternoon included a mix of sessions that covered storage and servers.
Enabling 5x Storage Efficiency
Steve Kenniston, who joined IBM through the recent acquisition of Storwize Inc., presented IBM's new Real-Time Compression appliance. There are two models: one handles 1 GbE networks, and the other supports mixed 1 GbE/10 GbE connectivity. Files are compressed in real time with no impact on performance, and in some cases performance can actually improve, because less data is written to the back-end NAS devices. The appliance is not limited to IBM N series and NetApp, but is vendor-agnostic; IBM is qualifying the solution with other NAS devices on the market. Compression can reduce data by up to 80 percent, providing a 5x storage efficiency.
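The 5x figure follows directly from the 80 percent reduction. A quick sketch of the arithmetic (the function name is mine for illustration, not an IBM tool):

```python
def efficiency_multiplier(compression_pct: float) -> float:
    """Convert a compression percentage into a storage-efficiency multiplier.

    An 80% reduction means compressed data occupies 20% of its original
    size, so original/compressed = 1 / (1 - 0.80) = 5x.
    """
    remaining = 1.0 - compression_pct / 100.0
    return 1.0 / remaining

print(round(efficiency_multiplier(80), 2))  # 5.0, the Real-Time Compression claim
```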
Townhall - Storage
The townhall was a Q&A session to ask the analysts their thoughts on Storage. Here I will present the answer from the analyst, and then my own commentary.
Are there any gotchas deploying Automated Storage Tiering?
Analyst: You need to fully understand your workload before investing any money into expensive Solid-State Drives (SSD).
Commentary: IBM offers Easy Tier for the IBM DS8000, SAN Volume Controller, and Storwize V7000 disk systems. Before buying any SSD, these systems will measure the workload activity and IBM offers the Storage Tier Advisory Tool (STAT) that can help identify how much SSD will benefit each workload. If you don't have these specific storage devices, IBM Tivoli Storage Productivity Center for Disk can help identify disk performance to determine if SSD is cost-justified.
Wouldn't it be simpler to just have separate storage arrays for different performance levels?
Analyst: No, because that would complicate BC/DR planning, as many storage devices do not coordinate consistency group processing from one array to another.
Commentary: IBM DS8000, SAN Volume Controller and Storwize V7000 disk systems support consistency groups across storage arrays, for those customers that want to take advantage of lower cost disk tiers on separate lower cost storage devices.
Can storage virtualization play a role in private cloud deployments?
Analyst: Yes, by definition, but today's storage virtualization products don't work with public cloud storage providers. None of the major public cloud providers use storage virtualization.
Commentary: IBM uses storage virtualization for its public cloud offerings, but the question was about private cloud deployments. The IBM CloudBurst integrated private cloud stack supports the IBM SAN Volume Controller, which makes it easy for storage to be provisioned through the self-service catalog.
Can you suggest one thing we can do Monday when we get back to the office?
Analyst: Create a team to develop a storage strategy and plan, based on input from your end-users.
Commentary: Put IBM on your short list for your next disk, tape or storage software purchase decision. Visit
[ibm.com/storage] to re-discover all of IBM's storage offerings.
What is the future of Fibre Channel?
Analyst 1: Fibre Channel is still growing, and will go from 8 Gbps to 16 Gbps. The transition to Ethernet is slow, so FC will remain the dominant protocol through 2014.
Analyst 2: Fibre Channel will still be around, but NAS, iSCSI and FCoE are all growing at a faster pace. Fibre Channel will only be dominant in the largest of data centers.
Commentary: Ask a vague question, get a vague answer. Fibre Channel will still be around for the next five years.
However, SAN administrators might want to investigate Ethernet-based approaches like NAS, iSCSI and FCoE where appropriate, and start beefing up their Ethernet skills.
Will Linux become the Next UNIX?
Linux in your datacenter is inevitable. In the past, Linux was limited to x86 architectures, and UNIX operating systems ran on specialized CPU architectures: IBM AIX on POWER7, Solaris on SPARC, HP-UX on PA-RISC and Itanium, and IBM z/OS on System z architecture, to name a few. Today, Linux runs on many of these other CPU chipsets as well.
Two common workloads, Web/App serving and DBMS, are shifting from UNIX to Linux. Linux Reliability, Availability and Serviceability (RAS) is approaching the levels of UNIX. Linux has been a mixed blessing for UNIX vendors: x86 server margins are shrinking, and the high-margin UNIX market has shrunk 25 percent in the past three years.
UNIX vendors must make the "mainframe argument" that their flavor of UNIX is more resilient than any OS that runs on Intel or AMD x86 chipsets. In 2008, Sun Solaris was the number 1 UNIX, but today it is IBM AIX, with 40 percent marketshare. Meanwhile, HP has focused on extending its Windows/x86 lead with a partnership with Microsoft.
The analyst asks "Are the three UNIX vendors in it for the long haul, or are they planning graceful exits?" The four options for each vendor are:
Milk it as it declines
Accelerate the decline by focusing elsewhere
Impede the market to protect margins
Re-energize UNIX base through added value
Here is the analyst's view on each UNIX vendor.
IBM AIX now owns 40 percent marketshare of the UNIX market. While the POWER7 chipset supports multiple operating systems, IBM has not been able to get an ecosystem to adopt Linux-on-POWER. The "Other" includes z/OS, IBM i, and other x86-based OS.
HP has multi-OS Itanium from Intel, but is moving to multi-OS blades instead. Their "x86 plus HP-UX" strategy is a two-pronged attack against IBM AIX and z/OS. The Intel Nehalem chipset is approaching the RAS of Itanium, making the "mainframe argument" more difficult for HP-UX.
Before Oracle acquired Sun Microsystems, Oracle was focused on Linux as a UNIX replacement. After the acquisition, they now claim to support Linux and Solaris equally. They are now focused on trying to protect their rapidly declining install base by keeping IBM and HP out. They will work hard to differentiate Solaris as having "secret sauce" that is not in Linux. They will continue to compete head-on against Red Hat Linux.
An interactive poll of the audience indicated that the most strategic Linux/UNIX platform over the next five years was Red Hat Linux. This beat out AIX, Solaris and HP-UX, as well as all of the other distributions of Linux.
The rooms emptied quickly after the last session, as everyone wanted to get to the "Hospitality Suites".
Continuing my post-week coverage of the [Data Center 2010 conference], Wednesday morning started with another keynote session, followed by some break-out sessions.
Realities of IT Investment
Tighter budgets mean more business decisions. Future investments will come from cost savings. The analysts report that 77 percent of IT decisions are made by CFOs. Most organizations are spending less now than back in 2008 before the recession.
How we innovate through IT is changing. In bad times, risk trumps return, but only 21 percent of the audience have a formal "risk calculation" as part of their purchase plans.
Divestment matters as much as investment. Reductions in complexity have the greatest long-term cost savings. Try to retire at least 20 percent of your applications next year. With the advent of Cloud Computing, companies might simply retire an application and go entirely with public cloud offerings. Note that the years on this graph differ from those above, grouped in half-decade increments.
It is important to identify functional dependencies and link your IT risks to business outcomes. Focus on making costs visible, and re-think how you communicate IT performance measurements and their impact to business. Try to change the culture and mind-set so that projects are not referred to as "IT projects" focused on technology, but rather they are "business projects" focused on business results.
Moving to the Cloud
Richard Whitehead from Novell presented challenges in moving to Cloud Computing. There are risks and challenges managing multiple OS environments. Users should have full access to all IT resources they need to do their jobs. Computing should be secure, compliant, and portable. Here is the shift he sees from physical servers to virtual and cloud deployments, years 2010 to 2015:
Richard considers a "workload" as being the combination of the operating system, middleware, and application. He then defines "Business Service" as an appropriate combination of these workloads. For example, a business service that provides a particular report might involve a front-end application, talking through business logic workload server, talking to a back-end database workload server.
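Richard's report example can be sketched as a simple containment model. The class and field names below are illustrative only, not part of any Novell API:

```python
from dataclasses import dataclass, field

@dataclass
class Workload:
    """Novell's unit of management: operating system + middleware + application."""
    os: str
    middleware: str
    application: str

@dataclass
class BusinessService:
    """A business service is an appropriate combination of workloads."""
    name: str
    workloads: list = field(default_factory=list)

# Hypothetical reporting service: front-end app, business logic, back-end DB.
report = BusinessService("quarterly-report", [
    Workload("SUSE Linux", "Apache Tomcat", "report front-end"),
    Workload("SUSE Linux", "JBoss", "business logic"),
    Workload("SUSE Linux", "none", "database back-end"),
])
print(len(report.workloads))  # 3
```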
To address this challenge, Novell introduced "Intelligent Workload Management", called WorkloadIQ. This manages the lifecycle to build, secure, deploy, manage and measure each workload. Their motto was to take the mix of physical, virtual and cloud workloads and "make it work as one". IBM is a business partner with Novell, and I am a big fan of Novell's open-source solutions, including SUSE Linux.
A Funny Thing Happened on the Way to the Cloud....
Bud Albers, CTO of Disney, shared their success in deploying their hybrid cloud infrastructure. Everyone recognizes the Disney brand for movies and theme parks, but may not be aware that they also own ABC News and ESPN television, run travel cruises, virtual worlds, and mobile sites, and deploy applications like Fantasy Football and Fantasy Fishing.
Two years ago, each Line of Business (LOB) owned its own servers; they were continually out of space, and power and HVAC issues forced tactical build-outs of their datacenters. In 2008, the answer to every question was Cloud Computing, which slices and dices like something invented by [Ron Popeil], with no investment or IT staff required. However, continuing to ask the CFO for CAPEX to purchase assets that were only 1/7th utilized was not working out either. That's right: over 75 percent of their servers were running at less than 15 percent CPU utilization.
The compromise was named "D*Cloud". Internal IT infrastructure would be positioned for Cloud Computing, by adopting server virtualization, implementing REST/SOAP interfaces, and replicating the success across their various Content Distribution Networks (CDN). Disney is no stranger to Open Source software, using Linux and PHP. Their [Open Source] web page shows tools available from Disney Animation studios.
At the half-way point, they had half their applications running virtualized on just 4 percent of their servers. Today, they run over 20 VMs per host and have 65 percent of their apps virtualized. Their target is 80 percent of their apps virtualized by 2014.
Bud used the analogy that public clouds will be the "gas stations" of the IT industry. People will choose the cheapest gas among nearby gas stations. By focusing on "Application management" rather than "VM instance management", Disney is able to seamlessly move applications as needed from private to public cloud platforms.
Their results? Disney is now averaging 40 percent CPU utilization across all servers. Bud feels they have achieved better scalability, better quality of service, and increased speed, all while saving money. Disney is spending less on IT now than in 2008.
UPMC Maximizes Storage Efficiency with IBM
Kevin Muha, UPMC Enterprise Architect & Technology Manager for Storage and Data Protection Services, was unable to present this in person, so Norm Protsman (IBM) presented Kevin's charts on the success at the University of Pittsburgh Medical Center [UPMC]. UPMC is Western Pennsylvania's largest employer, with roughly 50,000 employees across 20 hospitals, 400 doctors' offices and outpatient sites. They have frequently been rated one of the best hospitals in the US.
Their challenge was storage growth. Their storage environment had grown 328 percent over the past three years, to 1.6PB of disk and nearly 7 PB of physical tape. To address this, UPMC deployed four IBM TS7650G ProtecTIER gateways (2 clusters) and three XIV storage systems for their existing IBM Tivoli Storage Manager (TSM) environment. Since they were already using TSM over a Fibre Channel SAN, the implementation took only three days.
UPMC was backing up nearly 60 TB per day, in a 15-hour backup window. Their primary data is roughly 60 percent Oracle, with the rest being a mix of Microsoft Exchange, SQL Server, and unstructured data such as files and images.
Their results? TSM reclamation is 30 percent faster. Hardware footprint reduced from 9 tiles to 5. Over 50 percent reduction in recovery time for Oracle DB, and 20 percent reduction in recovery of SQL Server, Microsoft Exchange, and Epic Cache. They average 24:1 deduplication overall, which can be broken down by data category as follows:
29:1 Cerner Oracle
18:1 EPIC Cache
10:1 Microsoft SQL Server
8:1 Unstructured files
6:1 Microsoft Exchange
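One subtlety with blended deduplication numbers: the overall ratio is a size-weighted harmonic combination of the per-category ratios, so categories with low ratios drag the blend down disproportionately. A sketch with hypothetical daily volumes per category (UPMC's actual split was not given beyond "roughly 60 percent Oracle"):

```python
# Hypothetical logical TB per category; dedup ratios are from the report.
workloads = {
    "Cerner Oracle":        (29, 30.0),  # (dedup ratio, logical TB)
    "EPIC Cache":           (18,  8.0),
    "Microsoft SQL Server": (10,  8.0),
    "Unstructured files":   ( 8, 10.0),
    "Microsoft Exchange":   ( 6,  4.0),
}

logical = sum(tb for _, tb in workloads.values())
physical = sum(tb / ratio for ratio, tb in workloads.values())
print(f"Blended dedup ratio: {logical / physical:.1f}:1")  # 14.3:1 for this mix
```

Note that this hypothetical mix yields a blended ratio well below the best single-category ratio; UPMC's reported 24:1 average implies their actual mix was even more heavily weighted toward the high-ratio Oracle data.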
UPMC still has lots of LTO-4 tapes onsite and offsite from before the change-over, so the next phase planned is to implement "IP-based remote replication" between ProtecTIER gateways to a third data center at extended distance. The plan is to only replicate the backups of production data, and not replicate the backups of test/dev data.