This blog is for the open exchange of ideas relating to IBM Systems, storage and storage networking hardware, software and services.
(Short URL for this blog: ibm.co/Pearson )
Tony Pearson is a Master Inventor, Senior IT Architect and Event Content Manager for [IBM Systems Technical University] events. With over 30 years at IBM, Tony is a frequent traveler, speaking to clients at events throughout the world.
Lloyd Dean is an IBM Senior Certified Executive IT Architect in Infrastructure Architecture. Lloyd has held numerous senior technical roles during his 19-plus years at IBM. Most recently, Lloyd has been leading efforts across the Communication/CSI market as a senior Storage Solution Architect/CTS covering the Kansas City territory. In prior years, Lloyd supported industry accounts as a Storage Solution Architect, and before that as a Storage Software Solutions specialist during his time in the ATS organization.
Lloyd currently supports North America storage sales teams in his Storage Software Solution Architecture SME role on the Washington Systems Center team. His current focus is IBM Cloud Private, and he will be delivering and supporting sessions at Think 2019 and Storage Technical University on the value of IBM storage in this high-value solution, part of the IBM Cloud strategy. Lloyd maintains Subject Matter Expert status across the IBM Spectrum Storage software solutions. You can follow Lloyd on Twitter @ldean0558 and on LinkedIn as Lloyd Dean.
Tony Pearson's books are available on Lulu.com! Order your copies today!
Safe Harbor Statement: The information on IBM products is intended to outline IBM's general product direction and it should not be relied on in making a purchasing decision. The information on the new products is for informational purposes only and may not be incorporated into any contract. The information on IBM products is not a commitment, promise, or legal obligation to deliver any material, code, or functionality. The development, release, and timing of any features or functionality described for IBM products remains at IBM's sole discretion.
Tony Pearson is an active participant in local, regional, and industry-specific interests, and does not receive any special payments to mention them on this blog.
Tony Pearson receives part of the revenue proceeds from sales of books he has authored listed in the side panel.
Tony Pearson is not a medical doctor, and this blog does not reference any IBM product or service that is intended for use in the diagnosis, treatment, cure, prevention or monitoring of a disease or medical condition, unless otherwise specified on individual posts.
By combining multiple components into a single "integrated system", IBM can offer a blended disk-and-tape storage solution. This provides the best of both worlds: high-speed access from disk, with lower cost and better energy efficiency from tape. According to a study by the Clipper Group, tape can be 23 times less expensive than disk over a 5-year total cost of ownership (TCO).
I've also covered Hierarchical Storage Management, such as my post [Seven Tiers of Storage at ABN Amro], and my role as lead architect for DFSMS on z/OS in general, and DFSMShsm in particular.
However, some explanation might be warranted in the use of these two terms in regard to SONAS. In this case, ILM refers to policy-based file placement, movement and expiration on internal disk pools. This is actually a GPFS feature that has existed for some time, and was tested to work in this new configuration. Files can be individually placed on either SAS (15K RPM) or SATA (7200 RPM) drives. Policies can be written to move them from SAS to SATA based on size, age and days since last reference.
HSM is also a form of ILM, in that it moves data from SONAS disk to external storage pools managed by IBM Tivoli Storage Manager. A small stub is left behind in the GPFS file system indicating the file has been "migrated". Any reference to read or update this file will cause the file to be "recalled" back from TSM to SONAS for processing. The external storage pools can be disk, tape or any other media supported by TSM. Some estimate that as much as 60 to 80 percent of files on NAS have low reference and should be stored on tape instead of disk, and now SONAS with HSM makes that possible.
This distinction allows the ILM movement to be done internally, within GPFS, and the HSM movement to be done externally, via TSM. Both ILM and HSM movement take advantage of the GPFS high-speed policy engine, which can process 10 million files per node, running in parallel across all interface nodes. Note that TSM is not required for ILM movement. In effect, SONAS brings the policy-based management features of DFSMS for z/OS mainframe to all the other operating systems that access SONAS.
HTTP and NIS support
In addition to NFS v2, NFS v3, and CIFS, SONAS v1.1.1 adds the HTTP protocol. Over time, IBM plans to add more protocols in subsequent releases. Let me know which protocols you are interested in, so I can pass that along to the architects designing future releases!
SONAS v1.1.1 also adds support for Network Information Service (NIS), a client/server based model for user administration. In SONAS, NIS is used for netgroup and ID mapping only. Authentication is done via Active Directory, LDAP or Samba PDC.
SONAS already had synchronous replication, which was limited in distance. Now, SONAS v1.1.1 adds asynchronous replication, using rsync, at the file level. This is done over a Wide Area Network (WAN) to any other SONAS at any distance.
Interface modules can now be configured with either 64GB or 128GB of cache. Storage now supports both 450GB and 600GB SAS (15K RPM) and both 1TB and 2TB SATA (7200 RPM) drives. However, at this time, an entire 60-drive drawer must be either all one type of SAS or all one type of SATA. I have been pushing the architects to allow each 10-pack RAID rank to be independently selectable. For now, a storage pod can have 240 drives, 60 drives of each type of disk, to provide four different tiers of storage. You can have up to 30 storage pods per SONAS, for a total of 7200 drives.
An alternative to internal drawers of disk is a new "Gateway" iRPQ that allows the two storage nodes of a SONAS storage pod to connect via Fibre Channel to one or two XIV disk systems. You cannot mix and match: a storage pod is either all internal disk or all external XIV. A SONAS gateway combined with external XIV is referred to as a "Smart Business Storage Cloud" (SBSC), which can be configured off premises and managed by third-party personnel so your IT staff can focus on other things.
See the Announcement Letters for the SONAS [hardware] and [software] for more details.
For those who are wondering how this positions against IBM's other NAS solution, the IBM System Storage N series, the rule of thumb is simple. If your capacity needs can be satisfied with a single N series box per location, use that. If not, consider SONAS instead. For those with non-IBM NAS filers that realize now that SONAS is a better approach, IBM offers migration services.
Both the Information Archive and the SONAS can be accessed from z/OS or Linux on System z mainframe, from "IBM i", AIX and Linux on POWER systems, all x86-based operating systems that run on System x servers, as well as any non-IBM server that has a supported NAS client.
Continuing this week's theme on Business Continuity, I thought I would explore more on the identification of scenarios to help drive appropriate planning. As I mentioned in my last post, this should be done first.
A recent post in Anecdote talks about the long list of cognitive biases which affect business decision making. This list is a good explanation of why so many people have a difficult time identifying appropriate recovery scenarios as the basis for Business Continuity planning. Their "cognitive biases" get in the way.
Again, using my IBM Thinkpad T60 laptop as an example, here are a variety of different scenarios:
Corrupted File System
Some file systems are more fragile than others. If your NTFS file system gets corrupted, you might be able to run
CHKDSK C: /F
but this just puts damaged blocks into dummy files; it doesn't really repair your files back to their pre-damage state. All kinds of things can damage the file system, including viruses, software defects, and user error.
I keep my programs and data in separate file systems. C: has my Windows operating system and applications, and D: holds my pure data. If one file system is corrupted, the other one might be intact, mitigating the risk.
Hard Disk Crash
Hopefully, you will get temporary read/write errors as a warning prior to a complete failure. In theory, if I kept a spare hard disk in my laptop bag, I could swap out the bad drive for a good one, but I don't carry one. The three times that I have had a disk failure all occurred while I was in Tucson.
Instead, I keep the few files I need for my trip on a separate USB key, and carry a bootable Live CD, which allows you to boot entirely from the CD-ROM drive, either to run applications or to perform rescue operations.
The latest one that I am trying out is Ubuntu Linux, which has OpenOffice 2.2 that can read/write PowerPoint presentations, Word documents, and Excel spreadsheets; the Firefox web browser; Gimp graphics software; and a variety of other applications, all in a 700MB CD-ROM image. I have even been able to get Wireless (Wi-Fi) working with it, and the process to create your own customized Live CD with your own application packages is fairly straightforward. Combined with a writeable USB key, you can actually get work done this way. Special thanks to IBM blogger Bob Sutor for pointing me to this.
(If you have a DVD-RAM drive, there are bigger Live CDs from SUSE and RedHat Fedora that provide even more applications)
Laptop Shell Failure
This might catch some people by surprise. I have had the keyboard, LCD screen, or some essential port/plug fail on my laptop. The disk drive and CDrom drive work fine, but unless you have another "laptop" to stick them into, they don't help you recover. This can also happen if the motherboard fails, or the battery is unable to hold a charge.
IBM provides a 24-hour turnaround fix. Basically, IBM sends me a laptop shell, no disk drive, no CD-ROM, with instructions to move the disk drive and CD-ROM drive from the broken shell to the new shell, then send the bad shell back in the same shipping box.
Here, again, I am thankful that I keep my key files on a USB key. Often I travel with other IBMers, and can borrow their laptop to make presentations, check my e-mail, or do other work, until I can get my replacement shell. If you are traveling outside the US, you might be able to move your disk drive into a colleague's laptop, access the data, and copy it to your USB key or burn a copy on CD or DVD.
In a data center, many outages are really "failures to access data", but the data is safe. For example, power outages, network outages, and so on, can prevent people from using their IT systems, but the data is safe when these are re-established.
At times, I have been temporarily separated from my laptop. Three examples:
A higher level executive had technical difficulties with his laptop, and usurped mine instead.
A colleague forgot his power supply for his laptop, and borrowed my laptop instead. (I wish there were a standard for laptop power plug connectors)
Customs agents confiscated my laptop and gave me a receipt, and eventually I got it back.
In all cases, I was glad that no "recovery" was required, and that the few files I needed were on my USB key. A few times, I was able to get by on the machines available at the nearest Internet Cafe, in the meantime.
With some imagination, you can recognize that this scenario is similar to the previous one for laptop shell failure. This is a good example of how you can identify different scenarios, then later discover they have similar properties in terms of recovery and can be treated as one.
Laptops are stolen every day. Luckily, this has only happened to me twice in my career at IBM, and I managed to get a replacement soon enough. The key lesson here is to keep your USB key and recovery media in separate luggage. I know it is more convenient to keep all computer-related stuff in one place, but a thief is going to take your whole laptop bag, to make sure that all cables and power supplies are included, and is not going to leave anything behind. That would just slow them down.
In each case, some brainstorming, or personal experience, can help identify scenarios, identify what makes them unique from a recovery perspective, and plan accordingly. If you are looking to create or upgrade your Business Continuity plan, give IBM a call, we can help!
Wrapping up this week's theme on ways to make the planet smarter, and less confusing, I present IBM's third annual [five in five]. These are five IBM innovations to watch over the next five years, all of which have implications on information storage. Here is a quick [3-minute video] that provides the highlights:
Have you ever noticed that sometimes two movies come out that seem eerily similar to each other, released by different studios within months or weeks of each other? My sister used to review film scripts for a living; she would read ten of them and have to pick her top three favorites, and she tells me that scripts for nearly identical concepts came in all the time. Here are a few of my favorite examples:
1994: [Wyatt Earp] and [Tombstone] were Westerns recounting the famed gunfight at the O.K. Corral. Tombstone, Arizona is near Tucson, and the gunfight is recreated fairly often for tourists.
1998: [Armageddon] and [Deep Impact] were a pair of disaster movies dealing with a large rock heading to destroy all life on earth. I was in Mazatlan, Mexico to see the latter, dubbed in Spanish as "Impacto Profundo".
1998: [A Bug's Life] and [Antz] were computer-animated tales of the struggle of one individual ant in an ant colony.
2000: [Mission to Mars] and [Red Planet] were sci-fi pics exploring what a manned mission to our neighboring planet might entail.
This is different from copy-cat movies that are re-made or re-imagined many years later based on the previous successes of an original. Ever since my blog post [VPLEX: EMC's Latest Wheel is Round] in 2010 comparing EMC's copy-cat product that came out seven years after IBM's SAN Volume Controller (SVC), I've noticed EMC doesn't talk about VPLEX that much anymore.
This week, IBM announced [XIV Gen3 Solid-State Drive support] and our friends over at EMC announced [VFCache SSD-based PCIe cards]. Neither of these should be a surprise to anyone who follows the IT industry, as IBM had announced its XIV Gen3 as "SSD-Ready" last year specifically for this purpose, and EMC has been touting its "Project Lightning" since last May.
Fellow blogger Chuck Hollis from EMC has a blog post [VFCache means Very Fast Cache indeed] that provides additional detail. Chuck claims the VFCache is faster than popular [Fusion-IO PCIe cards] available for IBM servers. I haven't seen the performance spec sheets, but typically SSD is four to five times slower than the DRAM cache used in the XIV Gen3. The VFCache's SSD is probably similar in performance to the SSD supported in the IBM XIV Gen3, DS8000, DS5000, SVC, N series, and Storwize V7000 disk systems.
Nonetheless, I've been asked my opinions on the comparison between these two announcements, as they both deal with improving application performance through the use of Solid-State Drives as an added layer of read cache.
(FTC Disclosure: I am both a full-time employee and stockholder of the IBM Corporation. The U.S. Federal Trade Commission may consider this blog post as a paid celebrity endorsement of IBM servers and storage systems. This blog post is based on my interpretation and opinions of publicly-available information, as I have no hands-on access to any of these third-party PCIe cards. I have no financial interest in EMC, Fusion-IO, Texas Memory Systems, or any other third party vendor of PCIe cards designed to fit inside IBM servers, and I have not been paid by anyone to mention their name, brands or products on this blog post.)
The solutions are different in that with IBM XIV Gen3 the SSD is "storage-side", in the external storage device, while EMC VFCache is "server-side", a PCI Express [PCIe] card. Aside from that, both implement SSD as an additional read cache layer in front of spinning disk to boost performance. Neither is an industry first, as IBM has offered server-side SSD since 2007, and IBM and EMC have offered storage-side SSD in many of their other external storage devices. The use of SSD as read cache has already been available in the IBM N series using [Performance Accelerator Module (PAM)] cards.
IBM has offered cooperative caching synergy between its servers and its storage arrays for some time now. The predecessors of today's POWER7-based systems were the iSeries i5 servers, which used PCI-X IOP cards with cache to connect i5/OS applications to IBM's external disk and tape systems. To compete in this space, EMC created their own PCI-X cards to attach their own disk systems. In 2006, IBM did the right thing for our clients and fostered competition by entering into a [Landmark agreement] with EMC to [license the i5 interfaces]. Today, VIOS on IBM POWER systems allows a much broader choice of disk options for IBM i clients, including the IBM SVC, Storwize V7000 and XIV storage systems.
Can a little SSD really help performance? Yes! An IBM client running a [DB2 Universal Database] cluster across eight System x servers was able to replace an 800-drive EMC Symmetrix by putting eight SSD Fusion-IO cards in each server, for a total of 64 solid-state drives, saving money and improving performance. DB2 has the Data Partitioning Feature, which lets multi-system DB2 configurations use a grid-like architecture similar to how XIV is designed. Most IBM System x and BladeCenter servers support internal SSD storage options, and many offer PCIe slots for third-party SSD cards. Sadly, you can't do this with a VFCache card, since you can have only one VFCache card in each server, the data is unprotected, and it is intended only for ephemeral data like transaction logs or other temporary data. With multiple Fusion-IO cards in an IBM server, you can configure a RAID rank across the SSD and use it for persistent storage like DB2 databases.
Here then is my side-by-side comparison:
Server hardware supported
EMC VFCache: Selected x86-based models of Cisco UCS, Dell PowerEdge, HP ProLiant DL, and IBM xSeries and System x servers.
IBM XIV Gen3 SSD Caching: All of these, plus any other blade or rack-optimized server currently supported by XIV Gen3, including Oracle SPARC, HP Itanium, IBM POWER systems, and even IBM System z mainframes running Linux.

Operating system support
EMC VFCache: Linux RHEL 5.6 and 5.7, VMware vSphere 4.1 and 5.0, and Windows 2008 x64 and R2.
IBM XIV Gen3 SSD Caching: All of these, plus all the other operating systems supported by XIV Gen3, including AIX, IBM i, Solaris, HP-UX, and Mac OS X.

Protocols supported
IBM XIV Gen3 SSD Caching: FCP and iSCSI.

Vendor-supplied driver required on the server
EMC VFCache: Yes, the VFCache driver must be installed to use this feature.
IBM XIV Gen3 SSD Caching: No, IBM XIV Gen3 uses native OS-based multi-pathing drivers.

External disk storage system required
EMC VFCache: None; it appears the VFCache has no direct interaction with the back-end disk array, so in theory the benefits are the same whether you use the VFCache card in front of EMC storage or IBM storage.
IBM XIV Gen3 SSD Caching: XIV Gen3 is required, as the SSD slots are not available on older models of IBM XIV.

Shared disk support
EMC VFCache: No, VFCache has to be disabled and removed for vMotion to take place.
IBM XIV Gen3 SSD Caching: Yes! XIV Gen3 SSD caching supports shared disk, including VMware vMotion and Live Partition Mobility.

Support for multiple servers
IBM XIV Gen3 SSD Caching: An advantage of the XIV Gen3 SSD caching approach is that the cache can be dynamically allocated to the busiest data from any server or servers.

Support for active/active server clusters

Aware of changes made to back-end disk
EMC VFCache: No, it appears the VFCache has no direct interaction with the back-end disk array, so any changes to the data on the box itself are not communicated back to the VFCache card to invalidate the cache contents.

Sequential access detection
EMC VFCache: None identified. However, VFCache only caches blocks 64KB or smaller, so any sequential processing with larger blocks will bypass the VFCache.
IBM XIV Gen3 SSD Caching: Yes! XIV algorithms detect sequential access and avoid polluting the SSD with these blocks of data.

Number of SSDs supported
EMC VFCache: One, which seems odd, as IBM supports multiple Fusion-IO cards in its servers. However, this is not really a single point of failure (SPOF): an application experiencing a VFCache failure merely drops down to external disk array speed, and no data is lost since it is only read cache.
IBM XIV Gen3 SSD Caching: 6 to 15 (one per XIV module), for high availability.

Pin data in SSD cache
EMC VFCache: Yes; using split-card mode, you can designate a portion of the 300GB to serve as direct-attached storage (DAS). All data written to the DAS portion will be kept in SSD. However, since only one card is supported per server and the data is unprotected, this should only be used for ephemeral data like logs and temp files.
IBM XIV Gen3 SSD Caching: No, there is no option to designate an XIV Gen3 volume to be SSD-only. Consider using a Fusion-IO PCIe card as a DAS alternative, or another IBM storage system, for that requirement.

Pre-sales estimating tools
IBM XIV Gen3 SSD Caching: Yes! CDF and Disk Magic tools are available to help cost-justify the purchase of SSD based on workload performance analysis.
IBM has the advantage that it designs and manufactures both servers and storage, and can design optimal solutions for our clients in that regard.
Today is Tuesday, a good day for announcements and good news!
This week I am in Guadalajara, Mexico, and the focus in Mexico is Small and Medium sized Business (SMB). SmallBusinessComputing.COM put out their [2008 Awards: The Absolute Best in Small Business], and IBM disk and server systems were recognized. Here is an excerpt:
Network Storage
As companies expand, so does the data, and often at an alarming rate. Adding dedicated storage to your network can ease both system performance and efficiency woes, making your work life a bit easier.
This year, 42 percent of our readers cast their lot with the [IBM System Storage DS3400]. The $6,495 system supports 12 hard disk drives for a capacity of up to 3.6 terabytes, a good match for tasks such as managing databases, e-mail and Web serving.
Last year's winner, NetApp, takes a very respectable runner-up slot for the NetApp Store Vault S300, a $3,000 storage appliance that offers security, scalability, data protection and simplified management.
Also, IBM's SMB departmental machine, the [System i515 Express] was named runner-up for servers.
IBM is hosting a webcast about storage for SAP environments. Learn how integrated IBM infrastructure solutions, customized specifically for your SAP environments, can help lower your business costs, increase productivity in SAP development and test tasks, and improve resource utilization. This will include discussion of archive solutions with WebDAV, ArchiveLink and DR550; IBM Business Intelligence (BI) Accelerator; IBM support for SAP [Adaptive Computing]; and performance benchmark results. The session is intended for SAP and storage administrators, IT directors and managers.
Here are the details:
Date: Wednesday, June 18, 2008
Time: 11:00am EDT (8:00am for those of us in Arizona or California)
Two years ago, the folks at University of Toronto asked me to help their graduate students build a "Watson" running entirely on IBM SoftLayer to see if this would be a worthwhile class project. Needless to say, it was more difficult than they expected, but we managed to pull it off during that summer, able to answer a handful of simple questions from a single page corpus.
Last month, [Industry Leaders Establish Partnership on AI], combining the talents from Amazon, DeepMind/Google, Facebook, IBM and Microsoft, to form a non-profit to explore best practices and ethical questions related to Watson and other Artificial Intelligence applications.
Since data is at the core of any Artificial Intelligence, IBM is pleased to announce today that IBM Cloud Object Storage System is now available on IBM SoftLayer. This is based on the Cleversafe technology IBM acquired last year.
While other cloud service providers have offered data storage in the cloud, this new offering also allows hybrid configurations with geographically dispersed erasure coding. Unlike RAID, which protects against the loss of one or two drives, erasure coding can protect against a larger number of concurrent failures. For example, with an Information Dispersal Algorithm of "7+5", data is encoded into twelve slices stored on twelve independent disks, and any seven slices are sufficient to read it back, so the system can lose up to five disk drives without losing any data.
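If you want to convince yourself of the arithmetic, here is a tiny Java sketch. It is not the actual Cleversafe Information Dispersal Algorithm, just an illustration of the "any 7 of 12 slices" recoverability rule, with the slice counts hard-coded to match the example above.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class DispersalDemo {
    // "7+5" Information Dispersal: 12 slices total, any 7 are sufficient to read the data
    static final int TOTAL_SLICES = 12;
    static final int THRESHOLD = 7;

    public static void main(String[] args) {
        List<Integer> slices = new ArrayList<>();
        for (int i = 0; i < TOTAL_SLICES; i++) slices.add(i);

        // Simulate losing an entire site: 5 of the 12 slices disappear at once
        Collections.shuffle(slices);
        List<Integer> failed = slices.subList(0, 5);
        int surviving = TOTAL_SLICES - failed.size();

        System.out.println("Failed slices:    " + failed);
        System.out.println("Surviving slices: " + surviving);
        System.out.println("Data readable?    " + (surviving >= THRESHOLD)); // true: 7 >= 7
    }
}
```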
Combining this with Geographically Dispersed Configuration across three or more sites means that you can lose an entire data center, four of the twelve disks, and still have instant full access to all of your data from eight drives at the other locations. In the graphic, you see two on-premise data centers combined with a third location in IBM SoftLayer.
Over time, I have gotten many emails, comments and tweets related to this post. The instructions have been downloaded over 130,000 times!
The letter below was so inspiring that I felt I need to share it. (Published here with permission from the author, who goes by the screen name DaveAlex)
Thought you would like to know that I am working toward an AI Agent hopefully more advanced than "Watson Jr." although I will probably include the software behind it.
The hardware I have on hand is a System x3650 M2 which I bought for $250 on eBay. It has four 2.66 GHz Xeons with 6 cores each, and 16 GB RAM. I have another 16 to install when I need it. I will shortly have 4 TB of HDD space online, plus an additional 3 TB USB3 drive.
Ultimately, I hope to have some of the available knowledge bases on line, Freebase, CYC, etc which will handle specific information perhaps better than the Watson software by itself.
The target (goal) that I am aiming for is a stationary version of Commander Data from Star Trek: The Next Generation.
I envision it having some form of self-knowledge, being capable of processing graphical data, i.e., facial recognition, gesture interpretation, voice input/output, mathematical processing, with graphical output (display & hardcopy) and several additional features.
As I have studied this project, I am amazed at how much of the required software is already available. The biggest stumbling block is integrating the separate parts.
Back to Hardware. I just bought 2 Dell 2850 servers, each with dual Intel Xeons which can handle some of the tasks. If I need more processing power, I just happen to have about 10 other towers with Pentium IV or dual core processors sitting around, which can be pressed into service as needed. So far, my total cost is less than $1000 US Dollars, and my wife has not thrown me out yet. I continue to watch eBay for additional older used equipment for fractions of the original cost. My friends who follow my project keep telling me that I need to get on with the software, and add hardware as needed; they are absolutely correct, but I can't resist a bargain.
The power consumption is a potential problem, but I have a 4500 Watt solar array to use. The cooling could be a problem too, but my house sits into the side of a hill, and I can readily duct the air supply past the sub-surface wall, perhaps with old processor cooling fins glued to the wall.
I hope to get some hobby programmers involved in the project, it is a bit beyond my programming capabilities. I hope that I can live long enough to see it come to fruition; I am 78 now, and mentally in very good condition.
Wow! He is 78 years old! While others his age are playing shuffleboard at the nursing home, he is out there learning new things about the latest technology. I wish him the best of luck on this! If you would like to reach out to DaveAlex, send me a note or comment below, and I will forward them on to him.
For the longest time, people thought that humans could not run a mile in less than four minutes. Then, in 1954, [Sir Roger Bannister] beat that perception, and shortly thereafter, once he showed it was possible, many other runners were able to achieve this also. The same is being said now about the IBM Watson computer which appeared this week against two human contestants on Jeopardy!
(2014 Update: A lot has happened since I originally wrote this blog post! I intended this as a fun project for college students to work on during their summer break. However, IBM is concerned that some businesses might be led to believe they could simply stand up their own systems based entirely on open source and internally developed code for business use. IBM recommends instead the [IBM InfoSphere BigInsights] which packages much of the software described below. IBM has also launched a new "Watson Group" that has [Watson-as-a-Service] capabilities in the Cloud. To raise awareness to these developments, IBM has asked me to rename this post from IBM Watson - How to build your own "Watson Jr." in your basement to the new title IBM Watson -- How to replicate Watson hardware and systems design for your own use in your basement. I also took this opportunity to improve the formatting layout.)
Often, when a company demonstrates new technology, these are prototypes not yet ready for commercial deployment until several years later. IBM Watson, however, was made mostly from commercially available hardware, software and information resources. As several have noted, the 1TB of data used to search for answers could fit on a single USB drive that you buy at your local computer store.
Take a look at the [IBM Research Team] to determine how the project was organized. Let's decide what we need, and what we don't in our version for personal use:
Do we need it for personal use?
Yes, That's you. Assuming this is a one-person project, you will act as Team Lead.
Yes, I hope you know computer programming!
No, since this version for personal use won't be appearing on Jeopardy, we won't need strategy on wager amounts for the Daily Double, or what clues to pick next. Let's focus merely on a computer that can accept a question in text, and provide an answer back, in text.
Yes, this team focused on how to wire all the hardware together. We need to do that, although this version for personal use will have fewer components.
Optional. For now, let's have this version for personal use just return its answer in plain text. Consider this Extra Credit after you get the rest of the system working. Consider using [eSpeak], [FreeTTS], or the Modular Architecture for Research on speech sYnthesis [MARY] Text-to-Speech synthesizers.
Yes, I will explain what this is, and why you need it.
Yes, we will need to get information for personal use to process
Yes, this team developed a system for parsing the question being asked, and to attach meaning to the different words involved.
No, this team focused on making IBM Watson optimized to answer in 3 seconds or less. We can accept a slower response, so we can skip this.
(Disclaimer: As with any Do-It-Yourself (DIY) project, I am not responsible if you are not happy with your version for personal use. I am basing the approach on what I read from publicly available sources, and my work in Linux, supercomputers, XIV, and SONAS. For our purposes, this version for personal use is based entirely on commodity hardware, open source software, and publicly available sources of information. Your implementation will certainly not be as fast or as clever as the IBM Watson you saw on television.)
Step 1: Buy the Hardware
Supercomputers are built as a cluster of identical compute servers lashed together by a network. You will be installing Linux on them, so if you can avoid paying extra for Microsoft Windows, that would save you some money. Here is your shopping list:
Three x86 hosts, with the following:
64-bit quad-core processor, either Intel-VT or AMD-V capable,
8GB of DRAM, or larger
300GB of hard disk, or larger
CD or DVD Read/Write drive
Computer Monitor, mouse and keyboard
Ethernet 1GbE 4-port hub, and appropriate RJ45 cables
Surge protector and Power strip
Local Console Monitor (LCM) 4-port switch (formerly known as a KVM switch) and appropriate cables. This is optional, but will make it easier during the development. Once your implementation is operational, you will only need the monitor and keyboard attached to one machine. The other two machines can remain "headless" servers.
Step 2: Establish Networking
IBM Watson used Juniper switches running at 10Gbps Ethernet (10GbE) speeds, but was not connected to the Internet while playing Jeopardy! Instead, these Ethernet links were for the POWER7 servers to talk to each other, and to access files over the Network File System (NFS) protocol to the internal customized SONAS storage I/O nodes.
The implementation will be able to run "disconnected from the Internet" as well. However, you will need Internet access to download the code and information sources. For our purposes, 1GbE should be sufficient. Connect your Ethernet hub to your DSL or Cable modem. Connect all three hosts to the Ethernet switch. Connect your keyboard, video monitor and mouse to the LCM, and connect the LCM to the three hosts.
Step 3: Install Linux and Middleware
To say I use Linux on a daily basis is an understatement. Linux runs on my Android-based cell phone, my laptop at work, my personal computers at home, most of our IBM storage devices from SAN Volume Controller to XIV to SONAS, and even on my Tivo at home which recorded my televised episodes of Jeopardy!
For this project, you can use any modern Linux distribution that supports KVM. IBM Watson used Novell SUSE Linux Enterprise Server [SLES 11]. Alternatively, I can also recommend either Red Hat Enterprise Linux [RHEL 6] or Canonical [Ubuntu v10]. Each distribution of Linux comes in different orientations. Download the 64-bit "ISO" files for each version, and burn them to CDs.
Graphical User Interface (GUI) oriented, often referred to as "Desktop" or "HPC-Head"
Command Line Interface (CLI) oriented, often referred to as "Server" or "HPC-Compute"
Guest OS oriented, to run in a Hypervisor such as KVM, Xen, or VMware. Novell calls theirs "Just Enough Operating System" [JeOS].
For this version for personal use, I have chosen a [multitier architecture], sometimes referred to as an "n-tier" or "client/server" architecture.
Host 1 - Presentation Server
For the Human-Computer Interface [HCI], the IBM Watson received categories and clues as text files via TCP/IP, had a [beautiful avatar] representing a planet with 42 circles streaking across in orbit, and text-to-speech synthesizer to respond in a computerized voice. Your implementation will not be this sophisticated. Instead, we will have a simple text-based Query Panel web interface accessible from a browser like Mozilla Firefox.
Host 1 will be your Presentation Server, the connection to your keyboard, video monitor and mouse. Install the "Desktop" or "HPC Head Node" version of Linux. Install [Apache Web Server and Tomcat] to run the Query Panel. Host 1 will also be your "programming" host. Install the [Java SDK] and the [Eclipse IDE for Java Developers]. If you always wanted to learn Java, now is your chance. There are plenty of books on Java if that is not the language you normally write code.
While three little systems don't constitute an "Extreme Cloud" environment, you might like to try out the "Extreme Cloud Administration Tool", called [xCat], which was used to manage the many servers in IBM Watson.
Host 2 - Business Logic Server
Host 2 will be driving most of the "thinking". Install the "Server" or "HPC Compute Node" version of Linux. This will be running a server virtualization Hypervisor. I recommend KVM, but you can probably run Xen or VMware instead if you like.
Host 3 - File and Database Server
Host 3 will hold your information sources, indices, and databases. Install the "Server" or "HPC Compute Node" version of Linux. This will be your NFS server, which might come up as a question during the installation process.
Technically, you could run different Linux distributions on different machines. For example, you could run "Ubuntu Desktop" for host 1, "RHEL 6 Server" for host 2, and "SLES 11" for host 3. In general, Red Hat tries to be the best "Server" platform, and Novell tries to make SLES be the best "Guest OS".
My advice is to pick a single distribution and use it for everything, Desktop, Server, and Guest OS. If you are new to Linux, choose Ubuntu. There are plenty of books on Linux in general, and Ubuntu in particular, and Ubuntu has a helpful community of volunteers to answer your questions.
Step 4: Download Information Sources
You will need some documents for your implementation to process.
IBM Watson used a modified SONAS to provide a highly-available clustered NFS server. For this version, we won't need that level of sophistication. Configure Host 3 as the NFS server, and Hosts 1 and 2 as NFS clients. See the [Linux-NFS-HOWTO] for details. To optimize performance, host 3 will be the "official master copy", but we will use a Linux utility called rsync to copy the information sources over to the hosts 1 and 2. This allows the task engines on those hosts to access local disk resources during question-answer processing.
We will also need a relational database. You won't need a high-powered IBM DB2. Your implementation can do fine with something like [Apache Derby] which is the open source version of IBM CloudScape from its Informix acquisition. Set up Host 3 as the Derby Network Server, and Hosts 1 and 2 as Derby Network Clients. For more about structured content in relational databases, see my post [IBM Watson - Business Intelligence, Data Retrieval and Text Mining].
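If you want to see what the Derby client setup looks like from Java, here is a minimal sketch. It assumes the Derby Network Server is already running on Host 3 (here called "host3") with its default port 1527 and that derbyclient.jar is on the classpath; the database and table names are simply made up for illustration.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class DerbySetup {
    public static void main(String[] args) throws Exception {
        // Load the Derby Network Client driver (derbyclient.jar must be on the classpath)
        Class.forName("org.apache.derby.jdbc.ClientDriver");

        // Assumes the Derby Network Server is listening on host3 at its default port 1527;
        // "watsonjr" is just a made-up database name for this sketch
        String url = "jdbc:derby://host3:1527/watsonjr;create=true";

        try (Connection conn = DriverManager.getConnection(url);
             Statement stmt = conn.createStatement()) {
            // A simple table that the UIMA CAS Consumers could write into later
            stmt.executeUpdate("CREATE TABLE documents ("
                    + "id INT GENERATED ALWAYS AS IDENTITY PRIMARY KEY, "
                    + "source VARCHAR(256), "
                    + "title VARCHAR(256))");
            System.out.println("Derby reachable and table created.");
        }
    }
}
```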
Linux includes a utility called wget which allows you to download content from the Internet to your system. What documents you decide to download is up to you, based on what types of questions you want answered. For example, if you like Literature, check out the vast resources at [FullBooks.com]. You can automate the download by writing a shell script or program to invoke wget to all the places you want to fetch data from. Rename the downloaded files to something unique, as often they are just "index.html". For more on wget utility, see [IBM Developerworks].
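If you prefer to stay in Java rather than shell scripting around wget, here is a minimal sketch of the same idea: fetch each source and save it under a unique file name instead of a pile of "index.html" files. The source URLs and the target directory are just placeholders for whatever you choose to download.

```java
import java.io.InputStream;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

public class CorpusFetcher {
    public static void main(String[] args) throws Exception {
        // Hypothetical list of sources -- substitute whatever you want your system to read
        String[] sources = {
            "http://www.fullbooks.com/",
            "http://www.example.com/some-reference-page.html"
        };
        Path corpusDir = Paths.get("/nfs/corpus");   // assumed NFS export from Host 3
        Files.createDirectories(corpusDir);

        int seq = 0;
        for (String source : sources) {
            // Give every download a unique name
            Path target = corpusDir.resolve(String.format("doc-%04d.html", seq++));
            try (InputStream in = new URL(source).openStream()) {
                Files.copy(in, target, StandardCopyOption.REPLACE_EXISTING);
            }
            System.out.println("Saved " + source + " as " + target);
        }
    }
}
```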
Step 5: The Query Panel - Parsing the Question
Next, we need to parse the question and have some sense of what is being asked for. For this we will use [OpenNLP] for Natural Language Processing, and [OpenCyc] for the conceptual logic reasoning. See Doug Lenat presenting this 75-minute video [Computers versus Common Sense]. To learn more, see the [CYC 101 Tutorial].
Unlike Jeopardy! where Alex Trebek provides the answer and contestants must respond with the correct question, we will do normal Question-and-Answer processing. To keep things simple, we will limit questions to the following formats:
Who is ...?
Where is ...?
When did ... happen?
What is ...?
Host 1 will have a simple Query Panel web interface: at the top, a place to enter your question and a "submit" button, and at the bottom a place for the answer to be shown. When "submit" is pressed, the question is passed to "main.jsp", the Java servlet program that starts the question-answering analysis. Limiting the types of questions that can be posed simplifies hypothesis generation and reduces the candidate set and the evidence to evaluate, allowing the analytics processing to complete in a reasonable time.
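Here is a minimal sketch of the kind of helper that main.jsp could call to classify the incoming question. This is my own illustrative code, not anything from DeepQA; the class and type names are made up, and a real system would hand the classification to the UIMA pipeline described in the next step.

```java
public class QuestionClassifier {

    /** The kind of answer the rest of the pipeline should look for. */
    public enum AnswerType { PERSON, PLACE, DATE, THING, UNKNOWN }

    /** Map the four supported question formats to an expected answer type. */
    public static AnswerType classify(String question) {
        String q = question.trim().toLowerCase();
        if (q.startsWith("who is"))   return AnswerType.PERSON;
        if (q.startsWith("where is")) return AnswerType.PLACE;
        if (q.startsWith("when did")) return AnswerType.DATE;
        if (q.startsWith("what is"))  return AnswerType.THING;
        return AnswerType.UNKNOWN;    // reject anything outside the supported formats
    }

    public static void main(String[] args) {
        System.out.println(classify("Who is the author of Moby Dick?"));        // PERSON
        System.out.println(classify("When did the Apollo 11 landing happen?")); // DATE
    }
}
```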
Step 6: Unstructured Information Management Architecture
The "heart and soul" of IBM Watson is Unstructured Information Management Architecture [UIMA]. IBM developed this, then made it available to the world as open source. It is maintained by the [Apache Software Foundation], and overseen by the Organization for the Advancement of Structured Information Standards [OASIS].
Basically, UIMA lets you scan unstructured documents, glean the important points, and put that into a database for later retrieval. In the graphic above, DBs means 'databases' and KBs means 'knowledge bases'. See the 4-minute YouTube video of [IBM Content Analytics], the commercial version of UIMA.
Starting from the left, the Collection Reader selects each document to process and creates an empty Common Analysis Structure (CAS), which serves as a standardized container for information. The CAS is passed to Analysis Engines, composed of one or more Annotators, which analyze the text and fill the CAS with the information found. The CAS is then passed to CAS Consumers, which do something with the information found, such as entering a row into a database, updating an index, or updating a vote count.
(Note: This part requires what we in the industry call a small matter of programming, or [SMOP]. If you've always wanted to learn Java programming, XML, and JDBC, you will get to do plenty here.)
If you are not familiar with UIMA, consider this [UIMA Tutorial].
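To give you a feel for what an Annotator looks like before you work through the tutorial, here is a bare-bones sketch against the UIMA Java framework. It only scans the document text for four-digit years and reports them; in a real Analysis Engine you would define annotation types in your own type system descriptor and add instances to the CAS so the CAS Consumers can act on them, which I have left out to keep the sketch self-contained. You would also still need the usual XML analysis engine descriptor to run it in a pipeline.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

import org.apache.uima.analysis_component.JCasAnnotator_ImplBase;
import org.apache.uima.analysis_engine.AnalysisEngineProcessException;
import org.apache.uima.jcas.JCas;

/**
 * A bare-bones UIMA Annotator: scan the document text in the CAS for
 * four-digit years. A real Annotator would add typed annotations so that
 * downstream CAS Consumers can store them in the database or update an index.
 */
public class YearSpotterAnnotator extends JCasAnnotator_ImplBase {

    private static final Pattern YEAR = Pattern.compile("\\b(1[0-9]{3}|20[0-9]{2})\\b");

    @Override
    public void process(JCas jcas) throws AnalysisEngineProcessException {
        String text = jcas.getDocumentText();
        if (text == null) {
            return;
        }
        Matcher m = YEAR.matcher(text);
        while (m.find()) {
            // For the sketch we just report each hit and where it was found
            System.out.println("Found year " + m.group()
                    + " at offsets " + m.start() + "-" + m.end());
        }
    }
}
```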
Step 7: Parallel Processing
People have asked me why IBM Watson is so big. Did we really need 2,880 cores of processing power? As a supercomputer, the 80 TeraFLOPs of IBM Watson would place it only in 94th place on the [Top 500 Supercomputers]. While IBM Watson may be the [Smartest Machine on Earth], the most powerful supercomputer at this time is the Tianhe-1A with more than 186,000 cores, capable of 2,566 TeraFLOPs.
To determine how big IBM Watson needed to be, the IBM Research team ran the DeepQA algorithm on a single core. It took 2 hours to answer a single Jeopardy question! Let's look at the performance data:
Hardware configuration and time to answer one Jeopardy! question:
Single IBM Power 750 server: < 4 minutes
Single rack (10 servers): < 30 seconds
IBM Watson (90 servers): < 3 seconds
The old adage applies, [many hands make for light work]. The idea is to divide-and-conquer. For example, if you wanted to find a particular street address in the Manhattan phone book, you could dispatch fifty pages to each friend and they could all scan those pages at the same time. This is known as "Parallel Processing" and is how supercomputers are able to work so well. However, not all algorithms lend well to parallel processing, and the phrase [nine women can't have a baby in one month] is often used to remind us of this.
Fortunately, UIMA is designed for parallel processing. You need to install UIMA-AS for Asynchronous Scale-out processing, an add-on to the base UIMA Java framework, supporting a very flexible scale-out capability based on JMS (Java Messaging Services) and ActiveMQ. We will also need Apache Hadoop, an open source MapReduce implementation used by the Yahoo! search engine. Hadoop's "MapReduce" engine allows you to divide the work, dispatch pieces to different "task engines", and then combine the results afterwards.
Host 2 will run Hadoop and drive the MapReduce process. Plan to have three KVM guests on Host 1, four on Host 2, and three on Host 3. That gives you 10 task engines to work with. These task engines can be deployed as Collection Readers, Analysis Engines, and CAS Consumers. When all processing is done, the resulting votes will be tabulated and the top answer displayed on the Query Panel on Host 1.
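Here is a minimal Hadoop sketch, written against the current MapReduce Java API, in the spirit of that vote tabulation: each mapper emits a (candidate answer, vote) pair and the reducer sums the votes. The tab-separated input format is my own assumption for illustration, not how DeepQA actually scores its evidence.

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class VoteTally {

    /** Input lines are assumed to look like "candidateAnswer<TAB>vote" (made-up format). */
    public static class VoteMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            String[] fields = value.toString().split("\t");
            if (fields.length == 2) {
                context.write(new Text(fields[0]), new IntWritable(Integer.parseInt(fields[1])));
            }
        }
    }

    /** Sum the votes for each candidate answer. */
    public static class VoteReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int total = 0;
            for (IntWritable v : values) {
                total += v.get();
            }
            context.write(key, new IntWritable(total));
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "vote tally");
        job.setJarByClass(VoteTally.class);
        job.setMapperClass(VoteMapper.class);
        job.setCombinerClass(VoteReducer.class);
        job.setReducerClass(VoteReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```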
Step 8: Testing
To simplify testing, use a batch processing approach. Rather than entering questions by hand in the Query Panel, generate a long list of questions in a file, and submit for processing. This will allow you to fine-tune the environment, optimize for performance, and validate the answers returned.
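Here is one way to script that batch run from Java. It assumes, purely for illustration, that the Query Panel on Host 1 accepts the question as a "q" parameter on an HTTP GET and returns the answer as plain text; adjust it to whatever interface you actually build.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;

public class BatchTester {
    public static void main(String[] args) throws Exception {
        // questions.txt holds one question per line
        List<String> questions =
                Files.readAllLines(Paths.get("questions.txt"), StandardCharsets.UTF_8);

        for (String question : questions) {
            // Hypothetical Query Panel endpoint on Host 1
            String url = "http://host1:8080/querypanel/main.jsp?q="
                    + URLEncoder.encode(question, "UTF-8");
            HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();

            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
                String answer = in.readLine();
                System.out.println(question + " -> " + answer);
            } finally {
                conn.disconnect();
            }
        }
    }
}
```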
There you have it. By the time you get your implementation fully operational, you will have learned a lot of useful skills, including Linux administration, Ethernet networking, NFS file system configuration, Java programming, UIMA text mining analysis, and MapReduce parallel processing. Hopefully, you will also gain an appreciation for how difficult it was for the IBM Research team to accomplish what they did for the Grand Challenge on Jeopardy! Not surprisingly, IBM Watson is making IBM [as sexy to work for as Apple, Google or Facebook], all of which started their business in a garage or a basement with a system as small as this version for personal use.
"When Watson is booted up, the 15TB of total RAM are loaded up, and thereafter the DeepQA processing is all done from memory. According to IBM Research, the actual size of the data (analyzed and indexed text, knowledge bases, etc.) used for candidate answer generation and evidence evaluation is under 1 Terabyte (TB). For performance reasons, various subsets of the data are replicated in RAM on different functional groups of cluster nodes. The entire system is self-contained, Watson is NOT going to the internet searching for answers."
I had several readers ask me to explain the significance of the "Terabyte". I'll work my way up.
A bit is simply a zero (0) or one (1). This could answer a Yes/No or True/False question.
Most computers have standardized a byte as a collection of 8 bits. There are 256 unique combinations of ones and zeros possible, so a byte could be used to store a 2-digit integer, or a single upper or lower case character in the English alphabet. In practical terms, a byte could store your age in years, or your middle initial.
The Kilobyte is a thousand bytes, enough to hold a few paragraphs of text. A typical written page could be held in 4 KB, for example.
The IBM Challenge to play on Jeopardy! is being compared to the historic 1969 moon landing. To land on the moon, Apollo 11 had the "Apollo Guidance Computer" (AGC) which had 74KB of fixed read-only memory, and 2KB of re-writeable memory. Over [3500 IBM employees were involved] to get the astronauts to the moon and safely back to earth again.
The importance of this computer was highlighted in a [lecture by astronaut David Scott] who said: "If you have a basketball and a baseball 14 feet apart, where the baseball represents the moon and the basketball represents the Earth, and you take a piece of paper sideways, the thinness of the paper would be the corridor you have to hit when you come back."
The Megabyte is a thousand KB, or a million bytes. The 3.5-inch floppy diskette, mentioned in my post [A Boxfull of Floppies] could hold 1.44MB, or about 360 pages of text.
In the article [Wikipedia as a printed book], the printing of a select 400 articles resulted in a book 29 inches thick. Those 5,000 pages would consume about 20 MB of space.
One of my favorite resources I use to search is the Internet Movie Data Base [IMDB]. Leaving out the photos and videos, the [text-only portion of the IMDB database is just over 600 MB], representing nearly all of the actors, awards, nominations, television shows and movies. A standard CD-ROM can hold 700MB, so the text portion of the IMDB could easily fit on a single CD.
The Gigabyte is a thousand MB, or a billion bytes. My Thinkpad T410 laptop has 4GB of RAM and 320GB of hard disk space. My laptop comes with a DVD burner, and each DVD can hold up to 4.7GB of information.
The popular Wikipedia now has some 17 million articles, of which 3.5 million are in English language. It would only take [14GB of space to hold the entire English portion] of Wikipedia. That is small enough to fit on twenty CDs, three DVDs, an Apple iPad or my cellphone (a Samsung Galaxy S Vibrant).
Perhaps you are thinking, "Someone should offer Wikipedia pre-installed on a small handheld!" Too late. [The Humane Reader] is able to offer 5,000 books and Wikipedia in a small device that connects to your television. This would be great for people who do not have access to the internet, or for parents who want their kids to do their homework, but not be online while they are doing it.
In the latest 2009 report of [How Much Information?] from the University of California, San Diego, the average American consumes 34 GB of information per day. This includes all the information from radio, television, newspapers, magazines, books and the internet that a person might look at or listen to throughout the day. This project is sponsored by IBM and others to help people understand the nature of our information-consumption habits.
Back in 1992, I visited a client in Germany. Their 90 GB of disk storage attached to their mainframe was the size of three refrigerators, and took five full-time storage administrators to manage.
The Terabyte is a thousand GB, or a trillion bytes. It is now possible to buy external USB drive for your laptop or personal computer that holds 1TB or more. However, at 40MB/sec speeds that USB 2.0 is capable of, it would take seven hours to do a bulk transfer in or out of the device.
IBM offers 1TB and 2TB disk drives in many of our disk systems. In 2008, IBM was preparing to announce the first 1TB tape drive. However, Sun Microsystems announced their own 1TB drive the day before our big announcement, so IBM had to rephrase the TS1130 announcement to [The World's Fastest 1TB tape drive!]
A typical academic research library will hold about 2TB of information. The [US Library of Congress] print collection is considered to be about 10TB, and their web capture team has collected 160TB of digital data. If you are ever in Washington DC, I strongly recommend a visit to the Library of Congress. It is truly stunning!
Full-length computer animated movies, like [Happy Feet], consume about 100TB of disk storage during production. IBM offers disk systems that can hold this much data. For example, the IBM XIV can hold up to 151 TB of usable disk space in the size of one refrigerator.
A Key Performance Indicator (KPI) for some larger companies is the number of TB that can be managed by a full-time employee, referred to as TB/FTE. Discussions about TB/FTE are available from IT analysts including [Forrester Research] and [The Info Pro].
The website [Ancestry.com] claims to have over 540 million names in its genealogical database, with a storage of 600TB, with the inclusion of [US census data from 1790 to 1930]. The US government took nine years to process the 1880 census, so for the 1890 census, it rented equipment from Herman Hollerith's Tabulating Machine Company. This company would later merge with two others in 1911 to form what is now called IBM.
A Petabyte is a thousand TB, or a quadrillion bytes. It is estimated that all printed materials on Earth would represent approximately 200 PB of information.
IBM's largest disk system, the Scale-Out Network Attached Storage (SONAS), comprises up to 7,200 disk drives, which can hold over 11 PB of information. A smaller 10-frame model, the same size as IBM Watson, with six interface nodes and 19 storage pods, could hold over 7 PB of information.
For those of us in the IT industry, 1TB is small potatoes. I for one, was expecting it to be much bigger. But for everyone else, the equivalent of 200 million pages of text that IBM Watson has loaded inside is an incredibly large repository of information. I suspect IBM Watson probably contains the complete works of Shakespeare as well as other fiction writers, the IMDB database, all 3.5 million articles of Wikipedia, religious texts like the Bible and the Quran, famous documents like the Magna Carta and the US Constitution, and reference books like a Dictionary, a Thesaurus, and "Gray's Anatomy". And, of course, lots and lots of lists.
For those on Twitter, follow [@ibmwatson] these next three days during the challenge.
The Tucson Executive Briefing Center hosted 20 dignitaries from local companies and academia.
This is a historic competition, an exhibition match pitting a computer against the top two celebrated Jeopardy champions:
Brad Rutter won $3.2 million USD on Jeopardy!, winning 5 days on the show and then three later tournaments.
Ken Jennings won $2.5 million during a 74-game winning streak on Jeopardy!
One of the members of the audience had never seen an episode of Jeopardy! in his life.
(Note: there are NO SPOILERS in this blog post. If you have not yet watched the show, you are safe to continue reading the rest of this post. I will not disclose the correct responses to any of the clues, nor how well each contestant scored.)
Calline Sanchez, IBM Director, Systems Storage Development for Data Protection and Retention, kicked off today's ceremonies.
The IBM Watson computer, named after IBM founder Thomas J. Watson, has been developed over the past 4 years by a team of IBM scientists who set out to accomplish a grand challenge - build a computing system that rivals a human's ability to answer questions posed in natural language with speed, accuracy and confidence. IBM Research labs in the United States, Japan, China and Israel [collaborated with Artificial Intelligence (AI) experts at eight universities], including Massachusetts Institute of Technology (MIT), University of Texas (UT) at Austin, University of Southern California (USC), Rensselaer Polytechnic Institute (RPI), University at Albany (UAlbany), University of Trento (Italy), University of Massachusetts Amherst, and Carnegie Mellon University.
(Disclaimer: I attended the University of Texas at Austin. My father attended Carnegie Mellon University.)
Last week, NOVA on PBS had a special episode on the making of IBM Watson, you can [watch it online] on their website. Delaney Turner, IBM Social Media Communications Manager for Business Analytics Software, has posted [his observations of Nova].
Since IBM Watson is the size of 10 refrigerators and weighs over 14,000 pounds, it was easier to design the Jeopardy! set at the TJ Watson Research lab in Yorktown Heights, NY, than to ship it over to California where the show is normally recorded. Two of the visual designers that worked on this set, as well as on the visual appearance of Watson, live in Tucson and were part of our audience today.
The IBM Challenge consists of a two-game tournament, where the scores of both games will be added to determine winner rankings. The producers of Jeopardy! will give $1 million USD to first place, $300,000 to second place, and $200,000 to third place. Regardless of outcome, [IBM will donate all of its winnings to charity]. The two human contestants plan to donate half of their earnings to their favorite charities as well.
Jeopardy! The IBM Challenge
Alex Trebek introduces IBM Watson, explaining that it can neither hear nor see. It will receive all information electronically. Categories and clues will be sent as text files via TCP/IP over Ethernet at the same time the two human contestants see them so that all have the same time to think about the right answer.
Watson has two rows of five racks, back to back. This was done so that cold air could rise up from holes in the tile floors around the unit, and all the hot air would be forced into the center and up to the ceiling return. This technique is known as "hot aisle/cold aisle" design. Alex Trebek opens one of the rack doors to show a series of 4U-high IBM Power 750 servers.
The avatar is a representation of Watson, as the machine itself is too big to fit behind the podium. The avatar is IBM's "Smarter Planet" logo with orbiting streaks and circles. It shows green when it has high confidence, and orange when it gets an answer wrong. When busy thinking, the streaks and circles speed up, the closest we will see to "watching a computer sweat."
During the show, an "Answer panel" shows Watson's top three candidate responses, with confidence level compared to its current "buzz threshold".
Watson knows what it knows, and knows what it doesn't know. Here is an [Interactive Watson Game] on New York Times website to give you an idea of how the answer panel works. I was impressed with how close all three candidate answers were. In a question about Olympic swimmers, all three candidates are Olympic swimmers. In a question about the novel "Les Miserables", all three candidates were characters of that novel.
Well, IBM Watson did well overall, but answered some questions incorrectly. This [parody Slate video] pokes fun at this. Here were some discussions we had after the show ended:
Watson did not do well in categories that required [abductive reasoning]. For example, to identify two or three things that happened in different years, and then postulate that what they all have in common is a specific decade (such as the 1950s), is difficult.
Watson does not hear the wrong answers from the two human contestants. For one question, Ken buzzes in first, guesses wrong, then Watson buzzes in with the same exact response. Alex Trebek rebukes Watson with "No, Ken just said that!" Brad would learn from their mistakes and guess correctly for the score.
Watson is provided the correct answer after a contestant guesses it correctly, or if nobody does, when Alex provides the correct response. This is sent as a text message to Watson immediately, so that it can use this information to adjust its algorithms and machine-learning for future clues in that same category. This was evident in the "Answer panel" on the fourth and fifth attempts on the category of "Decades".
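As a rough illustration of that feedback loop (again, my own conceptual sketch, not the actual DeepQA machine-learning code), one could imagine a simple per-category adjustment like this:

```python
# Conceptual sketch: after each clue, the correct response is revealed and
# can be used to nudge how much the system trusts its scoring in that category.
category_trust = {"Decades": 1.0}

def update_trust(category, my_top_answer, revealed_answer, step=0.1):
    """Raise or lower trust in a category based on whether we were right."""
    if my_top_answer.lower() == revealed_answer.lower():
        category_trust[category] = min(2.0, category_trust[category] + step)
    else:
        category_trust[category] = max(0.1, category_trust[category] - step)

update_trust("Decades", "the 1920s", "the 1950s")   # wrong: trust goes down
update_trust("Decades", "the 1960s", "the 1960s")   # right: trust goes back up
print(category_trust)
```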
With this demonstration, IBM Research has advanced science by leaps and bounds for the Artificial Intelligence community. IBM is a leader in Business Analytics, and this technology will find uses in a variety of industries. The average knowledge worker spends 30 percent of her time looking for information in corporate data repositories. A computer that can provide answers quickly will help employees be more productive, make stronger business decisions, and gain greater insight.
Day 1 was only able to cover the first round of Game 1. This allowed more time to talk about the history and technology of IBM Watson. Tomorrow, the contestants will finish Game 1 and head into Game 2.
Sadly, only 70 percent of doctors in the United States use Electronic Medical Record [EMR] systems. My own Primary Care Physician has made the switch, and told me how much he loves having ready access to the information he needs. EMR systems reduce costs, help manage risk, and improve healthcare outcomes. It is no surprise that the U.S. government has taken a [stick-and-carrot approach] to encourage doctors to use them.
A frequent topic at the Tucson Executive Briefing Center where I work is how to make the most use of IT for healthcare and life sciences. For much of 2011 and 2012, I was also one of the technical advocates assigned to Wellpoint Insurance, in support of their adoption of IBM Watson technology for healthcare.
The IBM Challenge was a big success. One of the contestants, Ken Jennings, [welcomes our new computer overlords]. Congratulations are in order to the IBM Research team who pulled off this Herculean effort!
Some folks have poked fun at some of the odd responses and wager amounts from the IBM Watson computer during the three-day tournament. Others were as surprised as I was that the impressive feat was done with less than 1TB of stored data. Here is what John Webster wrote in CNET yesterday, in his article [What IBM's Watson says to storage systems developers]:
"All well and good. But here's what I find most interesting as a result of what IBM has done in response to the Grand Challenge that motivated Watson's creators. We know, from Tony Pearson's blog, that the foundation of Watson's data storage system is a modified IBM SONAS cluster with a total of 21.6TB of raw capacity. But Pearson also reveals another very significant, and to me, surprising data point: "When Watson is booted up, the 15TB of total RAM are loaded up, and thereafter the DeepQA processing is all done from memory. According to IBM Research, the actual size of the data (analyzed and indexed text, knowledge bases, etc.) used for candidate answer generation and evidence evaluation is under 1 Terabyte."
What Pearson just said is that the data set Watson actually uses to reach his push-the-button decision would fit on a 1TB drive. So much for big data?"
To better appreciate how difficult the challenge was, and how a small amount of data can answer a billion different questions, I thought I would cover Business Intelligence, Data Retrieval and Text Mining concepts.
"In this paper, business is a collection of activities carried
on for whatever purpose, be it science, technology,
commerce, industry, law, government, defense, et cetera.
The communication facility serving the conduct of a business
(in the broad sense) may be referred to as an intelligence
system. The notion of intelligence is also defined
here, in a more general sense, as the ability to apprehend
the interrelationships of presented facts in such a way as
to guide action towards a desired goal."
Ideally, when you need "Business Intelligence" to help you make a better decision, you perform data retrieval from a structured database for the specific information you are looking for. In other cases, you might be looking for insight, patterns or trends. In that case, you go "data mining" against your structured databases.
Here's a simple example. John runs a fruit stand. One day, he kept track of how many apples and oranges were bought by men and women. How many questions can we ask against this small set of data? Let's count them:
How many apples were sold to men?
How many apples were sold to women?
How many oranges were sold to men?
How many oranges were sold to women?
But wait! For each row and column, we can combine them into totals.
How many apples were sold in total?
How many oranges were sold in total?
How many fruit in total were sold to men?
How many fruit in total were sold to women?
How many fruit in total were sold?
But wait, there's more! Each row and column can be evaluated for relative percentages, as well as percentages of each cell compared to the total. You could make five relevant pie-charts from this data. This results in 16 more questions, such as:
Of the fruit purchased by men, what percentage were apples?
Of all the apples purchased, what percentage were purchased by women?
And that's not including more ethereal questions, such as:
Are there gender-specific preferences for different types of fruit?
What type of fruit do men prefer?
This is just for a small set, two market segments (by gender) and two products (apples and oranges). However, if you have many market segments (perhaps by age group, zip code, etc.) and many products, the number of queries that can be supported is huge. For small sets of data, you can easily do this with a spreadsheet program like IBM Lotus Symphony or Microsoft Excel.
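To make this concrete, here is a small Python sketch using pandas. The purchases are invented numbers, just to show how one tiny table answers the nine count questions above, plus the percentage questions:

```python
import pandas as pd

# Made-up sales for John's fruit stand: one row per purchase.
sales = pd.DataFrame({
    "gender": ["man", "man", "woman", "woman", "woman", "man", "woman", "man"],
    "fruit":  ["apple", "orange", "apple", "apple", "orange", "apple", "orange", "orange"],
})

# The 2x2 table plus row/column totals answers the nine count questions above.
counts = pd.crosstab(sales["gender"], sales["fruit"], margins=True, margins_name="Total")
print(counts)

# Relative percentages, e.g. "Of the fruit purchased by men, what percentage were apples?"
print(pd.crosstab(sales["gender"], sales["fruit"], normalize="index").round(2))
```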
But why limit yourself to two dimensions? The above example was just for one day's worth of activity. If John captures this data every day for historical and seasonal trending, it can be represented as a three-dimensional cube, and the number of possible queries becomes astronomical. This is the basis for Online Analytical Processing (OLAP), and such multi-dimensional tables are often referred to as [OLAP cubes].
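Continuing the same hypothetical data, adding a date column turns the flat table into something cube-like. A pandas pivot is a very small stand-in for a real OLAP engine, but it shows the idea of rolling up three dimensions:

```python
import pandas as pd

# Invented figures: quantity sold, by date, gender and fruit.
sales = pd.DataFrame({
    "date":   ["2011-02-14", "2011-02-14", "2011-02-15", "2011-02-15"],
    "gender": ["man", "woman", "man", "woman"],
    "fruit":  ["apple", "orange", "orange", "apple"],
    "qty":    [3, 2, 1, 4],
})

# Three dimensions (date x gender x fruit) rolled up like a tiny OLAP cube.
cube = sales.pivot_table(index=["date", "gender"], columns="fruit",
                         values="qty", aggfunc="sum", fill_value=0)
print(cube)
```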
Back in the early 1970s, IBM invented the Structured Query Language [SQL], and today nearly all modern relational databases support it, including IBM DB2, Informix, Microsoft SQL Server, and Oracle DB. SQL poses two challenges. First, you have to structure the data in advance to match the way you expect to perform your ad-hoc queries. Deciding the groups and categories in advance can limit the way information is recorded and captured.
Second, you have to be skilled at SQL to phrase your queries correctly to retrieve the data you are after. What ended up happening was that skilled SQL programmers would develop "canned reports" with fixed SQL parameters, so that less-skilled business decision makers could base their decisions on these reports.
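As a hedged example of both points, here is what a "canned report" might look like using Python's built-in sqlite3 module. The table layout and numbers are invented for illustration; the point is that the SQL is fixed in advance and only a parameter varies:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (gender TEXT, fruit TEXT, qty INTEGER)")
conn.executemany("INSERT INTO sales VALUES (?, ?, ?)",
                 [("man", "apple", 3), ("woman", "orange", 2),
                  ("man", "orange", 1), ("woman", "apple", 4)])

# A "canned report": the structure was decided in advance by an SQL programmer,
# so a business decision maker only supplies the parameter.
def fruit_report(fruit):
    query = "SELECT gender, SUM(qty) FROM sales WHERE fruit = ? GROUP BY gender"
    return conn.execute(query, (fruit,)).fetchall()

print(fruit_report("apple"))   # e.g. [('man', 3), ('woman', 4)]
```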
IBM has fully integrated stacks to help process structured data, combining servers, storage, and advanced analytics software into a complete appliance. IBM offers the [Smart Analytics System] for robust, customized deployments, and recently acquired [Netezza] for pre-configured, and more rapid deployments.
However, the bigger problem is that more than 80 percent of information is not structured!
Semi-structured data like email provides some searchable fields like From and Subject. The rest of the information is unstructured, such as text files, photographs, video and audio. To look for specific information in unstructured sources can be like looking for a needle in a haystack, and trying to get insight, patterns or trends involves text mining.
This, in effect, is what IBM Watson was able to perform so well this week. Finding the needle in the haystacks of unstructured data from 200 million pages of text stored in its system, combined with the ability to apprehend the interrelationships of meaning and subtle nuance, resulted in an impressive technology demonstration. Certainly, this new technology will be powerful for a variety of use cases across a broad set of industries!
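Text mining at Watson's scale is obviously far more sophisticated, but the starting point, indexing unstructured text so it can be searched quickly, can be sketched in a few lines. This is my own toy example, not DeepQA:

```python
from collections import defaultdict

documents = {
    1: "Jean Valjean is the protagonist of Les Miserables",
    2: "Mark Spitz won seven Olympic gold medals in swimming",
}

# Build an inverted index: each word points to the documents that contain it.
index = defaultdict(set)
for doc_id, text in documents.items():
    for word in text.lower().split():
        index[word].add(doc_id)

def search(*words):
    """Return the documents containing all of the given words."""
    results = [index.get(w.lower(), set()) for w in words]
    return set.intersection(*results) if results else set()

print(search("olympic", "swimming"))   # {2}
```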
Last week, I got the following comment from Bob Swann:
I am looking for the IBM VM Poster or a picture of the IBM VM "Catch the Wave"
Do you know where I might find it?
Well, Bob, I made some phone calls. The company that published these posters no longer exists, but I found a coworker at the Poughkeepsie Briefing Center who still had the poster on his wall, and he was kind enough to take a picture of it for you.
VM: The Wave of the Future (click thumbnail at left to see larger image)
Some may recognize this as a [mash-up] using as a base the famous Japanese 10-inch by 15-inch block print [The Great Wave off Kanagawa] by artist [Katsushika Hokusai]. I had this as my laptop's wallpaper screen image until last year, when I was presenting in Kuala Lumpur, Malaysia. I was told that it reminded people of the horrible tsunami caused by the [Indian Ocean earthquake] back in 2004. I was actually scheduled to fly to Jakarta, Indonesia, the last week of December 2004, but at the last minute our client team changed plans. I would have been en route over the Pacific Ocean when the tsunami hit, and probably stranded over there for weeks or months until the airports re-opened.
The Wave theme was in part to honor the IBM users group called World Alliance VSE VM and Linux (WAVV), which is having its next meeting [April 18-22, 2008] in Chattanooga, Tennessee. I presented at this conference back in 1996 in Green Bay, Wisconsin, as part of the IBM Linux for S/390 team. It started on the Sunday that Wisconsin switched its clocks for [Daylight Saving Time], and the few of us from Arizona and other places that don't bother with this all showed up for breakfast an hour early.
When I was in Australia last year, I was told the wave that sports fans do, by raising their hands in coordinated sequence, was called the [Mexican Wave] in most other countries. When I was there, Melbourne was trying to outlaw this practice at their cricket matches.
The "wave" represents a powerful metaphor, from z/VM operating system on System z mainframes to VMware and Xenon Intel-based processor machines, as the direction of virtualization that we are heading for future data centers.The Mexican wave represents a glimpse of what humans can accomplish with collaboration on a globalscale. It can also represent the tidal wave of data arising from nearly 60 percent annual growth instorage capacity. (I had to mention storage eventually, to avoid being completely off-topic on this post!)
I hope this is the graphic you were looking for Bob. If anyone else has wave-themed posters they would like to contribute, please post a comment below.
In case you missed it, IBM unveiled a new digital video surveillance service yesterday. This "marks an important shift in the industry's approach to security, applying advanced analytics to video data and signaling the ability to converge physical and information technology (IT) security."
The IBM Smart Surveillance Solution is designed to provide the unique capability to carry out efficient data analysis of video sequences either in real time or from recordings. These recordings can be on disk or tape storage.
The problem with today's existing "analog" surveillance is that the analog cameras record onto traditional VHS tapes, and these are rotated through, re-written after a few hours or days. To review tapes often involves human intervention, and must be done before the VHS tapes are re-used. Many shoplifters, thieves, and other law-breakers take a chance that their actions will not be caught on tape, or that they will be long gone by the time the video is analyzed.
The IBM Smart Surveillance Solution can provide a number of advantages over traditional video solutions, including:
Real-time alerts that can help anticipate incidents by identifying suspicious behaviors.
Forensic capabilities are enhanced by utilizing unique indexing and attribute-based search of video events to classify objects into categories such as people and cars.
Situational awareness of the location, identity and activity of objects in a monitored space including license plate recognition and face capture.
With real-time analytics capabilities, the new DVS service can open up a wide array of new applications that go far beyond the traditional security aspects of surveillance systems. Early adopter industries in this rapidly evolving market include retail, public sector and financial services. The retail industry estimates nearly $50 billion is lost annually to fraud, theft and administrative errors.
Once in digital format, video surveillance can be sent further, processed quicker, and stored for longer periods of time, than traditional media makes practical today.
Well, it's Tuesday, and that means IBM announcements! Today's batch is bigger than usual, as there are a lot of Dynamic Infrastructure announcements throughout the company with a common theme: cloud computing and smart business systems that support the new way of doing things. Today, IBM announced its new "IBM Smart Archive" strategy that integrates software, storage, servers and services into solutions that help meet the challenges of today and tomorrow. IBM has been spending the past few years working across its various divisions and acquisitions to ensure that our clients have complete end-to-end solutions.
IBM is introducing new "Smart Business Systems" that can be used on-premises for private-cloud configurations, as well as by cloud-computing companies to offer IT as a service.
IBM [Information Archive] is the first to be unveiled, a disk-only or blended disk-and-tape Information Infrastructure solution that offers a "unified storage" approach with amazing flexibility for dealing with various archive requirements:
For those with applications using the IBM Tivoli Storage Manager (TSM) or IBM System Storage Archive Manager (SSAM) API of the IBM System Storage DR550 data retention solution, the Information Archive will provide a direct migration, supporting this API for existing applications.
For those with IBM N series using SnapLock or the File System Gateway of the DR550, the Information Archive will support various NAS protocols, deployed in stages, including NFS, CIFS, HTTP and FTP access, with Non-Erasable, Non-Rewriteable (NENR) enforcement that is compatible with current IBM N series SnapLock usage.
For those using NAS devices with PACS applications to store X-rays and other medical images, the Information Archive will provide similar NAS protocol interfaces. Information Archive will support both read-only data such as X-rays, as well as read/write data such as Electronic Medical Records.
Information Archive is not just for compliance data that was previously sent to WORM optical media. Instead, it can handle all kinds of data, rewriteable data, read-only data, and data that needs to be locked down for tamper protection. It can handle structured databases, emails, videos and unstructured files, as well as objects stored through the SSAM API.
The Information Archive has all the server, storage and software integrated together into a single machine type/model number. It is based on IBM's General Parallel File System (GPFS) to provide incredible scalability, the same clustered file system used by many of the top 500 supercomputers. Initially, Information Archive will support up to 304TB raw capacity of disk and Petabytes of tape. You can read the [Spec Sheet] for other technical details.
For those who prefer a more "customized" approach, similar to IBM Scale-Out File Services (SoFS), IBM has [Smart Business Storage Cloud]. IBM Global Services can customize a solution that is best for you, using many of the same technologies. In fact, IBM Global Services announced a variety of new cloud-computing services to help enterprises determine the best approach.
In a related announcement, IBM announced [LotusLive iNotes], which you can think of as a "business-ready" version of Google's Google Apps, Gmail and Google Calendar. IBM is focused on security and reliability but leaves out the advertising and data mining that people have been forced to tolerate from consumer-oriented Web 2.0-based solutions. IBM's clients that are already familiar with the on-premises version of Lotus Notes will have no trouble using LotusLive iNotes.
There was actually a lot more announced today, which I will try to get to in later posts.
The job is located in Tucson, Arizona, which is a great place to live! Tucson is the headquarters for IBM storage design and development, with the largest collection of engineers, software developers and testers. The IBM Tucson Executive Briefing Center is located on the [University of Arizona Science and Technology Park] campus that houses over 7,000 employees from 50 different companies.
What does the job entail?
Primarily, you will be developing, customizing and presenting Powerpoint presentations and live product demos. For some briefings, you will work with sales reps, IBM Business Partners, and clients to develop an agenda of topics to discuss. At times, the presentation may involve working to solve the client's problems, drawing on the whiteboard or flip charts to help capture the requirements and architect a solution.
Which products are we talking about?
The [IBM System Storage product line] includes solid-state drives (SSD), block and file-based disk systems, tape drives and libraries, storage virtualization, and storage management software.
Is there any opportunity for travel?
Most of the presentations will be performed in Tucson, either in person, by webcast or video conference call. Sometimes, this includes discussions over drinks, dinner or golfing. Occasionally, there will be travel to present at client locations, IBM branch offices, events or conferences. My manager estimates approximately 10 percent travel.
Is the pay based on a commission?
Absolutely not! We are consultants, not salespeople. To maintain our "trusted advisor" status, it is a flat salary, with possibility for year-end bonus based on how well our division does overall. This allows us to present and position all of the products fairly to the clients at briefings without bias. Our clients appreciate that! The job is considered pre-sales technical support.
Is training included?
Yes. Assuming you already have a strong background in storage hardware and software, and how these connect to SAN and LAN networks for a variety of operating systems like z/OS, AIX, Windows and Linux, there will be training for the latest updates and features of the IBM products throughout the year. Also, there will be professional training to build up your public speaking and meeting facilitation skills.
How do I apply?
If you are an American citizen, fluent in the English language, and have at least a Bachelor's Degree, go to the [IBM Employment website], look for "Storage Support Specialist" position using job code "STG-0524037" or "STG-0525309". IBM is committed to creating a diverse environment and is proud to be an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, or veteran status.
My how time flies! It has been nearly a year since our new Tucson Executive Briefing Center had its [Ribbon Cutting Ceremony].
To celebrate this achievement, IBM asked me to write and direct a short film to remind everyone we are here to help clients solve problems, determine an appropriate strategy and make solid purchase decisions.
I have produced other videos for IBM. See my October 2013 blog post [Incorporating Videos] for other examples. This was my first time as writer/director for a project.
This video won't win any Oscars, but I would still like to thank the Academy, and my colleagues IBM VP Calline Sanchez, Lee Olguin, Joe Hayward and Kris Keller for agreeing to be filmed on camera. Behind the scenes, I want to thank IBM Fellow John Cohn for his superb narration, Andrew Greenfield as cinematographer and editor, Shelly Jost as creative consultant selecting the musical tracks, and Denise White for reviewing the screenplay. Finally, I want to thank our producer, Bill Terry, for funding this effort.
What do you think? Will it go viral? Enter your comments below!
Continuing this week's theme on products that were part of last week's IBM Information Infrastructure launch, today I'll cover the TS2900.
IBM System Storage TS2900 Tape Autoloader
This little baby is SWEET! At 1U high, it holds a single drive and up to 9 cartridges, for a total of up to 14.4 TB at 2:1 compression. The drive can be a Half-Height (HH) LTO-3 or LTO-4 drive. (It is called an autoloader because there is only a single drive. Automated units with multiple drives are called libraries.)
This can be rack-mounted, or sit on your desktop. There is an I/O station for inserting or removing individual cartridges, as well as a removable tape magazine to populate or remove the tapes in a more efficient manner.
Both LTO-3 and LTO-4 support a mix of regular and "Write Once, Read Many" (WORM) media to help comply with regulations demanding "Non-erasable, Non-rewriteable" storage. The LTO-4 can also support on-drive encryption, managed by the IBM Encryption Key Manager (EKM).
To learn more, see the IBM System Storage [TS2900 page].
Well it's Tuesday, and ["election day"] here in the USA, and again IBM has more announcements.
IBM announced [IBM Tivoli Key Lifecycle Manager v1.0] (TKLM) to manage encryption keys. This provides a graphical interface to manage encryption keys, including retention criteria when sharing keys with other companies.
TKLM is supported on AIX, Solaris, Windows, Red Hat and SUSE Linux. IBM plans to offer TKLM for z/OS in 2009. TKLM can be used with the Firefox or Internet Explorer web browsers. It incorporates the Encryption Key Manager (EKM) function that IBM initially offered to support encryption keys for the TS1120, TS1130, and LTO-4 drives.
While this is needed today for tape, IBM positions this software to also manage the encryption keys for "Full Drive Encryption" (FDE) disk drive modules (DDM) in IBM disk systems in 2009.
Well, it's Tuesday again, which means IBM announcement day. With the [big launches] we had this year, there might be some confusion about IBM terminology for how announcements are handled. Basically, there are three levels:
Technology demonstrations show IBM's leadership, innovation and investment direction, without having to detail a specific product offering. Last month's [Project Quicksilver], for example, demonstrated the ability to handle over 1 million IOPS with Solid State Disk. IBM is committed to developing solid state storage to create real-world uses across a broad range of applications, middleware, and systems offerings.
A preview announcement does entail a specific product offering, but may not necessarily include pricing, packaging or specific availability dates.
An announcement also entails a specific product offering, and does include pricing, packaging and specific availability dates.
With our September 8 launch of the IBM Information Infrastructure strategic initiative, there was a mix of all three of these. Many of the preview announcements will be followed up with full announcements later this year. Today, IBM Tivoli Advanced Backup and Recovery for z/OS v2.1 was announced.
Note: If you don't use z/OS on a System z mainframe, you can stop reading now.
As many of my loyal readers know, I was lead architect for DFSMS until 2001, and so functions related to DFSMS and z/OS are very near and dear to my heart. For Business Continuity, IBM created Aggregate Backup and Recovery Support (ABARS) as part of the DFSMShsm component. This feature created a self-contained backup image from data that could be either on disk or tape, including migrated data. In the event of a disaster, an ABARS backup image can be used to bring back just the exact programs and data needed for a specific application, speeding up the recovery process, and allowing BC/DR plans to prioritize what is most important.
To help manage ABARS, IBM has partnered with [Mainstar Software Corporation] to offer a product that helps before, during and after the ABARS processing.
ABARS requires the storage admin to have a "selection list" of data sets to process as an aggregate. IBM Tivoli Advanced Backup and Recovery for z/OS includes Mainstar® ASAP™ to help identify the appropriate data sets for specific applications, using information from job schedulers, JCL, and SMF records.
ABARS has two simple commands: ABACKUP to produce the backup image, and ARECOVER to recover it. However, if you have hundreds of aggregates, and each aggregate has several backups, you may need some help identifying which image to recover from. IBM Tivoli Advanced Backup and Recovery for z/OS includes Mainstar® ABARS Manager™ to present a list of information, making it easy to choose from. To help prep the ICF Catalogs, there is a CATSCRUB feature for either "empty" or "full" catalog recovery at the recovery site.
The fact that storage admins may not be intimately familiar with the applications they are backing up is a common source of human error. IBM Tivoli Advanced Backup and Recovery for z/OS includes Mainstar® All/Star™ to help validate that the data sets processed by ABACKUP are complete, to support any regulatory audit or application team verification. This critical data tracking/inventory reporting not only identifies what isn't backed up, so you can ensure that you are not missing critical data, but also can identify which data sets are being backed up multiple times by more than one utility, so you can reduce the occurrence of redundant backups.
With v2.1 of Tivoli Advanced Backup and Recovery for z/OS, IBM has integrated Tivoli Enterprise Portal (TEP) support. This allows you to access these functions through the IBM Tivoli Monitor v6 GUI on a Linux, UNIX or Windows workstation. IBM Tivoli Monitor has full support to integrate Web 2.0, multi-media and frames. This means that any other product that can be rendered in a browser can be embedded and supported with launch-in-context capability.
(If you have not separately purchased a license to IBM Tivoli Monitoring V6.2, don't worry, you can obtain the TEP-based function by acquiring a no-charge, limited use license to IBM Tivoli Monitoring Services on z/OS, V6.2.)
In addition to supporting IBM's many DFSMS backup methods, from ABARS to IDCAMS to IEBGENER, IBM Tivoli Advanced Backup and Recovery v2.1 can also support third-party products from Innovation Data Processing and Computer Associates.
As many people re-discover the mainframe as the cost-effective platform that it has always been, migrating applications back to the mainframe to reduce costs, they need solutions that work across both mainframe and distributed systems during this transition. IBM Tivoli Advanced Backup and Recovery for z/OS can help.
Today I spoke at the IBM Think Green Roadshow in Phoenix, Arizona. This is just one stop on a 15-city tour to help make people aware of Green data center issues. Here is the schedule for the remaining cities. Contact your local IBM rep for details.
Victor Ferreira was our moderator and host. He is the site level executive for the 2,000 IBM employees in the Phoenix area, and manages the Public Sector for our Western region.
The first speaker was Dave McCoy, IBM principal in our Data Center services group. He explained IBM's Project Big Green and the Energy Efficiency Initiative, and went into detail on how IBM can act as general contractor to design, plan and build the ideal Green Data Center for you. IBM can also retrofit existing buildings with new technologies like stored cooling, optimized airflow assessments, and modular data center floorspace. Not related to energy, but still important to our environment, is IBM Asset Recovery Services, where IBM can take all those old PC monitors, keyboards and other outdated equipment, refurbish them or melt them down to recapture useful metals and plastics, and dispose of the rest in an environmentally-friendly, non-toxic manner.
I was the second speaker, covering "How to get it done". While Dave covered the issues and technologies available, I explained how to put it all into practice. This includes IT systems assessments, health audits, and thermal profiling. Using server and storage virtualization, you can increase resource utilization and reduce energy waste. I also covered IBM's Cool Blue product line, which includes the IBM PowerExecutive software to monitor your IT environment, and the "Rear Door Heat Exchanger" that uses chilled water to remove as much as 60% of the heat coming out of the back of a server rack, greatly reducing hot spots on the data center floor and allowing you to run the entire room at warmer, less expensive temperatures.
On the server side, I covered IBM's System z mainframe and the BladeCenter as examples of how innovative technologies can be used to run more applications with less energy. The new System p570, based on the energy-intelligent POWER6 processor, has twice the performance for the same amount of power as its POWER5 predecessor. On the storage side, I explained how Information Lifecycle Management (ILM), storage virtualization, and the use of a blended disk and tape environment can greatly reduce energy costs.
Reps from our many technology partners Eaton, APC, Schneider Electric, Liebert, and Anixter were there to support this event.
The session ended with a Q&A Panel, with Dave McCoy, myself, and Greg Briner from IBM Global Financing. IBM is able to offer creative "project financing" that can often match the actual monthly savings, resulting in net zero cost to your operational budget, with payback periods as little as 2.5 years.
To learn more about IBM's efforts to help clients create "Green" data centers, click Green Data Center.
Well, it's Tuesday again, and you know what that means? IBM Announcements!
It has been 71 days since my last post, and people were beginning to worry. Did Tony win the lottery? Was Tony hit by a bus? Did Tony get abducted by aliens from another planet? No, none of these things happened.
I got a new job! I am still working with IBM Systems, but now am one of the Event Content Managers for the [IBM Systems Technical University] events, or TechU for short. For those who have attended these events, I have taken over for Glenn Anderson, who retired December 31, 2018.
Last year, the IBM TechU team asked me to present eight topics at an event in Johannesburg, South Africa, as a last-minute replacement for other speakers. While there, Glenn Anderson mentioned to me that he was planning to retire. "Do you know anyone interested in taking over for me?" he asked.
I jumped at the chance to apply for the job! There was stiff competition, but after three rounds of interviews and background checks, I was offered the position! This all happened after the zTechU event in Hollywood, Florida last October, so if this is the first you have heard of it, you are not alone.
For those wondering "What about the IBM Tucson Client Experience Center?" you have good reason to ask. I had worked at the center for the past 12 years, the last six as the chief subject matter expert (SME) for all things IBM System Storage. Who is going to replace me? The job posting is still open, and the new manager, John Zupetz, has been reviewing resumes.
As time permits, I will continue to do storage briefings to help out during this transition, both here in Tucson, Arizona, as well as outbound to various client and IBM locations. I have also offered to help train whomever gets hired for the job.
In my new role, I will be managing TechU events, selecting topics, accepting speakers, scheduling sessions, and even presenting sessions at the events themselves. I will be focused on IBM Z and LinuxONE mainframe servers and related Storage solutions, but will also manage sessions on soft skills for IT Leadership and Professional Development.
We plan to have about 18 events this year, spanning countries across six continents! I just finished smaller 3-day events in Istanbul, Turkey and Cairo, Egypt, and am now working on larger events to be held in Dubai, Atlanta, and Berlin!
Shameless plug! Registration is open for these TechU events. I plan to be at all three, if you want to meet me in person!
Lloyd Dean is an IBM Senior Certified Executive IT Architect in Infrastructure Architecture. Lloyd has held numerous senior technical roles during his 19-plus years at IBM. Lloyd most recently has been leading efforts across the Communication/CSI Market as a senior Storage Solution Architect/CTS covering the Kansas City territory.
I also have known Lloyd for years. In prior roles, Lloyd supported the industry accounts as a Storage Solution architect and prior to that as a Storage Software Solutions specialist during his time in the Advanced Technical Skills (ATS) organization.
Lloyd currently supports North America storage sales teams in his Storage Software Solution Architecture SME role on the Washington Systems Center team. His current focus is IBM Cloud Private, and he will be delivering and supporting sessions at Think 2019 and Storage Technical University on the value of IBM storage in this high-value IBM solution, part of the IBM Cloud strategy. Lloyd maintains Subject Matter Expert status across the IBM Spectrum Storage Software solutions. You can follow Lloyd on Twitter @ldean0558 and LinkedIn Lloyd Dean.
I have spent the last 10 weeks working with the IBM developerWorks team converting from a single-author blog to a multi-author blog. I had no idea it would be so complicated to re-work the HTML templates, acquire all the legal and managerial approvals, and then authorize additional contributors!
Mark your calendars! IBM plans to have back-to-back Technical University events in Hollywood, Florida:
October 8-12 will focus on the IBM Z mainframe and a subset of IBM Storage that offers synergy with IBM Z, such as the DS8880 storage system and the TS7760 Virtual Tape Engine.
October 15-19 will focus on IBM Power Systems and the entire IBM Storage portfolio.
When I first learned of this, I was not aware there was a city called Hollywood in Florida. The Hollywood in Florida is situated between Fort Lauderdale and Miami, so you can fly into either of those two airports to get to the conference.
(Did you know? The Hollywood most people know in California is no longer its own city, but rather incorporated as a neighborhood district into Los Angeles back in 1910. There are actually thirty different places called "Hollywood" around the world, two dozen in the United States, with the rest scattered in Ireland, Turkey, Russia, Singapore and the Philippines. Not all of these are formally "cities", but in some cases neighborhoods, districts, unincorporated areas, or other populated places. The Hollywood in Maryland claims to be the first, established in 1867!)
I plan to attend only the second week, October 15-19. Here are some highlights:
In the past, IBM had keynote sessions for each brand, for example, one focused on IBM Power systems, and another on IBM Storage. However, these were scheduled during the same time slot, forcing some people to make a tough choice.
To solve this, the two keynote sessions will be staggered, so attendees can attend both!
The storage keynote will take on a new format, with a panel of experts. I have been invited as one of the experts to participate! If there is a particular topic you want to hear about on the panel, please enter your comments below.
As with most conferences, there is a "Call for Papers" requesting speakers submit the topics they can present, and then conference coordinators accept, adjust or reject them in building the final agenda.
Here are the topics I submitted:
Build your personal brand! Social Media tips from an experienced blogger
The Pendulum Swings Back - Understanding Converged and Hyperconverged Systems
IBM Hybrid and Multi-Cloud storage solutions
IBM Cloud Object Storage (powered by Cleversafe)
Managing Risks with Data Footprint Reduction
Information Lifecycle Management: Why Archive is different than Backup
The Seven Tiers of Business Continuity and Disaster Recovery
If you attended the IBM Technical University in Orlando last May, the conference in October will have six months' worth of new announcements and products to cover.
I also plan to be at the IBM Technical University events in Johannesburg, South Africa (September 11-13), and Rome, Italy (October 22-26). If you plan to be at any of these events, let me know! If not, you can follow along with Twitter hashtag: #IBMtechU
This session had four parts. First, an overview of "Data Footprint Reduction" technologies, like compression, data deduplication, space-efficient snapshots and thin provisioning.
Second, a look at how these technologies can get storage administrators in trouble. Much like airlines selling more tickets than seats on the airplane, storage administrators may over-provision based on data reduction estimates, and then suddenly run out of storage capacity (see the back-of-the-envelope sketch below).
Third, an overview of IBM FlashSystem A9000 and A9000R products, often referred to as "A9000/R" to cover both as a family. These models offer data footprint reduction for all data.
Finally, I explain how the Hyper-Scale Manager GUI can help with reporting and analytics to avoid these risks. This GUI is available for the FlashSystem A9000/R, as well as XIV Gen3 and Spectrum Accelerate software clusters.
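The overbooking risk mentioned in the second part above can be put into numbers with a quick sketch. This is my own back-of-the-envelope arithmetic with invented figures, not an IBM sizing tool:

```python
# Thin provisioning lets you promise more logical capacity than you physically own,
# betting that data reduction (compression + deduplication) covers the difference.
physical_tb = 100
assumed_reduction_ratio = 4.0          # the optimistic planning assumption
logical_promised_tb = physical_tb * assumed_reduction_ratio   # 400 TB promised

actual_reduction_ratio = 2.5           # what the workload really achieves
physical_needed_tb = logical_promised_tb / actual_reduction_ratio

print(f"Promised {logical_promised_tb} TB, but would need {physical_needed_tb:.0f} TB "
      f"of physical capacity -- {physical_needed_tb - physical_tb:.0f} TB short.")
```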
Special thanks to Rivka Matosevich for her help in preparing this presentation.
The Pendulum Swings: Understanding Converged and Hyperconverged Integrated Systems
With IBM's partnership with Cisco for VersaStack, and Nutanix for the IBM Power systems, this has become a particularly popular topic.
I started with an overview of the last 50 years of storage evolution, from internal storage and external storage to NAS and SAN storage networks. An estimated 96 percent of the storage in corporate data centers is connected via NAS or SAN networks.
More recently, people have been willing to give up all those gains for something simpler, less powerful, less reliable, less expensive. Enter Converged and Hyperconverged Systems. IBM PureApplication and VersaStack lead the pack for Converged Systems, along with IBM Spectrum Scale, Spectrum Accelerate and Nutanix on IBM Power Systems for Hyperconverged Integrated Systems.
We had 1,600 attendees, much higher than expected. This is a good sign, when you consider IBM just had its "Think 2018" conference last March, and Dell EMC had their big conference the same week in Las Vegas.
When people ask me what the main difference is between "Think 2018" and "IBM Technical University", I explain it as follows:
Think 2018 is a big conference focused on uni-directional communication. IBM executives present the corporate line repeatedly to large audiences. Its size and scale means they can have big name bands and celebrity speakers.
IBM Technical University is a smaller conference focused on bi-directional communication. Audiences are small and encouraged to ask questions. Demos, Labs and Meetups allow for conversations with IBM technical experts. There are no crowds in the hallways to hamper ad-hoc side conversations. The IBM speakers listen to the clients' concerns and bring that feedback to development.
Confused yet? Fortunately, the speaker Mark Rader, IBM Z, focused entirely on the z/OS platform version of these tools.
Storage Meetup: Cloud and Object Storage
In past years, the conference would organize three huge rooms, one for IBM Z, one for IBM POWER, and the last for IBM Storage. These would be Q&A panels, with a dozen experts at the front of the room, and a large audience asking questions.
This year, these were all split up into smaller "Meetup" sessions. There were Meetups for Spectrum Protect, Spectrum Scale, Encryption, IBM i, z/OS, Power, FlashSystem, TS7700, DS8000, Blockchain, DevOPS, Machine Learning, Spectrum Virtualize.
Andy Kutner and I led the Storage Meetup for "Cloud and Object Storage". Andy and I had both done several presentations on these topics during the week, and we were able to handle the questions that came from the audience. Here is a sample:
Is IBM Spectrum Virtualize Transparent Cloud Tiering mature enough to use now?
It is unfortunate that IBM chose "Transparent Cloud Tiering" (TCT) to describe four implementations on four different product families. The one thing they have in common is that they send data to the Cloud or IBM Cloud Object Storage. The TCT feature of Spectrum Virtualize, including SAN Volume Controller (SVC), Storwize and FlashSystem V9000, was made generally available last year.
Why does our use of S3FS not perform well?
S3FS was developed to allow file system access through the FUSE driver for Linux and MacOS operating systems. It was not optimized for performance, and was meant as a convenience instead.
Does Spectrum Copy Data Management support the Cloud?
Yes, Spectrum CDM works with a variety of IBM and non-IBM storage to take volume snapshots that can be used for DevOPS, dev/test, and other purposes. These snapshots can be moved to the cloud to make them more broadly accessible to developers.
How do we size backup configurations to the Cloud?
Several backup software products support IBM Cloud and IBM Cloud Object Storage, including Veritas NetBackup and Commvault Simpana. For IBM Spectrum Protect, IBM has now published "blueprints" that include Cloud configurations with all the sizing work done for you (a rough sketch of the underlying arithmetic follows).
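The blueprints do the detailed work, but the basic arithmetic behind any cloud backup sizing exercise looks something like this rough sketch. The numbers are invented, and this is not the IBM Spectrum Protect blueprint formula:

```python
# Rough sizing: how much object storage does a backup repository need?
front_end_tb = 50          # amount of data being protected
daily_change_rate = 0.03   # assume 3% of the data changes each day
retention_days = 30
dedup_compress_ratio = 3.0 # assumed data reduction in the backup repository

full_copy_tb = front_end_tb / dedup_compress_ratio
incrementals_tb = front_end_tb * daily_change_rate * retention_days / dedup_compress_ratio
print(f"Approximate repository size: {full_copy_tb + incrementals_tb:.1f} TB")
```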
When should we consider using a Co-location facility instead of Public Cloud?
There are pros and cons for each, and there are also options in between. IBM Cloud offers "Dedicated" bare-metal servers, giving you complete control, but still publicly accessible. IBM Cloud also offers "Private" bare-metal behind your firewall.
Can IBM COS Vault Mirroring employ the Aspera file transfer protocol?
Andy and I were stumped on this one. Andy agreed to get back to the attendee on this.
Can IBM COS support Spark Analytic workloads?
Yes, IBM COS supports [Stocator], a high performing connector to object storage for Apache Spark, achieving performance by leveraging object storage semantics.
We are currently using IBM FileNet with Centera; can the data be moved to IBM COS?
Absolutely. Not only does FileNet offer utilities to make this possible, but IBM has also partnered with service providers that offer fast data movement from Centera to IBM COS for a variety of specific applications, including FileNet.
What are the key selling points over Dell EMC Centera?
Basically, IBM COS is more scalable and performs faster, at a lower Total Cost of Ownership (TCO). Contact your local storage seller or IBM Business Partner for a full presentation!
Is there a database to quickly search through all of the valuable metadata stored in IBM COS and other repositories?
Great idea! I will pass this on to development.
How does IBM COS protect against disk failure or data corruption?
IBM COS slowly rolls through all the data, checking its integrity. Checksums are validated, and any corrupted or missing slices are reconstructed using Erasure Coding (a simplified sketch of the idea follows).
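The sketch below is my own greatly simplified illustration of an integrity scrub, not IBM COS code. Real IBM COS spreads many erasure-coded slices across nodes; here a single XOR parity slice stands in for the general idea of validating checksums and rebuilding a bad slice:

```python
import hashlib

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# Three data slices plus one parity slice (parity = XOR of all data slices).
slices = [b"AAAA", b"BBBB", b"CCCC"]
parity = b"\x00" * 4
for s in slices:
    parity = xor_bytes(parity, s)
checksums = [hashlib.sha256(s).hexdigest() for s in slices]

# Scrub: validate each slice's checksum; rebuild any slice that fails.
slices[1] = b"BBXX"   # simulate silent corruption of one slice
for i, s in enumerate(slices):
    if hashlib.sha256(s).hexdigest() != checksums[i]:
        rebuilt = parity
        for j, other in enumerate(slices):
            if j != i:
                rebuilt = xor_bytes(rebuilt, other)
        slices[i] = rebuilt
        print(f"Slice {i} failed its checksum and was reconstructed: {slices[i]}")
```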
Who competes with IBM COS?
IBM ranks #1 in Object Storage. There are several competitors. Established storage companies like Dell EMC, NetApp, Hitachi Vantara, and Hewlett Packard Enterprise (HPE) Simplivity have object storage offerings. Companies have also attempted to build object stores from open source code, such as OpenStack Swift and Ceph.
How can IBM Business Partners demonstrate IBM COS to clients?
The IBM Systems Client Experience Portal [ISCEP] provides a list of demos available for all IBM storage products, including IBM COS.
Event Night: Universal Studios
IBM rented out a section of Universal Studios! We were welcomed by various characters to take pictures with! Here are Scooby Doo and Shaggy!
There were several rides to choose from. While waiting for the "Race through New York with Jimmy Fallon" ride, we were entertained by five a cappella singers performing popular rap songs. Laura poses for a picture with two Minions.
The other two rides were "Transformers" and "The Mummy". After the Transformers ride, we could take pictures with Bumblebee, one of the Autobots. I posed with Ahmanet, the mummy played by Sofia Boutella in the 2017 movie.
There was also plenty of food, representing Italian, Mexican, Greek, Chinese and American fare.
You can follow along with Twitter hashtag #IBMtechU, or follow me at @az990tony.
This week, I am in Orlando, Florida for the [IBM Technical University], with focus on IBM storage, IBM Z mainframes and IBM Power servers. This is my recap for Day 3 breakout sessions.
VersaStack for Containers: IBM Cloud Private and Spectrum Access
Chris Vollmar, IBM Canada, presented all day today. In this session, he explained how "Spectrum Access" was not a product, but rather a blueprint of best practices on how to install the IBM Cloud Private and Spectrum Connect software on the VersaStack solution.
Leveraging IBM Cloud Object Storage for z/OS
Louis Hanna, IBM Z Software, presented this session. I was expecting it to either cover the DS8000 Transparent Cloud Tiering, or direct access to IBM Cloud Object Storage, but it turns out neither!
Instead, Louis talked about [IBM Cloud Tape Connector for z/OS], which mimics tape drive interfaces that can be used to move disk and tape data to a public cloud, or to IBM Cloud Object Storage on premises.
Information Lifecycle Management: Why Archive is Different than Backup
Can you believe there are still companies out there keeping backup tapes for seven years and pretending that this meets their long-term retention requirements?
What happens when you try to recover those tapes, and you need the right server, the right operating system, and the right application software to make sense of it all?
Backups should not be used in this manner. Rather, backups are meant to recover from recent hardware failure or data corruption only. If you are keeping backups longer than 90 days, you are probably doing something wrong.
Archiving, on the other hand, is an intelligent process for managing inactive or infrequently accessed data, that still has value, while providing the ability to preserve, search and retrieve the information during a specified retention period.
However, some of the product names have changed, so I thought it would be good to do a fresh update on this topic for this conference.
Becoming the person you published on LinkedIn
Frank Degilio, IBM Distinguished Engineer, presented this "Career Development" session. IBM Technical University is not just for technical education, it also offers sessions of general interest to help round out personal skills.
Frank explained that to rise through the corporate ranks, you need to learn to communicate, to collaborate, and to network with others. Technical workers should be "T-shaped", with the top part of the letter "T" representing broad, general skills, and the lower part representing deep technical skills in a specific area.
You can follow along with Twitter hashtag #IBMtechU, or follow me at @az990tony.
This week, I am in Orlando, Florida for the [IBM Technical University], with focus on IBM storage, IBM Z mainframes and IBM Power servers. Here is my recap for morning break out sessions on Day 2.
A Survey of Deep Learning Techniques
Nin Lei, IBM Distinguished Engineer, presented a sample of Deep Learning techniques used today: CNN, RNN, and GAN.
Basic decision making works like this: gather data, have it reviewed by a subject matter expert (SME), and arrive at an outcome. This is done for a variety of situations: fraudulent vs. legitimate credit card transaction, approve or reject a loan application, tumor is benign or malignant. Machine Learning effectively replaces the SME with a mathematical function.
Various tools are available for this: TensorFlow, SnapML, SAS, and SPSS are just a few.
Deep Learning is based on "Neural Networks", a subset of Machine Learning. There is an input layer, one or more hidden layers, and then an output layer. For example, for a photo, each pixel could be an input feature. A 200x200 pixel photo represents 40,000 input values. In the past, there weren't more than three hidden layers. Today, we can have 20 to 50 layers, because we now have more computational power, achieving 95-97 percent accuracy.
For each connection between the input, hidden, and output layers, you identify weights and biases. A 1989 research paper by Hornik posits that a sufficiently large neural network can learn to approximate essentially any function.
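To make the "weights and biases" concrete, here is a minimal forward pass through one hidden layer in NumPy. This is my own sketch; the layer sizes and random values are arbitrary, just to show where the roughly 40,000 inputs from a 200x200 grayscale photo would go:

```python
import numpy as np

rng = np.random.default_rng(0)

inputs = rng.random(200 * 200)            # one 200x200 grayscale photo = 40,000 input values
W1, b1 = rng.standard_normal((128, 40000)) * 0.01, np.zeros(128)   # hidden layer weights and biases
W2, b2 = rng.standard_normal((10, 128)) * 0.01, np.zeros(10)       # output layer (10 classes)

hidden = np.maximum(0, W1 @ inputs + b1)                 # ReLU activation
logits = W2 @ hidden + b2
probabilities = np.exp(logits) / np.exp(logits).sum()    # softmax over the 10 classes
print(probabilities.round(3))
```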
A Convolutional Neural Network (CNN) is often used for image recognition, for object classification or detection.
Some features are invariant. Location invariant means it doesn't matter where it is located within the photo. Color invariant means it does not matter what color it is, and can work with black-and-white or grayscale photos.
For example, for facial recognition, earlier layers are focused on identifying edges, and later layers identify facial features like eyes, nose and mouth.
Image recognition is used with self-driving cars, drones to determine power line maintenance or crop inspection, social media, video surveillance, medical image diagnosis, car racing, and ripeness of fruits and vegetables.
CNNs are also used for auto-encoding: taking detailed photos, compressing them, and then decoding them back to something similar. It can take weeks to train a model with a million photo images.
A Recurrent Neural Network (RNN) is focused on time sequences.
This is useful for predicting sequences of letters or words. However, since a long sequence of multiplications is involved, the result will tend toward either zero or infinity; this is known as the "vanishing gradient problem".
The solution is "Long Short Term Memory" (LSTM) cells. Basically, the model selectively remembers information from previous steps, which reduces the number of multiplications.
RNNs need to know which words are related, for example: men-women, king-queen, walking-walked, swimming-swam, Spain-Madrid. These relationships are referred to as "embeddings", which are stored in the hidden layers for quick lookup.
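A minimal sketch of an embedding layer feeding LSTM cells, using the Keras API (this assumes TensorFlow is installed; the vocabulary size, dimensions, and the single sigmoid output are arbitrary choices for illustration, not anything from the session):

```python
import numpy as np
import tensorflow as tf

# Words arrive as integer IDs; the Embedding layer maps each ID to a dense vector
# (the "embedding"), and the LSTM cells carry selective memory across the sequence.
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(input_dim=10000, output_dim=64),  # 10,000-word vocabulary
    tf.keras.layers.LSTM(128),                                   # long short-term memory cells
    tf.keras.layers.Dense(1, activation="sigmoid"),              # e.g. a sentiment score
])
model.compile(optimizer="adam", loss="binary_crossentropy")

dummy = np.random.randint(0, 10000, size=(2, 20))   # two sequences of 20 word IDs
print(model(dummy).shape)                            # (2, 1)
```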
Generative Adversarial Networks (GAN) are used to generate fake photos to train other models.
Sometimes, you do not have enough photos in each category for training, so you can generate fake images to help with the training system. Noise is fed into a "Generator" model, and then the results are evaluated by a "Discriminator" model, comparing the fake with real photos. Repetition allows each model to improve so that the fake photos become more realistic for training purposes.
The death of the one-size-fits-all cloud: The mainstreaming of multi-arch
Elise Spence and Drew Thorstensen, IBM Power Systems for Software Defined Cloud Infrastructure, presented this topic. The session was on IBM Cloud Private, and the multiple architectures supported by Docker and Kubernetes.
There are actually six different architectures supported for Docker containers:
While containers are "portable" between systems, the binaries are typically only written for a single architecture, typically Linux-x86 or Windows-x86, and won't run on POWER or IBM Z.
The solution is to create a multi-arch manifest file, and port all the binaries to all of these different architectures. This way, when the containerized application is run on POWER, the manifest will identify the POWER-based binaries.
Introduction to IBM Cloud Object Storage (powered by Cleversafe)
Before 2015, IBM offered two "Object Storage" products: IBM Spectrum Scale and IBM Spectrum Archive, and I was constantly having to compare and contrast IBM products to Cleversafe.
Not any more! With the IBM acquisition of Cleversafe, IBM now offers all three!
This session explained all of the features and functions of IBM Cloud Object Storage System, available as software, as pre-built systems, including a VersaStack CVD, and as Storage-as-a-Service (STaaS) in the IBM Cloud.
(IBM renamed Cleversafe DSnet to "IBM Cloud Object Storage System". I joked that if IBM ever acquired Coca-Cola, they would probably rename their signature soft drink as the "Brown Carbonated Sugar Liquid", or BroCarb SugarLiq for short!)
I provided a general overview, as well as the latest features of Concentrated Dispersal Mode and Compliance Enabled Vaults.
You can follow along with Twitter hashtag #IBMtechU, or follow me at @az990tony.