“In times of universal deceit, telling the truth will be a revolutionary act.”
-- George Orwell
Well, it has been over two years since I first covered IBM's acquisition of the XIV company. Amazingly, I still see a lot of misperceptions out in the blogosphere, especially those regarding double drive failures for the XIV storage system. Despite various attempts to [explain XIV resiliency] and to [dispel the rumors], there are still competitors making stuff up, putting fear, uncertainty and doubt into the minds of prospective XIV clients.
Clients love the IBM XIV storage system! In this economy, companies are not stupid. Before buying any enterprise-class disk system, they ask the tough questions, run evaluation tests, and all the other due diligence often referred to as "kicking the tires". Here is what some IBM clients have said about their XIV systems:
“3-5 minutes vs. 8-10 hours rebuild time...”
-- satisfied XIV client
“...we tested an entire module failure - all data is re-distributed in under 6 hours...only 3-5% performance degradation during rebuild...”
-- excited XIV client
“Not only did XIV meet our expectations, it greatly exceeded them...”
In this blog post, I hope to set the record straight. It is not my intent to embarrass anyone in particular, so I will instead focus on a fact-based approach.
Fact: IBM has sold THOUSANDS of XIV systems
XIV is "proven" technology with thousands of XIV systems in company data centers. And by systems, I mean full disk systems with 6 to 15 modules in a single rack, twelve drives per module. That equates to hundreds of thousands of disk drives in production TODAY, comparable to the number of disk drives studied by [Google], and [Carnegie Mellon University] that I discussed in my blog post [Fleet Cars and Skin Cells].
Fact: To date, no customer has lost data as a result of a Double Drive Failure on XIV storage system
This has always been true, both when XIV was a stand-alone company and since the IBM acquisition two years ago. When examining the resilience of an array to any single or multiple component failures, it is important to understand the architecture and design of the system, and not assume all systems are alike. At its core, XIV is a grid-based storage system. IBM XIV does not use traditional RAID-5 or RAID-10 methods; instead, data is distributed across loosely connected data modules which act as independent building blocks. XIV divides each LUN into 1MB "chunks", and stores two copies of each chunk on separate drives in separate modules. We call this "RAID-X".
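To make the distribution scheme concrete, here is a minimal sketch of the idea in Python. This is my own illustration of pseudo-random mirrored placement under the constraints described above (1MB chunks, two copies, separate modules), not XIV's actual placement algorithm:

```python
import random

MODULES = 15            # fully-configured XIV rack
DRIVES_PER_MODULE = 12  # 180 drives total
CHUNK_MB = 1

def place_chunks(lun_size_mb, seed=42):
    """Assign each 1MB chunk two copies on drives in different modules."""
    rng = random.Random(seed)
    drives = [(m, d) for m in range(MODULES) for d in range(DRIVES_PER_MODULE)]
    placement = {}
    for chunk in range(lun_size_mb // CHUNK_MB):
        primary = rng.choice(drives)
        # The mirror copy must land in a different module than the primary copy
        mirror = rng.choice([dr for dr in drives if dr[0] != primary[0]])
        placement[chunk] = (primary, mirror)
    return placement

# A 1GB LUN becomes 1024 chunk pairs spread across all 180 drives
layout = place_chunks(1024)
print(layout[0])  # ((module, drive), (module, drive)) for chunk 0
```

Because every drive ends up holding chunks whose mirrors are scattered across the other modules, the loss of any one drive can be re-protected by all surviving drives working in parallel.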
Spreading all the data across many drives is not unique to XIV. Many disk systems, including the EMC CLARiiON and V-Max, HP EVA, and Hitachi Data Systems (HDS) USP-V, allow customers to get XIV-like performance by spreading LUNs across multiple RAID ranks. This is known in the industry as "wide-striping". Some vendors use the terms "metavolumes" or "extent pools" to refer to their implementations of wide-striping. Clients have coined their own phrases, such as "stripes across stripes", "plaid stripes", or "RAID 500". An XIV is highly unlikely to experience a double drive failure that ultimately requires recovery of files or LUNs, and it is substantially less vulnerable to data loss than an EVA, USP-V or V-Max configured in RAID-5. Fellow blogger Keith Stevenson (IBM) compared XIV's RAID-X design to other forms of RAID in his post [RAID in the 21st Century].
Fact: IBM XIV is designed to minimize the likelihood and impact of a double drive failure
The independent failure of two drives is a rare occurrence. More data has been lost from hash collisions on EMC Centera than from double drive failures on XIV, and hash collisions are also very rare. While the published worst-case time to re-protect after a 1TB drive failure on a fully-configured XIV is 30 minutes, field experience shows XIV regaining full redundancy in 12 minutes on average. That window is roughly 40 times shorter than the typical 8-10 hours a RAID-5 configuration is exposed during a rebuild, making a second failure within it correspondingly less likely.
A lot of bad things can happen in those 8-10 hours of traditional RAID rebuild. Performance can be seriously degraded. Other components may be affected, as they share cache, are connected to the same backplane or bus, or are co-dependent in some other manner. An engineer supporting the customer onsite during a RAID-5 rebuild might pull the wrong drive, thereby causing the very double drive failure they were hoping to avoid. Having IBM XIV rebuild in only a few minutes addresses this "human factor".
In his post [XIV drive management], fellow blogger Jim Kelly (IBM) covers a variety of reasons why storage admins feel double drive failures are more than just random chance. XIV avoids the load stress normally associated with traditional RAID rebuild by evenly spreading the workload across all drives, known in the industry as "wear-leveling". When the first drive fails, the recovery is spread across the remaining 179 drives, so each drive only processes about 1 percent of the data. The [Ultrastar A7K1000] 1TB SATA disk drives that IBM uses from HGST have a specified mean time between failures [MTBF] of 1.2 million hours, which works out to about one drive failure every nine months in a 180-drive XIV system. However, field experience shows that an XIV system will experience, on average, one drive failure per 13 months, comparable to what companies experience with more robust Fibre Channel drives. That's innovative XIV wear-leveling at work!
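For those who like to check the math, the nine-month figure follows directly from the drive count and the specified MTBF:

```python
MTBF_HOURS = 1_200_000   # specified MTBF of the Ultrastar A7K1000
DRIVES = 180             # drives in a fully-configured XIV

# With 180 drives running concurrently, the expected time between
# drive failures anywhere in the array is the MTBF divided by drive count.
hours = MTBF_HOURS / DRIVES
print(f"{hours:.0f} hours = {hours / (24 * 30):.1f} months between failures")
# -> 6667 hours, about 9.3 months
```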
Fact: In the highly unlikely event that a double drive failure (DDF) were to occur, you will have full read/write access to nearly all of your data on the XIV, all but a few GB.
Even though it has NEVER happened in the field, some clients and prospects are curious what a double drive failure on an XIV would look like. First, a critical alert message would be sent to both the client and IBM, and a "union list" would be generated, identifying all the chunks the two drives have in common. The worst case on a 15-module XIV fully loaded with 79TB of data is approximately 9000 chunks, or 9GB of data. The remaining 78.991TB of unaffected data are fully accessible for read or write. Any I/O request for a chunk on the "union list" will not receive a response, so there is no way for host applications to access outdated information or cause any corruption.
(One blogger compared losing data on the XIV to drilling a hole through the phone book. Mathematically, the drill bit would be only 1/16th of an inch, or 1.6 millimeters for you folks outside the USA: enough to knock out perhaps one character from a name or phone number on each page. If you have ever seen an actor in the movies look up a phone number in a telephone booth then yank out a page from the phone book, the XIV equivalent would be cutting out 1/8th of a page from an 1100-page phone book. In both cases, all of the rest of the unaffected information is fully accessible, and it is easy to identify which information is missing.)
If the second drive fails several minutes after the first, the process of restoring full redundancy is already well under way. This means the union list is considerably shorter or completely empty, and substantially fewer chunks are impacted. Contrast this with RAID-5, where being 99 percent complete on the rebuild when the second drive fails is just as catastrophic as having both drives fail simultaneously.
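For the curious, the size of the union list can be roughly estimated from the numbers above. This is my own back-of-envelope reasoning, not an official XIV formula: a failed drive's chunks are mirrored roughly evenly across the drives of the other fourteen modules, so the expected overlap between any two failed drives is about the data on one drive divided by the number of eligible mirror partners.

```python
USABLE_TB = 79             # fully-loaded 15-module XIV
DRIVES = 180
DRIVES_PER_MODULE = 12

# Each usable TB is stored twice; raw 1MB chunks held per drive:
chunks_per_drive = (USABLE_TB * 2 * 1_000_000) / DRIVES

# A drive's mirror copies live on drives in the other 14 modules:
partners = DRIVES - DRIVES_PER_MODULE

shared = chunks_per_drive / partners
print(f"~{shared:,.0f} chunks (~{shared / 1000:.1f} GB) in common")
# -> roughly 5,200 chunks, in line with the ~9,000-chunk worst case above
```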
Fact: After a DDF event, the files on these few GB can be identified for recovery.
Once IBM receives notification of a critical event, an IBM engineer immediately connects to the XIV using the remote support facility. There is no need to send someone physically onsite; the repair actions can be done remotely. The IBM engineer has tools from HGST to recover, in most cases, all of the data.
Any "union" chunk that the HGST tools are unable to recover will be set to "media error" mode. The IBM engineer can provide the client a list of the XIV LUNs and LBAs that are on the "media error" list. From this list, the client can determine which hosts these LUNs are attached to, and run file scan utility to the file systems that these LUNs represent. Files that get a media error during this scan will be listed as needing recovery. A chunk could contain several small files, or the chunk could be just part of a large file. To minimize time, the scans and recoveries can all be prioritized and performed in parallel across host systems zoned to these LUNs.
As with any file or volume recovery, keep in mind that these might be part of a larger consistency group, and that your recovery procedures should make sense for the applications involved. In any case, you are probably going to be up and running in less time with XIV than recovery from a RAID-5 double failure would take, and certainly nowhere near the "beyond repair" scenario that other vendors might have you believe.
Fact: This does not mean you can eliminate all Disaster Recovery planning!
To put this in perspective, you are more likely to lose XIV data from an earthquake, hurricane, fire or flood than from a double drive failure. As with any unlikely disaster, it is better to have a disaster recovery plan than to hope it never happens. All disk systems that sit on a single data center floor are vulnerable to such disasters.
For mission-critical applications, IBM recommends using disk mirroring capability. IBM XIV storage system offers synchronous and asynchronous mirroring natively, both included at no additional charge.
Have you ever noticed that sometimes two movies come out that seem eerily similar to each other, released by different studios within months or weeks of each other? My sister used to review film scripts for a living; she would read ten of them and have to pick her top three favorites, and she tells me that scripts for nearly identical concepts came in all the time. Here are a few of my favorite examples:
1994: [Wyatt Earp] and [Tombstone] were Westerns recounting the famed gunfight at the O.K. Corral. Tombstone, Arizona is near Tucson, and the gunfight is recreated fairly often for tourists.
1998: [Armageddon] and [Deep Impact] were a pair of disaster movies dealing with a large rock heading to destroy all life on earth. I was in Mazatlan, Mexico to see the latter, dubbed in Spanish as "Impacto Profundo".
1998: [A Bug's Life] and [Antz] were computer-animated tales of the struggle of one individual ant in an ant colony.
2000: [Mission to Mars] and [Red Planet] were sci-fi pics exploring what a manned mission to our neighboring planet might entail.
This is different from copy-cat movies that are re-made or re-imagined many years later based on the previous successes of an original. Ever since my blog post [VPLEX: EMC's Latest Wheel is Round] in 2010, comparing EMC's copy-cat product that came out seven years after IBM's SAN Volume Controller (SVC), I've noticed EMC doesn't talk about VPLEX that much anymore.
This week, IBM announced [XIV Gen3 Solid-State Drive support] and our friends over at EMC announced [VFCache SSD-based PCIe cards]. Neither of these should be a surprise to anyone who follows the IT industry, as IBM had announced its XIV Gen3 as "SSD-Ready" last year specifically for this purpose, and EMC has been touting its "Project Lightning" since last May.
Fellow blogger Chuck Hollis from EMC has a blog post [VFCache means Very Fast Cache indeed] that provides additional detail. Chuck claims the VFCache is faster than popular [Fusion-IO PCIe cards] available for IBM servers. I haven't seen the performance spec sheets, but typically SSD is four to five times slower than the DRAM cache used in the XIV Gen3. The VFCache's SSD is probably similar in performance to the SSD supported in the IBM XIV Gen3, DS8000, DS5000, SVC, N series, and Storwize V7000 disk systems.
Nonetheless, I've been asked my opinions on the comparison between these two announcements, as they both deal with improving application performance through the use of Solid-State Drives as an added layer of read cache.
(FTC Disclosure: I am both a full-time employee and stockholder of the IBM Corporation. The U.S. Federal Trade Commission may consider this blog post as a paid celebrity endorsement of IBM servers and storage systems. This blog post is based on my interpretation and opinions of publicly-available information, as I have no hands-on access to any of these third-party PCIe cards. I have no financial interest in EMC, Fusion-IO, Texas Memory Systems, or any other third party vendor of PCIe cards designed to fit inside IBM servers, and I have not been paid by anyone to mention their name, brands or products on this blog post.)
The solutions differ in that for the IBM XIV Gen3 the SSD is "storage-side", inside the external storage device, while the EMC VFCache is "server-side", a PCI Express [PCIe] card. Aside from that, both implement SSD as an additional read cache layer in front of spinning disk to boost performance. Neither is an industry first, as IBM has offered server-side SSD since 2007, and IBM and EMC have offered storage-side SSD in many of their other external storage devices. The use of SSD as read cache has already been available in the IBM N series using [Performance Accelerator Module (PAM)] cards.
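Conceptually, both products add an SSD tier between DRAM cache and spinning disk. Here is a simplified sketch of a two-level LRU read cache to illustrate the general technique; it is my own toy model, not either vendor's implementation:

```python
from collections import OrderedDict

class ReadCacheTier:
    """Simple LRU read cache tier sitting in front of a slower backing tier."""
    def __init__(self, capacity, backing):
        self.capacity = capacity
        self.backing = backing          # next tier down (or the disk itself)
        self.cache = OrderedDict()      # block -> data, in LRU order

    def read(self, block):
        if block in self.cache:
            self.cache.move_to_end(block)      # hit: refresh LRU position
            return self.cache[block]
        data = self.backing.read(block)        # miss: fetch from the tier below
        self.cache[block] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)     # evict least-recently-used block
        return data

class Disk:
    def read(self, block):
        return f"data-{block}"                 # stand-in for a slow disk read

ssd = ReadCacheTier(capacity=10_000, backing=Disk())  # large but slower than DRAM
dram = ReadCacheTier(capacity=100, backing=ssd)       # small but fastest
print(dram.read(42))   # miss in DRAM and SSD, filled from disk on first access
```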
IBM has offered cooperative caching synergy between its servers and its storage arrays for some time now. The predecessors to today's POWER7-based systems were the iSeries i5 servers, which used PCI-X IOP cards with cache to connect i5/OS applications to IBM's external disk and tape systems. To compete in this space, EMC created their own PCI-X cards to attach their own disk systems. In 2006, IBM did the right thing for our clients and fostered competition by entering into a [Landmark agreement] with EMC to [license the i5 interfaces]. Today, VIOS on IBM POWER systems allows a much broader choice of disk options for IBM i clients, including the IBM SVC, Storwize V7000 and XIV storage systems.
Can a little SSD really help performance? Yes! An IBM client running a [DB2 Universal Database] cluster across eight System x servers was able to replace an 800-drive EMC Symmetrix by putting eight SSD Fusion-IO cards in each server, for a total of 64 solid-state drives, saving money and improving performance. DB2's Database Partitioning Feature allows multi-system DB2 configurations to use a grid-like architecture, similar to how XIV is designed. Most IBM System x and BladeCenter servers support internal SSD storage options, and many offer PCIe slots for third-party SSD cards. Sadly, you can't do this with VFCache: only one VFCache card is supported in each server, the data on it is unprotected, and it is intended only for ephemeral data like transaction logs or other temporary data. With multiple Fusion-IO cards in an IBM server, you can configure a RAID rank across the SSD and use it for persistent storage like DB2 databases.
Here then is my side-by-side comparison:
Servers supported
  EMC VFCache: Selected x86-based models of Cisco UCS, Dell PowerEdge, HP ProLiant DL, and IBM xSeries and System x servers.
  IBM XIV Gen3 SSD Caching: All of these, plus any other blade or rack-optimized server currently supported by XIV Gen3, including Oracle SPARC, HP Itanium, IBM POWER systems, and even IBM System z mainframes running Linux.

Operating System support
  EMC VFCache: Linux RHEL 5.6 and 5.7, VMware vSphere 4.1 and 5.0, and Windows 2008 x64 and R2.
  IBM XIV Gen3 SSD Caching: All of these, plus all the other operating systems supported by XIV Gen3, including AIX, IBM i, Solaris, HP-UX, and Mac OS X.

Protocol support
  EMC VFCache: FCP.
  IBM XIV Gen3 SSD Caching: FCP and iSCSI.

Vendor-supplied driver required on the server
  EMC VFCache: Yes, the VFCache driver must be installed to use this feature.
  IBM XIV Gen3 SSD Caching: No, IBM XIV Gen3 uses native OS-based multi-pathing drivers.

External disk storage systems required
  EMC VFCache: None. It appears the VFCache has no direct interaction with the back-end disk array, so in theory the benefits are the same whether you use this VFCache card in front of EMC storage or IBM storage.
  IBM XIV Gen3 SSD Caching: XIV Gen3 is required, as the SSD slots are not available on older models of IBM XIV.

Shared disk support
  EMC VFCache: No, VFCache has to be disabled and removed for vMotion to take place.
  IBM XIV Gen3 SSD Caching: Yes! XIV Gen3 SSD caching supports shared disks for VMware vMotion and Live Partition Mobility.

Support for multiple servers
  EMC VFCache: No.
  IBM XIV Gen3 SSD Caching: Yes. An advantage of the XIV Gen3 SSD caching approach is that the cache can be dynamically allocated to the busiest data from any server or servers.

Support for active/active server clusters
  EMC VFCache: No.
  IBM XIV Gen3 SSD Caching: Yes!

Aware of changes made to back-end disk
  EMC VFCache: No. It appears the VFCache has no direct interaction with the back-end disk array, so any changes to the data on the box itself are not communicated back to the VFCache card to invalidate the cache contents.
  IBM XIV Gen3 SSD Caching: Yes!

Sequential-access detection
  EMC VFCache: None identified. However, VFCache only caches blocks 64KB or smaller, so any sequential processing with larger blocks will bypass the VFCache.
  IBM XIV Gen3 SSD Caching: Yes! XIV algorithms detect sequential access and avoid polluting the SSD with these blocks of data.

Number of SSD supported
  EMC VFCache: One, which seems odd as IBM supports multiple Fusion-IO cards for its servers. However, this is not really a single point of failure (SPOF), as an application experiencing a VFCache failure merely drops down to external disk array speed; no data is lost since it is only read cache.
  IBM XIV Gen3 SSD Caching: 6 to 15 (one per XIV module) for high availability.

Pin data in SSD cache
  EMC VFCache: Yes. Using split-card mode, you can designate a portion of the 300GB to serve as direct-attached storage (DAS). All data written to the DAS portion will be kept in SSD. However, since only one card is supported per server and the data is unprotected, this should only be used for ephemeral data like logs and temp files.
  IBM XIV Gen3 SSD Caching: No, there is no option to designate an XIV Gen3 volume to be SSD-only. Consider using a Fusion-IO PCIe card as a DAS alternative, or another IBM storage system for that requirement.

Hot-pluggable/Hot-swappable
  EMC VFCache: Not identified.
  IBM XIV Gen3 SSD Caching: Yes!

Pre-sales estimating tools
  EMC VFCache: None identified.
  IBM XIV Gen3 SSD Caching: Yes! CDF and Disk Magic tools are available to help cost-justify the purchase of SSD based on workload performance analysis.
IBM has the advantage that it designs and manufactures both servers and storage, and can design optimal solutions for our clients in that regard.
Here I am, day 11 of a 17-day business trip, on my last leg of the trip this week, in Kuala Lumpur in Malaysia. I have been flooded with requests to give my take on EMC's latest re-interpretation of storage virtualization, VPLEX.
I'll leave it to my fellow IBM Master Inventor Barry Whyte to cover the detailed technical side-by-side comparison. Instead, I will focus on the business side of things, using Simon Sinek's Why-How-What sequence. Here is a [TED video] from Garr Reynolds' post [The importance of starting from Why].
Let's start with the problem we are trying to solve.
Problem: migration from old gear to new gear, old technology to new technology, from one vendor to another vendor, is disruptive, time-consuming and painful.
Given that IT storage is typically replaced every 3-5 years, pretty much every company with an internal IT department has this problem, the exceptions being those companies that don't last that long, and those that use public cloud solutions. IT storage can be expensive, so companies would like their new purchases to be fully utilized on day 1, and completely empty on day 1500 when the lease expires. I have spoken to clients who have spent 6-9 months planning for the replacement or removal of a storage array.
A solution that makes data migration non-disruptive would benefit the clients (making it easier for their IT staff to keep their data centers modern and current) as well as the vendors (reducing the obstacles to selling and deploying new features and functions). Storage virtualization can be employed to help solve this problem. I define virtualization as "technology that makes one set of resources look and feel like a different set of resources, preferably with more desirable characteristics." By making different storage resources, old and new, look and feel like a single type of resource, migration can be performed without disrupting applications.
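In code terms, an in-band virtualizer is essentially an indirection table: hosts address virtual volumes, the virtualizer maps each virtual extent to whichever physical array currently holds it, and migration is a copy plus a map update that the host never sees. A minimal sketch of the concept (my own illustration, not SVC's actual internals):

```python
class Array:
    """Stand-in for a physical disk array."""
    def __init__(self):
        self.blocks = []
    def write(self, data):
        self.blocks.append(data)
        return len(self.blocks) - 1      # physical extent number
    def read(self, extent):
        return self.blocks[extent]

class Virtualizer:
    """Maps (virtual volume, extent) -> (physical array, extent)."""
    def __init__(self):
        self.map = {}
    def read(self, vvol, extent):
        array, phys = self.map[(vvol, extent)]
        return array.read(phys)
    def migrate(self, vvol, extent, new_array):
        """Copy one extent to a new array; hosts keep the same virtual address."""
        old_array, phys = self.map[(vvol, extent)]
        new_phys = new_array.write(old_array.read(phys))
        self.map[(vvol, extent)] = (new_array, new_phys)  # switch the pointer

old, new = Array(), Array()
v = Virtualizer()
v.map[("vol1", 0)] = (old, old.write("payload"))
v.migrate("vol1", 0, new)      # non-disruptive: virtual address unchanged
print(v.read("vol1", 0))       # -> "payload", now served from the new array
```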
Before VPLEX, here is a breakdown of each solution:
Why?
  IBM: Non-disruptive tech refresh, and a unified platform to provide management and functionality across heterogeneous storage.
  HDS: Non-disruptive tech refresh, and a unified platform to provide management and functionality between internal tier-1 HDS storage and external tier-2 heterogeneous storage.
  EMC: Non-disruptive tech refresh, with a unified multi-pathing driver that allows host attachment of heterogeneous storage.

How?
  IBM: New in-band storage virtualization device.
  HDS: Add in-band storage virtualization to an existing storage array.
  EMC: New out-of-band storage virtualization device with new "smart" SAN switches.

What?
  IBM: SAN Volume Controller
  HDS: USP-V and USP-VM
  EMC: Invista
For IBM, the motivation was clear: protect customers' existing investment in older storage arrays and introduce new IBM storage with a solution that allows both to be managed with a single set of interfaces and provides a common set of functionality, improving capacity utilization and availability. IBM SAN Volume Controller eliminated vendor lock-in, providing clients choice in multi-pathing driver, and allowing any-to-any migration and copy services. For example, IBM SVC can be used to help migrate data from an old HDS USP-V to a new HDS USP-V.
With EMC, however, the motivation appeared to be protecting software revenues from their PowerPath multi-pathing driver and their TimeFinder and SRDF copy services. Back in 2005, when EMC Invista was first announced, these three software products represented 60 percent of EMC's bottom-line profit. (OK, I made that last part up, but you get my point! EMC charges a lot for these.)
Back in 2006, fellow blogger Chuck Hollis (EMC) suggested that SVC was just a [bump in the wire] which could not possibly improve performance of existing disk arrays. IBM showed clients that putting cache (SVC) in front of other cache (back-end devices) does indeed improve performance, in the same way that multi-core processors successfully use L1/L2/L3 cache. Now, EMC is claiming their cache-based VPLEX improves performance of back-end disk. My, how EMC's story has changed!
So now, EMC announces VPLEX, which sports a blend of SVC-like and Invista-like characteristics. Based on blogs, tweets and publicly available materials I found on EMC's website, I have been able to determine the following comparison table. (Of course, VPLEX is not yet generally available, so what is eventually delivered may differ.)
Hardware
  IBM SVC: Scalable, 1 to 4 node-pairs.
  EMC Invista: One size fits all, single pair of CPCs.
  EMC VPLEX: SVC-like, 1 to 4 director-pairs.

SAN Fabric
  IBM SVC: Works with any SAN switches or directors.
  EMC Invista: Required special "smart" switches (vendor lock-in).
  EMC VPLEX: SVC-like, works with any SAN switches or directors.

Multi-pathing driver
  IBM SVC: Broad selection: the IBM Subsystem Device Driver (SDD) offered at no additional charge, as well as OS-native drivers Windows MPIO, AIX MPIO, Solaris MPxIO, HP-UX PV-Links, VMware MPP, Linux DM-MP, and the commercial third-party driver Symantec DMP.
  EMC Invista: Limited selection, with focus on the priced PowerPath driver.
  EMC VPLEX: Invista-like, PowerPath and Windows MPIO.

Cache
  IBM SVC: Read cache, and choice of fast-write or write-through cache, offering the ability to improve performance.
  EMC Invista: No cache; the Split-Path architecture cracked open Fibre Channel packets in flight, delayed every I/O by 20 nanoseconds, and redirected modified packets to the appropriate physical device.
  EMC VPLEX: SVC-like, read and write-through cache, offering the ability to improve performance.

Space-Efficient Point-in-Time copies
  IBM SVC: FlashCopy supports up to 256 space-efficient targets, copies of copies, read-only or writeable, and incremental persistent pairs.
  EMC Invista: No.
  EMC VPLEX: Like Invista, no.

Remote distance mirror
  IBM SVC: Choice of Metro Mirror (synchronous up to 300km) and Global Mirror (asynchronous), or use the functionality of the back-end storage arrays.
  EMC Invista: No native support; use the functionality of back-end storage arrays, or purchase a separate product called EMC RecoverPoint to cover this lack of functionality.
  EMC VPLEX: Limited synchronous remote-distance mirror within VPLEX (up to 100km only), no native asynchronous support; use the functionality of back-end storage arrays.

Thin Provisioning
  IBM SVC: Provides thin provisioning to devices that don't offer this natively.
  EMC Invista: No.
  EMC VPLEX: Like Invista, no.

Campus-wide access
  IBM SVC: Split-Cluster allows concurrent read/write access of data from hosts at two different locations several miles apart.
  EMC Invista: I don't think so.
  EMC VPLEX: VPLEX Metro, similar in concept but implemented differently.

Non-disruptive tech refresh
  IBM SVC: Can upgrade or replace storage arrays, SAN switches, and even the SVC nodes' software AND hardware themselves, non-disruptively.
  EMC Invista: Tech refresh for storage arrays, but not for Invista CPCs.
  EMC VPLEX: Tech refresh of back-end devices, and upgrade of VPLEX software, non-disruptively. Not clear if VPLEX engines themselves can be upgraded non-disruptively like the SVC.

Heterogeneous Storage Support
  IBM SVC: Broad support of over 140 different storage models from all major vendors, including all CLARiiON, Symmetrix and VMAX from EMC, and storage from many smaller startups you may not have heard of.
  EMC Invista: Limited support.
  EMC VPLEX: Invista-like. VPLEX claims to support a variety of arrays from a variety of vendors, but as far as I can find, only the DS8000 is supported from the list of IBM devices. Fellow blogger Barry Burke (EMC) suggests [putting SVC between VPLEX and third party storage devices] to get the heterogeneous coverage most companies demand.

Back-end storage requirement
  IBM SVC: Must define quorum disks on any IBM or non-IBM back-end storage array. SVC can run entirely on non-IBM storage arrays.
  EMC Invista: None.
  EMC VPLEX: HP SVSP-like, requires at least one EMC storage array to hold metadata.

Internal storage
  IBM SVC: The 2145-CF8 model supports up to four solid-state drives (SSD) per node that can be treated as managed disk to store end-user data.
  EMC Invista: None.
  EMC VPLEX: Invista-like. VPLEX has an internal 30GB SSD, but this is used only for the operating system and logs, not for end-user data.
In-band virtualization solutions from IBM and HDS dominate the market. Being able to migrate data from old devices to new ones non-disruptively turned out to be only the [tip of the iceberg] of benefits from storage virtualization. In today's highly virtualized server environment, being able to non-disruptively migrate data comes in handy all the time. SVC is one of the best storage solutions for VMware, Hyper-V, Xen and PowerVM environments. EMC watched and learned in the shadows, taking notes on what people like about the SVC, and decided to follow IBM's time-tested leadership to provide a similar offering.
EMC re-invented the wheel, and it is round. On a scale from Invista (zero) to SVC (ten), I give EMC's new VPLEX a six.
The technology industry is full of trade-offs. Take for example solar cells that convert sunlight to electricity. Every hour, more energy hits the Earth in the form of sunlight than the entire planet consumes in an entire year. The general trade-off is between energy conversion efficiency versus abundance of materials:
Get 9-11 percent efficiency using rare materials like indium (In), gallium (Ga) or cadmium (Cd).
Get only 6.7 percent efficiency using abundant materials like copper (Cu), tin (Sn), zinc (Zn), sulfur (S), and selenium (Se)
A second trade-off is exemplified by EMC's recent GeoProtect announcement. This appears similar to the geographic dispersal method introduced by a company called [CleverSafe]. The trade-off is between the amount of space to store one or more copies of data and the protection of data in the event of disaster. Here's an excerpt from fellow blogger Chuck Hollis (EMC) titled ["Cloud Storage Evolves"]:
"Imagine a average-sized Atmos network of 9 nodes, all in different time zones around the world. And imagine that we were using, say, a 6+3 protection scheme.
The implication is clear: any 3 nodes could be completely lost: failed, destroyed, seized by the government, etc.
-- and the information could be completely recovered from the surviving nodes."
For organizations worried about their information falling into the wrong hands (whether criminal or government sponsored!), any subset of the nodes would yield nothing of value -- not only would the information be presumably encrypted, but only a few slices of a far bigger picture would be lost.
Seized by the government? Falling into the wrong hands? Is EMC positioning Atmos as "Storage for Terrorists"? I can certainly appreciate the value of being able to protect 6PB of data with only 9PB of storage capacity, instead of keeping two copies of 6PB each. However, the trade-off means that you will be accessing the majority of your data across your intranet, which could impact performance. And if you are in an illicit or illegal business that could have a third of your facilities "seized by the government", then perhaps you shouldn't house your data centers there in the first place. Having two copies of 6PB each, in two "friendly nations", might make more sense.
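Editorializing aside, the capacity trade-off itself is simple arithmetic. A quick sketch comparing a 6+3 dispersal scheme against plain replication, using the 6PB example above:

```python
def overhead(data_pb, data_fragments, parity_fragments):
    """Raw capacity needed and failures survived for a k+m dispersal scheme."""
    raw = data_pb * (data_fragments + parity_fragments) / data_fragments
    return raw, parity_fragments

for label, k, m in [("6+3 dispersal", 6, 3),
                    ("2x replication", 1, 1),
                    ("3x replication", 1, 2)]:
    raw, losses = overhead(6, k, m)
    print(f"{label}: {raw:.0f} PB raw, survives {losses} node/copy loss(es)")
# 6+3 dispersal:  9 PB raw, survives 3 losses
# 2x replication: 12 PB raw, survives 1 loss
# 3x replication: 18 PB raw, survives 2 losses
```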
(In reality, companies often keep way more than just two copies of data. It is not unheard of for companies to keep three to five copies scattered across two or three locations. Facebook keeps SIX copies of photographs you upload to their website.)
ChuckH argues that the governments that seize the three nodes won't have a complete copy of the data. However, merely having pieces of data is enough for governments to capture terrorists. Even if the striping is done at the smallest 512-byte block level, those 512 bytes of data might contain names, phone numbers, email addresses, credit cards or social security numbers. Hackers and computer forensics professionals take advantage of this.
You might ask yourself, "Why not just encrypt the data instead?" That brings me to the third trade-off: protection versus application performance. Over the past 30 years, companies had a choice: they could encrypt and decrypt data as needed, using server CPU cycles, but this would slow down application processing. Every time you wanted to read or update a database record, more cycles would be consumed. This forced companies to be very selective about what data they encrypted: which columns or fields within a database, which email attachments, and which other documents or spreadsheets.
An initial attempt to address this was to introduce an outboard appliance between the server and the storage device. For example, the server would write data in the clear to the appliance; the appliance would encrypt the data and pass it along to the tape drive. When retrieving data, the appliance would read the encrypted data from tape, decrypt it, and pass the data in the clear back to the server. However, this had the unintended consequence of using 2x to 3x more tape cartridges. Why? Because encrypted data does not compress well, so tape drives with built-in compression capabilities were unable to shrink the data onto fewer tapes.
(I covered the importance of compressing data before encryption in my previous blog post [Sock Sock Shoe Shoe].)
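The effect is easy to demonstrate. The sketch below pairs zlib with a toy XOR keystream cipher (for illustration only, emphatically not real cryptography) to show why compressing after encryption recovers almost nothing:

```python
import hashlib
import zlib

def toy_encrypt(data, key=b"secret"):
    """Toy XOR keystream cipher -- for illustration only, NOT real crypto."""
    keystream = bytearray()
    counter = 0
    while len(keystream) < len(data):
        keystream.extend(hashlib.sha256(key + counter.to_bytes(8, "big")).digest())
        counter += 1
    return bytes(a ^ b for a, b in zip(data, keystream))

text = b"name,phone\n" + b"Jane Doe,520-555-0100\n" * 10_000

compress_then_encrypt = toy_encrypt(zlib.compress(text))
encrypt_then_compress = zlib.compress(toy_encrypt(text))

print(len(text))                    # the raw, highly repetitive records
print(len(compress_then_encrypt))   # small: compression found the redundancy first
print(len(encrypt_then_compress))   # nearly as big as the original
```

Once encrypted, the bytes look random and the compressor can no longer find redundancy, which is exactly why the appliance approach burned 2x to 3x more cartridges.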
As with the solar cell trade-off between efficiency and abundant materials, IBM eliminated this trade-off by offering compression and encryption on the tape drive itself. This is standard 256-bit AES encryption implemented on a chip, able to process the data as it arrives at near line speed. So now, instead of having to choose between protecting your data and running your applications with acceptable performance, you can do both: encrypt all of your data without having to be selective. This approach has been extended over to disk drives, so that disk systems like the IBM System Storage DS8000 and DS5000 can support full-disk-encryption [FDE] drives.
Has EMC stooped so low that they have to resort to Hitachi math for their latest performance claims?
Readers might remember that just a few months ago, I had a blog post [Is this what HDS tells our mainframe clients?] pointing out the outlandish comparison Hitachi was using in their presentations. Their response was to cover it up, forcing me to follow up with my post [The Cover-up is worse than the original crime]. To their credit, they eventually removed the false and misleading information from their materials.
Now an avid reader of my blog has brought this to my attention. Apparently, EMC has been showing customers a presentation, [Accelerating Storage Transformation with VMAX and VPLEX], with false and misleading comparison claims about IBM DS8000, HDS VSP and EMC VMAX 40K disk system performance.
(FTC Disclosure: This would be a good time to remind my readers that I work for IBM and own IBM stock. I do not endorse any of the EMC or HDS products mentioned in this post, and have no financial affiliation or investments directly with either EMC nor HDS. I am basing my information solely on the presentation posted on the internet and other sources publicly available, and not on any misrepresentations from EMC speakers at the various conferences where these charts might have been shown.)
The problem with misinformation is that it is not always obvious. The EMC presentation is quite pretty and professional-looking. It is the typical slick, attention-getting, low-content, over-simplified marketing puffery you have come to expect from EMC. There are two slides in particular that I have issue with.
This first graphic implies that IBM and HDS are nearly tied in performance, but that EMC VMAX 40K has nearly triple that bandwidth. Overall the slide has very little detail. That makes it difficult to determine what exactly is being claimed and whether a fair comparison is being made.
The title claims that VMAX 40K is "#1 in High Bandwidth Apps". Only three disk systems are shown so the claim appears to be relative to only the three systems. The wording "High Bandwidth Apps" is confusing considering the cited numbers are for disk systems and no application is identified. By comparison, IBM SONAS can drive up to 105 GB/sec sequential bandwidth, nearly double what EMC claims for its VMAX 40K, so EMC is certainly not even close to #1.
Is the workload random or sequential? That is not easy to determine. The use of "GB/s" along with the large block size of 128KB implies the I/O workload is sequential, which is great for some workloads like high performance computing, technical computing and video broadcasts. Random workloads, on the other hand, are usually measured in I/Os per second (IOPS) with a block size ranging 4KB to 64KB. (I am assuming the 128K blocks refers to 128KB block size, and not reading the same block of cache 128,000 times.)
The slide states "Maximum Sustainable RRH Bandwidth 128K Blocks". The acronym "RRH" is not defined; but I suspect this refers to "random read hits". For random workloads, 100 percent random read hits from cache represents one corner of the infamous "four corners" test. Real-world workloads have a mix of reads and writes, and a mix of cache hits and cache misses. It is also unclear whether the hits are from standard data cache or from internal buffers in adapters (perhaps accessing the same blocks repeatedly) or something else. So is this really for a random workload, or a sequential workload?
(The term "Hitachi Math" was coined by an EMC blogger precisely to slam Hitachi Data Systems for their blatant use of four-corners results, claiming that spouting ridiculously large, but equally unrealistic, 100 percent random read hit results don't provide any useful information. I agree. There are much better industry-standard benchmarks available, such as SPC-1 for random workloads, SPC-2 for sequential workloads, and even benchmarks for specific applications, that represent real-world IT environments. To shame HDS for their use of four-corners results, only for EMC themselves to use similar figures in their own presentation is truly hypocritical of them!)
The IBM system is identified as "DS8000". DS8000 is a generic family name that applies to multiple generations of systems first introduced in 2004. The specific model is not identified, but that is critical information. Is this a first generation DS8100, or the latest DS8800, or something in between?
The slide says "Full System Configs", but that is not defined, and configuration details, which are critical in assessing system performance capabilities, are not specified. If the EMC box costs seven times more than IBM or HDS, would you really buy it to get 3x more performance? Is the EMC packed with the maximum amount of SSD? Were there any SSD in the IBM or HDS boxes to match?
The source of the claimed IBM DS8000 performance numbers is not identified. Did they run their own tests? While I cannot tell, the VMAX may have been configured with 64 Fibre Channel 8Gbps host connections. In that case each channel is theoretically capable of supporting about 800 MB/s at 100% channel utilization. Multiplying 64 x 800MB/s = 51.2GB/s, so did EMC just do the performance comparison on the back of a napkin, assuming there are no other bottlenecks in the system? Even then, I would not round up 51.2 to 52!
Response times were not identified. For random I/Os, response time is a very important metric. It is possible that the Symmetrix was operating with some resources at 100% utilization to get the highest GB/s result, but that would likely make I/O response times unacceptable for real-world random I/O workloads.
IBM and HDS have both published Storage Performance Council [SPC] industry-standard performance benchmarks. EMC has not published any SPC benchmarks for VMAX systems. If EMC is interested in providing customers with audited, detailed performance information along with detailed configuration information, all based on benchmarks designed to represent real-world workloads, EMC can always publish SPC benchmark results as IBM and other vendors have done. In past blog fights, EMC resorts to the excuse that SPC isn't perfect, but can they really argue that vague and unrealistic claims cited in its presentation are better?
The second graphic is so absurd, you would think it came directly from Larry Ellison at an Oracle OpenWorld keynote session. EMC is comparing a VMAX 40K paired with an EMC VFCache host-side flash memory cache card against IBM and HDS disk systems configured without any host-side flash memory cache. The comparison is clearly apples-to-oranges. Other disk system configuration details are also omitted.
FAST VP is EMC's name for its sub-volume drive tiering feature, comparable to IBM Easy Tier and Hitachi's Dynamic Tiering. The graph implies that IBM and HDS can only achieve a modest incremental improvement from their sub-volume tiering. I beg to differ. I have seen various cases where a small amount of SSD on the IBM DS8000 series can drastically improve performance, by 200 to 400 percent.
The "DBClassify" shown on the graph is a tool run as part of an EMC professional services offering called Database Performance Tiering Assessment, makes recommendations for storing various database objects on different drive tiers based on object usage and importance. Do you really need to pay for professional services? With IBM Easy Tier, you just turn it on, and it works. No analysis required, no tools, no professional services, and no additional charge!
VFCache is an optional product from EMC that currently has no integration whatsoever with VMAX. A fair comparison would have included a host-side flash memory cache (from any vendor) when the IBM or HDS storage system was configured. Or leave it out altogether and just focus on the sub-volume tiering comparison.
Keep in mind that EMC's VFCache supports only selected x86-based hosts. IBM has published a [Statement of Direction] indicating that it will offer host-side flash memory cache integrated with DS8000 Easy Tier for Power systems running AIX and Linux as well.
I feel EMC's claims about IBM DS8000 performance are vague and misleading. EMC appears to lack the kind of technical marketing integrity that IBM strives to attain.
Since EMC is not able or willing to publish fair and meaningful performance comparisons, it is up to me to set the record straight and point out EMC's failings in this matter.
Reminder: It's not too late to register for my Webcast "Solving the Storage Capacity Crisis" on Tuesday, September 25. See my blog post [Upcoming events in September] to register!
A long time ago, perhaps in the early 1990s, I was an architect on the component known today as DFSMShsm on the z/OS mainframe operating system. One of my job responsibilities was to attend the biannual [SHARE] conference to listen to the requirements of the attendees on what they would like added or changed in DFSMS, and to ask enough questions so that I could accurately present the reasoning to the rest of the architects and software designers on my team. One person requested that the DFSMShsm RELEASE HARDCOPY command should release "all" the hardcopy. This command sends all the activity logs to the designated SYSOUT printer. I asked what he meant by "all", and the entire audience of 120-some attendees nearly fell on the floor laughing. He complained that some clever programmer had written code to test whether an activity log contained only "Starting" and "Ending" messages but no error messages, and to skip such logs from being sent to SYSOUT. I explained that this was done to save paper, good for the environment, and so on. Again, howls of laughter. Most customers reroute the SYSOUT from DFSMS from a physical printer to a logical one that saves the logs as data sets, with date and time stamps, so having any logs "skipped" leaves gaps in the sequence. The client wanted a complete set of data sets for his records. Fair enough.
When I returned to Tucson, I presented the list of requests, and the immediate reaction when I presented the one above was, "What did he mean by ALL? Doesn't it release ALL of the logs already?" I then had to recap our entire dialogue, and then it all made sense to the rest of the team. At the following SHARE conference six months later, I was presented with my own official "All" tee-shirt that listed, and I am not kidding, some 33 definitions for the word "all", in small font covering the front of the shirt.
I am reminded of this story because of the challenges explaining complicated IT concepts using the English language which is so full of overloaded words that have multiple meanings. Take for example the word "protect". What does it mean when a client asks for a solution or system to "protect my data" or "protect my information". Let's take a look at three different meanings:
Unethical Tampering
The first meaning is to protect the integrity of the data from within, especially from executives or accountants that might want to "fudge the numbers" to make quarterly results look better than they are, or to "change the terms of the contract" after agreements have been signed. Clients need to make sure that the people authorized to read/write data can be trusted to do so, and to store data in Non-Erasable, Non-Rewriteable (NENR) protected storage for added confidence. NENR storage includes Write-Once, Read-Many (WORM) tape and optical media, disk and disk-and-tape blended solutions such as the IBM Grid Medical Archive Solution (GMAS) and IBM Information Archive integrated system.
Unauthorized Access
The second meaning is to protect access from without, especially from hackers or other criminals who might want to gather personally-identifiable information (PII), such as social security numbers, health records, or credit card numbers, and use it for identity theft. This is why it is so important to encrypt your data. As I mentioned in my post [Eliminating Technology Trade-Offs], IBM supports hardware-based encryption with FDE drives in its IBM System Storage DS8000 and DS5000 series. These FDE drives have AES 128-bit encryption built in to perform the encryption in real time. Neither HDS nor EMC supports these drives (yet). Fellow blogger Hu Yoshida (HDS) indicates that the USP-V implements data-at-rest encryption differently, using back-end directors instead. I am told EMC relies on consuming CPU cycles on the host servers to perform software-based encryption, either as MIPS consumed on the mainframe, or using their PowerPath multi-pathing driver on distributed systems.
There is also concern about whether internal employees have the right "need to know" for various research projects or upcoming acquisitions. On SANs, this is normally handled with zoning, and on NAS with appropriate group/owner bits and access control lists. That's fine for LUNs and files, but what about databases? IBM's DB2 offers Label-Based Access Control [LBAC], which provides a finer level of granularity, down to the row or column level. For example, if a hospital database contained patient information, the doctors and nurses would not see the columns containing credit card details, the accountants would not see the columns containing healthcare details, and the individual patients, if they had any access at all, would only be able to access the rows related to their own records, and possibly the records of their children or other family members.
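To picture row- and column-level control, here is a toy sketch of label-based filtering along the lines of the hospital example; it is my own illustration of the concept, not DB2 LBAC's actual syntax or engine:

```python
PATIENTS = [
    {"patient": "P001", "diagnosis": "flu",      "credit_card": "4111...1111"},
    {"patient": "P002", "diagnosis": "fracture", "credit_card": "5500...0004"},
]

# Each sensitive column carries a security label
COLUMN_LABELS = {"diagnosis": "clinical", "credit_card": "billing"}

def query(rows, user_labels, user_patient=None):
    """Return only the rows and columns the user's labels permit."""
    result = []
    for row in rows:
        if user_patient and row["patient"] != user_patient:
            continue                      # patients see only their own rows
        visible = {"patient": row["patient"]}
        for col, label in COLUMN_LABELS.items():
            if label in user_labels:
                visible[col] = row[col]   # column released only if label held
        result.append(visible)
    return result

print(query(PATIENTS, {"clinical"}))                       # doctor/nurse view
print(query(PATIENTS, {"billing"}))                        # accountant view
print(query(PATIENTS, {"clinical"}, user_patient="P001"))  # patient's own view
```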
Unexpected Loss
The third meaning is to protect against the unexpected. There are lots of ways to lose data: physical failure, theft, or even incorrect application logic. Whatever the cause, you can protect against it by having multiple copies of the data. You can either keep multiple copies of the data in its entirety, or use RAID or a similar encoding scheme to store parts of the data in multiple separate locations. For example, with a RAID-5 rank in a 6+P+S configuration, you would have six parts of data and one part parity code scattered across seven drives. If you lost one of the disk drives, the data could be rebuilt from the remaining portions and written to the spare disk set aside for this purpose.
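The parity arithmetic behind that 6+P example fits in a few lines: XOR the six data strips to produce the parity strip, then XOR the survivors to rebuild whichever strip is lost. A simplified single-stripe sketch (real arrays rotate parity across drives and use much larger strips):

```python
from functools import reduce

def xor(strips):
    """Byte-wise XOR across a list of equal-length strips."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*strips))

data_strips = [bytes([i] * 8) for i in range(1, 7)]   # six data strips
parity = xor(data_strips)                             # the "P" strip

# Drive 3 fails: rebuild its strip from the five surviving strips plus parity
survivors = data_strips[:2] + data_strips[3:] + [parity]
rebuilt = xor(survivors)
assert rebuilt == data_strips[2]
print("strip 3 rebuilt:", rebuilt.hex())
```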
But what if the drive is stolen? Someone can walk up to a disk system, snap out the hot-swappable drive, and walk off with it. Since it contains only part of the data, the thief would not have the entire copy of the data, so no reason to encrypt it, right? Wrong! Even with part of the data, people can get enough information to cause your company or customers harm, lose business, or otherwise get you in hot water. Encryption of the data at rest can help protect against unauthorized access to the data, even in the case when the data is scattered in this manner across multiple drives.
To protect against site-wide loss, such as from a natural disaster, fire, flood, earthquake and so on, you might consider having data replicated to remote locations. For example, IBM's DS8000 offers two-site and three-site mirroring. Two-site options include Metro Mirror (synchronous) and Global Mirror (asynchronous). The three-site is cascaded Metro/Global Mirror with the second site nearby (within 300km) and the third site far away. For example, you can have two copies of your data at site 1, a third copy at nearby site 2, and two more copies at site 3. Five copies of data in three locations. IBM DS8000 can send this data over from one box to another with only a single round trip (sending the data out, and getting an acknowledgment back). By comparison, EMC SRDF/S (synchronous) takes one or two trips depending on blocksize, for example blocks larger than 32KB require two trips, and EMC SRDF/A (asynchronous) always takes two trips. This is important because for many companies, disk is cheap but long-distance bandwidth is quite expensive. Having five copies in three locations could be less expensive than four copies in four locations.
Fellow blogger BarryB (EMC Storage Anarchist) felt I was unfair pointing out that their EMC Atmos GeoProtect feature only protects against "unexpected loss" and does not eliminate the need for encryption or appropriate access control lists to protect against "unauthorized access" or "unethical tampering".
(It appears I stepped too far on to ChuckH's lawn, as his Rottweiler BarryB came out barking, both in the [comments on my own blog post], as well as his latest titled [IBM dumbs down IBM marketing (again)]. Before I get another rash of comments, I want to emphasize this is a metaphor only, and that I am not accusing BarryB of having any canine DNA running through his veins, nor that Chuck Hollis has a lawn.)
As far as I know, the EMC Atmos does not support FDE disks that do this encryption for you, so you might need to find another way to encrypt the data and set up the appropriate access control lists. I agree with BarryB that "erasure codes" have been around for a while and that there is nothing unsafe about using them in this manner. All forms of RAID-5, RAID-6 and even RAID-X on the IBM XIV storage system can be considered a form of such encoding as well. As for the amount of long-distance bandwidth that Atmos GeoProtect would consume to provide this protection against loss, you might question any cost savings from this space-efficient solution. As always, you should consider both space and bandwidth costs in your total cost of ownership calculations.
Of course, if saving money is your main concern, you should consider tape, which can be ten to twenty times cheaper than disk, allowing you to keep a dozen or more copies, in as many time zones, at substantially lower cost. These can be encrypted and written to WORM media for even more thorough protection.
The new [IBM System Storage Tape Controller 3592 Model C07] is an upgrade to the previous C06 controller. Like the C06, the new 3592-C07 can have up to four FICON (4Gbps) ports, four FC ports, and connect up to 16 drives. The difference is that the C07 supports 8Gbps speed FC ports, and can support the [new TS1140 tape drives that were announced on May 9]. A cool feature of the C07 is that it has a built-in library manager function for the mainframe. On the previous models, you had to have a separate library manager server.
Crossroads ReadVerify Appliance (3222-RV1)
IBM has entered an agreement to resell [Crossroads ReadVerify Appliance], or "RV1" for short. The RV1 is a 1U-high server with software that gathers information on the utilization, performance and health for a physical tape environment, such as an IBM TS3500 Tape Library. The RV1 also offers a feature called "ArchiveVerify" which validates long-term retention archive tapes, providing an audit trail on the readability of tape media. This can be useful for tape libraries attached behind IBM Information Archive compliance storage solution, or the IBM Scale-Out Network Attached Storage (SONAS).
As an added bonus, Crossroads has great videos! Here's one, titled [Tape Sticks]
Linear Tape File System (LTFS) Library Edition Version 2.1
You might be scratching your head on this one. Wasn't LTFS v2.1 announced May 9? Actually, it was, for the TS3500, TS3200 and TS3100. With this announcement, the [IBM System Storage Linear Tape File System Library Edition Version 2.1] was extended to support the rest of our tape library portfolio, specifically the TS3310 and TS2900.
XIV Storage System - Generation 3
Saving the biggest announcement for last, IBM announces new [XIV Generation 3 hardware] and corresponding [XIV Software Version 11.0.0]. The new hardware is officially the model 114, but since the new zEnterprise announced today is also a "model 114", I will refer unofficially to the existing XIV hardware that [IBM launched initially in September 2008] as "Gen2", and to the new hardware as "Gen3", to avoid confusion.
While the hardware is all refreshed, the overall "scale-out" architecture is unchanged. Kudos to the XIV development team for designing a system that is based entirely on commodity hardware, allowing new hardware generations to be introduced with minimal changes to the vast number of field-proven software features like thin provisioning, space-efficient read-only and writeable snapshots, synchronous and asynchronous mirroring, and Quality of Service (QoS) performance classes.
The new XIV Gen3 features an InfiniBand interconnect, faster 8Gbps FC ports, more iSCSI ports, a faster motherboard and processors, SAS-NL 2TB drives, and 24GB of cache memory per XIV module, all in a single-frame IBM rack that supports the IBM Rear Door Heat Exchanger. The result is a 2x to 4x boost in performance for various workloads.
Disclaimer: Performance is based on measurements and projections using standard IBM benchmarks in a controlled environment. The actual throughput that any user will experience will vary depending upon considerations such as the amount of multiprogramming in the user's job stream, the I/O configuration, the storage configuration, and the workload processed. Therefore, no assurance can be given that an individual user will achieve throughput improvements equivalent to the performance ratios stated here. Your mileage may vary.
In a Statement of Direction, IBM also indicated that the Gen3 modules are designed to be "SSD-ready", which means that you will be able to insert up to 500GB of solid-state drive capacity per XIV module, up to 7.5TB in a fully-configured 15-module frame. This SSD would act as an extension of the DRAM cache, similar to how Performance Accelerator Modules (PAM) work on the IBM N series.
IBM will continue to sell XIV Gen2 systems for the next 12-18 months, as some clients like the smaller 1TB disk drives. The new Gen3 only comes with 2TB drives. There are some clients that love the XIV so much, that they also use it for less stringent Tier 2 workloads. If you don't need the blazing speed of the new Gen3, perhaps the lower cost XIV Gen2 might be a great fit!
As if I haven't said this enough times already, the IBM XIV is a Tier-1, high-end, enterprise-class disk storage system, optimized for use with mission-critical workloads on Linux, UNIX and Windows operating systems, and is the ideal cost-effective replacement for EMC Symmetrix VMAX, HDS USP-V and VSP, and HP P9000 series disk systems. Like the XIV Gen2, the XIV Gen3 can be used with IBM System i using VIOS, and with IBM System z mainframes running Linux, z/VM or z/VSE. If you run z/OS or z/TPF with Count-Key-Data (CKD) volumes and FICON attachment, go with the IBM System Storage DS8000 instead, IBM's other high-end disk system.
My series last week on IBM Watson (which you can read [here], [here], [here], and [here]) brought attention to IBM's Scale-Out Network Attached Storage [SONAS]. IBM Watson used a customized version of SONAS technology for its internal storage, and like most of the components of IBM Watson, IBM SONAS is commercially available as a stand-alone product.
Like many IBM products, SONAS has gone through various name changes. First introduced by Linda Sanford at an IBM SHARE conference in 2000 under the IBM Research codename Storage Tank, it was then delivered as a software-only offering SAN File System, then as a services offering Scale-out File Services (SoFS), and now as an integrated system appliance, SONAS, in IBM's Cloud Services and Systems portfolio.
If you are not familiar with SONAS, here are a few of my previous posts that go into more detail:
This week, IBM announces that SONAS has set a world record benchmark for performance, [a whopping 403,326 IOPS for a single file system]. The results are based on comparisons of publicly available information from Standard Performance Evaluation Corporation [SPEC], a prominent performance standardization organization with more than 60 member companies. SPEC publishes hundreds of different performance results each quarter covering a wide range of system performance disciplines (CPU, memory, power, and many more). SPECsfs2008_nfs.v3 is the industry-standard benchmark for NAS systems using the NFS protocol.
(Disclaimer: Your mileage may vary. As with any performance benchmark, the SPECsfs benchmark does not replicate any single workload or particular application. Rather, it encapsulates scores of typical activities on a NAS storage system. SPECsfs is based on a compilation of workload data submitted to the SPEC organization, aggregated from tens of thousands of fileservers across a wide variety of environments and applications. As a result, it comprises typical workloads with typical proportions of data and metadata use, as seen in real production environments.)
The configuration tested involves SONAS Release 1.2 on 10 Interface Nodes and 8 Storage Pods, resulting in a single file system with over 900TB of usable capacity.
10 Interface Nodes; each with:
Maximum 144 GB of memory
One active 10GbE port
8 Storage Pods; each with:
2 Storage nodes and 240 drives
Drive type: 15K RPM SAS hard drives
Data Protection using RAID-5 (8+P) ranks
Six spare drives per Storage Pod
IBM wanted a realistic "no compromises" configuration to be tested, by choosing:
Regular 15K RPM SAS drives, rather than a silly configuration full of super-expensive Solid State Drives (SSD) to plump up the results.
Moderate size, typical of what clients are asking for today. The Goldilocks rule applies. This SONAS is not a small configuration under 100TB, and nowhere close to the maximum supported configuration of 7,200 disks across 30 Interface Nodes and 30 Storage Pods.
Single file system, often referred to as a global name space, rather than an aggregate of smaller file systems that would be more complicated to manage. Having multiple file systems often requires changes to applications to take advantage of the aggregate performance. It is also more difficult to load-balance your performance and capacity across multiple file systems. Of course, SONAS can support up to 256 separate file systems if you have a business need for this complexity.
The results are stunning. IBM SONAS handled three times more workload for a single file system than the next leading contender. All of the major players are there as well, including NetApp, EMC and HP.
Congratulations to the SONAS development and test teams! Scale-Out NAS is a competitive space. SONAS can handle not only large streaming files but also small random I/O workloads extraordinarily well. Just in the last two years, to compete against IBM's leadership in this realm, [HP acquired Ibrix], [EMC acquired Isilon] and [Dell acquired what's left of Exanet's assets]. They have a lot of catching up to do!
Wrapping up my week's theme of storage optimization, I thought I would help clarify the confusion between data reduction and storage efficiency. I have seen many articles and blog posts that either use these two terms interchangeably, as if they were synonyms for each other, or as if one is merely a subset of the other.
Data Reduction is LOSSY
By "Lossy", I mean that reducing data is an irreversible process. Details are lost, but insight is gained. In his paper [Data Reduction Techniques], Rajana Agarwal defines this simply:
"Data reduction techniques are applied where the goal is to aggregate or amalgamate the information contained in large data sets into manageable (smaller) information nuggets."
Data reduction has been around since the 18th century.
Take for example this histogram from [SearchSoftwareQuality.com]. We have taken ninety individual student scores and reduced them to just five numbers: the counts in each range. This provides for easier comprehension and comparison with other distributions.
The process is lossy. I cannot determine or re-create an individual student's score from these five histogram values.
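If you want to see this kind of data reduction for yourself, here is a minimal Python sketch of the same idea, using made-up scores and illustrative bin edges rather than the actual data behind the histogram:

```python
import numpy as np

# Ninety made-up student scores (illustrative only, not the actual data)
rng = np.random.default_rng(seed=42)
scores = rng.integers(40, 100, size=90)

# Reduce ninety values to five bin counts; the counts are all that survive
counts, edges = np.histogram(scores, bins=[40, 52, 64, 76, 88, 100])
print(counts)   # five numbers instead of ninety; no score can be recovered
```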
This next example, courtesy of [Michael Hardy], represents another form of data reduction known as ["linear regression analysis"]. The idea is to take a large set of data points between two variables, the x axis along the horizontal and the y axis along the vertical, and find the line that best fits. Thus the data is reduced from many points to just two values, slope (a) and intercept (b), resulting in an equation of y=ax+b.
The process is lossy. I cannot determine or re-create any original data point from this slope and intercept equation.
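Here is the same exercise as a short sketch, assuming a handful of made-up (x, y) points; numpy's least-squares fit reduces them to just the slope and intercept:

```python
import numpy as np

# Many (x, y) data points, made up for illustration
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1, 12.0])

a, b = np.polyfit(x, y, deg=1)       # least-squares fit of y = ax + b
print(f"y = {a:.2f}x + {b:.2f}")     # two numbers; the points are gone
```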
This last example, from [Yahoo Finance], reduces millions of stock trades to a single point per day, typically the closing price, to show the overall growth trend over the course of the past year.
The process is lossy. Even if I knew the low, high and closing price of a particular stock on a particular day, I would not be able to determine or re-create the actual price paid for individual trades that occurred.
Storage Efficiency is LOSSLESS
By contrast, there are many IT methods that can be used to store data in ways that are more efficient, without losing any of the fine detail. Here are some examples:
Thin Provisioning: Instead of storing 30GB of data on 100GB of disk capacity, you store it on 30GB of capacity. All of the data is still there, just none of the wasteful empty space.
Space-efficient Copy: Instead of copying every block of data from source to destination, you copy over only those blocks that have changed since the copy began. The blocks not copied are still available on the source volume, so there is no need to duplicate this data.
Archiving and Space Management: Data can be moved out of production databases and stored elsewhere on disk or tape. Enough XML metadata is carried along so that there is no loss in the fine detail of what each row and column represent.
Data Deduplication: The idea is simple. Find large chunks of data that contain the same exact information as an existing chunk already stored, and merely set a pointer to avoid storing the duplicate copy. This can be done in-line as data is written, or as a post-process task when things are otherwise slow and idle.
When data deduplication first came out, some lawyers were concerned that this was a "lossy" approach, that somehow documents were coming back without some of their original contents. How else can you explain storing 25PB of data on only 1PB of disk?
(In some countries, companies must retain data in their original file formats, as there is concern that converting business documents to PDF or HTML would lose some critical "metadata" information such as modification dates, authorship information, underlying formulae, and so on.)
Well, the concern applies only to those data deduplication methods that calculate a hash code or fingerprint, such as EMC Centera or EMC Data Domain. If the hash code of new incoming data matches the hash code of existing data, the new data is discarded and assumed to be identical. Such collisions are rare; I have read of only a few occurrences of unique data being discarded in the past five years. To ensure full integrity, the IBM ProtecTIER data deduplication solution and IBM N series disk systems chose instead to do full byte-for-byte comparisons.
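For readers who like to see the concept in code, here is a toy sketch of hash-based deduplication with the byte-for-byte safeguard described above. The class and names are hypothetical, purely for illustration; real products work at the block or chunk level with far more sophistication:

```python
import hashlib

class DedupStore:
    """Toy chunk store that keeps only one copy of each unique chunk."""
    def __init__(self):
        self.chunks = {}              # fingerprint -> chunk bytes

    def write(self, chunk: bytes) -> str:
        fp = hashlib.sha256(chunk).hexdigest()
        if fp in self.chunks:
            # Hash matched; a full byte-for-byte comparison guards against
            # the astronomically rare case of a hash collision.
            assert self.chunks[fp] == chunk, "hash collision detected!"
            return fp                 # duplicate: store only a pointer
        self.chunks[fp] = chunk       # new unique chunk: store it
        return fp

store = DedupStore()
p1 = store.write(b"quarterly report" * 1000)
p2 = store.write(b"quarterly report" * 1000)   # second copy costs nothing
print(p1 == p2, len(store.chunks))             # True 1
```

Note that nothing is lost: either copy can be read back, bit for bit, from the single stored chunk.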
Compression: There are both lossy and lossless compression techniques. The lossless Lempel-Ziv algorithm is the basis for LTO-DC algorithm used in IBM's Linear Tape Open [LTO] tape drives, the Streaming Lossless Data Compression (SLDC) algorithm used in IBM's [Enterprise-class TS1130] tape drives, and the Adaptive Lossless Data Compression (ALDC) used by the IBM Information Archive for its disk pool collections.
Last month, IBM announced that it was [acquiring Storwize]. Its Random Access Compression Engine (RACE) is also a lossless compression algorithm based on Lempel-Ziv. As servers write files, Storwize compresses those files and passes them on to the destination NAS device. When files are read back, Storwize retrieves and decompresses the data back to its original form.
As with tape, the savings from compression can vary, typically from 20 to 80 percent. In other words, 10TB of primary data could take up from 2TB to 8TB of physical space. To estimate what savings you might achieve for your mix of data types, try out the free [Storwize Predictive Modeling Tool].
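To get a feel for why the savings vary, you can experiment with Python's built-in zlib module, which implements DEFLATE, a Lempel-Ziv-based lossless algorithm. This is a rough illustration of the principle, not the RACE or LTO-DC implementation:

```python
import zlib

# Repetitive text compresses very well; already-compressed media barely at all
text = b"storage efficiency is lossless " * 4096
compressed = zlib.compress(text, 6)            # DEFLATE = LZ77 + Huffman

saving = 100 * (1 - len(compressed) / len(text))
print(f"{len(text)} bytes -> {len(compressed)} bytes ({saving:.0f}% saved)")

# The round-trip proves the process is lossless
assert zlib.decompress(compressed) == text
```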
So why am I making a distinction on terminology here?
Data reduction is already a well-known concept among specific industries, like High-Performance Computing (HPC) and Business Analytics. IBM has the largest market share in supercomputers that do data reduction for all kinds of use cases: scientific research, weather prediction, financial projections, and decision support systems. IBM has also recently acquired several companies related to Business Analytics, such as Cognos, SPSS, CoreMetrics and Unica Corp. These use data reduction on large amounts of business and marketing data to help drive new sources of revenue, provide insight for new products and services, create more focused advertising campaigns, and help understand the marketplace better.
There are certainly enough methods of reducing the quantity of storage capacity consumed, like thin provisioning, data deduplication and compression, to warrant an "umbrella term" that refers to all of them generically. I would prefer we do not "overload" the existing phrase "data reduction" but rather come up with a new phrase, such as "storage efficiency" or "capacity optimization" to refer to this category of features.
IBM is certainly quite involved in both data reduction as well as storage efficiency. If any of my readers can suggest a better phrase, please comment below.
Well, I'm back safely from my tour of Asia. I am glad to report that Tokyo, Beijing and Kuala Lumpur are pretty much how I remember them from the last time I was there in each city. I have since been fighting jet lag by watching the last thirteen episodes of LOST season 6 and the series finale.
Recently, I have started seeing a lot of buzz on the term "Storage Federation". The concept is not new, but rather based on the work in database federation, first introduced in 1985 in the paper [A federated architecture for information management] by Heimbigner and McLeod. For those not familiar with database federation, you can take several independent autonomous databases and treat them as one big federated system. For example, this would allow you to issue a single query and get results across all the databases in the federated system. The advantage is that it is often easier to federate several disparate heterogeneous databases than to merge them into a single database. [IBM Infosphere Federation Server] is a market leader in this space, with the capability to federate DB2, Oracle and SQL Server databases. Applied to storage, federation promises several capabilities:
Storage expansion: You want to increase the storage capacity of an existing storage system that cannot accommodate the total amount of capacity desired. Storage Federation allows you to add additional storage capacity by adding a whole new system.
Storage migration: You want to migrate from an aging storage system to a new one. Storage Federation allows the joining of the two systems, the evacuation of data from the first onto the second, and then the removal of the first system.
Safe system upgrades: System upgrades can be problematic for a number of reasons. Storage Federation allows a system to be removed from the federation and be re-inserted again after the successful completion of the upgrade.
Load balancing: Similar to storage expansion, but on the performance axis, you might want to add additional storage systems to a Storage Federation in order to spread the workload across multiple systems.
Storage tiering: In a similar light, storage systems in a Storage Federation could have different capacity/performance ratios that you could use for tiering data. This is similar to the idea of dynamically re-striping data across the disk drives within a single storage system, such as with 3PAR's Dynamic Optimization software, but extends the concept to cross storage system boundaries.
To some extent, IBM SAN Volume Controller (SVC), XIV, Scale-Out NAS (SONAS), and Information Archive (IA) offer most, if not all, of these capabilities. EMC claims its VPLEX will be able to offer storage federation, but only with other VPLEX clusters, which brings up a good question: what about heterogeneous storage federation? Before anyone accuses me of throwing stones at glass houses, let's take a look at each IBM solution:
IBM SAN Volume Controller
The IBM SAN Volume Controller has been doing storage federation since 2003. Not only can the SAN Volume Controller bring together storage from a variety of heterogeneous storage systems, the SVC cluster itself can be a mix of different hardware models. You can have a 2145-8A4 node pair, a 2145-8G4 node pair, and the new 2145-CF8 node pair, all combined into a single SVC cluster. Upgrading SVC hardware nodes in an SVC cluster is always non-disruptive.
IBM XIV storage system
The IBM XIV has two kinds of independent modules. Data modules have processor, cache and 12 disks. Interface modules are data modules with additional processor, FC and Ethernet (iSCSI) adapters. Because these two module types play different roles in an XIV "colony", the number of each type is predetermined. Entry-level six-module systems have 2 interface and 4 data modules. Full 15-module systems have 6 interface and 9 data modules. Individual modules can be added or removed non-disruptively in an XIV.
IBM Scale-Out NAS
SONAS is composed of three kinds of nodes that work together in concert: a management node, one or more interface nodes, and two or more storage nodes. The storage nodes are paired to manage up to 240 drives in a storage pod. Individual interface or storage nodes can be added or removed non-disruptively in the SONAS. The underlying technology, the General Parallel File System, has been doing storage federation since 1996 for some of the largest top 500 supercomputers in the world.
IBM Information Archive (IA)
For the IA, there are 1, 2 or 3 nodes, which manage a set of collections. A collection can either be file-based using industry-standard NAS protocols, or object-based using the popular System Storage Archive Manager (SSAM) interface. Normally, you have as many collections as you have nodes, but nodes are powerful enough to manage two collections to provide N-1 availability. This allows a node to be removed, and a new node added into the IA "colony", in a non-disruptive manner.
Even in an ant colony, there are only a few types of ants, with typically one queen, several males, and lots of workers. But all the ants are red. You don't see colonies that mix between different species of ants. For databases, federation was a way to avoid the much harder task of merging databases from different platforms. For storage, I am surprised people have latched on to the term "federation", given our mixed results in the other "federations" we have formed, which I have conveniently (IMHO) ranked from least effective to most effective:
The Union of Soviet Socialist Republics (USSR)
My father used to say, "If the Soviet Union were in charge of the Sahara desert, they would run out of sand in 50 years." The [Soviet Union] actually lasted 68 years, from 1922 to 1991.
The United Nations (UN)
After the previous League of Nations failed, the UN was formed in 1945 to facilitate cooperation in international law, international security, economic development, social progress, human rights, and the achieving of world peace by stopping wars between countries, and to provide a platform for dialogue.
The European Union (EU)
With the collapse of the Greek economy, and the [rapid growth of debt] in the UK, Spain and France, there are concerns that the EU might not last past 2020.
The United States of America (USA)
My own country is a federation of states, each with its own government. California's financial crisis was compared to the one in Greece. My own state of Arizona is under boycott from other states because of its recent [immigration law]. However, I think the US has managed better than the EU because it has evolved over the past 200 years.
The Organization of the Petroleum Exporting Countries [OPEC]
Technically, OPEC is not a federation of cooperating countries, but rather a cartel of competing countries that have agreed on total industry output of oil to increase individual members' profits. Note that it was a non-OPEC company, BP, that could not "control their output" in what has now become the worst oil spill in US history. OPEC was formed in 1960, and is expected to collapse sometime around 2030 when the world's oil reserves run out. Matt Savinar has a nice article on [Life After the Oil Crash].
United Federation of Planets
The [Federation] fictitiously described in the Star Trek series appears to work well, an optimistic view of what federations could become if you let them evolve long enough.
Given the mixed results with "federation", I think I will avoid using the term for storage, and stick to the original term "scale-out architecture".
Am I dreaming? On his Storagezilla blog, fellow blogger Mark Twomey (EMC) brags about EMC's standard benchmark results, in his post titled [Love Life. Love CIFS.]. Here is my take:
A Full 180-Degree Reversal
For the past several years, EMC bloggers have argued, both in comments on this blog and on their own blogs, that standard benchmarks are useless and should not be used to influence purchase decisions. While we all agree that "your mileage may vary", I find standard benchmarks useful as part of an overall approach in comparing and selecting which vendors to work with, which architectures or solution approaches to adopt, and which products or services to deploy. I am glad to see that EMC has finally joined the rest of the planet on this. I find it funny that this reversal sounds a lot like their reversal from "Tape is Dead" to "What? We never said tape was dead!"
Impressive CIFS Results
The Standard Performance Evaluation Corporation (SPEC) has developed a series of file-serving benchmarks; the latest, [SPECsfs2008], added support for CIFS. So, on the CIFS side, EMC's benchmarks compare favorably against previous CIFS tests from other vendors.
On the NFS side, however, EMC is still behind Avere, BlueArc, Exanet, and IBM/NetApp. For example, EMC's combination of Celerra gateways in front of V-Max disk systems resulted in 110,621 OPS with overall response time of 2.32 milliseconds. By comparison, the IBM N series N7900 (tested by NetApp under their own brand, FAS6080) was able to do 120,011 OPS with 1.95 msec response time.
Even though Sun invented the NFS protocol in the early 1980s, they take an EMC-like approach against standard benchmarks to measure it. Last year, fellow blogger Bryan Cantrill (Sun) gave his [Eulogy for a Benchmark]. I was going to make points about this, but fellow blogger Mike Eisler (NetApp) [already took care of it]. We can all learn from this. Companies that don't believe in standard benchmarks can either reverse course (as EMC has done), or continue their downhill decline until they are acquired by someone else.
(My condolences to those at Sun getting laid off. Those of you who hire on with IBM can get re-united with your former StorageTek buddies! Back then, StorageTek people left Sun in droves, knowing that Sun didn't understand the mainframe tape marketplace that StorageTek focused on. Likewise, many question how well Oracle will understand Sun's hardware business in servers and storage.)
What's in a Protocol?
Both CIFS and NFS have been around for decades, and comparisons can sometimes sound like religious debates. Traditionally, CIFS was used to share files between Windows systems, and NFS for Linux and UNIX platforms. However, Windows can also handle NFS, while Linux and UNIX systems can use CIFS. If you are using a recent level of VMware, you can use NFS as an alternative to Fibre Channel SAN to store your external disk VMDK files.
The Bigger Picture
There is a significant shift going on from traditional database repositories to unstructured file content. Today, as much as [80 percent of data is unstructured]. Shipments this year are expected to grow 60 percent for file-based storage, and only 15 percent for block-based storage. With the focus on private and public clouds, NAS solutions will be the battleground for 2010.
So, I am glad to see EMC starting to cite standard benchmarks. Hopefully, SPC-1 and SPC-2 benchmarks are forthcoming?
This week, I am in beautiful Sao Paulo, Brazil, teaching Top Gun class to IBM Business Partners and sales reps. Traditionally, we have "Tape Thursday" where we focus on our tape systems, from tape drives to physical and virtual tape libraries. IBM is the #1 tape vendor, and has been for the past eight years.
(The alliteration doesn't translate well here in Brazil. The Portuguese word for tape is "fita", and Thursday here is "quinta-feira", but "fita-quinta-feira" just doesn't have the same ring to it.)
In the class, we discussed how to handle common misperceptions and myths about tape. Here are a few examples:
Myth 1: Tape processing is manually intensive
In my July 2007 blog post [Times a Million], I coined the phrase "Laptop Mentality" to describe the problem most people have dealing with data center decisions. Many folks linearly extend their experiences using their PCs, workstations or laptops to the data center, unable to comprehend large numbers or solutions that take advantage of the economies of scale.
For many, the only experience dealing with tape was manual. In the 1980s, we made "mix tapes" on little cassettes, and in the 1990s we recorded our favorite television shows on VHS tapes in the VCR. Today, we have playlists on flash or disk-based music players, and record TV shows on disk-based video recorders like TiVo. The conclusion drawn is that tapes are manual, and disks are not.
Manual processing of tapes ended in 1987, with the introduction of a silo-like tape library from StorageTek. IBM responded with its own IBM 3495 Tape Library Data Server in 1992. Today, clients have many tape automation choices, from the smallest IBM TS2900 Tape Autoloader that has one drive and nine cartridges, all the way to the largest IBM TS3500 multiple-library shuttle complex that can hold exabytes of data. These tape automation systems eliminate most of the manual handling of cartridges in day-to-day operations.
Myth 2: Tape media is less reliable than disk media
Storage media is unreliable when it returns information different from what was originally stored. There are only two ways for this to happen: you write a "zero" but read back a "one", or write a "one" and read back a "zero". This is called a bit error. Every storage medium has a "bit error rate", the average likelihood of an error for some large amount of data written.
According to the latest [LTO Bit Error rates, 2012 March], today's tape expects only 1 bit error per 10E17 bits written (about 12.5 petabytes). This is 10 times more reliable than Enterprise SAS disk (1 bit per 10E16), and 100 times more reliable than Enterprise-class SATA disk (1 bit per 10E15).
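To put those bit error rates in perspective, here is a quick back-of-the-envelope calculation, using the published rates above and an assumed 10 PB written to each medium:

```python
# Expected bit errors after writing 10 PB to each medium
bits_written = 10 * 8 * 10**15        # 10 petabytes expressed in bits

for media, ber in [("LTO tape",             1e-17),
                   ("Enterprise SAS disk",  1e-16),
                   ("Enterprise SATA disk", 1e-15)]:
    print(f"{media}: ~{bits_written * ber:.1f} expected bit errors")
```

In other words, writing the same 10 PB would produce on the order of one error on tape, versus dozens on SATA disk.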
Tape is the media used in "black boxes" for airplanes. When an airplane crashes, the black box is retrieved and used to investigate the causes of the crash. In 1986, the Space Shuttle Challenger exploded 73 seconds after take-off. The tapes in the black box sat on the ocean floor for six weeks before being recovered. Amazingly, IBM was able to successfully restore [90 percent of the block data, and 100 percent of voice data].
Myth 3: The majority of tape restores fail
This myth stems from a widely circulated claim, often attributed to Gartner, that "71 percent" of tape restores fail. Analysts are quite upset when they are quoted out of context, but in this case, Gartner never said anything remotely similar to this. Nor did the other analysts that Curtis investigated for similar claims. What Gartner did say was that disk provides an attractive alternative storage media for backup, which can increase the performance of the recovery process.
Back in the 1990s, Savur Rao and I developed a patent to help back up DB2 for z/OS using the FlashCopy feature of IBM's high-end disk system. The software method to coordinate the FlashCopy snapshots with the database application and maintain multiple versions was implemented in the DFSMShsm component of DFSMS. A few years later, this was part of a set of patents IBM cross-licensed to Microsoft for them to implement similar software for Windows called Data Protection Manager (DPM). IBM has since introduced its own version for distributed systems, called IBM Tivoli FlashCopy Manager, that runs not just on Windows, but also on AIX, Linux, HP-UX and Solaris operating systems.
Curtis suspects the "71 percent" citation may have been propagated by an ambitious product manager of Microsoft's Data Protection Manager, back in 2006, perhaps to help drive up business for their new disk-based backup product. Certainly, Microsoft was not the only vendor to disparage tape in this manner.
A few years ago, an [EMC failure brought down the State of Virginia], due not just to a component failure in its production disk system, but made worse by a failure to recover from the disk-based remote mirror copy. Fortunately, the data was able to be restored from tape over the next four days. If you wonder why nobody at EMC says "Tape is Dead" anymore, perhaps it is because tape saved their butts that week.
(FTC Disclosure: I work for IBM and this post can be considered a paid, celebrity endorsement for all of the IBM tape and software products mentioned on this post. I own shares of stock in both IBM and Google, and use Google's Gmail for my personal email, as well as many other Google services. While IBM, Google and Microsoft can be considered competitors to each other in some areas, IBM has working relationships with both companies on various projects. References in this post to other companies like EMC are merely to provide illustrative examples only, based on publicly available information. IBM is part of the Linear Tape Open (LTO) consortium.)
Myth 4: Vendors and Manufacturers are no longer investing in tape technology
IBM and others are still investing Research and Development (R&D) dollars to improve tape technology. What people don't realize is that much of the R&D spent on magnetic media can be applied across both disk and tape, such as IBM's development of the Giant Magnetoresistance read/write head, or [GMR] for short.
Most recently, IBM made another major advancement in tape with the introduction of the Linear Tape File System (LTFS). This allows greater portability to share data between users, and between companies, by treating tape cartridges much like USB memory sticks or pen drives. You can read more in my post [IBM and Fox win an Emmy for LTFS technology]!
Next month, IBM celebrates the 60th anniversary of tape. It is good to see that tape continues to be a vibrant part of the IT industry, and of IBM's storage business!
This week, July 26-30, 2010, I am in Washington DC for the annual [2010 System Storage Technical University]. As with last year, we have joined forces with the System x team. Since we are in Washington DC this time, IBM added a "Federal Track" to focus on government challenges and solutions. So, basically, attendees get three conferences for one low price.
This conference was previously called the "Symposium", but IBM changed the name to "Technical University" to emphasize the technical nature of the conference. No marketing puffery like "Journey to the Private Cloud" here! Instead, this is bona fide technical training, qualifying attendees to count this towards their Continuing Professional Education (CPE).
(Note to my readers: The blogosphere is like a playground. In the center are four-year-olds throwing sand into each other's faces, while mature adults sit on benches watching the action, and only jumping in as needed. For example, fellow blogger Chuck Hollis (EMC) got sand in his face for promising to resign if EMC ever offered a tacky storage guarantee, and then [failed to follow through on his promise] when it happened.
Several of my readers asked me to respond to another EMC blogger's latest [fistful of sand].
A few months ago, fellow blogger Barry Burke (EMC) committed to [stick to facts] in posts on his Storage Anarchist blog. That didn't last long! BarryB apparently has fallen in line with EMC's over-promise-then-under-deliver approach. Unfortunately, I will be busy covering the conference and IBM's robust portfolio of offerings, so won't have time to address BarryB's stinking pile of rumor and hearsay until next week or later. I am sorry to disappoint.)
This conference is designed to help IT professionals make their business and IT infrastructure more dynamic and, in the process, help reduce costs, mitigate risks, and improve service. This technical conference event is geared to IT and Business Managers, Data Center Managers, Project Managers, System Programmers, Server and Storage Administrators, Database Administrators, Business Continuity and Capacity Planners, IBM Business Partners and other IT Professionals. This week will offer over 300 different sessions and hands-on labs, certification exams, and a Solutions Center.
For those who want a quick stroll through memory lane, here are my posts from past events:
In keeping with IBM's leadership in Social Media, the IBM Systems Lab Services and Training team running this event have their own [Facebook Fan Page] and [blog]. IBM Technical University has a Twitter account, [@ibmtechconfs], and hashtag #ibmtechu. You can also follow me on Twitter [@az990tony].
Clients generally fall into two camps. First, those that prefer the one-stop shopping of an IT supermarket, with companies like IBM, HP and Dell who offer a complete set of servers, storage, switches, software and services, what we call "The Five S's".
Second, those that prefer shopping for components at individual specialty shops, like butchers, bakers, and candlestick makers, hoping that this singular focus means the products are best-of-breed in the market. Companies like HDS for disk, Quantum for tape, and Symantec for software come to mind.
My how the IT landscape for vendors has evolved in just the past five years! Cisco starts to sell servers, and enters a "mini-mall" alliance with EMC and VMware to offer vBlock integrated stack of server, storage and switches with VMware as the software hypervisor. For those not familiar with the concept of mini-malls, these are typically rows of specialty shops. A shopper can park their car once, and do all their shopping from the various shops in the mini-mall. Not quite "one-stop" shopping of a supermarket, but tries to address the same need.
("Who do I call when it breaks?" -- The three companies formed a puppet company, the Virtual Computing Environment company, or VCE, to help answer that question!)
Among the many things IBM has learned in its 100+ years of experience is that clients want choices. Cisco figured this out also, and partnered with NetApp to offer the aptly-named FlexPod reference architecture. In effect, Cisco has two boyfriends: when she is with EMC, it is called a Vblock, and when she is with NetApp, it is called a FlexPod. I was lucky enough to find this graphic to help explain the three-way love triangle.
Did this move put a strain on the relationship between Cisco and EMC? Last month, EMC announced VSPEX, a FlexPod-like approach that provides a choice of servers, and some leeway for resellers to make choices to fit client needs better. Why limit yourself to Cisco servers, when IBM and HP servers are better? Is this an admission that Vblock has failed, and that VSPEX is the new way of doing things? No, I suspect it is just EMC's way to strike back at both Cisco and NetApp in what many are calling the "Stack Wars". (See [The Stack Wars have Begun!], [What is the Enterprise Stack?], or [The Fight for the Fully Virtualized Data Center] for more on this.)
(FTC Disclosure: I am both an employee and shareholder of IBM, so the U.S. Federal Trade Commission may consider this post a paid, celebrity endorsement of the IBM PureFlex system. IBM has working relationships with Cisco, NetApp, and Quantum. I was not paid to mention, nor have I any financial interest in, any of the other companies mentioned in this blog post. )
Last month, IBM announced its new PureSystems family, ushering in a [new era in computing]. I invite you all to check out the many "Patterns of Expertise" available at the [IBM PureSystems Centre]. This is like an "app store" for the data center, and what I feel truly differentiates IBM's offerings from the rest.
The trend is obvious. Clients who previously purchased from specialty shops are discovering the cost and complexity of building workable systems from piece-parts from separate vendors has proven expensive and challenging. IBM PureFlex™ systems eliminate a lot of the complexity and effort, but still offer plenty of flexibility, choice of server processor types, choice of server and storage hypervisors, and choice of various operating systems.
Did IBM XIV force EMC's hand to announce VMAXe? Let's take a stroll down memory lane.
In 2008, IBM XIV showed the world that it could ship a Tier-1, high-end, enterprise-class system using commodity parts. Technically, prior to its acquisition by IBM, the XIV team had boxes out in production since 2005. EMC incorrectly argued that this announcement meant the death of the IBM DS8000. Just because EMC was unable to figure out how to have more than one high-end disk product doesn't mean IBM or other storage vendors were equally challenged. Both IBM XIV and DS8000 are Tier-1, high-end, enterprise-class storage systems, as are the IBM N series N7900 and the IBM Scale-Out Network Attached Storage (SONAS).
In April 2009, EMC followed IBM's lead with their own V-Max system, based on Symmetrix Enginuity code, but on commodity x86 processors. Nobody at EMC suggested that the V-Max meant the death of their other Symmetrix box, the DMX-4, which means that EMC proved to themselves that a storage vendor could offer multiple high-end disk systems. Hitachi Data Systems (HDS) would later offer the VSP, which also includes some commodity hardware.
In July 2009, analysts at International Technology Group published their TCO findings that IBM XIV was 63 percent less expensive than EMC V-Max, in a whitepaper titled [Cost/Benefit Case for IBM XIV Storage System: Comparing Costs for IBM XIV and EMC V-Max Systems]. Not surprisingly, EMC cried foul, feeling that since the V-Max had not yet been successful in the field, it was too soon to compare newly minted EMC gear with a mature product like XIV that had been in production accounts for several years. Big companies like to wait for "Generation 1" of any new product to mature a bit before they purchase.
To compete against IBM XIV's very low TCO, EMC was forced to either deeply discount their Symmetrix, or counter-offer with the lower-cost CLARiiON, their midrange disk offering. An ex-EMCer who now works for IBM on the XIV sales team put it in EMC terms: "the IBM XIV provides a Symmetrix-like product at CLARiiON-like prices."
(Note: Somewhere in 2010, EMC dropped the hyphen, changing the name from V-Max to VMAX. I didn't see this formally announced anywhere, but it seems that the new spelling is the officially correct usage. A common marketing rule is that you should only rename failed products, so perhaps dropping the hyphen was EMC's way of preventing people from searching older reviews of the V-Max product.)
This month, IBM introduced the IBM XIV Gen3 model 114. The analysts at ITG updated their analysis, as there are now more customers that have either or both products, to provide a more thorough comparison. Their latest whitepaper, titled [Cost/Benefit Case for IBM XIV Systems: Comparing Cost Structures for IBM XIV and EMC VMAX Systems], shows that IBM maintains its substantial cost savings advantage, representing 69 percent less Total Cost of Ownership (TCO) than EMC, on average, over the course of three years.
In response, EMC announced its new VMAXe, following the naming convention EMC established for VNX and VNXe. Customers cannot upgrade VNXe to VNX, nor VMAXe to VMAX, so at least EMC was consistent in that regard. Like the IBM XIV and XIV Gen3, the new EMC VMAXe eliminated "unnecessary distractions" like CKD volumes and FICON attachment needed for the IBM z/OS operating system on IBM System z mainframes. Fellow blogger Barry Burke from EMC explains everything about the VMAXe in his blog post [a big thing in a small package].
So, you have to wonder, did IBM XIV force EMC's hand into offering this new VMAXe storage unit? Surely, EMC sales reps will continue to lead with the more profitable DMX-4 or VMAX, and then only offer the VMAXe when the prospective customer mentions that the IBM XIV Gen3 is 69 percent less expensive. I haven't seen any list or street prices for the VMAXe yet, but I suspect it is less expensive than VMAX, on a dollar-per-GB basis, so that EMC will not have to discount it as much to compete against IBM.
Normally, when EMC fails, it is worth a giggle. Companies are run by humans, and nobody is perfect. However, their latest one, failing to defend the systems behind their RSA SecurID two-factor authentication product, is no laughing matter. Breaches like this undermine the trust needed for business and commerce to be done with Information Technology, so it affects the entire IT industry.
(FTC Disclosure: I do not work for, nor have any financial investments in, either EMC or ENC Security Systems. Neither EMC nor ENC Security Systems paid me to mention them on this blog. Their mention in this blog is not an endorsement of either company or their products. Information about EMC was based solely on publicly available information made available by EMC and others. My friends at ENC Security Systems provided me an evaluation license for their latest software release so that I could confirm the use cases posed in this post.)
Of course, EMC did the right thing by making this breach public in an [Open Letter to RSA Customers]. While this may affect their revenues, as clients question whether they should do business with EMC, or affect their stock price, as investors question whether they should invest in EMC, they were very clear and public that the breach occurred. As far as I know, none of the executives of the RSA security division have stepped down. The disclosure of the breach was the right thing to do, and is required by law by the [US Securities and Exchange Commission]. This law was created to prevent companies from trying to hide breaches that expose external client information.
The breach does not affect the RSA public/private key pairs used by IBM and most every other large company. Rather, this breach targeted RSA SecurID two-factor authentication. I explained two-factor authentication in my blog post [Day 5 Grid, SOA and Cloud Computing - System x KVM solutions], but basically it is an added level of security, combining something you know (your password) with something you have (such as a magnetic card or key fob). Both are required to gain access to the system.
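RSA's actual SecurID token algorithm is proprietary, but the general idea behind "something you have" tokens can be illustrated with the open HOTP standard [RFC 4226]. This sketch is conceptual only, and is not how SecurID works internally:

```python
import hashlib, hmac, struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """One-time password per RFC 4226 (HOTP), for illustration only."""
    msg = struct.pack(">Q", counter)                 # 8-byte counter
    mac = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                          # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

# Token and server share a secret "seed"; stealing seed material is
# precisely what made the RSA breach so serious.
print(hotp(b"shared-secret-seed", counter=1))
```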
Breaches happen. Recently, [Hackers found vulnerabilities in the McAfee.com website]. Last month, fellow blogger Chuck Hollis from EMC had a blog post on [Understanding Advanced Persistent Threats (APT)] in the week leading up to their RSA Conference. It was precisely an APT that hit RSA, so the irony of this breach was not lost on the blogosphere. Perhaps Chuck's blog post gave hackers the idea to do this, like saying "I hope terrorists don't bomb this building that holds all of our chemical weapons..." or "I hope bank robbers don't rob this repository where we keep all the cash..."
(The sinister counter-theory, that EMC staged this breach as a marketing stunt to undermine trust in hybrid or public cloud offerings, such as those offered by IBM, Amazon or Salesforce.com, offers an interesting twist. While computer breaches in general are fodder for [Luddites] to argue we should not use computers at all, this particular breach could be used by EMC salesmen to encourage their customers to choose private cloud over hybrid cloud or public cloud deployments. Given all the extra work that RSA SecurID customers have to now do to harden their environments, that would be in bad taste.)
Today, March 31, is World Backup Day. This is because many viruses are triggered to operate on April 1. Just like checking the batteries in your smoke alarms every year, you should ensure that your backup methodology remains valid.
Back in 2008, I was a volunteer for the One Laptop Per Child (OLPC) initiative, and built an XS server to be used for Uruguay. I shipped [this baby off to school] to be the central server that all the student and teacher laptops connected to. It was the gateway to the Internet, as well as the [repository for the blogs of each student]. The blogs were accessible to the public, so that parents could read what their students were writing.
Unfortunately, this public access resulted in my little XS server being attacked by hackers, with IP addresses in Russia and China. Why anyone from either of those two countries wanted to ruin the hopes and dreams of small school children in Uruguay was beyond me. Fortunately, I had planned for remote administration. I took backups weekly to a second drive that was mounted only while I was dialed in to take the backup. The rest of the time, it was offline, so it could not be written to by hackers.
I also shipped along with the server a bootable DVD that contained a modified version of [System Rescue CD], scripts to start up the SSHD daemon, and pre-populated public/private RSA keys for me and eight other administrators located in various countries. To effect repairs, the local operator would reboot to the DVD, and then I could log in via "ssh" and restore the operating system, programs and data. Sadly, this meant that the students might have lost some of their most recent blog posts since the last backup.
Please consider reviewing your own backup strategies. If your security were compromised, data was corrupted or lost, would you be able to recover from your backups?
Use Encryption where Appropriate
If you plan to travel this summer, you may want to consider encryption to protect yourself. ENC Security Systems has just released their latest [Encrypt Stick], which is a USB memory stick pre-loaded with software that provides three features:
Encryption for your files
A secure web browser for accessing sensitive websites
A secure password manager
Hotel Lobby
Many hotels now offer computers for use by the guests. These are typically running some flavor of Windows operating system. Encrypt Stick comes with an EXE file that you can run to browse the web securely, and have access to your encrypted files and passwords, leaving no trace on the hotel lobby computer.
Friends and Family
What if you are visiting friends and family, and they have a Mac instead? No problem, as Encrypt Stick has a DMG file to use on the Mac OS X operating system. While you may not be worried about your siblings hacking into your bank account, you may not necessarily want them seeing which sites you visited.
Airport Lounge
I have been to several airport lounges now that use Linux for their public computers. Makes sense to me, as there are fewer viruses for Linux, and updating Linux is relatively straightforward. However, Encrypt Stick does not support Linux. For my Linux-knowledgeable readers, you can build your own bootable USB memory stick with [Unetbootin] to launch your favorite Linux browser in memory on whatever system you are using. The [Gparted Magic] utility rescue tool includes [TrueCrypt] to encrypt your files. Lastly, you can use [MyPasswordSafe] to hold all of your passwords securely.
Several clients have asked if any of the IBM data-at-rest encrypted disks or tapes are affected by this breach. IBM uses AES encryption for the actual disk and tape media, but we do use RSA keys to encrypt the generated keys used on the TS1120 and TS1130 drives. However, these were not affected by the RSA SecurID breach, and your tapes are safely protected.
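The general pattern, sometimes called envelope encryption, is easy to illustrate: data is encrypted with a random AES key, and only that small key is wrapped with an RSA public key. This sketch uses the pyca/cryptography library and is a conceptual illustration, not the actual TS1120/TS1130 key-management implementation:

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

rsa_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
aes_data_key = os.urandom(32)       # random 256-bit AES key for the media

# Wrap (encrypt) the small AES key with the RSA public key
wrapped = rsa_key.public_key().encrypt(aes_data_key, oaep)

# Only the holder of the RSA private key can unwrap the AES key
assert rsa_key.decrypt(wrapped, oaep) == aes_data_key
```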
Advanced Persistent Threats, viruses and other malware are no laughing matter. If you are concerned about security, contact IBM to help you assess your current environment and help you plan a robust protection strategy.
Continuing coverage of my week in Washington DC for the annual [2010 System Storage Technical University], I attended several XIV sessions throughout the week. There were many XIV sessions, and I could not attend all of them. Jack Arnold, one of my colleagues at the IBM Tucson Executive Briefing Center, often presents XIV to clients and Business Partners. He covered all the basics of XIV architecture, configuration, and features like snapshots and migration. Carlos Lizarralde presented "Solving VMware Challenges with XIV". Ola Mayer presented "XIV Active Data Migration and Disaster Recovery".
Here is my quick recap of two in particular that I attended:
XIV Client Success Stories - Randy Arseneau
Randy reported that IBM had its best quarter ever for the XIV, reflecting an unexpected surge shortly after my blog post debunking the double drive failure (DDF) myth last April. He presented successful case studies of client deployments. Many followed a familiar pattern. First, the client would purchase only one or two XIV units. Second, the client would beat the crap out of them, putting all kinds of stress from different workloads. Third, the client would discover that the XIV is really as amazing as IBM and IBM Business Partners have told them. Finally, in the fourth phase, the client would deploy the XIV for mission-critical production applications.
A large US bank holding company managed to get 5.3 GB/sec from a pair of XIV boxes for their analytics environment. They now have 14 XIV boxes deployed in mission-critical applications.
A large equipment manufacturer compared the offerings among seven different storage vendors, and IBM XIV came out the winner. They now have 11 XIV boxes in production and another four boxes for development/test. They have moved their entire VMware infrastructure to IBM XIV, running over 12,000 guest instances.
A financial services company bought their first XIV in early 2009 and now has 34 XIV units in production attached to a variety of Windows, Solaris, AIX and Linux servers and VMware hosts. Their entire Microsoft Exchange environment was moved from HP and EMC disk to IBM XIV, and experienced noticeable performance improvement.
When a university health system replaced two competitive disk systems with XIV, their data center temperature dropped from 74 to 68 degrees Fahrenheit. In general, XIV systems are 20 to 30 percent more energy efficient per usable TB than traditional disk systems.
A service provider that had used EMC disk systems for over 10 years evaluated the IBM XIV versus upgrading to EMC V-Max. The three year total cost of ownership (TCO) of EMC's V-Max was $7 Million US dollars higher, so EMC counter-proposed CLARiiON CX4 instead. But, in the end, IBM XIV proved to be the better fit, and now the customer is happy having made the switch.
The manager of an information communications technology service provider was impressed that the XIV was up and running in just a couple of days. They now have over two dozen XIV systems.
Another XIV client had lost all of their Computer Room Air Conditioning (CRAC) units for several hours. The data center heated up to 126 degrees Fahrenheit, but the customer did not lose any data on either of their two XIV boxes, which continued to run in these extreme conditions.
Optimizing XIV Performance - Brian Cormody
This session was an update from the [one presented last year] by Izhar Sharon. Brian presented various best practices for optimizing the performance when using specific application workloads with IBM XIV disk systems.
Oracle ASM: Many people allocate lots of small LUNs, because this made sense a long time ago when all you had was just a bunch of disks (JBOD). In fact, many of the practices that DBAs use to configure databases across disks become unnecessary with XIV. With XIV, you are better off allocating a small number of very large LUNs. The best option was a 1-volume ASM pool with an 8MB AU stripe. A single LUN can contain multiple Oracle databases, and a single LUN can be used to store all of the logs.
VMware: Over 70 percent of XIV customers use it with VMware. For VMFS, IBM recommends allocating a small number of large LUNs; you can specify the maximum LUN size of 2181 GB. Do not use VMware's internal LUN extension capability, as IBM XIV already has thin provisioning, and it works better to let XIV do this for you. XIV Snapshots provide crash-consistent copies without all the overhead of VMware Snapshots.
SAP: For planning purposes, the "SAPS" unit equates roughly to 0.4 IOPS for ERP OLTP workloads, and 0.6 IOPS for BW/BI OLAP workloads. In general, an XIV can deliver 25-30,000 IOPS at 10-15 msec response time, and 60,000 IOPS at 30 msec response time. With SAP, our clients have managed to get 60,000 IOPS at less than 15 msec.
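To make the rules of thumb concrete, here is a trivial sizing calculation; the SAPS rating is a made-up example:

```python
saps = 50_000                                    # hypothetical SAP system rating
print(f"ERP (OLTP):  ~{saps * 0.4:,.0f} IOPS")   # 0.4 IOPS per SAPS
print(f"BW/BI (OLAP): ~{saps * 0.6:,.0f} IOPS")  # 0.6 IOPS per SAPS
```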
Microsoft Exchange: Even my friends in Redmond could not believe how awesome XIV was during ESRP testing. Five Exchange 2010 servers connected to a pair of XIV boxes using the new 2TB drawers managed 40,000 mailboxes at the high profile (0.15 IOPS per mailbox). Another client found that four XIV boxes (720 drives) were able to handle 60,000 mailboxes (5GB max), which would have taken over 4,000 drives if internal disk drives were used instead. Who said SANs are obsolete for MS Exchange?
Asynchronous Replication: IBM now has an "Async Calculator" to model and help design an XIV async replication solution. In general, dark fiber works best, and MPLS clouds had the worst results. The latest 10.2.2 microcode for the IBM XIV can now handle 10 Mbps at less than 250 msec roundtrip. During the initial sync between locations, IBM recommends setting the "schedule=never" to consume as much bandwidth as possible. If you don't trust the bandwidth measurements your telco provider is reporting, consider testing the bandwidth yourself with [iPerf] open source tool.
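The advice about consuming as much bandwidth as possible during the initial sync makes sense when you work out how long a full copy takes over a WAN link; here is a quick estimate with assumed numbers:

```python
# Time for the initial sync of a mirror, ignoring protocol overhead
capacity_tb = 50        # assumed amount of data to replicate
link_mbps   = 100       # assumed WAN link speed

seconds = (capacity_tb * 8 * 10**12) / (link_mbps * 10**6)
print(f"~{seconds / 86400:.0f} days at full line rate")   # roughly 46 days
```

At these assumed numbers, even a fully saturated link needs weeks for the first pass, which is why you want nothing throttling it.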
Several members of the XIV team thanked me for my April 5th post [Double Drive Failure Debunked: XIV Two Years Later]. Since April 5th, IBM has sold more XIV units this quarter than any prior quarters. I am glad to have helped!
Well, it's Tuesday again, and that means IBM announcements! Right on the heels of our big storage launch on February 9, today IBM announced some exciting options for its modular disk systems. Let's take a look:
2TB SATA-II drives
That's right, you can now DOUBLE your capacity with 2TB SATA type-II drives on the DS3950, DS4200, DS4700, DS5020, DS5100 and DS5300 disk controllers, as well as the DS4000 EXP420, EXP520, EXP810, EXP5000 and EXP5060 expansion drawers. Here are the Announcement Letters for the [HVEC] and [AAS] ordering systems.
300GB Solid State Drives
IBM also announces 300GB solid state drives (SSD) for the DS5100 and DS5300. These are four times larger than the 73GB drives IBM offered last year, for those workloads that need high read IOPS such as Online Transaction Processing (OLTP) and Enterprise Resource Planning (ERP) applications. Here is the [Announcement Letter].
New N series model N3400
For customers that need less than the minimum 21TB that our IBM Scale-Out Network Attached Storage (SONAS) can provide, IBM offers the new N3400 unified storage disk system, with support for NFS, CIFS, iSCSI and FCP. This is a 2U-high, 12-drive model that can be expanded up to 136 drives, basically doubling all the stats from last year's N3300 model. Fellow blogger Rich Swain (IBM) does a great job recapping the speeds and feeds over on his blog [News and Information about IBM N series].
It also appears that the reports and rumors of the death of the DS6800 are premature. Don't believe misleading statements from competitors, such as those written by fellow blogger BarryB (EMC), aka "the Storage Anarchist", in his latest post [Bring Out Your Dead], showing a cute little tombstone with "Feb 2010" on the bottom. Actually, if he had bothered to read IBM's [Announcement Letter], he would have realized that IBM plans to continue to sell these until June. Of course, IBM will continue to support both new and existing DS6800 customers for many years to come.
Technically, BarryB does not make any factually incorrect statements for me to correct on his blog. The idea that a product is "dead" is, of course, just opinion, and competitors poke fun at each other's announcements every day. One could argue that the EMC V-Max was "dead" after the ITG whitepaper [Cost/Benefit Case for IBM XIV Storage System - Comparing Costs for IBM XIV and EMC V-Max Systems] demonstrated back in July 2009 that the IBM XIV cost 63 percent less than a comparable EMC V-Max over a three-year total cost of ownership (TCO). The comparison was made with data from clients in a variety of industries, including manufacturing, health care, life sciences, telecommunications, financial services, and the public sector. This could explain why so many EMC customers are buying or investigating the IBM XIV and the rest of the IBM storage portfolio.
This week, Hitachi Ltd. announced their next generation disk storage virtualization array, the Virtual Storage Platform, following on the success of its USP V line. It didn't take long for fellow blogger Chuck Hollis (EMC) to comment on this in his blog post [Hitachi's New VSP: Separating The Wheat From The Chaff]. Here are some excerpts:
"Well, we all knew that Hitachi (through HDS and HP) would be announcing some sort of refresh to their high-end storage platform sooner or later.
As EMC is Hitachi's only viable competitor in this part of the market, I think people are expecting me to say something.
If you're a high-end storage kind of person, your universe is basically a binary star: EMC and Hitachi orbiting each other, with the interesting occasional sideshow from other vendors trying to claim relevance in this space."
Chuck implies that neither Hewlett-Packard (HP) nor Hitachi Data Systems (HDS) provides any value-add beyond the box manufactured by Hitachi Ltd., and so combines them into a single category. I suspect the HP and HDS folks might disagree with that opinion.
When I reminded Chuck that IBM was also a major player in the high-end disk space, his response included the following gem:
"Many of us in the storage industry believe that IBM currently does not field a competitive high-end storage platform. IDC market share numbers bear out this assertion, as you probably know."
While Chuck is certainly entitled to his own beliefs and opinions, believing the world is flat does not make it so. Certainly, I doubt IDC or any other market research firm has put out a survey asking "Do you think IBM offers a competitive high-end disk storage platform?" Of course, if Chuck is basing his opinion on anecdotal conversations with existing EMC customers, I can certainly see how he might have formed this misperception. However, IDC market share numbers don't support Chuck's assertion at all.
There is no industry-standard definition of what constitutes a "high-end" or "enterprise-class" disk system. Some define high-end as having the option for mainframe attachment via ESCON and/or FICON protocol. Others might focus on features, functionality, scalability and high 99.999+ percent availability. Others insist high-end requires block-oriented protocols like FC and iSCSI, rather than file-based protocols like NFS and CIFS.
For the most demanding mission-critical mix of random and sequential workloads, IBM offers the [IBM System Storage DS8000 series] high-end disk system which connects to mainframes and distributed servers, via FCP and FICON attachment, and supports a variety of drive types and RAID levels. The features that HP and HDS are touting today for the VSP are already available on the IBM DS8000, including sub-LUN automatic tiering between Solid-State drives and spinning disk, called [Easy Tier], thin provisioning, wide striping, point-in-time copies, and long distance synchronous and asynchronous replication.
There are lots of analysts that track market share for the IT storage industry, but since Chuck mentions [IDC] specifically, I reviewed the most recent IDC data, published a few weeks ago in their "IDC Worldwide Quarterly Disk Storage Systems Tracker" for 2Q 2010, representing April 1 to June 30, 2010 sales. Just in case any of the rankings have changed over time, I also looked at the previous four quarters: 2Q 2009, 3Q 2009, 4Q 2009 and 1Q 2010.
(Note: IDC considers its analysis proprietary; out of respect for their business model, I will not publish any of the actual facts and figures they have collected. If you would like to get any of the IDC data to form your own opinion, contact them directly.)
In the case of IDC, they divide the disk systems into three storage classes: entry-level, midrange and high-end. Their definition of "high-end" is external RAID-protected disk storage that sells for $250,000 USD or more, representing roughly 25 to 30 percent of the external disk storage market overall. Here are IDC's rankings of the four major players for high-end disk systems:
EMC
IBM
HDS
HP
By either measure of market share, units (disk systems) or revenue (US dollars), IDC reports that IBM high-end disk outsold both HDS and HP combined. This has been true for the past five quarters. If a smaller start-up vendor has single digit percent market share, I could accept it being counted as part of Chuck's "occasional sideshow from other vendors trying to claim relevance", but IBM high-end disk has consistently had 20 to 30 percent market share over the past five quarters!
Not all of these high-end disk systems are connected to mainframes. According to IDC data, only about 15 to 25 percent of these boxes are counted under their "Mainframe" topology.
Chuck further writes:
"It's reasonable to expect IBM to sell a respectable amount of storage with their mainframes using a protocol of their own design -- although IBM's two competitors in this rather proprietary space (notably EMC and Hitachi) sell more together than does IBM."
The IDC data doesn't support that claim either, Chuck. By either measure of market share, units (disk systems) or revenue (US dollars), IDC reports that IBM disk for mainframes outsold all other vendors (including EMC, HDS, and HP) combined. And again, this has been true for the past five quarters. Here is the IDC ranking for mainframe disk storage:
IBM
EMC
HDS
HP
IBM has over 50 percent market share in this case, primarily because IBM System Storage DS8000 is the industry leader in mainframe-related features and functions, and offers synergy with the rest of the z/Architecture stack.
So Chuck, I am not picking a fight with you or asking you to retract or correct your blog post. Your main theme, that the new VSP presents serious competition to EMC's VMAX high-end disk arrays, is certainly something I can agree with. Congratulations to HDS and HP for putting forth what looks like a viable alternative to EMC's VMAX.
To learn more about IBM's upcoming products, register for next week's webcast "Taming the Information Explosion with IBM Storage" featuring Dan Galvan, IBM Vice President, and Steve Duplessie, Senior Analyst and Founder of the Enterprise Strategy Group (ESG).
Over on the Tivoli Storage Blog, there is an exchange over the concept of a "Storage Hypervisor". This started with fellow IBMer Ron Riffe's blog post [Enabling Private IT for Storage Cloud -- Part I], with a promise to provide parts 2 and 3 in the next few weeks. Here's an excerpt:
"Storage resources are virtualized. Do you remember back when applications ran on machines that really were physical servers (all that “physical” stuff that kept everything in one place and slowed all your processes down)? Most folks are rapidly putting those days behind them.
In August, Gartner published a paper [Use Heterogeneous Storage Virtualization as a Bridge to the Cloud] that observed “Heterogeneous storage virtualization devices can consolidate a diverse storage infrastructure around a common access, management and provisioning point, and offer a bridge from traditional storage infrastructures to a private cloud storage environment” (there’s that “cloud” language). So, if I’m going to use a storage hypervisor as a first step toward cloud enabling my private storage environment, what differences should I expect? (good question, we get that one all the time!)
The basic idea behind hypervisors (server or storage) is that they allow you to gather up physical resources into a pool, and then consume virtual slices of that pool until it’s all gone (this is how you get the really high utilization). The kicker comes from being able to non-disruptively move those slices around. In the case of a storage hypervisor, you can move a slice (or virtual volume) from tier to tier, from vendor to vendor, and now, from site to site all while the applications are online and accessing the data. This opens up all kinds of use cases that have been described as “cloud”. One of the coolest is inter-site application migration.
A good storage hypervisor helps you be smart.
Application owners come to you for storage capacity because you’re responsible for the storage at your company. In the old days, if they requested 500GB of capacity, you allocated 500GB off of some tier-1 physical array – and there it sat. But then you discovered storage hypervisors! Now you tell that application owner he has 500GB of capacity… What he really has is a 500GB virtual volume that is thin provisioned, compressed, and backed by lower-tier disks. When he has a few data blocks that get really hot, the storage hypervisor dynamically moves just those blocks to higher tier storage like SSD’s. His virtual disk can be accessed anywhere across vendors, tiers and even datacenters. And in the background you have changed the vendor storage he is actually sitting on twice because you found a better supplier. But he doesn’t know any of this because he only sees the 500GB virtual volume you gave him. It’s 'in the cloud'."
"Let’s start with a quick walk down memory lane. Do you remember what your data protection environment looked like before virtualization? There was a server with an operating system and an application… and that thing had a backup agent on it to capture backup copies and send them someplace (most likely over an IP network) for safe keeping. It worked, but it took a lot of time to deploy and maintain all the agents, a lot of bandwidth to transmit the data, and a lot of disk or tapes to store it all. The topic of data protection has modernized quite a bit since then.
Fast forward to today. Modernization has come from three different sources – the server hypervisor, the storage hypervisor and the unified recovery manager. The end result is a data protection environment that captures all the data it needs in one coordinated snapshot action, efficiently stores those snapshots, and provides for recovery of just about any slice of data you could want. It’s quite the beautiful thing."
At this point, you might scratch your head and ask "Does this Storage Hypervisor exist, or is this just a theoretical exercise?" The answer, of course, is "Yes, it does exist!" Just like VMware offers vSphere and vCenter, IBM offers block-level disk virtualization through the SAN Volume Controller (SVC) and Storwize V7000 products, with full management support from Tivoli Storage Productivity Center Standard Edition.
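To make the pooling and transparent-migration behavior described above more concrete, here is a toy Python sketch of a thin-provisioned virtual volume. Everything in it (the class, the 4 MB extent size, the tier names) is my own illustration, not the actual SVC or Storwize implementation:

```python
BLOCK_SIZE = 4 * 1024 * 1024  # assume 4 MB extents, purely for illustration

class VirtualVolume:
    """Toy model of a thin-provisioned virtual volume."""

    def __init__(self, reported_gb):
        self.reported_bytes = reported_gb * 1024**3
        self.extents = {}  # extent index -> backing tier

    def write(self, offset, length, tier="nearline"):
        # Allocate physical extents lazily, only for the ranges touched.
        first = offset // BLOCK_SIZE
        last = (offset + length - 1) // BLOCK_SIZE
        for ext in range(first, last + 1):
            self.extents.setdefault(ext, tier)

    def promote(self, ext, tier="ssd"):
        # The hypervisor can move a hot extent to a faster tier (or another
        # vendor's array) without the application ever noticing.
        if ext in self.extents:
            self.extents[ext] = tier

    def physical_bytes(self):
        return len(self.extents) * BLOCK_SIZE

vol = VirtualVolume(500)       # the application owner sees 500 GB
vol.write(0, 10_000_000)       # write ~10 MB; only 3 extents are consumed
vol.promote(0)                 # hot data migrates up a tier transparently
print(vol.reported_bytes, vol.physical_bytes())
```

The point of the sketch is the shape of the idea: the volume reports its full size, consumes physical extents only as data is written, and can re-home an extent without the host noticing.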
SVC has supported every release of VMware since version 2.5. IBM is the leading reseller of VMware, so it makes sense for IBM and VMware development to collaborate and make sure all the products run smoothly together. SVC presents volumes that can be formatted with the VMFS file system to hold your VMDK files, accessible via the FCP protocol. IBM and VMware have some key synergies:
Management integration with Tivoli Storage Productivity Center and VMware vCenter plug-in
VAAI support: Hardware-assisted locking, hardware-assisted zeroing, and hardware-assisted copying. Some of the competitors, like EMC VPLEX, don't have this!
Space-efficient FlashCopy. Let's say you need 250 VM images, all running a particular level of Windows. Boot volumes of 20GB each would consume 5000GB (5 TB) of capacity. Instead, create a Golden Master volume, then take 249 copies with space-efficient FlashCopy, which only consumes space for the modified portions of the new volumes. For each copy, make the necessary changes, such as a unique hostname and IP address, changing only a few blocks of data each. The end result? 250 unique VM boot volumes in less than 25GB of space, a 200:1 reduction! (See the sketch after this list.)
Support for VMware's Site Recovery Manager using SVC's Metro Mirror or Global Mirror features for remote-distance replication.
Data center federation. SVC allows you to seamlessly do vMotion from one datacenter to another using its "stretched cluster" capability. Basically, SVC makes a single image of the volume available to both locations, and stores two physical copies, one in each location. You can lose either datacenter and still have uninterrupted access to your data. VMware's HA or Fault Tolerance features can kick in, same as usual.
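For the space-efficient FlashCopy scenario in the third bullet above, the arithmetic is easy to check. The only number I am adding is the assumed per-clone delta (a few blocks for hostname and IP changes):

```python
# Back-of-envelope check of the space-efficient FlashCopy example above.
vm_count = 250
boot_gb = 20

full_clones_gb = vm_count * boot_gb      # 5000 GB (5 TB) of full copies

golden_master_gb = boot_gb               # one fully provisioned volume
delta_mb_per_clone = 20                  # assumed per-clone change (a few blocks)
space_efficient_gb = golden_master_gb + (vm_count - 1) * delta_mb_per_clone / 1024

print(full_clones_gb, round(space_efficient_gb, 1))
# -> 5000 vs ~24.9 GB, roughly the 200:1 reduction cited
```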
But unlike tools that work only with VMware, IBM's storage hypervisor works with a variety of server virtualization technologies, including Microsoft Hyper-V, Xen, OracleVM, Linux KVM, PowerVM, z/VM and PR/SM. This is important, as a recent poll on the Hot Aisle blog indicates that [44 percent run 2 or more server hypervisors]!
Join the conversation! The virtual dialogue on this topic will continue in a [live group chat] this Friday, September 23, 2011 from 12 noon to 1pm EDT. Join me and about 20 other top storage bloggers, key industry analysts and IBM Storage subject matter experts to discuss storage hypervisors and get questions answered about improving your private storage environment.
I took over a hundred pictures at this event. Here are a few of my favorites from Monday and Tuesday.
The IBM Booth #1111 Moscone South
I spent most of my time at the booth in the exhibition area. It was a huge booth, covering various software offerings in the front, and servers and storage systems in the back. Here I am next to the "IBM Watson" simulator, which allowed people to play a game of Jeopardy! against Watson.
In the front was "EoS" which stands for "Exchanging Opinions for Solutions" -- an interactive screen developed by Somnio that allows people to enter questions and opinions and get crowd-sourced answers from people following the Twitter stream. The EoS was connected to the [IBM Mobile App] so people could follow the conversation.
IBM Customer Appreciation Events
On Monday evening we had some customer appreciation events. First was for IBM customers of "JD Edwards", which runs on the "IBM i" operating system on POWER servers. This was an elegant affair at the [Weinstein Gallery], surrounded by works of art by Pablo Picasso and Marc Chagall. One customer expressed concern that Oracle would functionally stabilize JD Edwards "World" software and force everyone to move over to "Enterprise One". I told him that I had seen the roadmap for "World" and there are three healthy releases planned for its future, so he should have nothing to worry about. IBM and Oracle will work together to make sure our mutual customers get the solutions they need.
Later, we went to the "Infusion" bar for another "IBM appreciation" event with a live band. Here's a Polaroid photo taken of me in the crowd.
Titan Gala Award Reception
On Tuesday night, Oracle gave out awards in 29 categories. IBM won three this year. I took a photo with the ladies from Beach Blanket Babylon, and a mermaid! Joining me to celebrate the awards were IBMers Carolann Kohler, Boyd Fenton, Sue Haad, and Susan Adomovich.
This was my first time attending Oracle OpenWorld, so I naively asked why there were only 29 categories and not an even 30. The IBMers joked that the 30th might as well have been "Best Server/Storage Platform for Integer Math", which Larry Ellison conceded IBM's POWER 795 server wins over Oracle's new SPARC T4 Supercluster. As Larry said during his keynote, "We still have some work to do to beat IBM!"
The event was held at San Francisco City Hall, where I got to walk the red carpet and enjoy lavish food and drink. I was even given a hand-rolled cigar! Thank you, Oracle! We are proud to be your "Diamond Partner", helping our mutual customers get the most out of our solutions.
The "Booth Babes" Controversy
At the EMC booth, these three lovely ladies, Jennifer, Tamara and Manuela, were just a few of the dozen so-called booth babes EMC hired from a local agency. Attendees with technical questions were directed to the EMC guys in the back of the booth, behind the wall.
IBM stopped using "booth babes" a long while ago. At IBM Booth #1111, we had a healthy balance of men and women executives, technical experts, and support staff.
A guy from EMC came over to our booth later to explain that EMC is at two other events this same week, and their technical staff is spread thin. EMC is a small company, and skilled technical people are in short supply. We get it. Not every IT vendor has an army of experts in every category like IBM.
I want to thank the IBM-Oracle Alliance team, especially Nancy Spurry and Carolann Kohler for having me involved in these events.
I'm down here in Australia, where the government has been stalled for the past two weeks, a period formally known as being managed by the [Caretaker government]. There is a gap between the outgoing administration and the incoming administration, and the caretaker government does as little as possible until the new regime takes over. They are still counting votes, including in some cases ballots known as "donkey votes", the Australian version of the hanging chad. Three independents are also trying to decide which major party they will support to finalize the process.
While we are on the topic of stalled government, I feel bad for the state of Virginia in the United States. One of their supposedly high-end, enterprise-class EMC Symmetrix DMX storage systems, supporting 26 different state agencies in Virginia, crashed on August 25th, and now, more than a week later, many of those agencies are still down, including the Department of Motor Vehicles and the Department of Taxation.
Many of the articles in the press about this event have focused on what it means for the reputation of EMC. Not surprisingly, EMC says that this failure is unprecedented, but really it is just one in a long series of failures from EMC. It reminds me of EMC's last public failure a few months ago, when a dual-controller CLARiiON halted another company's operations. There is nothing unique in the physical equipment itself; all IT gear can break or be taken down by some outside force, such as a natural disaster. The real question, though, is why haven't EMC and the state government been able to restore operations many days after the hardware was fixed?
In the Boston Globe, Zeus Kerravala, a data storage analyst at Yankee Group in Boston, is quoted as saying that such a high-profile breakdown could undermine EMC’s credibility with large businesses and government agencies. “I think it’s extremely important for them,’’ said Kerravala. “When you see a failure of this magnitude, and their inability to get a customer like the state of Virginia up and running almost immediately, all companies ought to look at that and raise their eyebrows.’’
Was the backup and disaster recovery solution capable of the scale and service level requirements needed by vital state agencies? Had they tested their backups to ensure they were running correctly, and had they tested their recovery plans? Were they monitoring the success of recent backup operations?
Eventually, the systems will be back up and running, fines and penalties will be paid, and perhaps the guy who chose to go with EMC might feel bad enough to give back that new set of golf clubs, or whatever ridiculously expensive gift EMC reps might offer to government officials these days to influence the purchase decision making process.
(Note: I am not accusing any government employee in particular working at the state of Virginia of any wrongdoing, and mention this only as a possibility of what might have happened. I am sure the media will dig into that possibility soon enough during their investigations, so no sense in me discussing that process any further.)
So what lessons can we learn from this?
Lesson 1: You don't just buy technology, you also are choosing to work with a particular vendor
IBM stands behind its products. Choosing a product strictly on its speeds and feeds misses the point. A study IBM and Mercer Consulting Group conducted back in 2007 found that only 20 percent of the purchase decision for storage was based on technical capabilities. The other 80 percent came from "wrapper attributes", such as who the vendor was, their reputation, and the service, support and warranty options.
Lesson 2: Losing a single disk system is a disaster, so disaster recovery plans should apply
IBM has a strong Business Continuity and Recovery Services (BCRS) group to help companies and government agencies develop their BC/DR plans. In the planning process, various possible incidents are identified, recovery point objectives (RPO) and recovery time objectives (RTO) are set, and appropriate action plans are documented for dealing with each one. For example, if the state of Virginia had an RPO of 48 hours and an RTO of 5 days, then when the failure occurred on August 25, they could have recovered data current as of August 23 (48 hours prior to the incident) and been up and running by August 30 (five days after the incident). I don't personally know what RPO and RTO they planned for, but it certainly seems they have missed both by now.
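As a minimal sketch of that RPO/RTO arithmetic, using the hypothetical plan figures from the example above (not Virginia's actual objectives):

```python
from datetime import datetime, timedelta

incident = datetime(2010, 8, 25)
rpo = timedelta(hours=48)   # maximum tolerable data loss
rto = timedelta(days=5)     # maximum tolerable downtime

recovery_point = incident - rpo     # oldest acceptable restored data
recovery_deadline = incident + rto  # latest acceptable return to service

print(recovery_point.date())     # 2010-08-23
print(recovery_deadline.date())  # 2010-08-30
```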
Lesson 3: BC/DR Plans only work if you practice them often enough
Sadly, many companies and government agencies make plans but never practice them, so they have no idea whether the plans will work as expected or are fundamentally flawed. Just as fire drills force everyone to stop what they are doing and vacate the office building, anyone with an IT department needs to practice BC/DR plans often enough to ensure that the plan itself is solid, and that the people involved know their respective roles in the recovery process.
Lesson 4: This can serve as a wake-up call to consider Cloud Computing as an alternative option
Are you still doing IT in your own organization? Do you feel all of the IT staff have been adequately trained for the job? If your biggest disk system completely failed, not just a minor single or double drive failure, but a huge EMC-like failure, would your IT department know how to recover in less than five days? Perhaps this will serve as a wake-up call to consider alternative IT delivery options. The advantage of big Cloud Service Providers (Microsoft, Google, Yahoo, Amazon, SalesForce.com and of course, IBM) is that they are big enough to have worked out all the BC/DR procedures, and have enough resources to switch over to in case any individual disk system fails.
To avoid overwhelming people with too many features and functions, IBM decided to keep things simple for the first release. Let's take a look:
Frames
The base frame (2231-IA3) supports a single collection, from as small as 3.6 TB to as large as 72 TB of usable capacity. You can attach one expansion frame (2231-IS3) that holds two additional collections, each with 63 TB of usable capacity. Disk capacity is increased in eight-drive (half-drawer) increments of 3.6 TB usable capacity each. A fully configured IA system (304 drives, 1 TB raw capacity per drive) provides 198 TB of usable capacity.
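Here is a quick arithmetic check of those figures; all of the numbers come straight from the announcement details above:

```python
HALF_DRAWER_TB = 3.6                 # usable TB per 8-drive increment

base_frame_max = 72                  # TB usable (2231-IA3, single collection)
per_collection_expansion = 63        # TB usable (2231-IS3 holds two collections)
full_system = base_frame_max + 2 * per_collection_expansion

increments_in_base = base_frame_max / HALF_DRAWER_TB
print(full_system, increments_in_base)  # 198 TB usable, 20 half-drawer increments
```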
Of course, that is just the disk side of the solution. Like its predecessor, the IBM System Storage DR550, the IA v1.1 can also attach to external tape storage to store and protect petabytes (PB) of archive data. Hundreds of different IBM and non-IBM tape drives and libraries are supported, so that this can be easily incorporated into existing tape environments.
Protection Levels
Each collection can be configured to one of three protection levels: basic, intermediate, and maximum.
Basic protection provides RAID protection of the data, with standard NFS group/user controls governing read and write access. This can be useful for databases that need full read/write access. Users can assign expiration dates, but in Basic mode they can delete data before the expiration date is reached.
Intermediate adds Non-Erasable Non-Rewriteable (NENR) protection against user actions to delete or modify protected data. However, similar to IBM N series "Enterprise SnapLock", intermediate mode allows authorized storage admins to clean up the mess, increase or reduce retention periods, and delete data if it is inadvertently protected. I often refer to this as "training wheels" for those who are trying to work out their workflow procedures before moving on to Maximum mode.
Maximum provides the strictest NENR protection for business, legal, government and industry requirements, comparable to IBM N series "Compliance SnapLock" mode, for data that traditionally was written to WORM optical media. Data cannot be deleted until the retention period ends. Retention periods of individual files and objects can be increased, but not decreased. Retention Hold (often referred to as Litigation Hold) can be used to keep a set of related data even longer in specific circumstances.
You can decide to upgrade your protection after data is written to a collection. Basic mode can be upgraded to Intermediate mode, for example, or Intermediate mode upgraded to Maximum.
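As a sketch of these semantics only, and emphatically not the Information Archive's actual implementation, the three levels and the upgrade-only rule might be modeled like this:

```python
from enum import Enum

class Level(Enum):
    BASIC = 1          # RAID + NFS permissions; users may delete early
    INTERMEDIATE = 2   # NENR, but admins can adjust or delete ("training wheels")
    MAXIMUM = 3        # strict NENR; retention can only grow

class Collection:
    def __init__(self, level):
        self.level = level

    def upgrade(self, new_level):
        # Protection can be raised after data is written, never lowered.
        if new_level.value < self.level.value:
            raise ValueError("protection level cannot be downgraded")
        self.level = new_level

    def can_delete_before_expiry(self, is_admin):
        if self.level is Level.BASIC:
            return True
        if self.level is Level.INTERMEDIATE:
            return is_admin   # admins can clean up inadvertent protection
        return False          # MAXIMUM: nobody, until retention ends

c = Collection(Level.BASIC)
c.upgrade(Level.INTERMEDIATE)                      # allowed
print(c.can_delete_before_expiry(is_admin=False))  # False
```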
Protocols
To keep things simple, v1.1 of the Information Archive supports only two industry-standard protocols: NFS and the SSAM API. The NFS option allows standard file commands to read and write data. The System Storage Archive Manager (SSAM) API allows a smooth transition from earlier IBM System Storage DR550 deployments. With this announcement, IBM will [discontinue selling the DR550 DR2 models].
As we say here at IBM, "Today is the best day to stop using EMC Centera." For more information, see the IBM [Announcement Letter].
"With Cisco Systems, EMC, and VMware teaming up to sell integrated IT stacks, Oracle buying Sun Microsystems to create its own integrated stacks, and IBM having sold integrated legacy system stacks and rolling in profits from them for decades, it was only a matter of time before other big IT players paired off."
Once again we are reminded that IBM, as an IT "supermarket", is able to deliver integrated software/server/storage solutions, and our competitors are scrambling to form their own alliances to be "more like IBM." This week, IBM announced new ordering options for storage software with System x servers, including BladeCenter blade servers and IntelliStation workstations. Here's a quick recap:
FastBack v6.1
IBM Tivoli Storage Manager FastBack v6.1 supports both Windows and Linux! FastBack is a data protection solution for ROBO (Remote Office, Branch Office) locations. It can protect Microsoft Exchange, Lotus Domino, DB2, and Oracle applications. FastBack can provide full volume-level recovery, as well as individual file recovery, and in some cases Bare Machine Recovery. FastBack v6.1 can run stand-alone, or be integrated with a full IBM Tivoli Storage Manager (TSM) unified recovery management solution.
FlashCopy Manager v2.1
FlashCopy Manager uses point-in-time copy capabilities, such as SnapShot or FlashCopy, to protect application data using an application-aware approach for Microsoft Exchange, Microsoft SQL Server, DB2, Oracle, and SAP. It can be used with IBM SAN Volume Controller (SVC), DS8000 series, DS5000 series, DS4000 series, DS3000 series, and XIV storage systems. When applicable, FlashCopy Manager coordinates its work with Microsoft's Volume Shadow Copy Service (VSS) interface. FlashCopy Manager can provide data protection using just point-in-time disk-resident copies, or can be integrated with a full IBM Tivoli Storage Manager (TSM) unified recovery management solution to move backup images to external storage pools, such as low-cost, energy-efficient tape cartridges.
General Parallel File System (GPFS) v3.3 Multiplatform
GPFS can support AIX, Linux, and Windows! Version 3.3 adds support for Windows Server 2008 on 64-bit chipset architectures from AMD and Intel. Now you can have a common GPFS cluster with AIX, Linux and Windows servers all sharing and accessing the same files. A GPFS cluster can have up to 256 file systems. Each file system can hold up to 1 billion files and up to 1PB of data, and can have up to 256 snapshots. GPFS can be used stand-alone, or integrated with a full IBM Tivoli Storage Manager (TSM) unified recovery management solution with parallel backup streams.
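As a small illustration, here is a hypothetical helper that checks a proposed file system layout against the v3.3 limits quoted above; the limits are from the text, the helper itself is my own:

```python
# Documented GPFS v3.3 per-cluster / per-file-system limits, per the text.
GPFS_V33_LIMITS = {
    "filesystems_per_cluster": 256,
    "files_per_filesystem": 1_000_000_000,
    "data_per_filesystem_pb": 1,
    "snapshots_per_filesystem": 256,
}

def layout_fits(num_filesystems, files_each, data_pb_each, snaps_each):
    # Returns True if the proposed layout stays within every limit.
    return (num_filesystems <= GPFS_V33_LIMITS["filesystems_per_cluster"]
            and files_each <= GPFS_V33_LIMITS["files_per_filesystem"]
            and data_pb_each <= GPFS_V33_LIMITS["data_per_filesystem_pb"]
            and snaps_each <= GPFS_V33_LIMITS["snapshots_per_filesystem"])

print(layout_fits(10, 500_000_000, 0.5, 64))  # True: well within limits
```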
For full details on these new ordering options, see the IBM [Press Release].
The marketshare data for external disk systems has been released by IDC for 4Q09. Overall, the market dropped 0.7 percent, comparing 4Q09 versus 4Q08. While EMC was quick to remind everyone that they were able to [maintain their #1 position] in the storage subset of "external disk systems", with the same 23.7 percent marketshare they had back in 4Q08 and revenues that were essentially flat, the real story concerns the shifts in the marketplace for the other major players. IBM grew revenue 9 percent, putting it nearly 5 points of marketshare ahead of HP. HP revenues dropped 7 percent, moving it further behind. Not mentioned in the [IBM Press Release] were NetApp and Dell, neck and neck for fourth place, with NetApp gaining 16.8 percent in revenues, while Dell dropped 13.5 percent. Both NetApp and Dell now have about 8 percent marketshare each. These top five storage vendors represent nearly 70 percent of the marketshare.
Given that HP is IBM's number one competitor, not just in storage but all things IT, this was a major win. Bob Evans from InformationWeek interviews my fifth-line manager, IBM executive Rod Adkins [IBM Claims Hardware Supremacy] where he shares his views and opinions about HP, Oracle-Sun, Cisco and Dell.
I'll add my two cents on what's going on:
Shift in Servers causes Shift in Storage
Hundreds of customers are moving away from HP and Sun over to IBM servers, and with them, are choosing IBM's storage offerings as well. IBM's rock-solid strategy (which I outlined in my post [Foundations and Flavorings]) has helped explain the different products and how they are positioned. HP's use of Itanium processors, and Sun's aging SPARC line, are both reason enough to switch to IBM's latest POWER7 processors, running AIX, IBM i (formerly i5/OS) and Linux operating systems.
Thunder in the Clouds
Some analysts predict that by 2013, one out of five companies won't even have their own IT assets. IBM supports all flavors of private, public and hybrid cloud computing models. IBM has its own strong set of offerings, is also the number one reseller of VMware, and has cloud partnerships with both Google and Amazon. HP and Microsoft have recently formed an alliance, but they have different takes on cloud computing. HP wants to be the "infrastructure" company, but Microsoft wants to focus on its ["three screens and a public cloud"] strategy. Microsoft has decided not to make its Azure Cloud operating system available for private cloud deployments. By contrast, IBM can start you with a private cloud, then help you transition to a hybrid cloud, and finally to a public cloud.
In the latest eX5 announcement, IBM's x86-based servers can run 78 percent more virtual machines per VMware license dollar. This will give IBM an advantage as HP shifts from Itanium to an all x86-based server line.
Network Attached Storage
There seems to be a shift away from FC and iSCSI towards NAS and FCoE storage networking protocols. This bodes ill for HP's acquisition of LeftHand and Dell's acquisition of EqualLogic. IBM's SONAS for large deployments, and N series for smaller deployments, will compete nicely against HP's StorageWorks X9000 system.
Storage on Paper no longer Eco-friendly
HP beats IBM when you include consumer products like printers, which some might consider "Storage on Paper". At IBM, we often joke that 96 percent of HP's profits come from over-priced ink cartridges. With the latest focus on the environment, people are printing less. I have been printing less myself, setting my default printer to generate a PDF file instead; there are several tools available for this, including [CutePDF] and [BullZip]. IBM employees are also switching from Microsoft Office to IBM's [Lotus Symphony], which has built-in "export-to-PDF" capability as well. People are also going to their local OfficeMax or CartridgeWorld to get their cartridges refilled, rather than purchase new ones. That has to be hurting HP's bottom line.
Don't Forget About Storage Management
The leading storage management suites today are IBM's Tivoli Storage Productivity Center and EMC's Control Center. HP's Storage Essentials doesn't quite beat either of these, and management software is growing in importance to more and more customers.
To learn more about IBM results 4Q09 and full-year 2009, see [Quarterly Earnings].
They say "Great Minds think alike" and that imitation is "the sincerest form of flattery." Both of these quotes came to mind when I read fellow blogger Chuck Hollis' (EMC) excellent April 7th blog post [The 10 Big Ideas That Are Shaping IT Infrastructure Today]. Not surprisingly, some of his thoughts are similar to those I had presented two weeks ago in my March 22nd post [Cloud Computing for Accountants]. Here are two charts that caught my eye:
Telephone Operators
On page 13 of my deck, I had an old black and white photo of telephone operators, as part of a section on the history of selecting "cloud" as the iconic graphic to represent all networks. Chuck has this same graphic on his chart titled "#1 The Industrialization of IT Infrastructure".
Looks like Chuck and I use the same "stock photo" search facility!
Arms Dealers
On page 45 of my deck, I had a list of major "arms dealers" that deliver the hardware and software components needed to build Cloud Computing. Chuck has a similar chart, titled "#2 The Consolidation of the IT Industry", but with some interesting differences.
Let's look at some of the key differences:
The left-to-right order is slightly different. I chose a 1-2-4-2-1 symmetrical pattern purely for aesthetic reasons. My presentation was to a bunch of accountants, so I was trying not to make it sound like an "infomercial" for IBM products and offerings. My sequence is roughly chronological: Oracle announced its intention to acquire Sun; then Cisco, VMware and EMC announced their VCE coalition; followed closely by Cisco, VMware and NetApp announcing that they also work well together; followed by the [HP extended alliance with Microsoft] on Jan 13, 2010. As the IT marketplace matures, more and more customers are looking for an IBM-like one-stop shopping experience, and various "mini-mall" alliances have formed to try to compete in this space.
I had HP and Microsoft in the same column, referring only to the above-mentioned January announcement. HP is all about private cloud hardware infrastructures, but Microsoft is all about "three screens and the public cloud", so I am not sure how well this alliance will work out from a Cloud Computing perspective. This was not to imply that the other stacks don't work well with Microsoft software; they all do. Perhaps to avoid that controversy, Chuck chose to highlight HP's acquisition of EDS services instead.
I used the vendor logos in their actual colors. Notice that the colors black, blue and red occur most often. These happen to be the three most popular ballpoint pen ink colors found on the very same paper documents these computer companies are trying to eliminate. Paper-less office, anyone? Chuck chose instead to colorize each stack with his own color scheme. While blue for IBM and orange for Sun Microsystems make some sense, it is not clear if he chose green for Cisco/VMware/EMC for any particular reason. Perhaps he was trying to subtly imply that the VCE stack is more energy efficient? Or maybe the green refers to money to indicate that the VCE stack is the most expensive? Either way, I would pit IBM's server/storage/software stack up against anything of comparable price from these other stacks in any energy efficiency bake-off.
What about the Cisco/VMware/NetApp combination? All three got together to assure customers this was a viable combination. IBM is the number one reseller of VMware, and VMware runs great with IBM's N series NAS storage, so I do not dispute Cisco's motivation here. It makes sense for Cisco to two-time EMC in this manner. Why should Cisco limit itself to a single storage supplier? Et tu, VMware? Having VMware choose NetApp over its parent company EMC was a bit of a shock. No surprise that Chuck left NetApp out of his chart.
No love for Dell? I give Dell credit for their work with Virtual Desktop Images (VDI), and for embracing Ubuntu Linux for their servers. Dell's acquisitions of EqualLogic iSCSI-based disk systems and Perot Systems for services are also worth noting. Dell used to resell some of EMC's gear, but perhaps that relationship continues to fade away, as I [predicted back in 2007]. Chuck's decision to leave Dell off his chart speaks volumes to where this relationship stands, and where it is going.
Perhaps we are all in just one big ["echo chamber"], as we are all coming up with similar observations, talking to similar customers, and reviewing similar market analyst reports. I am glad, at least this time, that Chuck and I for the most part agree where the marketplace is going. We live in interesting times!
It seems everyone is talking about stacks, appliances and clouds.
On StorageBod, fellow blogger Martin Glassborow has a post titled [Pancakes!] He feels that everyone from Hitachi to Oracle is turning into the IT equivalent of the International House of Pancakes [IHOP] offering integrated stacks of software, servers and storage.
Cisco introduced its "Unified Computing System" about a year ago, [reinventing the datacenter with an all-Ethernet approach]. Cisco does not offer its own hypervisor software nor storage, so there are two choices. First, Cisco has entered a joint venture, called Acadia, with VMware and EMC, to form the Virtual Computing Environment (VCE) coalition. The resulting stack was named Vblock, which one blogger had hyphenated as Vb-lock to raise awareness to the proprietary vendor lock-in nature of this stack. Second, Cisco, VMware and NetApp had a similar set of [Barney press releases] to announce a viable storage alternative to those not married to EMC.
"Only when it makes sense. Oracle/Sun has the better argument: when you know exactly what you want from your database, we’ll sell you an integrated appliance that will do exactly that. And it’s fine if you roll your own.
But those are industry-wide issues. There are UCS/VCE specific issue as well:
Cost. All the integration work among 3 different companies costs money. They aren’t replacing existing costs – they are adding costs. Without, in theory, charging more.
Lock-in. UCS/Vblock is, effectively, a mainframe with a network backplane.
Barriers to entry. Are there any? Cisco flagged hypervisor bypass and large memory support as unique value-add – and neither seems any more than a medium-term advantage.
BOT? Build, Operate, Transfer. In theory Vblocks are easier and faster to install and manage. But customers are asking that Acadia BOT their new Vblocks. The customer benefit over current integrator practice? Lower BOT costs? Or?
Price. The 3 most expensive IT vendors banding together?
Longevity. Industry “partnerships” don’t have a good record of long-term success. Each of these companies has its own competitive stresses and financial imperatives, and while the stars may be aligned today, where will they be in 3 years? Unless Cisco is piloting an eventual takeover."
Fellow blogger Bob Sutor (IBM) has an excellent post titled
[Appliances and Linux]. Here is an excerpt:
"In your kitchen you have special appliances that, presumably, do individual things well. Your refrigerator keeps things cold, your oven makes them hot, and your blender purees and liquifies them. There is room in a kitchen for each of these. They work individually but when you are making a meal they each have a role to play in creating the whole.
You could go out and buy the metal, glass, wires, electrical gadgets, and so on that you would need to make each appliance, but it is faster, cheaper, and undoubtedly safer to buy them already manufactured. For each device you have a choice of providers and you can pay more for additional features and quality.
In the IT world it is far more common to buy the bits and pieces that make up a final solution. That is, you might separately order the hardware components, the operating system, and the applications, and then have someone put them all together for you. If you have an existing configuration you might add more blades or more storage devices.
You don’t have to do this, however, in every situation. Just from a hardware perspective, you can buy a ready-made machine just waiting for the on switch to be flicked and the software installed. Conversely, you might get a pre-made software image with operating system and applications in place, ready to be provisioned to your choice of hardware. We can get even fancier in that the software image might be deployable onto a virtual machine and so be a ready made solution runnable on a cloud.
Thus in the IT world we can talk about hardware-only appliances, software-only appliances (often called virtual software appliances), and complete hardware and software combinations. The last is most comparable to that refrigerator or oven in your kitchen."
If your company was a restaurant, how many employees would you have on hand to produce your own electricity from gas generators, pump your own water from a well, and assemble your own toasters and blenders from wires and motors? I think this is why companies are re-thinking the way they do their own IT.
Rather than business-as-usual, perhaps a mix of pre-configured appliances, consisting of software, server and storage stacked to meet a specific workload, connected to public cloud utility companies, might be the better approach. By 2013, some analysts feel that as many as 20 percent of companies might not even have a traditional IT datacenter anymore.
Here is how Microsoft describes the two deployment models. The private cloud:
“By employing techniques like virtualization, automated management, and utility-billing models, IT managers can evolve the internal datacenter into a ‘private cloud’ that offers many of the performance, scalability, and cost-saving benefits associated with public clouds. Microsoft provides the foundation for private clouds with infrastructure solutions to match a range of customer sizes, needs and geographies.”
The public cloud:
“Cloud computing is expanding the traditional web-hosting model to a point where enterprises are able to off-load commodity applications to third-party service providers (hosters) and, in the near future, the Microsoft Azure Services Platform. Using Microsoft infrastructure software and Web-based applications, the public cloud allows companies to move applications between private and public clouds.”
Finally, I saw this from fellow blogger Barry Burke (EMC), aka the Storage Anarchist, titled [a walk through the clouds], which is really a two-part post.
The first part describes a possible future for EMC customers written by EMC employee David Meiri, envisioning a wonderful world with "No more Metas, Hypers, BIN Files...."
The vision is a pleasant one, and not far from reality. While EMC prefers to use the term "private cloud" to refer to both on-premises and off-premises-but-only-your-employees-can-VPN-to-it-and-your-IT-staff-still-manages-it flavors, the overall vision is available today from a variety of Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS) and Software-as-a-Service (SaaS) providers.
A good analogy for "private cloud" might be a corporate "intranet" that is accessible only within the company's firewall. Intranets allowed companies to post information for employees on internal websites, using standard HTML and the standard web browsers already deployed on most PCs and workstations. Web pages running on an intranet can easily be moved to an external-facing website without too much rework or trouble.
The second part has Barry claiming that EMC has made progress towards a "Virtual Storage Server" that might be announced at next month's EMC World conference.
Seriously?
When people hear "Storage Virtualization" most immediately think of the two market leaders, IBM SAN Volume Controller and Hitachi Data Systems (HDS) Universal Storage Platform (USP) products. Those with a tape bent might throw in IBM's TS7000 virtual tape libraries or Oracle/Sun's Virtual Storage Manager (VSM). And those focused on software-only solutions might recall Symantec's Veritas Volume Manager (VxVM), DataCore's SANsymphony, or FalconStor's IPStor products.
But what about EMC's failed attempt at storage virtualization, the Invista? After five years of failing to deliver value, EMC has so far publicized only ONE customer reference account, and I estimate that perhaps only a few dozen actual customers are still running on this platform. Compare that to IBM selling tens of thousands of SAN Volume Controllers, and HDS selling thousands of their various USP-V and USP-VM products, and you quickly realize that EMC has a lot of catching up to do. EMC first delivered Invista about 18 months after the IBM SAN Volume Controller, similar to their introduction of Atmos 18 months after our Scale-Out File Services (SoFS), and their latest CLARiiON-based V-Max coming out 18 months after IBM's XIV storage system.
So what will EMC's Invista follow-on "Virtual Storage Server" product look like? No idea. It might be another five years before you actually hear about a customer using it. But why wait for EMC to get their act together?
IBM offers solutions TODAY that can make life as easy as envisioned here. IBM offers integrated systems sold as ready-to-use appliances, customized "stacks" that can be built to handle particular workloads, residing on-premises or hosted at an IBM facility, and public cloud "as-a-service" offerings on the IBM Cloud.
Continuing my coverage of last week's Data Center Conference 2009, held Dec 1-4 in Las Vegas, I find some of the best sessions are the "user experience" talks by CIOs or IT directors who successfully completed a project and can show the benefits and pitfalls. Matt Merchant, CTO of General Electric (GE), gave an awesome presentation on tapping Cloud Storage to reduce their backup and archive costs.
They were concerned over their lack of e-Discovery tools, the high fixed cost and large administrative personnel load of their Veritas NetBackup software environment, the possibility of corrupted tape media, new compliance and regulatory issues, and the risk of moving unencrypted cartridges to remote vaulting facilities like Iron Mountain. I found it interesting that in their approach, backups are re-classified as archives after they are 35 days old.
GE's Disk-to-Disk-to-Tape (D2D2T) approach was costing them 50 cents per GB/month. Changing to D2D with remote replication addressed some of their concerns over tape, but was more costly at 79 cents per GB/month. Given that backup and archive represent 30 percent of their IT budget, the largest non-application expense, they reviewed their options:
Continue with their Traditional BU/Archive approach
Adopt Internal DAS using cheaper SATA disk drives
Implement an Internal Cloud
Use External Cloud services
General Electric had a long list of requirements:
99.99 percent Availability
99.999 percent Reliability and data integrity of the data
Location independent access
Meets HIPAA, SAS70, PCI compliance requirements
Secure 3rd party access
Eliminate GE operations management personnel
Documented APIs
Large file size uploads and resumable uploads (GE owns NBC Universal and some files are very large; movies can be 1.5 TB in size)
Encryption at rest
Multi-node capable; in other words, GE uploads data once and the Cloud Storage provider ensures that it is stored in two or more designated locations.
Child-level billing/management. Here, "child" refers to a department, division or other sub-division for reporting and management purposes.
Data integrity verification, such as with MD5 hash codes
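That last requirement is straightforward to illustrate with Python's standard hashlib; the file names here are just placeholders:

```python
import hashlib

def md5_of(path, chunk_size=8 * 1024 * 1024):
    # Stream the file in chunks; archive files can be 1.5 TB in size.
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

before = md5_of("asset.mov")             # hash before upload
# ... upload, then download (or ask the provider for its computed digest) ...
after = md5_of("asset.mov.downloaded")   # hash after retrieval
assert before == after, "integrity check failed"
```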
GE evaluated Nirvanix, Amazon S3 and EMC, and chose Nirvanix. They found Cloud storage worked best for backup, archive and large files, but was not a good fit for production/transactional data. However, they were not happy with proprietary APIs and vendor lock-in, so they wrote their own internal "Data Mover", called CloudStorage Manager, that works with five different cloud storage providers through an abstraction layer. It can handle upload rates of up to 8.8 GB per minute, and has a policy engine that does encryption, compression, and file-level single-instance storage (deduplication). Some lessons learned include:
Challenge the skeptics
Run small pilot projects to get familiar with the technology and provider
Socialize (have a beer or coffee with) your Security and Legal teams early and often
Consider using multiple cloud providers
Test many different scenarios
The end result? They now have Cloud-based backup and archive for their GE Corp, NBC Universal and GE Asset Management divisions running at only 32 cents per GB/month, representing a 40-60 percent savings over their previous methods. This includes backups of their external Web sites, archives of their digital and production assets, and RMAN backups, including development/staging databases. They plan to add out-of-region compliance archive in 2010. They also plan to monetize their intellectual property by offering CloudStorage Manager as a software offering for others.
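For anyone checking the math on those savings, the per-GB/month figures quoted in this post line up:

```python
cloud = 0.32            # cents per GB/month, via Nirvanix
d2d2t = 0.50            # original Disk-to-Disk-to-Tape approach
d2d_replicated = 0.79   # D2D with remote replication

for name, old in [("D2D2T", d2d2t), ("D2D replicated", d2d_replicated)]:
    savings = (old - cloud) / old * 100
    print(f"{name}: {savings:.0f}% savings")
# -> 36% and 59%, roughly the 40-60 percent range cited
```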
Monday morning of the [Oracle OpenWorld 2011] conference had Joe Tucci, CEO of EMC, present the keynote. Joe indicated that I.T. stands for "Industry in Transition". He had a chart that showed the history of IT, from the mainframe and mini-computer, to the PC and client/server era, and now to the Cloud era. He called these "waves of disruption". The catalysts for change are a "Budget Dilemma", "Information Deluge" and "Cyber Security". The keynote was very similar to what EMC presented at the [VMworld] conference earlier this summer.
"We have failed our customers. Over the past 10 years, they spend 73% to maintain their existing systems, and only 27% for new."
--- Joe Tucci, EMC
While many people equate "EMC" and "Failure", I believe Joe was referring not just to his own company, but most of the other IT vendors as well. Analysts predict that from January 1, 2010 to December 31, 2019, the world of stored data will grow from 0.9 ZB to 35.2 ZB, which represents a 44x increase. During that same time, IT staff is only expected to grow 50 percent. A staggering 90 percent of this data will be unstructured (non-database) content. Meanwhile, the average company gets cyber-attacked 300 times per week.
The answer is Cloud Computing. A few years ago, EMC was trying to get people to go the "private cloud" route instead of "public cloud"; they now have a more realistic "hybrid cloud" approach, similar to IBM's. Of the clients that EMC works with, 35 percent are implementing some form of cloud, and another 30 percent are planning to. The tenets of Hybrid Cloud are "Efficiency", "Control" and "Choice", which together equal "Agility".
Joe also mentioned that there is now a new "layering" for IT. Instead of storage, switches and servers, we have a cloud platform of shared resources, mobile devices like smartphones and tablets, and management.
Joe feels there is a massive opportunity where Cloud meets Big Data. A cute video showed a driver, wearing a motorcycle helmet so you can't see his face, get into an under-powered car with "VNXe" on the license plate. He punches "Cloud and Big Data" into the GPS navigation system and starts out on city streets. Then the car transforms into an under-utilized family sedan, "VNX", on a highway in the middle of the desert, then transforms into an over-priced sports car labeled "VMAX" as it climbs into the mountains surrounded by fog. The video borrowed the "CARS" theme from the videos IBM developed for its 2008 launch of the "Information Infrastructure" initiative.
EMC's Pat Gelsinger (CTO) and fellow blogger Chad Sakac did some demos of VMware vCenter. They called VMware vSphere "the Datacenter-wide OS", indicating that EMC storage has 75 points of integration with their "partner" (VMware is majority-owned by EMC, so I am not sure if partner is the right term). If you don't count Itanium, SPARC, POWER and IBM System z architectures, VMware enjoys over 80 percent marketshare for server virtualization.
(Full disclosure: IBM is the leading reseller of VMware.)
Pat claims that 40 percent of Oracle Apps at EMC run on VMware. For the longest time, Oracle refused to support its apps on VMware, but they relaxed this restrictive policy back in 2009. Today, nearly 25 percent of Oracle Apps run virtualized. EMC claims that they can support 5 million VMs on a single VMAX, and can generate 1 million IOPS from a single VMware ESX host.
Chad did a demo of vFabric, which allows a vCenter plug-in to spin up database instances of OracleDB, MySQL, Hadoop, PostgreSQL, and GreenPlum (GreenPlum is EMC's version of open-source PostgreSQL).
Chad showed that VMware vMotion could move workloads from servers without solid-state storage to servers that are flash-enabled. Lightweight workloads can be moved from DAS-enabled servers to compute-enabled storage devices like their EMC Isilon. (EMC acquired Isilon to offer their me-too version of IBM's Scale-Out NAS [SONAS] product.) EMC announced their first "Solid-State on a PCIe card" from their Project Lightning initiative. These are 320 GB capacity, so they sound like a me-too version of the [Fusion-io IOdrive] cards that IBM has had available for quite some time now.
Next, Pat and Chad talked about Big Data. The world is transforming from a manual scale-up model to an automated scale-out architecture, moving from "islands" to "pools". They used a cute example of car insurance: business analytics reviewed a safe driver's record, including the driver's Facebook and Twitter activity, and gave him a discount, then reviewed the bad driving habits of another driver and raised that driver's rates.
EMC announced their "GreenPlum Analytics Platform" (GAP?). I often tell people that if you want to predict what EMC will announce next, just look at what IBM announced 18 months ago. This new platform sounds like their me-too version of IBM's [Smart Analytics System].
After EMC, Judith Sim from Oracle introduced Ed Lee, the Mayor of San Francisco, which had just been named the "Greenest city in North America". He thanked the audience for contributing an estimated $100 million USD to his local economy. He was also happy that, by eliminating paper-based handouts and conference materials, the audience saved 1,636 trees.
Mark Hurd, formerly CEO of HP and now president of Oracle, gave some highlights of 2011 and of Oracle's strategy going forward. He said that Oracle plans to provide complete stacks, complete choice, and have each component of the stack be best-of-breed. In 2011, Oracle introduced the new MySQL 5.5 database, the Java 7 programming language, and the Solaris 11 operating system with the ZFS file system. Oracle spent $4 billion on R&D, and gained 20 percent growth in software licenses, which gave them 33 percent growth for fiscal year 2011. Oracle acquired Larry Ellison's [Pillar Data] storage company. Oracle also launched a [Database Appliance].
Thomas Kurian, another Oracle executive, finished the keynote session. He started with yet another chart showing the historical transition from Mainframe to Tablet. He indicated that leading-edge OracleDB and their Fusion middleware combined with industry standard hardware provides 5-30x faster queries, 4-10x less disk space, and simplifies the data center footprint. Their Exadata provides what he likes to call "Hierarchical Storage Management" between DRAM, Flash Solid-State, and spinning disk.
(Note: I started my career at IBM in 1986 working on a product called DFHSM, the Data Facility Hierarchical Storage Manager! It is now a vibrant component of DFSMS, part of IBM's z/OS mainframe operating system.)
Finally, Oracle announced their "Exadata Storage Expansion Rack". Many people realized that the Exadata was under-provisioned for storage, which explains why they have only sold a few thousand of them, so perhaps this new announcement is to address that deficiency.
If you are attending Oracle OpenWorld, here are sessions for Tuesday that IBM is featuring. Note the first two are Solution Spotlight sessions at the IBM Booth #1111 where I will be most of the time.
Securing Heterogeneous Database Infrastructures: A Comprehensive Approach
10/04/11, 9:45 a.m. -- 10:15 a.m., Solution Spotlight, Booth #1111 Moscone South
Presenter: Al Cooley, Director, IBM InfoSphere Guardium
IBM Business Analytics for Oracle Solutions
10/04/11, 2:15 p.m. -- 2:45 p.m., Solution Spotlight, Booth #1111 Moscone South
Presenter: John Strazdins, ERP Strategy Executive
Consolidated Global View of Your Customer with One Global Billing System
10/04/11, 3:30 p.m. -- 4:30 p.m., OpenWorld session #23650
Presenter: John Waterman, IBM
Enterprise billing system technologies are emerging to assist with global customer views and other challenges banks struggle with today. In this session, Citi discusses its challenges and successes in implementing a global billing system.
Upgrading Your Siebel CRM with Reduced Risk and Lowered Cost: Customer Successes
10/04/11, 3:30 p.m. -- 4:30 p.m., OpenWorld session #18222
Presenters: Arnaud Wingelaar, IBM; Geetha Sundaram; Agnes Zhang, Oracle
Hear customer success stories about upgrading Siebel CRM. Learn best practices on upgrading with lowered cost, or achieving a high-availability upgrade with zero downtime and reduced risk.
The old adage applies "You can't please everyone. Presidents can't. Prostitutes can't. Nobody can." I am reminded of that as I fielded a variety of interesting comments and emails about, of all things, my choice of order of things in recent blog posts.
Certainly, there are times when the order of things matters greatly. In my now-infamous blog post [Sock Sock Shoe Shoe], I use a scene from a popular 1970's television show to explain why compression should be done before encryption.
In my case, I put things in the order that I felt made sense to me, but not everyone agrees. Here are three recent examples:
Example 1:
In my blog post [Two IBMers Earn Their Retirement], I congratulated two of my colleagues on their retirement. Since their retirement happened on the same day, I decided to mention Mark Doumas first, and Jim Rymarczyk second.
However, one of my readers, who I will assume is a member of the unofficial "Jim Rymarczyk fan club", felt that I should have listed Jim first, as Jim served IBM for 44 years, and Mark only 32 years.
Really? I realize that movie stars insist on having their name listed first on the poster, but neither of these guys would be confused with George Clooney!
So, to Jim and all his fans out there, I assure you I did not mean this as a slight in any way. I have updated the post to indicate that the ordering was strictly alphabetical by last name.
Example 2:
In my blog post [IBM Announcements for February 2012], I presented tape products first, and disk second. Normally, I cover them alphabetically, disk first, then tape. However, I was asked to promote tape this year in preparation for the upcoming 60th anniversary of tape, so I mentioned the tape announcements first, and the disk second.
The feedback from the XIV community was swift. Many felt that I [buried the lede] by not mentioning the XIV Gen3 SSD caching first.
(Note: For those not familiar with the phrase used in journalism, 'burying the lede' refers to the failure to mention the most interesting or attention-grabbing elements of a story in the first paragraph. In American news journalism it is spelled "lede"; elsewhere it is spelled "lead". Major US dictionaries apparently accept both spellings.)
Technically, my lead paragraph stated clearly: "This week we have announcements for both disk and tape, but since 2012 is the 60th Diamond Anniversary for tape, I will start with tape systems first."
So, while I don't claim to be a journalist by any means, I think the lead paragraph accurately reflected that I would talk about both disk and tape products in the rest of the blog post, and a reader who didn't care to learn more about tape could bypass those sections and go directly to the section on disk instead.
Example 3:
I have had my head handed to me on a platter so many times here at IBM that I am considering installing a zipper around my neck. My friends in XIV land insisted that I write a follow-up post about XIV Gen3 SSD caching that had no mention of tape whatsoever. One suggestion was to compare and contrast XIV Gen3 SSD caching with EMC's announcement for VFCache. The result was my blog post [IBM XIV Gen3 SSD Caching versus EMC VFCache].
What could go wrong with an apples-to-oranges comparison of two different storage products, sprinkled with a small amount of FUD against a major competitor?
I had two complaints on this one. First, is the order of products in my side-by-side table of comparisons. I put EMC VFCache in the left column, and IBM XIV Gen3 SSD caching in the right. I meant nothing sinister by this. Alphabetically, EMC comes before IBM, and VFCache comes before XIV. Chronologically, EMC's announcement came out on Monday, and IBM's announcement came out the following day.
(Note: The term [sinister] comes from the Latin word sinistra, meaning "left hand". In the Middle Ages it was believed that a person writing with their left hand was possessed by the Devil, so left-handed people were considered evil. My poor mother was born left-handed and was forced as a child to write with her right hand to be accepted by society.)
Apparently, an unwritten convention within IBM is that comparison tables always have the newer product in the left column, followed by one or more older products to the right, or the IBM product in the left column, with one or more competitive alternatives to the right.
The second complaint came from a reader in the comments section: "... I think [what] you're doing is trying to ride EMC's release for your own marketing, did you really need to? XIV is an excellent array; adding SSD Cache to the Gen3 takes it further, Moshe would be fuming (which I think is a good thing), can you just stick to that and not ride someone else's wave?"
Seriously?
Both announcements relate to reducing the latency of read IOPS through the use of Solid State Drives. That both companies would announce these was no surprise to anyone at either company, as both IBM and EMC had been talking about their intent to do so since last year. IBM's announcement of XIV Gen3 SSD caching was certainly not in response to EMC's VFCache announcement, and I doubt EMC rushed out their VFCache announcement the day before as a pre-emptive strike against IBM's announcement of the XIV Gen3 SSD Caching feature.
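Since both features boil down to the same idea, a toy model may help: reads that hit the flash cache return quickly, and only misses pay the spinning-disk penalty. This is purely my own illustration, not either vendor's design, and the latency figures are assumptions picked only for the example.

    # A toy model (my own, hypothetical code) of an SSD read cache in front of
    # spinning disk. The latency numbers are illustrative assumptions.
    from collections import OrderedDict

    DISK_LATENCY_MS = 8.0    # assumed spinning-disk read time
    FLASH_LATENCY_MS = 0.2   # assumed SSD read time

    class SsdReadCache:
        def __init__(self, capacity_blocks):
            self.capacity = capacity_blocks
            self.blocks = OrderedDict()  # block id -> data, in LRU order

        def read(self, block_id):
            """Return the simulated latency, in ms, of reading one block."""
            if block_id in self.blocks:
                self.blocks.move_to_end(block_id)  # refresh LRU position
                return FLASH_LATENCY_MS            # hit: served from flash
            if len(self.blocks) >= self.capacity:
                self.blocks.popitem(last=False)    # evict least recently used
            self.blocks[block_id] = b"..."         # miss: fetch and cache
            return DISK_LATENCY_MS

    cache = SsdReadCache(capacity_blocks=1000)
    print(cache.read(42))  # 8.0 ms -- cold read goes to disk
    print(cache.read(42))  # 0.2 ms -- warm read comes from flash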
There you have it. I will gladly fix false or misleading information, but I am not going to re-arrange the order of things just to please some readers, only to have other readers complain that they liked it better in the original order. As always, feel free to comment on any of this in the section below.