Safe Harbor Statement: The information on IBM products is intended to outline IBM's general product direction and it should not be relied on in making a purchasing decision. The information on the new products is for informational purposes only and may not be incorporated into any contract. The information on IBM products is not a commitment, promise, or legal obligation to deliver any material, code, or functionality. The development, release, and timing of any features or functionality described for IBM products remains at IBM's sole discretion.
Tony Pearson is an active participant in local, regional, and industry-specific interests, and does not receive any special payments to mention them on this blog.
Tony Pearson receives part of the revenue proceeds from sales of books he has authored listed in the side panel.
Tony Pearson is not a medical doctor, and this blog does not reference any IBM product or service that is intended for use in the diagnosis, treatment, cure, prevention or monitoring of a disease or medical condition, unless otherwise specified on individual posts.
Tony Pearson is a Master Inventor and Senior Software Engineer for the IBM Storage product line at the
IBM Executive Briefing Center in Tucson Arizona, and featured contributor
to IBM's developerWorks. In 2016, Tony celebrates his 30th year anniversary with IBM Storage. He is
author of the Inside System Storage series of books. This blog is for the open exchange of ideas relating to storage and storage networking hardware, software and services. You can also follow him on Twitter @az990tony.
(Short URL for this blog: ibm.co/Pearson)
“In times of universal deceit, telling the truth will be a revolutionary act.”
-- George Orwell
Well, it has been over two years since I first covered IBM's acquisition of the XIV company. Amazingly, I still see a lot of misperceptions out in the blogosphere, especially those regarding double drive failures for the XIV storage system. Despite various attempts to [explain XIV resiliency] and to [dispel the rumors], there are still competitors making stuff up, putting fear, uncertainty and doubt into the minds of prospective XIV clients.
Clients love the IBM XIV storage system! In this economy, companies are not stupid. Before buying any enterprise-class disk system, they ask the tough questions, run evaluation tests, and all the other due diligence often referred to as "kicking the tires". Here is what some IBM clients have said about their XIV systems:
“3-5 minutes vs. 8-10 hours rebuild time...”
-- satisfied XIV client
“...we tested an entire module failure - all data is re-distributed in under 6 hours...only 3-5% performance degradation during rebuild...”
-- excited XIV client
“Not only did XIV meet our expectations, it greatly exceeded them...”
In this blog post, I hope to set the record straight. It is not my intent to embarrass anyone in particular, so I will instead focus on a fact-based approach.
Fact: IBM has sold THOUSANDS of XIV systems
XIV is "proven" technology with thousands of XIV systems in company data centers. And by systems, I mean full disk systems with 6 to 15 modules in a single rack, twelve drives per module. That equates to hundreds of thousands of disk drives in production TODAY, comparable to the number of disk drives studied by [Google], and [Carnegie Mellon University] that I discussed in my blog post [Fleet Cars and Skin Cells].
Fact: To date, no customer has lost data as a result of a Double Drive Failure on XIV storage system
This has always been true, both when XIV was a stand-alone company and since the IBM acquisition two years ago. When examining the resilience of an array to any single or multiple component failures, it's important to understand the architecture and design of the system and not assume all systems are alike. At its core, XIV is a grid-based storage system. IBM XIV does not use traditional RAID-5 or RAID-10 methods; instead, data is distributed across loosely connected data modules which act as independent building blocks. XIV divides each LUN into 1MB "chunks", and stores two copies of each chunk on separate drives in separate modules. We call this "RAID-X".
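The placement idea can be sketched in a few lines of Python. This is only a toy illustration of the RAID-X concept described above, not IBM's actual distribution algorithm; the function name and parameters are mine:

```python
import random

def place_chunks(lun_size_mb, modules=15, drives_per_module=12, seed=42):
    """Toy RAID-X-style placement: each 1MB chunk gets a primary and a
    mirror copy, always on drives in two *different* modules."""
    rng = random.Random(seed)
    placement = []
    for _chunk in range(lun_size_mb):
        m1 = rng.randrange(modules)
        m2 = rng.randrange(modules - 1)
        if m2 >= m1:          # shift so the mirror lands in a different module
            m2 += 1
        placement.append(((m1, rng.randrange(drives_per_module)),
                          (m2, rng.randrange(drives_per_module))))
    return placement

# Every chunk's two copies land in different modules by construction:
pairs = place_chunks(1024)
assert all(primary[0] != mirror[0] for primary, mirror in pairs)
```

Because no two copies of a chunk share a module, losing any single drive (or even a whole module) always leaves one good copy of every chunk elsewhere in the grid.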
Spreading all the data across many drives is not unique to XIV. Many disk systems, including EMC CLARiiON-based V-Max, HP EVA, and Hitachi Data Systems (HDS) USP-V, allow customers to get XIV-like performance by spreading LUNs across multiple RAID ranks. This is known in the industry as "wide-striping". Some vendors use the terms "metavolumes" or "extent pools" to refer to their implementations of wide-striping. Clients have coined their own phrases, such as "stripes across stripes", "plaid stripes", or "RAID 500". It is highly unlikely that an XIV will experience a double drive failure that ultimately requires recovery of files or LUNs, and XIV is substantially less vulnerable to data loss than an EVA, USP-V or V-Max configured in RAID-5. Fellow blogger Keith Stevenson (IBM) compared XIV's RAID-X design to other forms of RAID in his post [RAID in the 21st Century].
Fact: IBM XIV is designed to minimize the likelihood and impact of a double drive failure
The independent failure of two drives is a rare occurrence. More data has been lost from hash collisions on EMC Centera than from double drive failures on XIV, and hash collisions are also very rare. While the published worst-case time to re-protect from a 1TB drive failure on a fully-configured XIV is 30 minutes, field experience shows XIV regaining full redundancy in 12 minutes on average. That exposure window is roughly 40 times shorter than the typical 8-10 hour rebuild for a RAID-5 configuration.
A lot of bad things can happen in those 8-10 hours of traditional RAID rebuild. Performance can be seriously degraded. Other components may be affected, as they share cache, connected to the same backplane or bus, or co-dependent in some other manner. An engineer supporting the customer onsite during a RAID-5 rebuild might pull the wrong drive, thereby causing a double drive failure they were hoping to avoid. Having IBM XIV rebuild in only a few minutes addresses this "human factor".
In his post [XIV drive management], fellow blogger Jim Kelly (IBM) covers a variety of reasons why storage admins feel double drive failures are more than just random chance. XIV avoids the load stress normally associated with traditional RAID rebuild by evenly spreading the workload across all drives. This is known in the industry as "wear-leveling". When the first drive fails, the recovery is spread across the remaining 179 drives, so each drive processes only about 1 percent of the data. The [Ultrastar A7K1000] 1TB SATA disk drives that IBM uses from HGST have a specified mean time between failures [MTBF] of 1.2 million hours, which works out to about one drive failure every nine months in a 180-drive XIV system. However, field experience shows that an XIV system will experience, on average, one drive failure per 13 months, comparable to what companies experience with more robust Fibre Channel drives. That's innovative XIV wear-leveling at work!
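The arithmetic behind that nine-month figure is easy to verify:

```python
# Back-of-the-envelope check of the drive-failure interval quoted above.
mtbf_hours = 1_200_000        # specified MTBF of one Ultrastar A7K1000 drive
drives = 180                  # fully configured XIV: 15 modules x 12 drives

# With 180 independent drives, the whole system sees a failure roughly
# mtbf/180 hours apart on average.
hours_between_failures = mtbf_hours / drives      # about 6,667 hours
months_between_failures = hours_between_failures / (24 * 30)
print(round(months_between_failures, 1))          # -> 9.3, i.e. roughly nine months
```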
Fact: In the highly unlikely event that a DDF were to occur, you will have full read/write access to nearly all of your data on the XIV, all but a few GB.
Even though it has NEVER happened in the field, some clients and prospects are curious what a double drive failure on an XIV would look like. First, a critical alert message would be sent to both the client and IBM, and a "union list" is generated, identifying all the chunks the two failed drives have in common. The worst case on a 15-module XIV fully loaded with 79TB of data is approximately 9000 chunks, or 9GB of data. The remaining 78.991 TB of unaffected data are fully accessible for read or write. Any I/O request for a chunk on the "union list" receives no response, so there is no way for host applications to access outdated information or cause any corruption.
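The worst-case exposure above works out as follows:

```python
# Worst-case data at risk after a double drive failure, per the numbers above.
chunk_mb = 1                 # XIV chunk size
union_chunks = 9000          # worst-case size of the "union list"
total_tb = 79                # usable data on a fully loaded 15-module XIV

at_risk_gb = union_chunks * chunk_mb / 1000.0     # 9.0 GB at risk
unaffected_tb = total_tb - at_risk_gb / 1000.0    # 78.991 TB still fully accessible
print(at_risk_gb, unaffected_tb)
```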
(One blogger compared losing data on the XIV to drilling a hole through the phone book. Mathematically, the drill bit would be only 1/16th of an inch, or 1.6 millimeters for you folks outside the USA. Enough to knock out perhaps one character from a name or phone number on each page. If you have ever seen an actor in the movies look up a phone number in a telephone booth then yank out a page from the phone book, the XIV equivalent would be cutting out 1/8th of a page from an 1100-page phone book. In both cases, all of the rest of the unaffected information is fully accessible, and it is easy to identify which information is missing.)
If the second drive failed several minutes after the first drive, the process for full redundancy is already well under way. This means the union list is considerably shorter or completely empty, and substantially fewer chunks are impacted. Contrast this with RAID-5, where being 99 percent complete on the rebuild when the second drive fails is just as catastrophic as having both drives fail simultaneously.
Fact: After a DDF event, the files on these few GB can be identified for recovery.
Once IBM receives notification of a critical event, an IBM engineer immediately connects to the XIV using the remote service support method. There is no need to send someone physically onsite; the repair actions can be performed remotely. The IBM engineer has tools from HGST to recover, in most cases, all of the data.
Any "union" chunk that the HGST tools are unable to recover will be set to "media error" mode. The IBM engineer can provide the client a list of the XIV LUNs and LBAs that are on the "media error" list. From this list, the client can determine which hosts these LUNs are attached to, and run a file scan utility against the file systems that these LUNs represent. Files that get a media error during this scan will be listed as needing recovery. A chunk could contain several small files, or the chunk could be just part of a large file. To minimize time, the scans and recoveries can all be prioritized and performed in parallel across host systems zoned to these LUNs.
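As a sketch of how that triage might look, the snippet below groups a hypothetical media-error list by LUN so each host team knows which file systems to scan. The field names and values are purely illustrative, not an actual XIV CLI or API output format:

```python
from collections import defaultdict

# Hypothetical "media error" list as it might be handed over by the engineer.
media_errors = [
    {"lun": "LUN_004", "lba": 123456},
    {"lun": "LUN_004", "lba": 789012},
    {"lun": "LUN_019", "lba": 42},
]

# Group the suspect LBAs by LUN so scans can run in parallel per host.
by_lun = defaultdict(list)
for err in media_errors:
    by_lun[err["lun"]].append(err["lba"])

for lun, lbas in sorted(by_lun.items()):
    print(f"{lun}: {len(lbas)} suspect block(s) -> scan file systems on hosts mapped to this LUN")
```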
As with any file or volume recovery, keep in mind that these might be part of a larger consistency group, and that your recovery procedures should make sense for the applications involved. In any case, you are probably going to be up-and-running in less time with XIV than recovery from a RAID-5 double failure would take, and certainly nowhere near "beyond repair" that other vendors might have you believe.
Fact: This does not mean you can eliminate all Disaster Recovery planning!
To put this in perspective, you are more likely to lose XIV data from an earthquake, hurricane, fire or flood than from a double drive failure. As with any unlikely disaster, it is better to have a disaster recovery plan than to hope it never happens. All disk systems that sit on a single datacenter floor are vulnerable to such disasters.
For mission-critical applications, IBM recommends using disk mirroring capability. IBM XIV storage system offers synchronous and asynchronous mirroring natively, both included at no additional charge.
Here I am, day 11 of a 17-day business trip, on my last leg of the trip this week, in Kuala Lumpur in Malaysia. I have been flooded with requests to give my take on EMC's latest re-interpretation of storage virtualization, VPLEX.
I'll leave it to my fellow IBM Master Inventor Barry Whyte to cover the detailed technical side-by-side comparison. Instead, I will focus on the business side of things, using Simon Sinek's Why-How-What sequence. Here is a [TED video] from Garr Reynolds's post
[The importance of starting from Why].
Let's start with the problem we are trying to solve.
Problem: migration from old gear to new gear, old technology to new technology, from one vendor to another vendor, is disruptive, time-consuming and painful.
Given that IT storage is typically replaced every 3-5 years, pretty much every company with an internal IT department has this problem; the exceptions are companies that don't last that long, and those that use public cloud solutions. IT storage can be expensive, so companies would like their new purchases to be fully utilized on day 1, and be completely empty on day 1500 when the lease expires. I have spoken to clients who have spent 6-9 months planning for the replacement or removal of a storage array.
A solution to make the data migration non-disruptive would benefit the clients (make it easier for their IT staff to keep their data center modern and current) as well as the vendors (reduce the obstacle of selling and deploying new features and functions). Storage virtualization can be employed to help solve this problem. I define virtualization as "technology that makes one set of resources look and feel like a different set of resources, preferably with more desirable characteristics." By making different storage resources, old and new, look and feel like a single type of resource, migration can be performed without disrupting applications.
Before VPLEX, here is a breakdown of each solution:
IBM SAN Volume Controller
- What it offers: Non-disruptive tech refresh, and a unified platform to provide management and functionality across heterogeneous storage.
- How: New in-band storage virtualization device.

HDS USP-V and USP-VM
- What it offers: Non-disruptive tech refresh, and a unified platform to provide management and functionality between internal tier-1 HDS storage and external tier-2 heterogeneous storage.
- How: Add in-band storage virtualization to existing storage array.

EMC Invista
- What it offers: Non-disruptive tech refresh, with a unified multi-pathing driver that allows host attachment of heterogeneous storage.
- How: New out-of-band storage virtualization device with new "smart" SAN switches.
For IBM, the motivation was clear: Protect customers existing investment in older storage arrays and introduce new IBM storage with a solution that allows both to be managed with a single set of interfaces and provide a common set of functionality, improving capacity utilization and availability. IBM SAN Volume Controller eliminated vendor lock-in, providing clients choice in multi-pathing driver, and allowing any-to-any migration and copy services. For example, IBM SVC can be used to help migrate data from an old HDS USP-V to a new HDS USP-V.
With EMC, however, the motivation appeared to be protecting software revenues from their PowerPath multi-pathing driver, TimeFinder and SRDF copy services. Back in 2005, when EMC Invista was first announced, these three software products represented 60 percent of EMC's bottom-line profit. (Ok, I made that last part up, but you get my point! EMC charges a lot for these.)
Back in 2006, fellow blogger Chuck Hollis (EMC) suggested that SVC was just a [bump in the wire] which could not possibly improve performance of existing disk arrays. IBM showed clients that putting cache (SVC) in front of other cache (back-end devices) does indeed improve performance, in the same way that multi-core processors successfully use L1/L2/L3 cache. Now, EMC is claiming their cache-based VPLEX improves performance of back-end disk. My how EMC's story has changed!
So now, EMC announces VPLEX, which sports a blend of SVC-like and Invista-like characteristics. Based on blogs, tweets and publicly available materials I found on EMC's website, I have been able to determine the following comparison table. (Of course, VPLEX is not yet generally available, so what is eventually delivered may differ.)
Scalability
- IBM SVC: Scalable, 1 to 4 node-pairs.
- EMC Invista: One size fits all, single pair of CPCs.
- EMC VPLEX: SVC-like, 1 to 4 director-pairs.

SAN switch support
- IBM SVC: Works with any SAN switches or directors.
- EMC Invista: Required special "smart" switches (vendor lock-in).
- EMC VPLEX: SVC-like, works with any SAN switches or directors.

Multi-pathing drivers
- IBM SVC: Broad selection, including the IBM Subsystem Device Driver (SDD) offered at no additional charge, as well as OS-native drivers Windows MPIO, AIX MPIO, Solaris MPxIO, HP-UX PV-Links, VMware MPP, Linux DM-MP, and the commercial third-party driver Symantec DMP.
- EMC Invista: Limited selection, with focus on the priced PowerPath driver.
- EMC VPLEX: Invista-like, PowerPath and Windows MPIO.

Cache
- IBM SVC: Read cache, and choice of fast-write or write-through cache, offering the ability to improve performance.
- EMC Invista: No cache. Its Split-Path architecture cracked open Fibre Channel packets in flight, delayed every I/O by 20 nanoseconds, and redirected modified packets to the appropriate physical device.
- EMC VPLEX: SVC-like, read and write-through cache, offering the ability to improve performance.

Space-efficient point-in-time copies
- IBM SVC: FlashCopy supports up to 256 space-efficient targets, copies of copies, read-only or writeable, and incremental persistent pairs.
- EMC VPLEX: Like Invista, no.

Remote distance mirror
- IBM SVC: Choice of SVC Metro Mirror (synchronous up to 300km) and Global Mirror (asynchronous), or use the functionality of the back-end storage arrays.
- EMC Invista: No native support; use functionality of back-end storage arrays, or purchase a separate product called EMC RecoverPoint to cover this lack of functionality.
- EMC VPLEX: Limited synchronous remote-distance mirror within VPLEX (up to 100km only), no native asynchronous support; use functionality of back-end storage arrays.

Thin provisioning
- IBM SVC: Provides thin provisioning to devices that don't offer this natively.
- EMC VPLEX: Like Invista, no.

Two-site concurrent access
- IBM SVC: Split-Cluster allows concurrent read/write access of data from hosts at two different locations several miles apart.
- EMC Invista: I don't think so.
- EMC VPLEX: VPLEX-Metro, similar in concept but implemented differently.

Non-disruptive tech refresh
- IBM SVC: Can upgrade or replace storage arrays, SAN switches, and even the SVC nodes' software AND hardware themselves, non-disruptively.
- EMC Invista: Tech refresh for storage arrays, but not for Invista CPCs.
- EMC VPLEX: Tech refresh of back-end devices, and upgrade of VPLEX software, non-disruptively. Not clear if VPLEX engines themselves can be upgraded non-disruptively like the SVC.

Heterogeneous storage support
- IBM SVC: Broad support of over 140 different storage models from all major vendors, including all CLARiiON, Symmetrix and VMAX from EMC, and storage from many smaller startups you may not have heard of.
- EMC VPLEX: Invista-like. VPLEX claims to support a variety of arrays from a variety of vendors, but as far as I can find, only DS8000 is supported from the list of IBM devices. Fellow blogger Barry Burke (EMC) suggests [putting SVC between VPLEX and third party storage devices] to get the heterogeneous coverage most companies demand.

Back-end storage requirement
- IBM SVC: Must define quorum disks on any IBM or non-IBM back-end storage array. SVC can run entirely on non-IBM storage arrays.
- EMC VPLEX: HP SVSP-like, requires at least one EMC storage array to hold metadata.

Solid-state drive (SSD) support
- IBM SVC: The 2145-CF8 model supports up to four solid-state drives (SSD) per node that can be treated as managed disk to store end-user data.
- EMC VPLEX: Invista-like. VPLEX has an internal 30GB SSD, but this is used only for the operating system and logs, not for end-user data.
In-band virtualization solutions from IBM and HDS dominate the market. Being able to migrate data from old devices to new ones non-disruptively turned out to be only the [tip of the iceberg] of benefits from storage virtualization. In today's highly virtualized server environment, being able to non-disruptively migrate data comes in handy all the time. SVC is one of the best storage solutions for VMware, Hyper-V, XEN and PowerVM environments. EMC watched and learned in the shadows, taking notes of what people like about the SVC, and decided to follow IBM's time-tested leadership to provide a similar offering.
EMC re-invented the wheel, and it is round. On a scale from Invista (zero) to SVC (ten), I give EMC's new VPLEX a six.
Well, it's Wednesday, and you know what that means... IBM Announcements!
(Actually most IBM announcements are on Tuesdays, but IBM gave me extra time to recover from my trip to Europe!)
Today, IBM announced [IBM PureSystems], a new family of expert-integrated systems that combine storage, servers, networking, and software, based on IBM's decades of experience in the IT industry. You can register for the [Launch Event] today (April 11) at 2pm EDT, and download the companion "Integrated Expertise" event app for Apple, Android or Blackberry smartphones.
(If you are thinking, "Hey, wait a minute, hasn't this been done before?" you are not alone. Yes, IBM introduced the System/360 back in 1964, and the AS/400 back in 1988, so today's announcement is right on schedule for this 24-year cycle. Based on IBM's past success in this area, others have followed, most recently, Oracle, HP and Cisco.)
Initially, there are two offerings:
IBM PureFlex™ System
IBM PureFlex is like IaaS-in-a-box, allowing you to manage the system as a pool of virtual resources. It can be used for private cloud deployments, hybrid cloud deployments, or by service providers to offer public cloud solutions. IBM drinks its own champagne, and will have no problem integrating these into its [IBM SmartCloud] offerings.
To simplify ordering, the IBM PureFlex comes in three tee-shirt sizes: Express, Standard and Enterprise.
IBM PureFlex is based on a 10U-high, 19-inch wide, standard rack-mountable chassis that holds 14 bays, organized in a 7 by 2 matrix. Unlike BladeCenter, where blades are inserted vertically, the IBM PureFlex nodes are horizontal. Some of the nodes take up a single bay (half-wide), but a few are full-wide, taking up two bays across the full 19-inch width of the chassis. Compute and storage snap in the front, while power supplies, fans, and networking snap in the back. You can fit up to four chassis in a standard 42U rack.
Unlike competitive offerings, IBM does not limit you to x86 architectures. Both x86 and POWER-based compute nodes can be mixed into a single chassis. Out of the box, the IBM PureFlex supports four operating systems (AIX, IBM i, Linux and Windows), four server hypervisors (Hyper-V, Linux KVM, PowerVM, and VMware), and two storage hypervisors (SAN Volume Controller and Storwize V7000).
There are a variety of storage options for this. IBM will offer SSD and HDD inside the compute nodes themselves, direct-attached storage nodes, and an integrated version of the Storwize V7000 disk system. Of course, every IBM System Storage product is supported as external storage. Since Storwize V7000 and SAN Volume Controller support external virtualization, many non-IBM devices will be supported automatically as well.
Networking is also optimized, with options for 10Gb and 40Gb Ethernet/FCoE, 40Gb and 56Gb Infiniband, 8Gbps and 16Gbps Fibre Channel. Much of the networking traffic can be handled within the chassis, to minimize traffic on external switches and directors.
For management, IBM offers the Flex System Manager, that allows you to manage all the resources from a single pane of glass. The goal is to greatly simplify the IT lifecycle experience of procurement, installation, deployment and maintenance.
IBM PureApplication™ System
IBM PureApplication is like PaaS-in-a-box. Based on the IBM PureFlex infrastructure, the IBM PureApplication adds additional software layers focused on transactional web, business logic, and database workloads. Initially, it will offer two platforms: Linux platform based on x86 processors, Linux KVM and Red Hat Enterprise Linux (RHEL); and a UNIX platform based on POWER7 processors, PowerVM and AIX operating system. It will be offered in four tee-shirt sizes (small, medium, large and extra large).
In addition to having IBM's middleware like DB2 and WebSphere optimized for this platform, over 600 companies will announce this week that they will support and participate in the IBM PureSystems ecosystem as well. Already, there are 150 "Patterns of Expertise" ready to deploy from IBM PureSystem Centre, a kind of "data center app store", borrowing an idea used today with smartphones.
By packaging applications in this manner, workloads can easily shift between private, hybrid and public clouds.
If you are unhappy with the inflexibility of your VCE Vblock, HP Integrity, or Oracle ExaLogic, talk to your local IBM Business Partner or Sales Representative. We might be able to buy your boat anchor off your hands, as part of an IBM PureSystems sale, with an attractive IBM Global Financing plan.
Continuing my coverage of the 30th annual [Data Center Conference], here is a recap of Wednesday morning sessions.
A Data Center Perspective on MegaVendors
The morning started with a keynote session. The analyst felt that the most strategic or disruptive companies of the past few decades were IBM, HP, Cisco, SAP, Oracle, Apple and Google. Of these, he focused on the first three, which he termed the "Megavendors", presented in alphabetical order.
Cisco enjoys high-margins and a loyal customer base with Ethernet switch gear. Their new strategy to sell UP and ACROSS the stack moves them into lower-margin business like servers. Their strong agenda with NetApp is not in sync with their partnership with EMC. They recently had senior management turn-over.
HP enjoys a large customer base and is recognized for good design and manufacturing capabilities. Their challenges are mostly organizational, distracted by changes at the top and an untested and ever-changing vision, shifting gears and messages too often. Concerns over the Itanium have not helped them lately.
IBM defies simple description. One can easily recognize Cisco as an "Ethernet Switch" company, HP as a "Printer Company", Oracle as a "Database Company", but you can't say that IBM is an "XYZ" company, as it has re-invented itself successfully over its past 100 years, with a strong focus on client relationships. IBM enjoys high margins, a sustainable cost structure, huge resources, a proficient sales team, and is recognized for its innovation with a strong IBM Research division. Their "Smarter Planet" vision has been effective in supporting their individual brands and unlocking new opportunities. IBM's focus on growth markets takes advantage of their global reach.
His final advice was to look for "good enough" solutions that are "built for change" rather than "built to last".
Chris works in the Data Center Management and Optimization Services team. IBM owns and/or manages over 425 data centers, representing over 8 million square feet of floorspace. This includes managing 13 million desktops, 325,000 x86 and UNIX server images, and 1,235 mainframes. IBM is able to pool resources and segment the complexity for flexible resource balancing.
Chris gave an example of a company that selected a Cloud Compute service provider on the East coast and a Cloud Storage provider on the West coast, both for offering low rates, but was disappointed in the latency between the two.
Chris asked "How did 5 percent utilization on x86 servers ever become acceptable?" When IBM is brought in to manage a data center, it takes a "No Server Left Behind" approach to reduce risk and allow for a strong focus on end-user transition. Each server is evaluated for its current utilization:
- 0 percent: Amazingly, many servers are unused. These are recycled properly.
- 1 to 19 percent: Workload is virtualized and moved to a new server.
- 20 to 39 percent: Use IBM's Active Energy Manager to monitor the server.
- 40 to 59 percent: Add more VMs to this virtualized server.
- Over 60 percent: Manage the workload balance on this server.
This approach allows IBM to achieve a 60 to 70 percent utilization average on x86 machines, with an ROI payback period of 6 to 18 months, and 2x-3x increase of servers-managed-per-FTE.
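The utilization triage above can be summarized in a few lines of Python. The function name, thresholds at range boundaries, and return strings are my own paraphrase of the list, not an IBM tool:

```python
def triage(utilization_pct):
    """Toy version of the 'No Server Left Behind' triage described above."""
    if utilization_pct == 0:
        return "recycle the unused server"
    if utilization_pct < 20:
        return "virtualize workload and move it to a new server"
    if utilization_pct < 40:
        return "monitor with Active Energy Manager"
    if utilization_pct < 60:
        return "add more VMs to this virtualized server"
    return "manage the workload balance on this server"

print(triage(5))   # -> virtualize workload and move it to a new server
```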
Storage is classified using Information Lifecycle Management (ILM) best practices, using automation with pre-defined data placement and movement policies. This allows only 5 percent of data to be on Tier-1, 15 percent on Tier-2, 15 percent on Tier-3, and 65 percent on Tier-4 storage.
Chris recommends adopting IT Service Management, and to shift away from one-off builds, stand-alone apps, and siloed cost management structures, and over to standardization and shared resources.
You may have heard of "Follow-the-sun" but have you heard of "Follow-the-moon"? Global companies often establish "follow-the-sun" for customer service, re-directing phone calls to be handled by people in countries during their respective daytime hours. In the same manner, server and storage virtualization allows workloads to be moved to data centers during night-time hours, following the moon, to take advantage of "free cooling" using outside air instead of computer room air conditioning (CRAC).
Since 2007, IBM has been able to double computer processing capability without increasing energy consumption or carbon gas emissions.
It's Wednesday, Day 3, and I can tell already that the attendees are suffering from "information overload".
This post concludes my series of posts on Oracle OpenWorld 2011 conference. Here are some pictures from Wednesday and Thursday.
IBM as the yardstick by which everyone measures against
Our friends at Violin Memory mentioned the results of our joint success with IBM GPFS, scanning 10 billion files in less than an hour. (Their booth must have been slow, because members of their team spent a lot of time in our IBM booth!)
In fact, it seemed every company compared themselves to IBM in one fashion or another. Larry said that "IBM is a great company" and mentioned the IBM systems several times in comparisons to Oracle's newly announced hardware offerings.
Larry's Sailing Vessel
When things slowed down, I took a walk to see the other parts of the exhibition area. In the Moscone West building was Larry's catamaran that won [last year's America's Cup].
I used to sail myself, and have been part of crews in sailing races in both Japan and Australia. A few years ago, I watched the America's Cup time trials in New Zealand.
On the Streets of San Francisco
On the streets, IBM had advertised some of its products in a manner that thousands of attendees would see every day. Here we have some factoids related to IBM Netezza and DB2 database on POWER servers. We were very careful not to mention either product in the IBM Booth itself, as we all understand that IBM is a guest in Oracle's house this week. We certainly don't want to do anything to upset Larry in any way to make him treat IBM like he treated HP last year, or Salesforce.com this year.
Rest in Peace, Steve Jobs, 1955-2011
On Wednesday evening at Oracle OpenWorld, we were tearing down the booth when we heard that Apple co-founder Steve Jobs had passed away. This is truly a loss for the entire IT industry. I never met Steve in person, nor have I been to any Apple conferences like MacWorld that he spoke at.
At various keynote sessions, Larry Ellison compared his Oracle products to those of Apple, Inc., suggesting that Oracle is the "Apple for the Enterprise".
On our way back to the Hilton hotel on O'Farrell, there was a candlelight vigil at the Apple Store near Union Square. People left sticky notes on the glass window.
There were a lot of tributes to Steve Jobs, but I liked this 15-minute video of his 2005 Commencement Speech at Stanford University titled [How to Live before you Die].
This will be one of those moments where years later, many people will remember exactly where they were, and what they were doing, when they heard the news. For many, that news came as tweets or text messages on the very iPhones and iPads he helped design.
Rock Concert - Wednesday night
On Wednesday evening, I joined thousands of other attendees on Treasure Island to hear and watch Sting, Tom Petty and the Heartbreakers, and the English Beat in concert. It was cold and dark, but we all had a good time. Needless to say, I didn't make it to Marc Benioff's 8:00am Thursday morning session!
A word of advice: If you go to an evening rock concert at Treasure Island, dress warmly!
Despite the sad news about Steve Jobs, I had a great time at this conference. I learned a lot about what other IT vendors are doing, talked to dozens of IBM clients at the booth, and got to make some new friends that work in other parts of IBM.
(FTC Disclosure: I work for IBM. IBM and Apple are technology partners. I proudly own an Apple iPod, several Mac Mini computers and shares of stock in both IBM and Apple, Inc.)
My series last week on IBM Watson (which you can read [here], [here], [here], and [here]) brought attention to IBM's Scale-Out Network Attached Storage [SONAS]. IBM Watson used a customized version of SONAS technology for its internal storage, and like most of the components of IBM Watson, IBM SONAS is commercially available as a stand-alone product.
Like many IBM products, SONAS has gone through various name changes. First introduced by Linda Sanford at an IBM SHARE conference in 2000 under the IBM Research codename Storage Tank, it was then delivered as a software-only offering SAN File System, then as a services offering Scale-out File Services (SoFS), and now as an integrated system appliance, SONAS, in IBM's Cloud Services and Systems portfolio.
If you are not familiar with SONAS, here are a few of my previous posts that go into more detail:
This week, IBM announces that SONAS has set a world record benchmark for performance, [a whopping 403,326 IOPS for a single file system]. The results are based on comparisons of publicly available information from Standard Performance Evaluation Corporation [SPEC], a prominent performance standardization organization with more than 60 member companies. SPEC publishes hundreds of different performance results each quarter covering a wide range of system performance disciplines (CPU, memory, power, and many more). SPECsfs2008_nfs.v3 is the industry-standard benchmark for NAS systems using the NFS protocol.
(Disclaimer: Your mileage may vary. As with any performance benchmark, the SPECsfs benchmark does not replicate any single workload or particular application. Rather, it encapsulates scores of typical activities on a NAS storage system. SPECsfs is based on a compilation of workload data submitted to the SPEC organization, aggregated from tens of thousands of fileservers, using a wide variety of environments and applications. As a result, it is comprised of typical workloads and with typical proportions of data and metadata use as seen in real production environments.)
The configuration tested involves SONAS Release 1.2 with 10 Interface Nodes and 8 Storage Pods, resulting in a single file system with over 900TB of usable capacity.
10 Interface Nodes; each with:
Maximum 144 GB of memory
One active 10GbE port
8 Storage Pods; each with:
2 Storage nodes and 240 drives
Drive type: 15K RPM SAS hard drives
Data Protection using RAID-5 (8+P) ranks
Six spare drives per Storage Pod
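For back-of-the-envelope purposes, the usable capacity can be sanity-checked from the drive counts above. This is a rough sketch, not an official IBM calculation: the 600GB-per-drive figure is my assumption (the announcement does not state the drive size), and it ignores file system and formatting overhead, which is why the actual usable figure lands above 900TB rather than near the raw number:

```python
# Rough sanity check of the SONAS benchmark configuration's capacity.
# Assumption: ~600 GB per 15K RPM SAS drive (not stated in the announcement).
DRIVE_TB = 0.6

pods = 8
drives_per_pod = 240
spares_per_pod = 6

# RAID-5 (8+P): each 9-drive rank stores 8 drives' worth of data.
active = drives_per_pod - spares_per_pod   # 234 non-spare drives per pod
ranks_per_pod = active // 9                # 26 ranks per pod
data_drives = pods * ranks_per_pod * 8     # 1664 data drives total

usable_tb = data_drives * DRIVE_TB
print(round(usable_tb, 1))                 # → 998.4 (TB, before overhead)
```

After file system overhead, that raw figure is consistent with the "over 900TB usable" quoted for the benchmark run.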
IBM wanted to test a realistic "no compromises" configuration, choosing:
Regular 15K RPM SAS drives, rather than a silly configuration full of super-expensive Solid State Drives (SSD) to plump up the results.
Moderate size, typical of what clients are asking for today. The Goldilocks rule applies. This SONAS is not a small configuration under 100TB, and nowhere close to the maximum supported configuration of 7,200 disks across 30 Interface Nodes and 30 Storage Pods.
Single file system, often referred to as a global name space, rather than an aggregate of smaller file systems added together, which would be more complicated to manage. Having multiple file systems often requires changes to applications to take advantage of the aggregate performance. It is also more difficult to load-balance performance and capacity across multiple file systems. Of course, SONAS can support up to 256 separate file systems if you have a business need for this complexity.
The results are stunning. IBM SONAS handled three times more workload for a single file system than the next leading contender. All of the major players are there as well, including NetApp, EMC and HP.
Continuing my post-week coverage of the [Data Center 2010 conference], Thursday morning had some interesting sessions for those who had not left town the night before.
Interactive Session Results
In addition to the [Profile of Data Center 2010] that identifies the demographics of this year's registrants, the morning started with highlights of the interactive polls during the week.
External or Heterogeneous Storage Virtualization
The analyst presented his views on the overall External/Heterogeneous Storage Virtualization marketplace. He started with the key selling points.
Avoid vendor lock-in. Unlike the IBM SAN Volume Controller, many of the other storage virtualization products result in vendor lock-in.
Leverage existing back-end capacity. Limited to what back-end storage devices are supported.
Simplify and unify management of storage. Yes, mostly.
Lower storage costs. Unlike the IBM SAN Volume Controller, many using other storage virtualization discover an increase in total storage costs.
Migration tools. Yes, as advertised.
Consolidation/Transition. Yes, over time.
Better functionality. Potentially.
Shortly after several vendors started selling external/heterogeneous storage virtualization solutions, either as software or as pre-installed appliances, the major storage vendors that were caught with their pants down immediately began labeling their internal features "storage virtualization" as well, to buy time and increase confusion.
While the analyst agreed that storage virtualization simplifies the view of storage from the host server side, it can complicate the management of storage on the storage end. This often comes up at the Tucson Briefing Center. I explain this as the difference between manual and automatic transmission cars. My father was a car mechanic, and since he was the sole driver and sole mechanic, he preferred manual transmission cars, which were easier to work on. However, rental car companies, such as Hertz or Avis, prefer automatic transmission cars. This might require more skill from their mechanics, but it greatly simplifies the experience for those driving.
The analyst offered his views on specific use cases:
Data Migration. The analyst feels that external virtualization serves as one of the best tools for data migration. But what about tech refresh of the storage virtualization devices themselves? Unlike IBM SAN Volume Controller, which allows non-disruptive upgrades of the nodes themselves, some of the other solutions might make such upgrades difficult.
Consolidation/Transition. External virtualization can also be helpful, depending on how aggressive the schedule for consolidation/transition is performed.
Improved Functionality/Usability. IBM SAN Volume Controller is a good example, an unexpected benefit. Features like thin provisioning, automated storage tiering, and so on, can be added to existing storage equipment.
The analyst mentioned that there were different types of solutions. The first category were those that support both internal storage and external storage virtualization, like the HDS USP-V or IBM Storwize V7000. He indicated that roughly 40 percent of HDS USP-V are licensed for virtualization. The second category were those that support external virtualization only, such as IBM SAN Volume Controller, HP Lefthand and SVSP, and so on. The third category were software-only Virtual Guest images that could provide storage virtualization capabilities.
The analyst mentioned EMC's failed product Invista, which sold fewer than 500 units over the past five years. The low penetration of external virtualization, estimated between 2 and 5 percent, could be explained by the bad taste Invista left with everyone considering their options. However, the analyst predicts that by 2015, external virtualization will reach double-digit marketshare.
Having a feel for the demographics of the registrants, and specific interactive polling in each meeting, provides a great view on who is interested in what topic, and some insight into their fears and motivations.
Continuing my post-week coverage of the [Data Center 2010 conference], Wednesday afternoon included a mix of sessions that covered storage and servers.
Enabling 5x Storage Efficiency
Steve Kenniston, who came to IBM through its recent acquisition of Storwize Inc., presented IBM's new Real-time Compression appliance. There are two appliances: one handles 1GbE networks, and the other supports mixed 1GbE/10GbE connectivity. Files are compressed in real time with no impact on performance, and in some cases performance can improve because less data is written to back-end NAS devices. The appliance is not limited to IBM's N series and NetApp, but is vendor-agnostic; IBM is qualifying the solution with other NAS devices in the market. The appliance can compress data by up to 80 percent, providing up to 5x storage efficiency.
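The "80 percent" and "5x" figures are two views of the same ratio: if compression removes 80 percent of the data, the remaining 20 percent implies a 5:1 efficiency. A quick sketch of that arithmetic (the percentages are the vendor's "up to" numbers, not guarantees):

```python
def efficiency(reduction_pct: float) -> float:
    """Storage efficiency multiplier for a given percent data reduction."""
    remaining = 1.0 - reduction_pct / 100.0
    return 1.0 / remaining

print(round(efficiency(80), 6))   # → 5.0  (80% reduction = 5x efficiency)
print(round(efficiency(50), 6))   # → 2.0  (halving the data doubles capacity)
```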
Townhall - Storage
The townhall was a Q&A session to ask the analysts their thoughts on Storage. Here I will present the answer from the analyst, and then my own commentary.
Are there any gotchas deploying Automated Storage Tiering?
Analyst: you need to fully understand your workload before investing any money into expensive Solid-State Drives (SSD).
Commentary: IBM offers Easy Tier for the IBM DS8000, SAN Volume Controller, and Storwize V7000 disk systems. Before buying any SSD, these systems will measure the workload activity and IBM offers the Storage Tier Advisory Tool (STAT) that can help identify how much SSD will benefit each workload. If you don't have these specific storage devices, IBM Tivoli Storage Productivity Center for Disk can help identify disk performance to determine if SSD is cost-justified.
Wouldn't it be simpler to just have separate storage arrays for different performance levels?
Analyst: No, because that would complicate BC/DR planning, as many storage devices do not coordinate consistency group processing from one array to another.
Commentary: IBM DS8000, SAN Volume Controller and Storwize V7000 disk systems support consistency groups across storage arrays, for those customers that want to take advantage of lower cost disk tiers on separate lower cost storage devices.
Can storage virtualization play a role in private cloud deployments?
Analyst: Yes, by definition, but today's storage virtualization products don't work with public cloud storage providers. None of the major public cloud providers use storage virtualization.
Commentary: IBM uses storage virtualization for its public cloud offerings, but the question was about private cloud deployments. IBM CloudBurst integrated private cloud stack supports the IBM SAN Volume Controller which makes it easy for storage to be provisioned in the self-service catalog.
Can you suggest one thing we can do Monday when we get back to the office?
Analyst: Create a team to develop a storage strategy and plan, based on input from your end-users.
Commentary: Put IBM on your short list for your next disk, tape or storage software purchase decision. Visit [ibm.com/storage] to re-discover all of IBM's storage offerings.
What is the future of Fibre Channel?
Analyst 1: Fibre Channel is still growing, will go from 8Gbps to 16Gbps, the transition to Ethernet is slow, so FC will remain the dominant protocol through year 2014.
Analyst 2: Fibre Channel will still be around, but NAS, iSCSI and FCoE are all growing at a faster pace. Fibre Channel will only be dominant in the largest of data centers.
Commentary: Ask a vague question, get a vague answer. Fibre Channel will still be around for the next five years.
However, SAN administrators might want to investigate Ethernet-based approaches like NAS, iSCSI and FCoE where appropriate, and start beefing up their Ethernet skills.
Will Linux become the Next UNIX?
Linux in your datacenter is inevitable. In the past, Linux was limited to x86 architectures, while UNIX operating systems ran on specialized CPU architectures: IBM AIX on POWER7, Solaris on SPARC, HP-UX on PA-RISC and Itanium, and IBM z/OS on System z architecture, to name a few. Today, Linux runs on many of these other CPU chipsets as well.
Two common workloads, Web/App serving and DBMS, are shifting from UNIX to Linux. Linux Reliability, Availability and Serviceability (RAS) is approaching the levels of UNIX. Linux has been a mixed blessing for UNIX vendors, with x86 server margins shrinking, but the high-margin UNIX market has shrunk 25 percent in the past three years.
UNIX vendors must make the "mainframe argument" that their flavor of UNIX is more resilient than any OS that runs on Intel or AMD x86 chipsets. In 2008, Sun Solaris was the number 1 UNIX, but today it is IBM AIX, with 40 percent marketshare. Meanwhile, HP has focused on extending its Windows/x86 lead with a partnership with Microsoft.
The analyst asks "Are the three UNIX vendors in it for the long haul, or are they planning graceful exits?" The four options for each vendor are:
Milk it as it declines
Accelerate the decline by focusing elsewhere
Impede the market to protect margins
Re-energize UNIX base through added value
Here is the analyst's view on each UNIX vendor.
IBM AIX now owns 40 percent marketshare of the UNIX market. While the POWER7 chipset supports multiple operating systems, IBM has not been able to get an ecosystem to adopt Linux-on-POWER. The "Other" includes z/OS, IBM i, and other x86-based OS.
HP has multi-OS Itanium from Intel, but is moving to Multi-OS blades instead. Their "x86 plus HP-UX" strategy is a two-pronged attack against IBM AIX and z/OS. Intel Nehalem chipset is approaching the RAS of Itanium, making the "mainframe argument" more difficult for HP-UX.
Before Oracle acquired Sun Microsystems, Oracle was focused on Linux as a UNIX replacement. After the acquisition, they now claim to support Linux and Solaris equally. They are now focused on trying to protect their rapidly declining install base by keeping IBM and HP out. They will work hard to differentiate Solaris as having "secret sauce" that is not in Linux. They will continue to compete head-on against Red Hat Linux.
An interactive poll of the audience indicated that the most strategic Linux/UNIX platform over the next five years was Red Hat Linux. This beat out AIX, Solaris and HP-UX, as well as all of the other distributions of Linux.
The rooms emptied quickly after the last session, as everyone wanted to get to the "Hospitality Suites".
Continuing my post-week coverage of the [Data Center 2010 conference], we had receptions on the Show floor. This started at the Monday evening reception and went on through a dessert reception Wednesday after lunch. I worked the IBM booth, and also walked around to make friends at other booths.
Here are my colleagues at the IBM booth. David Ayd, on the left, focuses on servers, everything from IBM System z mainframes, to POWER Systems that run IBM's AIX version of UNIX, and of course the System x servers for the x86 crowd. Greg Hintermeister, on the right, focuses on software, including IBM Systems Director and IBM Tivoli software. I covered all things storage, from disk to tape. For attendees that stopped by the booth expressing interest in IBM offerings, we gave out Starbucks gift cards for coffee, laptop bags, 4GB USB memory sticks and copies of my latest book: "Inside System Storage: Volume II".
Across the aisle were our cohorts from IBM Facilities and Data Center services. They had the big blue Portable Modular Data Center (PMDC). Last year, three vendors offered these: IBM, SGI, and HP. Apparently IBM won the smack-down and returned victorious, as SGI brought only the cooling portion of its "Ice Cube" and HP had no container whatsoever.
IBM's PMDC is fully insulated, so it can be used anywhere from cold climates below 50 degrees F, like Alaska, to hot climates up to 150 degrees F, like Iraq or Afghanistan, and everything in between. The containers come in three lengths, 20, 40 and 53 feet, and can be combined and stacked into bigger configurations as needed. The systems include their own power generators, cooling, water chillers, fans, closed-circuit surveillance, and fire suppression. Unlike the HP approach, IBM allows all the equipment to be serviced from inside, in comfort.
This is Mary, one of the 200 employees seconded to the new VCE. Michael Capellas, the CEO of VCE, offered to give a hundred dollars to the [Boys and Girls Club of America], a charity we both support, if I agreed to take this picture. The Boys and Girls Club inspires and enables young people to realize their full potential as productive, responsible, and caring citizens, so it was for a good cause.
The show floor offers attendees a chance to see not just the major players in each space, but also all the new up-and-coming start-ups.
Mastering the art of stretching out a week-long event into two weeks' worth of blog posts, I continue my coverage of the [Data Center 2010 conference]. On Tuesday afternoon, I attended several sessions that focused on technologies for Cloud Computing.
(Note: It appears I need to repeat this. The analyst company that runs this event has kindly asked me not to mention their name on this blog, display any of their logos, mention the names of any of their employees, include photos of any of their analysts, include slides from their presentations, or quote verbatim any of their speech at this conference. This is all done to protect and respect their intellectual property that their members pay for. The pie charts included on this series of posts were rendered by Google Charting tool.)
Converging Storage and Network Fabrics
The analysts presented a set of alternative approaches to consolidating your SAN and LAN fabrics. Here were the choices discussed:
Fibre Channel over Ethernet (FCoE) - This requires 10GbE with Data Center Bridging (DCB) standards, what IBM refers to as Converged Enhanced Ethernet (CEE). Converged Network Adapters (CNAs) support FC, iSCSI, NFS and CIFS protocols on a single wire.
Internet SCSI (iSCSI) - This works on any flavor of Ethernet, is fully routable, and was developed in the 1990s by IBM and Cisco. Most 1GbE and all 10GbE Network Interface Cards (NIC) support TCP Offload Engine (TOE) and "boot from SAN" capability. Native support for iSCSI is widely available in most hypervisors and operating systems, including VMware and Windows. DCB Ethernet is not required for iSCSI, but can be helpful. Many customers keep their iSCSI traffic in a separate network (often referred to as an IP SAN) from the rest of their traditional LAN traffic.
Network Attached Storage (NAS) - NFS and CIFS have been around for a long time and work with any flavor of Ethernet. Like iSCSI, DCB is not required but can be helpful. NAS went from being for files only, to being used for email and databases, and is now viewed as the easiest deployment for VMware. VMotion is able to move VM guests from one host to another within the same LAN subnet.
Infiniband or PCI extenders - This approach allows many servers to share a smaller number of NICs and HBAs. While Infiniband was once limited in distance by its copper cables, recent advances now allow fiber optic cables for distances of 150 meters.
Interactive poll of the audience offered some insight on plans to switch from FC/FICON to Ethernet-based storage:
Interactive poll of the audience offered some insight on what portion storage is FCP/FICON attached:
Interactive poll of the audience offered some insight on what portion storage is Ethernet-attached:
Interactive poll of the audience offered some insight on what portion of servers are already using some Ethernet-attached storage:
Each vendor has its own style. HP provides homogeneous solutions, having acquired 3COM and broken off relations with Cisco. Cisco offers tight alliances over closed proprietary solutions, publicly partnering with both EMC and NetApp for storage. IBM offers loose alliances, with IBM-branded solutions from Brocade and BNT, as well as reselling arrangements with Cisco and Juniper. Oracle has focused on Infiniband instead for its appliances.
The analysts predict that IBM will be the first to deliver 40 GbE, from their BNT acquisition. They predict by 2014 that Ethernet approaches (NAS, iSCSI, FCoE) will be the core technology for all but the largest SANs, and that iSCSI and NAS will be more widespread than FCoE. As for cabling, the analysts recommend copper within the rack, but fiber optic between racks. Consider SAN management software, such as IBM Tivoli Storage Productivity Center.
The analysts felt that the biggest inhibitor to merging SAN and LANs will be organizational issues. SAN administrators consider LAN administrators like "Cowboys" undisciplined and unwilling to focus on 24x7 operational availability, redundancy or business continuity. LAN administrators consider SAN administrators as "Luddites" afraid or unwilling to accept FCoE, iSCSI or NAS approaches.
Driving Innovation through Innovation
Mr. Shannon Poulin from Intel presented their advancements in Cloud Computing. Let's start with some facts and predictions:
There are over 2.5 billion photos on Facebook, which runs on 30,000 servers
30 billion videos viewed every month
Nearly all Internet-connected devices are either computers or phones
An additional billion people on the Internet
Cars, televisions, and households will also be connected to the Internet
The world will need 8x more network bandwidth, 12x more storage, and 20x more compute power
To avoid confusion between on-premise and off-premise deployments, Intel defines "private cloud" as "single tenant" and "public cloud" as "multi-tenant". Clouds should be automated, efficient, simple, secure, and interoperable enough to allow federation of resources across providers. He also felt that Clouds should be "client-aware", so that the cloud knows what devices it is talking to and optimizes the results accordingly. For example, if the viewer is watching video on a small 320x240 smartphone screen, it makes no sense for the Cloud server to push out 1080p. All devices are going through a connected/disconnected dichotomy: they can do some things while disconnected, but other things only while connected to the Internet or a Cloud provider.
An internal Intel task force investigated what it would take to beat MIPS and IBM POWER processors and found that their own Intel chips lacked key functionality. Intel plans to address some of these shortcomings with a new chip called "Sandy Bridge" sometime next year. They also plan a series of specialized chips that support graphics processing (GPU), network processing (NPU), and so on. He also mentioned that Intel released "Tukwila", the latest version of the Itanium chip, earlier this year. HP is the last major company still using Itanium for its servers.
Shannon wrapped up the talk with a discussion of two Cloud Computing initiatives. The first is [Intel® Cloud Builders], a cross-industry effort to build Cloud infrastructures based on the Intel Xeon chipset. The second is the [Open Data Center Alliance], comprised of leading global IT managers who are working together to define and promote data center requirements for the cloud and beyond.
The analysts feel that we need to switch from thinking about "boxes" (servers, storage, networks) to "resources". To this end, they envision a future datacenter where resources are connected to an any-to-any fabric that connects compute, memory, storage, and networking resources as commodities. They feel the current trend towards integrated system stacks is just a marketing ploy by vendors to fatten their wallets. (Ouch!)
A new concept to "disaggregate" caught my attention. When you make cookies, you disaggregate a cup of sugar from the sugar bag, a teaspoon of baking soda from the box, and so on. When you carve a LUN from a disk array, you are disaggregating the storage resources you need for a project. The analysts feel we should be able to do this with servers and network resources as well, so that when you want to deploy a new workload you just disaggregate the bits and pieces in the amounts you actually plan to use and combine them accordingly. IBM calls these combinations "ensembles" of Cloud computing.
Very few workloads truly require "best-of-breed" technologies; this new fabric-based infrastructure recognizes that reality. One thing IT Data Center operations can learn from Cloud Service Providers is their focus on "good enough" deployment.
This means, however, that IT professionals will need new skill sets. IT administrators will need to learn a bit of application development, systems integration, and runbook automation. Network admins need to enter 12-step programs to stop using Command Line Interfaces (CLI). Server admins need to put down their screwdrivers and focus instead on policy templates.
Whether you deploy private, public or hybrid cloud computing, the benefits are real and worth the changes needed in skill sets and organizational structure.
Each quarter since 2006, the [IBM Migration Factory] team has tallied the number of clients who have moved to IBM servers and storage systems from competitive hardware. Well, I've just seen the latest numbers, for the third quarter of 2010, and it looks like we set a new quarterly record, with nearly 400 total migrations to IBM from Oracle/Sun and HP.
It's clear that companies and governments worldwide are seeing greater value in IBM systems, while Oracle and HP watch their customer bases erode. In just this past 3Q 2010, nearly 400 clients have moved over to IBM -- almost all of them from Oracle/Sun and HP. Of these, 286 clients migrated to IBM Power Systems, running AIX, Linux and IBM i operating systems, from competitors alone -- nearly 175 from Oracle/Sun and nearly 100 from HP. The number of migrations to IBM Power Systems through the first three quarters of 2010 is nearly 800, already exceeding the total for all of last year by more than 200.
Let's do the math.... Since IBM established its Migration Factory program in 2006, more than 4,500 clients have switched to IBM. More than 1,000 from Oracle/Sun and HP joined the exodus this year alone. In less than five years, almost 3,000 of these clients -- including more than 1,500 from Oracle/Sun and more than 1,000 from HP -- have chosen to run their businesses on IBM's Power Systems. That's more than a client per day making the move to IBM!
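The "more than a client per day" claim checks out easily against the figures above; here is a throwaway sketch, taking "less than five years" as roughly 4.75 years (my approximation, since the program started sometime in 2006):

```python
# Tally from the Migration Factory figures quoted above.
total_clients = 4500      # clients switched to IBM since the program began in 2006
years = 4.75              # "in less than five years" (my rough figure)

per_day = total_clients / (years * 365)
print(round(per_day, 1))  # → 2.6, comfortably "more than a client per day"
```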
And as the servers go, so goes the storage. Clients are re-discovering IBM as a server and storage powerhouse, offering a strong portfolio in servers, disk and tape systems, and how synergies between servers and storage can provide them real business benefits.
Adding it all up, it's clear that IBM's multi-billion dollar investment in helping to build a smarter planet with workload-optimized systems is paying off -- and that, more and more, clients are selecting IBM over the competition to help them meet their business needs.
In his blog post, [The Lure of Kit-Cars], fellow blogger Chuck Hollis (EMC) uses an excellent analogy delineating the differences between kit-cars you build from parts, versus fully-integrated systems that you can drive off the car dealership showroom lot. The analogy holds relatively well, as IT departments can also build their infrastructure from parts, or you can get fully-integrated systems from a variety of vendors.
Is this what your data center looks like?
Certainly, this debate is not new. In my now infamous 2007 post [Supermarkets and Specialty Shops], I explained that there were clients that preferred to get their infrastructure from a single IT supermarket, like IBM or HP, while others were lured into thinking that buying separate parts from butchers, bakers and candlestick makers and other specialty shops was somehow a better idea.
Chuck correctly explains that in the early years of the automobile industry, before major car manufacturers had mass-production assembly lines, putting a car together from parts was the only way cars were made. Today, only the few most avid enthusiasts build cars this way. The majority get cars from a single seller and drive away. In my post [Resolving the Identity Crisis], I postulated that EMC appeared to be trying to shed its "disk-only specialty shop" image and become more like IBM. Not quite a full IT Supermarket, but perhaps more like a [Trader Joe's] premium-priced retailer.
(If you find that EMC's focus on integrated systems appears to be a 180-degree about-face from their historical focus on selling individual best-of-breed products, see my previous discussion of Chuck's contradictions in my blog post: [Is Storage the Next Confusopoly].)
While companies like EMC might be making this transition, there is a lot of resistance and inertia from the customer marketplace. I agree with Chuck, companies should not be building kit-cars or IT infrastructures from parts, certainly not from parts sold from different vendors. In my post [Talking about Solutions not Products], I explained how difficult it was to change behavior. CIOs, IT directors and managers need to think differently about their infrastructure. Let's take a quick look at some choices:
Following Chuck's argument, it makes no sense to build a "kit-car" combining Oracle/Sun servers with EMC storage. Oracle would argue it makes more sense to run on integrated systems, business logic on their "Exalogic" system, and database processing on their "Exadata". Benchmark after benchmark, however, IBM is able to demonstrate that Oracle applications and databases run faster on IBM systems. Customers that want to run Oracle applications can run either on a full Oracle stack, or a full IBM stack, and both do better than a kit-car including EMC parts.
HP has been working hard to keep up with IBM in this area. With their partnership with Microsoft, and acquisitions of EDS, 3Com and 3PAR, they can certainly make a case for getting a full HP stack rather than a kit-car mixing HP servers with EMC disk storage. The problem is that HP is focused on a converged infrastructure for private cloud computing, while Microsoft is focused on Azure and public cloud computing. It will be interesting when these two big companies sort this out. Definitely watch this space.
If you squint your eyes and focus on the part of the world that only has x86 machines, then Dell can be seen as an IT supermarket. In my post about [Entry-Level iSCSI Offerings], I discuss how Dell's acquisition of EqualLogic was a signal that it was trying to get away from selling EMC specialty shop products, and building up its own set of offerings internally.
Cisco is new on the server scene, but has already made quite a splash. Here, I have to agree with Chuck's logic: the only time it makes sense to buy EMC disk storage at all is when it is part of an integrated "V-block". This is not really an IT supermarket situation, instead you park your car at the "Acadia Mini-Mall" and get what you need from Trader Joe's, Cisco UCS, and VMware stores.
But wait, if what you want is running VMware on Cisco servers, you might be better off with IBM System Storage N series or NetApp storage. In his blog post about [Enhanced Secure Multi-Tenancy], fellow Blogger Val Bercovici (NetApp) provides a convincing argument of why Cisco and VMware run better on an "N-block" rather than a "V-block". IBM N series provides A-SIS deduplication, and IBM Real-time Compression can provide additional capacity and performance improvements. That might be true, but whether you get your storage from EMC, NetApp or IBM, to me, you are still working with three different vendors in any case.
Of course, following Chuck's logic, it makes more sense for people with IBM servers, whether they be mainframes, POWER systems or x86 machines, to integrate these with IBM storage, IBM software and IBM services. IBM is the leading reseller of VMware, but also has a lot of business with Microsoft Hyper-V, Citrix Xen, Linux KVM, PowerVM, PR/SM and z/VM. While IBM has market leading servers, disk and tape systems, to compete for those RFP bids that just ask for one component or another, it prefers to sell fully-integrated systems, which IBM has been doing successfully since the 1950s.
Back in 2007, I mentioned how IBM's fully-integrated InfoSphere Balanced Warehouse [Trounced HP and Sun]. For business analytics, IBM offers the fully-integrated [IBM Smart Analytics Systems]. Today, IBM expanded its line of fully-integrated private cloud service delivery platforms with the announcement of the [IBM CloudBurst on Power Systems], which does for POWER7 what the IBM CloudBurst for System x, Oracle Exalogic, or Acadia's V-block do for x86.
IBM estimates that private clouds built on Power systems can be up to 70 percent less expensive than stand alone x86 servers.
Before he earned his PhD in Mechanical Engineering, my father was a car mechanic. I spent much of my teenage years covered in grease, helping my father assemble cars, lift engines, and rebuild carburetors. It was good father-son time, and I certainly learned something in the process. Like the automobile industry, the IT industry has matured, and it makes no financial sense to build your own IT infrastructure from parts from different vendors.
For a test drive of the industry's leading integrated IT systems, see your IBM sales rep or IBM Business Partner.
This week, Hitachi Ltd. announced their next generation disk storage virtualization array, the Virtual Storage Platform, following on the success of its USP V line. It didn't take long for fellow blogger Chuck Hollis (EMC) to comment on this in his blog post [Hitachi's New VSP: Separating The Wheat From The Chaff]. Here are some excerpts:
"Well, we all knew that Hitachi (through HDS and HP) would be announcing some sort of refresh to their high-end storage platform sooner or later.
As EMC is Hitachi's only viable competitor in this part of the market, I think people are expecting me to say something.
If you're a high-end storage kind of person, your universe is basically a binary star: EMC and Hitachi orbiting each other, with the interesting occasional sideshow from other vendors trying to claim relevance in this space."
Chuck implies that neither Hewlett-Packard (HP) nor Hitachi Data Systems (HDS) adds any value beyond the box manufactured by Hitachi Ltd., so he combines them into a single category. I suspect the HP and HDS folks might disagree with that opinion.
When I reminded Chuck that IBM was also a major player in the high-end disk space, his response included the following gem:
"Many of us in the storage industry believe that IBM currently does not field a competitive high-end storage platform. IDC market share numbers bear out this assertion, as you probably know."
While Chuck is certainly entitled to his own beliefs and opinions, believing the world is flat does not make it so. Certainly, I doubt IDC or any other market research firm has put out a survey asking "Do you think IBM offers a competitive high-end disk storage platform?" Of course, if Chuck is basing his opinion on anecdotal conversations with existing EMC customers, I can certainly see how he might have formed this misperception. However, IDC market share numbers don't support Chuck's assertion at all.
There is no industry-standard definition of what constitutes a "high-end" or "enterprise-class" disk system. Some define high-end as having the option for mainframe attachment via ESCON and/or FICON protocol. Others might focus on features, functionality, scalability and high 99.999+ percent availability. Still others insist high-end requires block-oriented protocols like FC and iSCSI, rather than file-based protocols like NFS and CIFS.
For the most demanding mission-critical mix of random and sequential workloads, IBM offers the [IBM System Storage DS8000 series] high-end disk system which connects to mainframes and distributed servers, via FCP and FICON attachment, and supports a variety of drive types and RAID levels. The features that HP and HDS are touting today for the VSP are already available on the IBM DS8000, including sub-LUN automatic tiering between Solid-State drives and spinning disk, called [Easy Tier], thin provisioning, wide striping, point-in-time copies, and long distance synchronous and asynchronous replication.
There are lots of analysts that track market share for the IT storage industry, but since Chuck mentions [IDC] specifically, I reviewed the most recent IDC data, published a few weeks ago in their "IDC Worldwide Quarterly Disk Storage Tracker" for 2Q 2010, representing April 1 to June 30, 2010 sales. Just in case any of the rankings have changed over time, I also looked at the previous four quarters: 2Q 2009, 3Q 2009, 4Q 2009 and 1Q 2010.
(Note: IDC considers its analysis proprietary, out of respect for their business model I will not publish any of the actual facts and figures they have collected. If you would like to get any of the IDC data to form your own opinion, contact them directly.)
In the case of IDC, they divide the disk systems into three storage classes: entry-level, midrange and high-end. Their definition of "high-end" is external RAID-protected disk storage that sells for $250,000 USD or more, representing roughly 25 to 30 percent of the external disk storage market overall. Here are IDC's rankings of the four major players for high-end disk systems:
By either measure of market share, units (disk systems) or revenue (US dollars), IDC reports that IBM high-end disk outsold both HDS and HP combined. This has been true for the past five quarters. If a smaller start-up vendor has single digit percent market share, I could accept it being counted as part of Chuck's "occasional sideshow from other vendors trying to claim relevance", but IBM high-end disk has consistently had 20 to 30 percent market share over the past five quarters!
Not all of these high-end disk systems are connected to mainframes. According to IDC data, only about 15 to 25 percent of these boxes are counted under their "Mainframe" topology.
Chuck further writes:
"It's reasonable to expect IBM to sell a respectable amount of storage with their mainframes using a protocol of their own design -- although IBM's two competitors in this rather proprietary space (notably EMC and Hitachi) sell more together than does IBM."
The IDC data doesn't support that claim either, Chuck. By either measure of market share, units (disk systems) or revenue (US dollars), IDC reports that IBM disk for mainframes outsold all other vendors (including EMC, HDS, and HP) combined. And again, this has been true for the past five quarters. Here is the IDC ranking for mainframe disk storage:
IBM has over 50 percent market share in this case, primarily because IBM System Storage DS8000 is the industry leader in mainframe-related features and functions, and offers synergy with the rest of the z/Architecture stack.
So Chuck, I am not picking a fight with you or asking you to retract or correct your blog post. Your main theme, that the new VSP presents serious competition to EMC's VMAX high-end disk arrays, is certainly something I can agree with. Congratulations to HDS and HP for putting forth what looks like a viable alternative to EMC's VMAX.
To learn more about IBM's upcoming products, register for next week's webcast "Taming the Information Explosion with IBM Storage" featuring Dan Galvan, IBM Vice President, and Steve Duplessie, Senior Analyst and Founder of Enterprise Storage Group (ESG).
Continuing coverage of my week in Washington DC for the annual [2010 System Storage Technical University], I attended several XIV sessions throughout the week. There were more XIV sessions than I could attend. Jack Arnold, one of my colleagues at the IBM Tucson Executive Briefing Center, often presents XIV to clients and Business Partners. He covered all the basics of XIV architecture, configuration, and features like snapshots and migration. Carlos Lizarralde presented "Solving VMware Challenges with XIV". Ola Mayer presented "XIV Active Data Migration and Disaster Recovery".
Here is my quick recap of two in particular that I attended:
XIV Client Success Stories - Randy Arseneau
Randy reported that IBM had its best quarter ever for the XIV, reflecting an unexpected surge shortly after my blog post debunking the DDF myth last April. He presented successful case studies of client deployments, and many followed a familiar pattern. First, the client would purchase only one or two XIV units. Second, the client would beat the crap out of them, subjecting them to all kinds of stress from different workloads. Third, the client would discover that the XIV is really as amazing as IBM and IBM Business Partners have told them. Finally, in the fourth phase, the client would deploy the XIV for mission-critical production applications.
A large US bank holding company managed to get 5.3 GB/sec from a pair of XIV boxes for their analytics environment. They now have 14 XIV boxes deployed in mission-critical applications.
A large equipment manufacturer compared the offerings among seven different storage vendors, and IBM XIV came out the winner. They now have 11 XIV boxes in production and another four boxes for development/test. They have moved their entire VMware infrastructure to IBM XIV, running over 12,000 guest instances.
A financial services company bought their first XIV in early 2009 and now has 34 XIV units in production attached to a variety of Windows, Solaris, AIX and Linux servers and VMware hosts. Their entire Microsoft Exchange environment was moved from HP and EMC disk to IBM XIV, and experienced a noticeable performance improvement.
When a University health system replaced two competitive disk systems with XIV, their data center temperature dropped from 74 to 68 degrees Fahrenheit. In general, XIV systems are 20 to 30 percent more energy efficient per usable TB than traditional disk systems.
A service provider that had used EMC disk systems for over 10 years evaluated the IBM XIV versus upgrading to EMC V-Max. The three year total cost of ownership (TCO) of EMC's V-Max was $7 Million US dollars higher, so EMC counter-proposed CLARiiON CX4 instead. But, in the end, IBM XIV proved to be the better fit, and now the customer is happy having made the switch.
The manager of an information communications technology service provider was impressed that the XIV was up and running in just a couple of days. They now have over two dozen XIV systems.
Another XIV client had lost all of their Computer Room Air Conditioning (CRAC) units for several hours. The data center heated up to 126 degrees Fahrenheit, but the customer did not lose any data on either of their two XIV boxes, which continued to run in these extreme conditions.
Optimizing XIV Performance - Brian Cormody
This session was an update from the [one presented last year] by Izhar Sharon. Brian presented various best practices for optimizing the performance when using specific application workloads with IBM XIV disk systems.
Oracle ASM: Many people allocate lots of small LUNs, because this made sense a long time ago when all you had was just a bunch of disks (JBOD). In fact, many of the practices that DBAs use to configure databases across disks become unnecessary with XIV. With XIV, you are better off allocating a small number of very large LUNs. The best option was a 1-volume ASM pool with an 8MB AU stripe. A single LUN can contain multiple Oracle databases, and a single LUN can be used to store all of the logs.
VMware: Over 70 percent of XIV customers use it with VMware. For VMFS, IBM recommends allocating a small number of large LUNs. You can specify the maximum of 2181 GB. Do not use VMware's internal LUN extension capability; IBM XIV already has thin provisioning, and it works better to let the XIV handle this for you. XIV Snapshots provide crash-consistent copies without all the overhead of VMware Snapshots.
SAP: For planning purposes, the "SAPS" unit equates roughly to 0.4 IOPS for ERP OLTP workloads, and 0.6 IOPS for BW/BI OLAP workloads. In general, an XIV can deliver 25,000 to 30,000 IOPS at 10-15 msec response time, and 60,000 IOPS at 30 msec response time. With SAP, our clients have managed to get 60,000 IOPS at less than 15 msec.
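As a rough back-of-the-envelope sketch (my own illustration, not an official IBM sizing tool), the SAPS-to-IOPS rules of thumb above translate into a few lines of Python:

```python
# Planning-only sketch of the SAPS-to-IOPS rules of thumb mentioned above.
# The ratios are rough approximations from this post, not guarantees.
SAPS_TO_IOPS = {
    "oltp": 0.4,  # ERP OLTP workloads: ~0.4 IOPS per SAPS
    "olap": 0.6,  # BW/BI OLAP workloads: ~0.6 IOPS per SAPS
}

def estimated_iops(saps: float, workload: str) -> float:
    """Estimate back-end IOPS for a given SAPS rating and workload type."""
    return saps * SAPS_TO_IOPS[workload]

# Example: a 100,000 SAPS ERP landscape works out to roughly 40,000 IOPS,
# comfortably within the figures quoted above for a single XIV.
print(estimated_iops(100_000, "oltp"))  # 40000.0
```

Obviously a real sizing exercise would also account for read/write mix and cache hit rates; this just shows the arithmetic.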
Microsoft Exchange: Even my friends in Redmond could not believe how awesome XIV was during ESRP testing. Five Exchange 2010 servers connected to a pair of XIV boxes using the new 2TB drives managed 40,000 mailboxes at the high profile (0.15 IOPS per mailbox). Another client found that four XIV boxes (720 drives) were able to handle 60,000 mailboxes (5GB max), which would have taken over 4000 drives if internal disk drives were used instead. Who said SANs are obsolete for MS Exchange?
Asynchronous Replication: IBM now has an "Async Calculator" to model and help design an XIV async replication solution. In general, dark fiber works best, and MPLS clouds have the worst results. The latest 10.2.2 microcode for the IBM XIV can now handle 10 Mbps at less than 250 msec roundtrip. During the initial sync between locations, IBM recommends setting "schedule=never" to consume as much bandwidth as possible. If you don't trust the bandwidth measurements your telco provider is reporting, consider testing the bandwidth yourself with the [iPerf] open source tool.
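As a minimal sketch (mine, not IBM's calculator), the two link thresholds quoted above can be turned into a quick pre-flight check, feeding in the bandwidth and roundtrip time you measured yourself with a tool like iPerf:

```python
# Quick pre-flight check of a replication link against the 10.2.2
# thresholds mentioned above: at least 10 Mbps of bandwidth and no more
# than 250 msec roundtrip latency. The thresholds come from this post;
# the function itself is just an illustration.
MIN_BANDWIDTH_MBPS = 10.0
MAX_RTT_MSEC = 250.0

def link_ok(measured_mbps: float, measured_rtt_msec: float) -> bool:
    """Return True if the measured link meets both thresholds."""
    return (measured_mbps >= MIN_BANDWIDTH_MBPS
            and measured_rtt_msec <= MAX_RTT_MSEC)

# e.g. plug in the numbers reported by an iPerf run between the two sites
print(link_ok(45.0, 180.0))  # True  - link qualifies
print(link_ok(45.0, 400.0))  # False - roundtrip latency too high
```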
Wrapping up my coverage of the IBM Dynamic Infrastructure Executive Summit at the Fairmont Resort in Scottsdale, Arizona, we had a final morning of main-tent sessions. Here is a quick recap of the sessions presented Thursday morning. This left the afternoon for people to catch their flights or hit the links.
Data Center Actions your CFO will Love
Steve Sams, IBM Vice President of Global Site and Facilities, presented simple actions that can yield significant operational and capital cost savings. The first focus area was to extend the life of your existing data center. Some 70 percent of data centers are 10-15 years old or older, and therefore not designed for today's computational densities. IBM did this for its Lexington data center, making changes that resulted in 8x capability without increasing footprint.
The second focus area was to rationalize the infrastructure across the organization. The process of "rationalizing" involves determining the business value of specific IT components and deciding whether that business value justifies the existing cost and complexity. It allows you to prioritize which consolidations should be done first to reduce costs and optimize value. IBM's own transformation reduced 128 CIOs down to a single CIO, consolidated 155 scattered host data centers down to seven, and 80 web hosting data centers down to five. This also included consolidating 31 intranets down to a single global intranet.
The third focus area was to design your new infrastructure to be more responsive to change. IBM offers four solutions to help those looking to build or upgrade their data center:
Scalable Modular Data Center - save up to 20 percent compared to traditional deployments with turn-key configurations from 500 to 2500 square feet that can be deployed in as little as 8-12 weeks into existing floor space.
Enterprise Modular Data Center - save 40 to 50 percent with 5000 square foot standardized design for larger data centers. This modular approach provides a "pay as you grow" approach that can be more responsive to future unforeseen needs.
Portable Modular Data Center - this is the PMDC shipping container that was sitting outside in the parking lot. This can be deployed anywhere in 12-14 weeks and is ideal for dealing with disaster recoveries or situations where traditional data center floor plans cannot be built fast enough.
High Density Zone - this can help increase capacity in an existing data center without a full site retrofit.
Here is a quick [video] that provides more insight.
Neil Jarvis, CIO of American Automobile Association (AAA) for Northern California, Nevada and Utah (NCNU), provided the customer testimonial. Last September, the [AAA NCNU selected IBM] to build them an energy-efficient green data center. Neil provided us an update now six months later, managing the needs of 4 million drivers.
Virtualization - Managing the World's Infrastructure
Helene Armitage, IBM General Manager of the newly formed IBM System Software product line, presented on virtualization and management. Virtualization is becoming much more than a way of meeting the demand for performance, capability, and flexibility in the data center. It helps create a smarter, more agile data center. Her presentation focused on four areas: consolidate resources, manage workloads, automate processes, and optimize the delivery of IT services.
Charlie Weston, Group Vice President of Information Technology at Winn Dixie, provided the customer testimonial. Winn Dixie is one of the largest food retailers in the United States, with over 500 stores and supermarkets. The grocery business is highly competitive with tight profit margins. Winn Dixie wanted to deploy business continuity/disaster recovery (BC/DR) while managing IT equipment scattered across these 500 locations. They were able to consolidate 600 stand-alone servers into a single corporate data center. Using IBM AIX with PowerVM virtualization on BladeCenter, each JS22 blade server could manage 16 stores. These were mirrored to a nearby facility, as well as a remote disaster recovery center. They were also able to add new Linux application workloads to their existing System z9 EC mainframe. The result was to free up $5 million US dollars in capital that could be used to remodel their stores, and to improve application performance 5-10 times. They were able to deploy a new customer portal on Linux for System z in days instead of months, and have reduced their disaster recovery time objective (RTO) against hurricanes from days to hours. Their next steps involve looking at desktop virtualization.
Redefining x86 Computing
Roland Hagan, IBM Vice President for the IBM System x server platform, presented on how IBM is redefining the x86 computing experience. More than 50 percent of all servers are x86 based. These x86 servers are easy to acquire, enjoy a large application base, and can take advantage of a readily available skilled workforce for administration. The problem is that 85 percent of x86 processing power remains idle, energy costs are 8 times what they were 12 years ago, and management costs are now 70 percent of the IT budget.
IBM has the number one market share for scalable x86 servers. Roland covered the newly announced eX5 architecture that has been deployed in both rack-optimized models as well as IBM BladeCenter blade servers. These can offer 2x the memory capacity as competitive offerings, which is important for today's server virtualization, database and analytics workloads. This includes 40 and 80 DIMM models of blades, and 64 to 96 DIMM models of rack-optimized systems. IBM also announced eXFlash, internal Solid State Drives accessible at bus speeds.
The results can be significant. For example, just two IBM System x3850 4-socket, 8-core systems can replace 50 (yes, FIFTY) HP DL585 4-socket, 4-core Opteron rack servers, reducing costs 80 percent with a 3-month ROI payback period. Compared to IBM's previous X4 architecture, the eX5 provides 3.5 times better SAP performance, 3.8 times faster server virtualization performance, and 2.8 times faster database performance.
The CIO of Acxiom provided the customer testimonial. They were able to get a 35-to-1 consolidation switching over to IBM x86 servers, resulting in huge savings.
Top ROI projects to Get Started
Mark Shearer, IBM Vice President of Growth Solutions, and formerly my fourth-line manager as the Vice President of Marketing and Communications, presented a list of projects to help clients get started. There are over 500 client references that have successfully implemented Smarter Planet projects. Mark's list was grouped into five categories:
Enabling Massive Scale
Increase Business Agility
Manage Risk, Compliance and Security
Organize Vast Amounts of Information
Turn Information into Insight
The attendees were all offered a free "Infrastructure Study" to evaluate their current data center environments. A team of IBM experts will come on-site, gather data, interview key personnel and make recommendations. Alternatively, these can be done at one of IBM's many briefing centers, such as the IBM Executive Briefing Center in Tucson, Arizona, where I work.
This wraps up the week for me. I have to pack the XIV back into the crate, and drive back to Tucson. IBM plans to host another Executive Summit in the September/October time frame on the East coast.