Tony Pearson is a Master Inventor and Senior IT Architect for the IBM Storage product line at the IBM Executive Briefing Center in Tucson, Arizona, and a featured contributor to IBM's developerWorks. In 2016, Tony celebrates his 30th anniversary with IBM Storage. He is the author of the Inside System Storage series of books. This blog is for the open exchange of ideas relating to storage and storage networking hardware, software and services.
Safe Harbor Statement: The information on IBM products is intended to outline IBM's general product direction and it should not be relied on in making a purchasing decision. The information on the new products is for informational purposes only and may not be incorporated into any contract. The information on IBM products is not a commitment, promise, or legal obligation to deliver any material, code, or functionality. The development, release, and timing of any features or functionality described for IBM products remains at IBM's sole discretion.
Tony Pearson is an active participant in local, regional, and industry-specific interests, and does not receive any special payments to mention them on this blog.
Tony Pearson receives part of the revenue proceeds from sales of books he has authored listed in the side panel.
Tony Pearson is not a medical doctor, and this blog does not reference any IBM product or service that is intended for use in the diagnosis, treatment, cure, prevention or monitoring of a disease or medical condition, unless otherwise specified on individual posts.
"..., nor any drop to drink", from "The Rime of the Ancient Mariner" (1798) by Samuel Taylor Coleridge
Actually, I've been so busy this week that I am just now getting to this week's theme of Smarter Water. Since it was St. Patrick's Day this week, I thought of discussing IBM's project to help Ireland. Working with the Marine Institute Ireland, IBM has created a system to monitor wave conditions, marine life and pollution levels in and around Galway Bay. Here is a quick excerpt from the IBM [Press Release]:
"This real-time advanced analytics pilot is turning mountains of data into intelligence, paving the way for smarter environmental management and development of the bay.
The vision for SmartBay is a marine research infrastructure of sensors and computational technology interconnected across Galway Bay collecting and distributing information on coastal conditions, pollution levels and marine life. The monitoring services, delivered via the web and other devices, benefit tourism, fishing, aquaculture and the environment.
The pilot, which includes a move from manual to instrumented data gathering, will allow researchers to deploy quicker reactions to the critical challenges of the bay such as pollution, flooding, fishing stock levels, green energy generation and the threats from climate change."
Or... I could have used water as a metaphor for the "tidal wave" of information. Many of us have a lot of raw data, but not suitably digestible information in the form we need.
OK, I admit it is a silly photo, Darth Vader standing in the middle of the ocean filtering sea water into a plastic jug, but it helps focus on the problem. Long before we are done fighting over the last few drops of oil, we will be fighting over water.
This Sunday, March 22, is "World Water Day". Over the past 100 years, water consumption has increased six-fold, twice the growth of the human population. Today, one in five people on this planet lacks access to suitable drinking water. I have been to countries where people lack not just water filters but, in some cases, even closeable plastic jugs to carry the water in.
By 2015, the World Health Organization [WHO] estimates that water problems will impact over half the world's population. Here is their [Top 10 Facts File] on water scarcity.
At this point, you might be asking what any of this has to do with IBM.
The smart folks at the IBM Research lab, the same location where we do storage research, were able to take some of their knowledge of chemistry, solid state memory, and nanotubes to help the planet with the water situation. Here is a quick [2-minute video].
Continuing my week's theme on how bad things can get following the "Do-it-yourself" plan, I start with James Rogers' piece in Byte and Switch, titled [Washington Gets E-Discovery Wakeup Call]. Here's an excerpt:
"A court filing today reveals there may be gaps in the backup tapes the White House IT shop used to store email. It appears that messages from the crucial early stages of the Iraq War, between March 1 and May 22, 2003, can't be found on tape. So, far from exonerating the White House staffers, the latest turn of events casts an even harsher light on their email policies.
Things are not exactly perfect elsewhere in the federal government, either. A recent [report from the Government Accountability Office (GAO)] identified glaring holes in agencies’ antiquated email preservation techniques. Case in point: printing out emails and storing them in physical files."
You might think that laws requiring email archives are fairly recent. For corporations, they began with laws like Sarbanes-Oxley, which the second President Bush signed into law back in 2002. However, it appears that laws requiring US Presidents to keep their emails have been in force since 1993, back when the first President Clinton was in office. (We might as well all get used to saying this, in case we have a "second" President Clinton next January!)
"The Federal Record Act requires the head of each federal agency to ensure that documents related to that agency's official business be preserved for federal archives. The Watergate-era Presidential Records Act augmented the FRA framework by specifically requiring the president to preserve documents related to the performance of his official duties. A [1993 court decision] held that these laws applied to electronic records, including e-mails, which means that the president has an obligation to ensure that the e-mails of senior executive branch officials are preserved.
In 1994, the Clinton administration reacted to the previous year's court decision by rolling out an automated e-mail-archiving system to work with the Lotus-Notes-based e-mail software that was in use at the time. The system automatically categorized e-mails based on the requirements of the FRA and PRA, and it included safeguards to ensure that e-mails were not deliberately or unintentionally altered or deleted.
When the Bush administration took office, it decided to replace the Lotus Notes-based e-mail system used under the Clinton Administration with Microsoft Outlook and Exchange. The transition broke compatibility with the old archiving system, and the White House IT shop did not immediately have a new one to put in its place.
Instead, the White House has instituted a comically primitive system called "journaling," in which (to quote from a [recent Congressional report]) "a White House staffer or contractor would collect from a 'journal' e-mail folder in the Microsoft Exchange system copies of e-mails sent and received by White House employees." These would be manually named and saved as ".pst" files on White House servers.
One of the more vocal critics of the White House's e-mail-retention policies is Steven McDevitt, who was a senior official in the White House IT shop from September 2002 until he left in disgust in October 2006. He points out what would be obvious to anyone with IT experience: the system wasn't especially reliable or tamper-proof."
So we have White House staffers manually creating PST files, and other government agencies printing out their emails and storing them in file cabinets. When I first started at IBM in 1986, before Notes or Exchange existed, we used PROFS on VM on the mainframe, and some of my colleagues printed out their emails and filed them in cabinets. I can understand how government employees, who might have grown up using mainframe systems like PROFS, might have just continued the practice when they switched to Personal Computers.
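As an aside, the integrity safeguard in question is not hard to automate. Here is a minimal sketch in Python (with hypothetical paths; this is neither the White House system nor any IBM product) of the sort of tamper-evidence a manual .pst process lacks: record a cryptographic digest of every archive file when it is stored, and verify those digests later.

```python
import hashlib
import json
from pathlib import Path

ARCHIVE_DIR = Path("/archive/email")      # hypothetical archive location
MANIFEST = ARCHIVE_DIR / "manifest.json"  # digests recorded at archive time

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file, reading in 1 MB chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def record_archive() -> None:
    """Record a digest for every .pst file so later alteration is detectable."""
    manifest = {p.name: sha256_of(p) for p in ARCHIVE_DIR.glob("*.pst")}
    MANIFEST.write_text(json.dumps(manifest, indent=2))

def verify_archive() -> list[str]:
    """Return the names of archive files that no longer match their digest."""
    manifest = json.loads(MANIFEST.read_text())
    return [name for name, digest in manifest.items()
            if sha256_of(ARCHIVE_DIR / name) != digest]
```

Running verify_archive() on a schedule would flag any archive file altered or replaced after the fact, which is precisely the "not especially tamper-proof" gap McDevitt describes.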
Perhaps the new incoming White House staff hired by George W. Bush were more familiar with Outlook and Exchange, and rather than learning to use IBM Lotus Notes and Domino, found it easier just to switch over. I am not going to debate the pros and cons of "Lotus Notes/Domino" versus "Microsoft Outlook/Exchange", as IBM has automated email archiving systems that work great for both of these, as well as for Novell GroupWise. So, giving the benefit of the doubt: when President Bush took over, he tossed out the previous administration's staff, brought in his own people, and let them choose the office productivity tools they were most comfortable with. Fair enough, it happens every time a new President takes office. No big surprise there.
However, doing this without a clear plan for how to continue complying with the email archive laws already on the books, and letting the situation remain bad several years later, is appalling. I can understand why businesses are upset about deploying mandated archiving solutions when their own government doesn't have similar automation in place.
In the last post in this series, Barry Whyte mentions that the amazingly successful IBM SAN Volume Controller was part of a set of projects:
"IBM was looking for "new horizon" projects to fund at the time, and three such projects were proposed and created the "Storage Software Group". Those three projects became know externally as TPC, (TotalStorage Productivity Center), SanFS (SAN File System - oh how this was just 5 years too early) and SVC (SAN Volume Controller). The fact that two out of the three of them still exist today is actually pretty good. All of these products came out of research, and its a sad state of affairs when research teams are measured against the percentage of the projects they work on, versus those that turn into revenue generating streams."
But this raises the question: Was SAN File System just five years too early?
IBM classifies products into three "horizons": Horizon-1 for well-established mature products, Horizon-2 for recently launched products, and Horizon-3 for emerging business opportunities (EBO). Since I had some involvement with these projects, I thought I would help fill out some of this history from my perspective.
Back in 2000, IBM executive [Linda Sanford] was in charge of IBM's storage business and presented IBM Research's concept of "Storage Tank", which would hold petabytes of data accessible to mainframes and distributed servers.
In 2001, I was the lead architect of DFSMS for the IBM z/OS operating system for mainframes, and was asked to be lead architect for the new "Horizon 3" project to be called IBM TotalStorage Productivity Center (TPC), which has since been renamed to IBM Tivoli Storage Productivity Center.
In 2002, I was asked to lead a team to port the "SANfs client" for SAN File System from Linux-x86 over to Linux on System z. How easy or difficult it is to port code depends on how well it was written with the intent to be ported, and porting the "proof-of-concept" level code proved a bit too challenging for my team of relatively new hires. Once code written by research scientists is sufficiently complete to demonstrate proof of concept, it should be entirely discarded and rewritten from scratch by professional software engineers who follow proper development and documentation procedures. We reminded management of this, and they decided not to make the necessary investment to add Linux on System z as a supported operating system for SAN File System.
In 2003, IBM launched Productivity Center, SAN File System and SAN Volume Controller. These would be lumped together with the Horizon-1 product IBM Tivoli Storage Manager, and the four products were promoted together as the inappropriately-named [TotalStorage Open Software Family]. We actually had long meetings debating whether SAN Volume Controller was hardware or software. While it is true that most of the features and functions of SAN Volume Controller are driven by its software, it was never packaged as a software-only offering.
The SAN File System was the productized version of the "Storage Tank" research project. While the SAN Volume Controller used industry standard Fibre Channel Protocol (FCP) to allow support of a variety of operating system clients, the SAN File System required an installed "client" that was only available initially on AIX and Linux-x86. In keeping with the "open" concept, an "open source reference client" was made available so that the folks at Hewlett-Packard, Sun Microsystems and Microsoft could port this over to their respective HP-UX, Solaris and Windows operating systems. Not surprisingly, none were willing to voluntarily add yet another file system to their testing efforts.
Barry argues that SANfs was five years ahead of its time. SAN File System tried to bring policy-based management for information, which had been part of DFSMS for z/OS since the 1980s, over to distributed operating systems. The problem is that mainframe people who understand and appreciate the benefits of policy-based management already had it, and non-mainframe people couldn't understand the benefits of something they had managed to survive without.
(Every time I see VMware presented as a new or clever idea, I have to remind people that this x86-based hypervisor basically implements the mainframe concept of server virtualization introduced by IBM in the 1970s. IBM is the leading reseller of VMware, and supports other server virtualization solutions including Linux KVM, Xen, Hyper-V and PowerVM.)
To address the various concerns about SAN File System, the proof-of-concept code from IBM Research was withdrawn from marketing, and fresh new code implementing these concepts was integrated into IBM's existing General Parallel File System (GPFS). This software would then be packaged with a server hardware cluster, exporting global file spaces with broad operating system reach. Initially offered as the IBM Scale-out File Services (SoFS) service offering, this was later re-packaged as an appliance, the IBM Scale-Out Network Attached Storage (SONAS) product, and as the IBM Smart Business Storage Cloud (SBSC) offering. These now offer clustered NAS storage using the industry-standard NFS and CIFS clients that nearly all operating systems already have.
Today, these former Horizon-3 products have matured into Horizon-2 and Horizon-1. They have evolved. Tivoli Storage Productivity Center, GPFS and SAN Volume Controller are all market leaders in their respective areas.
IT shoppers generally fall into two camps:
Those that prefer the one-stop shopping of an IT supermarket, with companies like IBM, HP and Dell who offer a complete set of servers, storage, switches, software and services, what we call "The Five S's".
Those that prefer shopping for components at individual specialty shops, like butchers, bakers, and candlestick makers, hoping that this singular focus means the products are best-of-breed in the market. Companies like HDS for disk, Quantum for tape, and Symantec for software come to mind.
My, how the IT landscape for vendors has evolved in just the past five years! Cisco starts to sell servers, and enters a "mini-mall" alliance with EMC and VMware to offer the Vblock integrated stack of servers, storage and switches with VMware as the software hypervisor. For those not familiar with the concept of mini-malls, these are typically rows of specialty shops. A shopper can park their car once, and do all their shopping from the various shops in the mini-mall. Not quite the "one-stop" shopping of a supermarket, but it tries to address the same need.
("Who do I call when it breaks?" -- The three companies formed a puppet company, the Virtual Computing Environment company, or VCE, to help answer that question!)
Among the many things IBM has learned in its 100+ years of experience is that clients want choices. Cisco figured this out also, and partnered with NetApp to offer the aptly-named FlexPod reference architecture. In effect, Cisco has two boyfriends: when she is with EMC, it is called a Vblock, and when she is with NetApp, it is called a FlexPod. I was lucky enough to find this graphic to help explain the three-way love triangle.
Did this move put a strain on the relationship between Cisco and EMC? Last month, EMC announced VSPEX, a FlexPod-like approach that provides a choice of servers, and some leeway for resellers to make choices to fit client needs better. Why limit yourself to Cisco servers, when IBM and HP servers are better? Is this an admission that Vblock has failed, and that VSPEX is the new way of doing things? No, I suspect it is just EMC's way to strike back at both Cisco and NetApp in what many are calling the "Stack Wars". (See [The Stack Wars have Begun!], [What is the Enterprise Stack?], or [The Fight for the Fully Virtualized Data Center] for more on this.)
(FTC Disclosure: I am both an employee and shareholder of IBM, so the U.S. Federal Trade Commission may consider this post a paid, celebrity endorsement of the IBM PureFlex system. IBM has working relationships with Cisco, NetApp, and Quantum. I was not paid to mention, nor have I any financial interest in, any of the other companies mentioned in this blog post. )
Last month, IBM announced its new PureSystems family, ushering in a [new era in computing]. I invite you all to check out the many "Patterns of Expertise" available at the [IBM PureSystems Centre]. This is like an "app store" for the data center, and what I feel truly differentiates IBM's offerings from the rest.
The trend is obvious. Clients who previously purchased from specialty shops are discovering that building workable systems from piece-parts from separate vendors is expensive and challenging. IBM PureFlex™ systems eliminate a lot of the complexity and effort, but still offer plenty of flexibility: choice of server processor types, choice of server and storage hypervisors, and choice of various operating systems.
Here I am on day 11 of a 17-day business trip, on the last leg of the trip this week, in Kuala Lumpur, Malaysia. I have been flooded with requests to give my take on EMC's latest re-interpretation of storage virtualization, VPLEX.
I'll leave it to my fellow IBM Master Inventor Barry Whyte to cover the detailed technical side-by-side comparison. Instead, I will focus on the business side of things, using Simon Sinek's Why-How-What sequence. Here is a [TED video] from Garr Reynolds' post [The importance of starting from Why].
Let's start with the problem we are trying to solve.
Problem: migration from old gear to new gear, old technology to new technology, from one vendor to another vendor, is disruptive, time-consuming and painful.
Given that IT storage is typically replaced every 3-5 years, pretty much every company with an internal IT department has this problem; the exceptions are companies that don't last that long, and those that use public cloud solutions. IT storage can be expensive, so companies would like their new purchases to be fully utilized on day 1, and completely empty on day 1500 when the lease expires. I have spoken to clients who have spent 6-9 months planning for the replacement or removal of a storage array.
A solution that makes data migration non-disruptive would benefit clients (making it easier for their IT staff to keep their data center modern and current) as well as vendors (reducing the obstacles to selling and deploying new features and functions). Storage virtualization can be employed to help solve this problem. I define virtualization as "technology that makes one set of resources look and feel like a different set of resources, preferably with more desirable characteristics". By making different storage resources, old and new, look and feel like a single type of resource, migration can be performed without disrupting applications.
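To make this concrete, here is a toy sketch of in-band block virtualization in general (not SVC's actual design; the names and the naive extent allocator are invented for illustration). Hosts address a virtual volume; a map decides which physical array holds each extent; and migration copies an extent, then re-points the map entry, so the host-visible address never changes.

```python
class VirtualVolume:
    """Toy in-band virtualization layer: hosts address virtual extents,
    and a map decides which physical array actually holds the data."""

    def __init__(self, backing: dict):
        # virtual extent number -> (array name, physical extent number)
        self.map = dict(backing)

    def read(self, vext: int, arrays: dict) -> bytes:
        array, pext = self.map[vext]
        return arrays[array][pext]

    def migrate_extent(self, vext: int, target: str, arrays: dict) -> None:
        """Copy one extent to the target array, then re-point the map entry.
        The host keeps using the same virtual address before and after."""
        src_array, src_pext = self.map[vext]
        new_pext = len(arrays[target])          # naive allocator for the demo
        arrays[target][new_pext] = arrays[src_array][src_pext]
        self.map[vext] = (target, new_pext)     # the only visible change

# Migrate an extent from an old array to a new one while "in use".
arrays = {"old_array": {0: b"payroll data"}, "new_array": {}}
vol = VirtualVolume({0: ("old_array", 0)})
before = vol.read(0, arrays)
vol.migrate_extent(0, "new_array", arrays)
assert vol.read(0, arrays) == before            # the application never noticed
```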
Before VPLEX, here is a breakdown of each solution:

IBM SAN Volume Controller (SVC)
Why: Non-disruptive tech refresh, and a unified platform to provide management and functionality across heterogeneous storage.
What: New in-band storage virtualization device.

HDS USP-V and USP-VM
Why: Non-disruptive tech refresh, and a unified platform to provide management and functionality between internal tier-1 HDS storage and external tier-2 heterogeneous storage.
What: Add in-band storage virtualization to an existing storage array.

EMC Invista
Why: Non-disruptive tech refresh, with a unified multi-pathing driver that allows host attachment of heterogeneous storage.
What: New out-of-band storage virtualization device with new "smart" SAN switches.
For IBM, the motivation was clear: protect customers' existing investment in older storage arrays and introduce new IBM storage, with a solution that allows both to be managed with a single set of interfaces and provides a common set of functionality, improving capacity utilization and availability. IBM SAN Volume Controller eliminated vendor lock-in, providing clients choice in multi-pathing driver, and allowing any-to-any migration and copy services. For example, IBM SVC can be used to help migrate data from an old HDS USP-V to a new HDS USP-V.
With EMC, however, the motivation appeared to be protecting software revenues from their PowerPath multi-pathing driver, and their TimeFinder and SRDF copy services. Back in 2005, when EMC Invista was first announced, these three software products represented 60 percent of EMC's bottom-line profit. (OK, I made that last part up, but you get my point! EMC charges a lot for these.)
Back in 2006, fellow blogger Chuck Hollis (EMC) suggested that SVC was just a [bump in the wire] which could not possibly improve performance of existing disk arrays. IBM showed clients that putting cache (SVC) in front of other cache (back-end devices) does indeed improve performance, in the same way that multi-core processors successfully use L1/L2/L3 cache. Now, EMC is claiming their cache-based VPLEX improves performance of back-end disk. My, how EMC's story has changed!
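The stacked-cache claim is easy to demonstrate with a toy model. The sketch below is illustrative only (made-up latency costs and a naive fill policy, nothing like SVC's actual caching algorithms): with a skewed workload, a small fast tier in front of a larger slower tier cuts the average read cost, just as L1 in front of L2 does.

```python
import random
from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache modeling one tier of the stack."""
    def __init__(self, capacity: int):
        self.capacity, self.entries = capacity, OrderedDict()

    def lookup(self, key: int) -> bool:
        """Return True on a hit, refreshing the entry's recency."""
        if key in self.entries:
            self.entries.move_to_end(key)
            return True
        return False

    def insert(self, key: int) -> None:
        """Add (or refresh) an entry, evicting the least recently used."""
        self.entries[key] = True
        self.entries.move_to_end(key)
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)

def avg_latency(reads, tiers, tier_costs, miss_cost):
    """Average read cost across stacked cache tiers (arbitrary cost units)."""
    total = 0
    for key in reads:
        for cache, cost in zip(tiers, tier_costs):
            if cache.lookup(key):
                total += cost
                break
        else:
            total += miss_cost            # missed every tier: go to disk
        for cache in tiers:               # naive fill: populate all tiers
            cache.insert(key)
    return total / len(reads)

random.seed(42)
# Skewed workload: 90% of reads hit 3 hot blocks, the rest spread over 64.
reads = [random.choice([0, 1, 2]) if random.random() < 0.9
         else random.randrange(64) for _ in range(10_000)]
one_tier = avg_latency(reads, [LRUCache(64)], [5], 100)
two_tier = avg_latency(reads, [LRUCache(4), LRUCache(64)], [1, 5], 100)
print(one_tier, two_tier)   # the small front-end tier lowers the average
```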
So now, EMC announces VPLEX, which sports a blend of SVC-like and Invista-like characteristics. Based on blogs, tweets and publicly available materials I found on EMC's website, I have put together the following comparison of IBM SVC, EMC Invista, and EMC VPLEX. (Of course, VPLEX is not yet generally available, so what is eventually delivered may differ.)
Scalability
IBM SVC: Scalable, 1 to 4 node-pairs
EMC Invista: One size fits all, single pair of CPCs
EMC VPLEX: SVC-like, 1 to 4 director-pairs

SAN switch support
IBM SVC: Works with any SAN switches or directors
EMC Invista: Required special "smart" switches (vendor lock-in)
EMC VPLEX: SVC-like, works with any SAN switches or directors

Multi-pathing drivers
IBM SVC: Broad selection of IBM Subsystem Device Driver (SDD) offered at no additional charge, as well as OS-native drivers Windows MPIO, AIX MPIO, Solaris MPxIO, HP-UX PV-Links, VMware MPP, Linux DM-MP, and commercial third-party driver Symantec DMP
EMC Invista: Limited selection, with focus on the priced PowerPath driver
EMC VPLEX: Invista-like, PowerPath and Windows MPIO

Cache
IBM SVC: Read cache, and choice of fast-write or write-through cache, offering the ability to improve performance
EMC Invista: No cache; the Split-Path architecture cracked open Fibre Channel packets in flight, delayed every I/O by 20 nanoseconds, and redirected modified packets to the appropriate physical device
EMC VPLEX: SVC-like, read and write-through cache, offering the ability to improve performance

Space-efficient point-in-time copies
IBM SVC: SVC FlashCopy supports up to 256 space-efficient targets, copies of copies, read-only or writeable, and incremental persistent pairs
EMC Invista: No
EMC VPLEX: Like Invista, no

Remote distance mirror
IBM SVC: Choice of SVC Metro Mirror (synchronous up to 300km) and Global Mirror (asynchronous), or use the functionality of the back-end storage arrays
EMC Invista: No native support; use functionality of back-end storage arrays, or purchase a separate product called EMC RecoverPoint to cover this lack of functionality
EMC VPLEX: Limited synchronous remote-distance mirror within VPLEX (up to 100km only), no native asynchronous support; use functionality of back-end storage arrays

Thin provisioning
IBM SVC: Provides thin provisioning to devices that don't offer this natively
EMC Invista: No
EMC VPLEX: Like Invista, no

Concurrent access across two locations
IBM SVC: SVC Split-Cluster allows data to be read and written concurrently from hosts at two different locations several miles apart
EMC Invista: I don't think so
EMC VPLEX: VPLEX-Metro, similar in concept but implemented differently

Non-disruptive tech refresh
IBM SVC: Can upgrade or replace storage arrays, SAN switches, and even the SVC nodes' software AND hardware themselves, non-disruptively
EMC Invista: Tech refresh for storage arrays, but not for Invista CPCs
EMC VPLEX: Tech refresh of back-end devices, and upgrade of VPLEX software, non-disruptively; not clear if VPLEX engines themselves can be upgraded non-disruptively like the SVC

Heterogeneous storage support
IBM SVC: Broad support of over 140 different storage models from all major vendors, including all CLARiiON, Symmetrix and VMAX from EMC, and storage from many smaller startups you may not have heard of
EMC VPLEX: Invista-like; VPLEX claims to support a variety of arrays from a variety of vendors, but as far as I can find, only the DS8000 is supported from the list of IBM devices. Fellow blogger Barry Burke (EMC) suggests [putting SVC between VPLEX and third party storage devices] to get the heterogeneous coverage most companies demand

Back-end storage requirement
IBM SVC: Must define quorum disks on an IBM or non-IBM back-end storage array; SVC can run entirely on non-IBM storage arrays
EMC VPLEX: HP SVSP-like, requires at least one EMC storage array to hold metadata

Solid-state drive (SSD) support
IBM SVC: The SVC 2145-CF8 model supports up to four solid-state drives (SSD) per node that can be treated as managed disk to store end-user data
EMC VPLEX: Invista-like; VPLEX has an internal 30GB SSD, but this is used only for operating system and logs, not for end-user data
In-band virtualization solutions from IBM and HDS dominate the market. Being able to migrate data from old devices to new ones non-disruptively turned out to be only the [tip of the iceberg] of benefits from storage virtualization. In today's highly virtualized server environment, being able to non-disruptively migrate data comes in handy all the time. SVC is one of the best storage solutions for VMware, Hyper-V, Xen and PowerVM environments. EMC watched and learned in the shadows, taking notes on what people like about the SVC, and decided to follow IBM's time-tested leadership with a similar offering.
EMC re-invented the wheel, and it is round. On a scale from Invista (zero) to SVC (ten), I give EMC's new VPLEX a six.
This week I am down under, starting my 7-city Storage Optimisation Breakfast roadshow on Tuesday in Sydney, Australia. I can't be at two places at once, and it seems whenever I am one place, lots of my coworkers are somewhere else at another conference or event. For those at [VMworld 2010] conference in San Francisco this week, IBM is a Platinum Sponsor and hosting a variety of presentations and activities. Here are some things to look forward to:
Session ID SP9638 - Getting the MAX from your Virtualization Investment
Monday 1:30pm, Moscone South Room 309
Speaker: Bob Zuber, IBM System x Program Director
Speaker: Clod Barrera Distinguished Engineer and Chief Technical Strategist
Clod and I just finished Solutions University 2010 in Dallas, and here he is going to VMworld! You already know that virtualization is beneficial. Exploit virtualization to its MAXimum and move beyond Virtualization 101, where you have virtualized web, file/print, and DHCP type workloads. Now it is time to take virtualization to the next step and virtualize business infrastructure applications such as ERP, Messaging, CRM, and Database. With IBM solutions you can take the virtualization journey to build a smarter data center through: 1) Consolidation, 2) Management, 3) Automation and 4) Optimization. Attend this session and learn the key considerations for virtualizing mission-critical workloads and the best practices for a virtual data center that delivers a REAL return on your investment.
Session ID TA8065 - Storage Best Practices, Performance Tuning and Troubleshooting
Speaker: Duane Fafard, Senior XIV Storage Architect, IBM
Monday 10:30 AM Moscone South Room 301
Wednesday 03:00 PM Moscone West Room 2005
The industry has solved many of the challenges of virtualization applications by delivering innovative server solutions that automatically migrate load to available resources, but the complete environment requires both the network and the storage to be part of the equation. Designing, managing, and troubleshooting intricate storage environments in today’s age have become more and more complex. This session will discuss storage best practices, performance challenges, and resolving issues in the storage area network using native tools within the environment. With the techniques learned in this session, the storage administrator will be able to use these best practices to design proper storage solutions and pinpoint troubled areas quickly and accurately.
Session ID SS1012 - Expert Panel: How Smarter Systems can Address your Business Challenges
Wednesday, 12-1pm, Room 135
This is IBM's "Super Session". At IBM, we know that all business challenges such as sprawling IT infrastructure, poor performance and rising management costs are solvable on a smarter planet. With Smarter Systems, IBM can help you increase utilization and flexibility, reduce complexity and cost, respond to business changes swiftly and effectively, and enable end-to-end resiliency and security. Alex Yost, Vice President and Business Line Executive for IBM System x and BladeCenter hosts a panel of Virtualization experts:
James Northington, Vice President and Business Line Executive, IBM System x
Donn Bullock, Vice President of Sales, Mainline Information Systems, Inc.
Dylan Larson, Director of Advanced Software and Server Technologies, Intel Data Center Group
Richard McAniff, Chief Development Officer and Member of the Office of the President, VMware
Siddhartha (Sid) Chatterjee, Ph.D, Vice President, Strategy & Partnerships, IBM Systems Software
David Guzman, Chief Information Officer and Senior Vice President, Global Technology Solution, Acxiom
This week, some of my coworkers are out at
[VMworld 2009] in San Francisco. IBM is a platinum sponsor, and is the leading reseller of VMware software. Here is the floor plan for our IBM booth there:
Virtual Data Center in a Box & Virtual Networking on
IBM & VMware Joint Collaboration on Power Monitoring
“Always on IT” Business Continuity Solution
IBM System Storage™ XIV®
[IBM XIV Storage System] is a revolutionary, easily managed, open disk system, designed to meet today’s ongoing IT challenges. This system now supports VMware 4.0 and extends the benefits of virtualization to your storage system, enabling easy provisioning and self-tuning after hardware changes. Its unique grid-based architecture represents the next generation of high-end storage and delivers outstanding performance, scalability, reliability and features, along with management simplicity and exceptional TCO.
IBM Storage Solutions with VMware
Featured products include: The new IBM System Storage DS5020 , Virtual Disk solutions with IBM System Storage SAN Volume Controller, IBM Tivoli Storage Productivity Center, and IBM System Storage ProtecTIER Data Deduplication solutions.
Server virtualization with VMware vSphere offers significant benefits to an organization, including increased asset utilization, simplified management and faster server provisioning. In addition to these benefits, VMware enables business agility and business continuity with more advanced features such as VMotion, high availability, fault tolerance, and Site Recovery Manager that all require dependable high-performance shared storage. Adding storage solutions, including virtualized storage, from IBM delivers complementary benefits to your information infrastructure that extend and enhance the benefits of VMware vSphere while increasing overall reliability, availability and performance to help you transform into a dynamic infrastructure. IBM can provide the right storage solution for your environment and requirements. Our solutions help maximize efficiency with lower costs and provide affordable, scalable storage solutions that help you solve your particular needs.
Stop by to learn how the exciting new storage solutions can help optimize VMware, including self-encrypting storage; automated, affordable disaster recovery with VMware SRM; easier and faster provisioning of storage for virtual machines; dramatically improved storage utilization with ProtecTIER deduplication; and how the DS5000 has a lower Total Cost of Acquisition (TCA) than typical competitors.
IBM Smart Business Desktop Cloud
IBM System x® iDataPlex™: Get More on the Floor
Virtual Client Solutions from IBM
IBM also is sponsoring some breakout sessions:
Leverage Storage Solutions for a Smarter Infrastructure
Simplify and Optimize with IBM N series
IBM SAN Volume Controller: Virtualized Storage for Virtual Servers
XIV: Storage Reinvented for today's dynamic world
Wish I was there, looks like a lot of good information!
IBM also has a vision for the future, and like Martin Luther King's speech, is starting to enable change. In February 2006, IBM launched "Information on Demand", a vision that involves bringing together our hardware, software, and services.
The impact has not gone unnoticed. Barron's featured IBM in an article titled "The New IBM".
I suspect bloggers helped get the word out. Here's a graph from Yahoo! Finance showing the IBM stock price over the past six months. This blog started in September, when the stock was in the low 80's, and now it is in the high 90's. I can't take all the credit, of course, as there are now over 3000 IBMers blogging, either inside the company, or externally to the rest of the world.
Continuing this week's theme on the z10 EC mainframe being able to perform the workload of hundreds or thousands of small 2-way x86 servers, I offer a simple analogy.
One car, one driver
If you wonder why so many companies subscribe to the notion that you should only run a single application per server, blame Sun, who I think helped promote this idea. Not to be out-done, Microsoft, HP and Dell think that it is a great idea too. Imagine the convenience for operators to be able to switch off a single machine and impact only a single application. Imagine how much this simplifies new application development, knowing that you are the only workload on a set of dedicated resources.
This is analogous to a single car, single driver, where the car helps get the person from "point A" to "point B" and the single driver represents the driver and sole passenger of the vehicle. If this were a single driver on an energy-efficient motorcycle or scooter, that would be reasonable, but people often drive much bigger vehicles alone, what Jeff Savit would call "over-provisioning". Chips have increased in processing power much faster than individual applications have increased their requirements, so as a result, you have over-provisioning.
Carpooling - one bus, one driver, and many other passengers riding along
This is how z/OS operates. Yes, you could have up to 60 LPARs that you could individually turn on and off, but where z/OS gets most of its advantages is that you can run many applications in a single OS instance, through the use of "Address Spaces" which act as application containers. Of course, it is more difficult to write for this environment, because you have to be a good "z/OS citizen", share resources nicely, and be WLM-compliant to allow your application to be swapped out for others.
While you get efficiencies with this approach, when you bring the OS down, all the apps on that OS image have to stop with it. For those who have "Parallel Sysplex", that is not an issue. For example, let's say you have three mainframes, each running several LPARs of z/OS, and your various z/OS images are all able to process incoming transactions for a common shared DB2 database. Thanks to DB2 data sharing technology, you could take down an individual LPAR or z/OS image, and not disrupt transaction processing, because the IP spreader just sends transactions to the remaining LPARs. A "Coupling Facility" allows for smooth operations if any of the OS images are lost from an unexpected disaster or disruption.
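As a rough sketch of that dispatching behavior (a toy model only; the real Sysplex Distributor and WLM are far more sophisticated than round-robin), transactions rotate across the active z/OS images, and when one image is taken down its share simply flows to the survivors:

```python
import itertools

class ToySysplex:
    """Toy model: spread transactions across z/OS images sharing one DB2."""

    def __init__(self, images):
        self.active = set(images)
        self._ring = itertools.cycle(images)

    def dispatch(self, txn: str) -> str:
        """Route a transaction to the next active image, skipping down ones."""
        if not self.active:
            raise RuntimeError("no active images")
        for image in self._ring:
            if image in self.active:
                return f"{txn} -> {image}"

    def take_down(self, image: str) -> None:
        """Planned outage of one image; the survivors absorb its work."""
        self.active.discard(image)

plex = ToySysplex(["ZOS1", "ZOS2", "ZOS3"])
print(plex.dispatch("TXN0001"))   # TXN0001 -> ZOS1
plex.take_down("ZOS2")            # e.g. rolling maintenance on one LPAR
print(plex.dispatch("TXN0002"))   # skips ZOS2, lands on ZOS3
```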
Needless to say, IBM does not give each z/OS developer his or her own mainframe. Instead, we get to run z/OS guest images under z/VM. It was even possible to emulate the next generation S/390 chipset to allow us to test software on hardware that hadn't been created yet. With HiperSockets, we can have virtual TCP/IP LAN connections between images, virtual coupling facilities, virtual disk and virtual tape, and so on. It made development and test that much more efficient, which is why z/OS is recognized as one of the most rock-solid, bullet-proof operating systems in existence.
The negatives of carpooling or taking the bus apply here as well. I have been on buses that have stopped working, leaving 50 people stranded. And you don't need more than two people to make the logistics of most carpools complicated. This feeds the fear that makes people prefer separate manageable units, one car and one driver, over putting all of their eggs into one basket and having to schedule outages together, and so on.
(Disclaimer: From 1986 to 2001, I helped develop z/OS and Linux on System z. Most of my 17 patents are from that time of my career!)
Bicycle races and Marathons
The third computing model is the Supercomputer. Here we take a lot of one-way and two-way machines, and lash them together to form an incredible machine able to perform mathematical computations faster than any mainframe. The supercomputer that IBM built for Los Alamos National Laboratory just clocked in at 1,000,000,000,000,000 floating point operations per second. This is not a single operating system; rather, each machine runs its own OS, is given its primary objective, and tries to get it done. NetworkWorld has a nice article on this titled [IBM, Los Alamos smash petaflop barrier, triple supercomputer speed record]. If every person in the world was armed with a handheld calculator and performed one calculation per second, it would take us 46 years collectively to do everything this supercomputer can do in one day.
I originally thought of bicycle races as an analogy for this, but having listened to Lance Armstrong at the [IBM Pulse 2008] conference, I learned that biking is a team sport, and I wanted something that had the "every-man-for-himself" approach to computing. So, I changed this to marathons.
The marathon was named after a fabled Greek soldier sent as a messenger from the [Battle of Marathon to the City of Athens], a distance that is now standardized to 26 miles and 385 yards, or 42.195 kilometers for my readers outside the United States.
If you were given the task of getting thousands of people from "point A" to "point B" 26-plus miles away, would you choose thousands of cars, each with a lone driver? Conferences with a lot of people in a few hotels use shuttle buses instead. A few drivers, a few buses, and you can get thousands of people from a few places to a few places. But the workloads that are sent to supercomputers have a single end point, so a dispatcher node gives a message to each "Greek soldier" compute node, and has them run it on their own. Some make it, some don't, but for a supercomputer that is OK. When the message is delivered, the calculation for that little piece is done, and the compute node is given another message to process. All of the computations are assembled to come up with the final result. Applications must be coded very specially to be able to handle this approach, but for the ones that are, amazing things happen.
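In code, this "every message on its own" model is just an embarrassingly parallel work queue. The sketch below is illustrative of the general pattern only (nothing like the actual Los Alamos software stack): a dispatcher hands out independent pieces, re-runs any that fail, and assembles the final result.

```python
from concurrent.futures import ProcessPoolExecutor, as_completed

def compute(piece: int) -> int:
    """One independent 'message': a compute node works on its piece alone."""
    return piece * piece          # stand-in for a real numerical kernel

def run(pieces):
    results, failed = [], []
    with ProcessPoolExecutor() as pool:
        futures = {pool.submit(compute, p): p for p in pieces}
        for fut in as_completed(futures):
            try:
                results.append(fut.result())
            except Exception:
                failed.append(futures[fut])     # a node died: remember its piece
    results += [compute(p) for p in failed]     # simple retry by the dispatcher
    return sum(results)                         # assemble the final answer

if __name__ == "__main__":
    print(run(range(1000)))       # 332833500: the sum of the squares 0..999
```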
So, how does "server virtualization" come into play?
IBM has had Logical Partitions for quite some time. A logical partition, or LPAR, can run its own OS image, and can be turned on and off without impacting other LPARs. LPARs can have dedicated resources, or shared resources with other LPARs. The IBM z10 EC can have up to 60 LPARs. System p and System i, now merged into the new "POWER Systems" product line, also support LPARs in this manner. Depending on the size of your LPAR, this could be for a single OS and application, or a single OS with lots of applications.
Address Spaces/Application Containers
This is the bus approach. You have a single OS, and that is shared by a set of application containers. z/OS does this with address spaces, all running under a single z/OS image, and for x86 there are products like [Parallels Virtuozzo Containers] that can run hundreds of Windows instances under a single Windows OS image, or hundreds of Linux images under a single Linux OS image. However, you cannot mix and match Windows with Linux, just as all the address spaces on z/OS have to be coded for the same z/OS level on the LPAR they run in.
The term "guests" were chosen to model this after the way hotels are organized. Each guest has a roomwith its own lockable entrance and privacy, but shared lobby, and in some countries, shared bathroomson every hall. This approach is used by z/VM, VMware and others. The z/VM operating system can handle any S/390-chip operating system guest, so you could have a mix ofz/OS, TPF, z/VSE, Linux and OpenSolaris, and even other z/VM levels running as guests. Many z/VM developers runin this "second level" mode to develop new versions of the z/VM operating system!
As part of the One Laptop Per Child [OLPC] development team (yes, I am a member of their open source community, and now have developer keys to provide contributions), I have been experimenting with Linux KVM. This was [folded into the base Linux 2.6.20 kernel] and is available to run Linux and Windows guest images. There is a nice write-up on [Wikipedia].
The key advantage of this approach is that you are back to the simplistic one-car-one-driver mode of thinking. Each guest can be turned on and off without impacting other applications. Each guest has its own OS image, so you can mix different OSes on the same server hardware. You can have your own customized kernel modules, levels of Java, etc. Externally, it looks like you are running dozens of applications on a single server, but internally, each application thinks it is the only one running on its own OS. This gives you a simpler coding model to base your test and development on.
Jeff is correct that running less than 10 percent average utilization across your servers is a crying shame, and that servers could be managed in a manner that raises their utilization so that fewer are needed. But just as people could carpool or take the bus to work and mostly don't, it just doesn't happen, and data centers are full of single-application servers.
VMware has an architectural limit of 128 guests per machine, and IBM is able to reach this with its beefiest System x3850 M2 servers, but most of the x86 machines from HP, Dell and Sun are less powerful, and only run a dozen or so guests. In all cases, fewer servers means simpler management, so more applications per server is always the goal in mind.
VMware can soak up 30 to 40 percent of the cycles, meaning the most you can get from a VMware-based solution is 60 to 70 percent CPU utilization (which is still much better than the typical 5 to 10 percent average utilization we see today!). z/VM has been finely tuned to incur as little as 7 percent overhead, so IBM can achieve up to 93 percent utilization.
Jeff argues that since many of the z/OS technologies that allow customers to get over 90 percent utilization don't apply to Linux guests under z/VM, all of the numbers are wrong. My point is that there are two ways to achieve 90 percent utilization on the mainframe: one is through z/OS running many applications on a single LPAR (the application container approach), and the other is through z/VM supporting many Linux OS images, each with one (or a few) applications (the virtual guest approach).
I am still gathering more research on this topic, so I will try to have it ready later this week.
Several of my IBM colleagues will be attending the "Virtual Worlds 2007" conference today and tomorrow. This conference sold out so quickly that they have already scheduled a second one for October. The focus is on 3-D internet technologies like Second Life. Attendance is expected at over 600 people.
IBM is investing heavily in this new concept of v-business. Last year, I was one of only 325 IBMers on Second Life. Now, according to this Better than Life blog entry from Grady Booch, IBM Fellow, the number is over 4000!
Of course, the challenge for IBM, and others, is learning to market in virtual worlds. Already, my team is in-world, and we meet several times a week. Using Second Life is quickly becoming an essential business skill, like participating in conference calls, or responding to instant messages.
What does meeting in-world entail?
Scheduling a time and a place
Finding a time that people can meet is no different than scheduling an audio or video conference call. In general, you don't have to worry about travel, but you do have to be somewhere actively connected to everyone else.
Finding a place involves actually determining the island, region and coordinates to hold the meeting. You need to find a place with enough seating. You don't have to worry about daylight, each person can control how much or little sunlight shows up on their screen. You do have to make sure you pick a spot that nobody else plans to use at that same time. Just like scheduling conference rooms at the site or hotel, we have to schedule rooms in advance.
To avoid this hassle, I have created the "pocket conference room". This is a single object that I can "rez" onto the ground, from my inventory, with 40 chairs, a PowerPoint presentation screen, a podium for a speaker to stand behind, and stools for speakers to sit on if they are next on the agenda. Now, I can hold impromptu meetings in any sandbox, grassy knoll, or the roof top of a building.
As with any other meeting, you need some basic ground rules. I am not talking the usual "no shooting, no gambling, no selling" rules that you see everywhere in Second Life. Instead, rules like an avatar must stand up before speaking. Anyone with a question must first "raise their hand" and get recognized by the chair. These ground rules can be as formal as Robert's Rules of Order or more casual, depending on who is participating.
It costs 10 Linden Dollars (L$) per PAGE to upload a PowerPoint presentation. This has the immediate benefit of having everyone spend more time and effort on their presentation, trying to cut down the number of charts, and focus more on what they are going to say.
Public Speaking Skills
It is amazing. People who are too scared to speak in front of an audience in Real Life have no problem having their avatar stand in front of other avatars in Second Life. This has greatly broadened the pool of speakers to tap into. Are you a woman with a husky masculine voice? Are you a man with a high-pitched feminine voice? Now, you can create an avatar that matches your voice.
Getting People to Attend
This turns out to be the biggest challenge. In Real Life, organizing a face-to-face meeting involves time and effort making sure the venue has everything you need: a platform, a podium, a good Audio/Video system, etc. All people have to do is show up, sit in a chair and listen.
In Second Life, however, the aspects of venue are all covered, but getting people to show up is another story. People have to sign up for a Second Life account, create an avatar, wear appropriate virtual clothing, figure out how to teleport near the venue, walk or fly the remaining distance to get to the exact building and room, master the sitting-in-a-chair and hold-coffee-and-sip-occasionally process, and pay attention.
Perhaps the best part of Second Life is that if you are not paying attention, your avatar noticeably falls asleep, into a hunched-over position, what is called "afk" (short for Away From Keyboard). On the other hand, if you do need to step away from your desk, you can put your avatar in "afk" mode immediately, tell everyone why and perhaps when you'll be back, and then re-activate when you return. This is one of the best improvements over regular audio conference calls.
I suspect that places to hold meetings in Second Life will become more and more in demand. At a time when real-estate sales in the US are slowing down, Coldwell Banker's Second Life efforts are ramping up. I am not making this up. Coldwell Banker is one of the nation's largest real estate brokerage firms. They are trying to bring the same "adult supervision" to virtual real-estate transactions, offering to help people buy and rent properties in Second Life.
Registration is now open for our next "Meet the Storage Experts" event in Second Life. All IBMers, clients and IBM Business Partners are welcome to attend. We will focus this time on DS3000 and N series disk systems, tape systems, and IBM storage networking gear.
It takes me 20-30 minutes to complete a crossword or Sudoku puzzle. I am in no hurry, and I find the process relaxing. But what if you were paid to complete a puzzle? In that case, finishing the puzzle sooner, in fewer minutes, means more money in your paycheck per hour worked! However, getting paid would mean that doing these puzzles may no longer be fun or relaxing.
The idea of converting a hobby into a revenue-generating activity is not new. Who wouldn't want to earn money doing something you were planning to do already? The television is full of commercial advertisements for credit cards where you can earn Double Miles or Cash Rewards just for spending money on things you were going to spend on anyways.
But is "earn" the right word? The merchants pay a percentage fee every time a patron uses a credit card, and the bank is just providing a marketing incentive in the form of a portion of those fees back to the consumer, to encourage more usage of their card versus other forms of payment. Sort of like "profit sharing".
(FTC Disclosure: I am a full-time employee and shareholder of the IBM Corporation. This blog post should not be considered an endorsement for anything. My opinions and writings are based on publicly available information and my own experiences doing freelance work prior to my employment at IBM. I have no hands-on experience with Amazon Mechanical Turk, neither as a worker nor requester, have not participated in TopCoder contests, nor have I used the Viggle app. I do not have any financial interest in Amazon, TopCoder, Viggle or any other third-party company mentioned on this blog post, nor has anyone paid me to mention their company names, brands or offerings.)
Here's how Viggle works. You get the app on your phone, and register each television show as you watch it. You can watch the show live, or much later, recorded on your TiVo. You watch the shows you were going to watch anyways, and just provide your demographics, all in the name of market research. You get two points per minute of watching, and after 7,500 points, you get a $5 gift card from retailers such as Burger King, Starbucks, Best Buy, Sephora, Fandango, and CVS drugstores. At two points per minute, 7,500 points works out to 3,750 minutes, or roughly 62 hours of viewing; for the typical American, it would take about three weeks to watch that much television!
Of course, this is not the only way to earn money working from home. A reader asked me for my opinion of [Amazon Mechanical Turk]. While the other examples above are done for marketing purposes, Mechanical Turk can be used for a variety of other things. Up to now, the IT industry has regarded the Cloud as the delivery of computing as a service, with the infrastructure, hardware and software existing on internationally networked servers, effectively invisible to the end user. This model is now being applied broadly to people.
Basically, Mechanical Turk acts as a marketplace, where employers post Human Intelligent Tasks (HITs) that workers can do. Most can be completed in minutes and you are paid pennies to do so. Some examples might help illustrate what a HIT looks like:
Call a business and get the email address of the manager in charge.
Review a photograph and describe its style or content in three words or less
Select among multiple choices to categorize a job listing or company position
As a Mechanical Turk worker, you only work on the HITs you choose, presumably those that interest you, and that you can do well and quickly. Workers can do this anytime, anywhere, such as at 2:00 AM when you can't sleep, or at home while taking care of children. You can choose to work as much or as little as you like.
The employers, referred to as Mechanical Turk requesters, put money into their payroll accounts, load up their tasks, and hit publish. This gives them immediate access to a global, on-demand, 24-by-7 workforce that can help complete thousands of HITs in minutes. These employers won't have to put an advertisement in the want ads and interview potential candidates, just to let them go later when the project is over.
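For the curious, here is roughly what "load up their tasks and hit publish" looks like in code. This is a hedged sketch using the boto3 MTurk client (a library that postdates this post); the title, reward, and HTML form are invented for illustration, and the sandbox endpoint is used so no real money changes hands.

```python
import boto3

# Sandbox endpoint: experiments here do not spend real money.
mturk = boto3.client(
    "mturk",
    endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
)

# A trivial HTMLQuestion form, like the photo-description HIT above.
QUESTION = """
<HTMLQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2011-11-11/HTMLQuestion.xsd">
  <HTMLContent><![CDATA[
    <form>
      Describe this photograph in three words or less:
      <input name="answer"/><input type="submit"/>
    </form>
  ]]></HTMLContent>
  <FrameHeight>300</FrameHeight>
</HTMLQuestion>
"""

hit = mturk.create_hit(
    Title="Describe a photograph in three words or less",
    Description="Quick image-labeling task",
    Reward="0.05",                    # paid in pennies, as noted above
    MaxAssignments=3,                 # three independent workers per task
    LifetimeInSeconds=24 * 60 * 60,   # listed on the marketplace for a day
    AssignmentDurationInSeconds=300,  # five minutes to finish once accepted
    Question=QUESTION,
)
print(hit["HIT"]["HITId"])            # track the task through completion
```

A real form would also need to post the worker's assignmentId back to MTurk's submit endpoint; the sketch omits that plumbing.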
Just like any other job, Mechanical Turk wages are reported to the IRS, and each person's work is evaluated for quality. In doing these tasks, you build up your "digital reputation" that will either prevent you or allow you to work on certain HITs. You can also take tests to reach Qualification levels to be eligible to work on HITs not available to everyone else.
Software engineers would have a hard time writing an Artificial Intelligence [AI] program to do these simple tasks, so being able to generate a HIT for something in the middle of a computer program might be the easiest way to get past a difficult part of an algorithm. Amusingly, Amazon describes this form of [crowdsourcing] as an artificial form of Artificial Intelligence!
While this approach may work for small, easily defined tasks, what about work that requires a high amount of Human Intelligence, like storage software or hardware development?
When I was working for IBM as a software engineer in the 1980s and 1990s, it took us years to get a project done, using the traditional [Waterfall Model]. My job as a software architect was to estimate the thousands of lines of code (KLOC) a project would require, estimate the number of Person-Years (PY) it would take, and recommend the appropriately sized team. Back then, each engineer averaged only about 1,000 lines of software code per year, so KLOC and PY were often used interchangeably. Fellow IBM author Fred Brooks wrote an excellent book on the process called [The Mythical Man-Month].
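The sizing arithmetic itself fits in a few lines; here is a simplified sketch at that roughly 1,000-lines-per-engineer-year figure (real estimates also weighed complexity, test effort, and documentation):

```python
def staffing_estimate(kloc: float, loc_per_py: float = 1000,
                      schedule_years: float = 2.0) -> tuple[float, float]:
    """Convert estimated project size (KLOC) into person-years and team size,
    at roughly 1,000 shipped lines of code per engineer per year."""
    person_years = kloc * 1000 / loc_per_py   # at 1 KLOC/PY, KLOC equals PY
    team_size = person_years / schedule_years
    return person_years, team_size

# A hypothetical 50 KLOC component: 50 PY, or a team of 25 over two years.
print(staffing_estimate(50))   # (50.0, 25.0)
```

Brooks' book is, of course, largely about why the second line of that arithmetic breaks down in practice: adding people to a late project makes it later.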
The Waterfall model had the advantage that people only had to work a portion of the cycle on the project. In between, there was plenty of downtime to attend training, improve your skills, or take vacation. As our director Lynn Yates would often complain, "if they are only writing two lines of code in the morning, and two in the afternoon, why do they need time to rest?"
The Waterfall model was not perfect, and had its share of critics. One downside was that the clients didn't see anything until General Availability (GA), with a few getting a glimpse a few months earlier during our Early Support Program (ESP). By the time clients could tell us it was not what they wanted or expected, it was too late to change until the next release.
To address this concern, 17 software engineers wrote the now famous [Agile Manifesto]. The authors felt that collaboration, between the developers and with the clients, is critical to success. Business people and developers must work together daily throughout the project. The most efficient and effective method of conveying information to and within a development team is face-to-face conversation. The best architectures, requirements, and designs emerge from self-organizing teams. The result is an iterative approach that allows the client to see working prototypes early in the process, allowing last-minute changes to requirements to influence the final product.
Combining the Mechanical Turk concept with Agile programming methodology gives you what IBM calls an "Outcomes Model" approach. In the IBM research paper [Software Economies] (PDF, 5 pages), the authors argue that there are four fundamental principles needed for an "Outcomes Model" approach:
Autonomy. All of the actions necessary to bring jobs to completion should be driven by market forces; the process is never gated by an entity outside of the market.
Inclusiveness. Everyone who provides information or performs work that leads to improvements should share in the resulting rewards.
Transparency. The system should be transparent with respect to both the flow of money in the market and the tasks performed by workers in the market.
Reliability. The system should be immune to manipulation, robust against attack (e.g., via insertion of untrusted code), and prevent "shallow" work which would have to be re-done later.
I was surprised to see that [the TopCoder Community is 390,593 strong], nearly the size of the entire IBM company. TopCoder is focused on computer programming and digital creation using the Outcomes Model approach. Rather than paying everyone for their work, however, the platform is designed around challenges and competitions, and the top players or contributors are rewarded with cash prizes.
As an innovative company, IBM constantly explores a variety of approaches to offer value to its clients and customers. These new approaches may have distinct advantages not just for IBM and its shareholders, but also for its clients and the freelancers hired to work on these projects. The global marketplace is getting flatter, smaller and smarter. It will be interesting to see how this plays out. If the discussion above encourages you to hone your technical skills, perhaps that is motivation enough to get off the couch and stop watching so much television!
I’ve just returned from the IBM Tivoli Pulse conference in Las Vegas – a meeting of over 4000 customers, partners, and IBM employees.

... There was a lot to digest, but three of the major themes caught my attention, and my imagination.

... First, IBM put a huge push behind their Dynamic Infrastructure initiative. Sounds like so many other automation and autonomic initiatives of the past, right? Well, things are getting better, and “dynamic” is becoming more of a realistic possibility, especially with the emergence of cloud computing and cloud services models.

... Second, a lot of time was spent on IBM’s Service Management Industry Solutions. When I first heard of this, my thought was that IBM was creating solutions for the Service Management industry (i.e. food services, janitorial services, hospitality services). But this is much larger than that – much, much larger. IBM is taking their unique ability to pair business (non-IT) expertise with IT consulting, planning, and technology delivery, and constructing (careful – here comes the “f” word) frameworks for several vertical industry segments.

... IBM is perhaps the only organization in the world that can take this on fully and hope to deliver a meaningful result. But beyond that, this represents a huge opportunity for IT professionals to become the transformation agents within their own organizations, contributing at a whole new level.

... Lastly, I was really impressed by IBM’s Smarter Planet initiative. The primary thought here was that the key to a greener planet is to take inefficiencies out of just about every form of business through the intelligent application and deployment of technology. At first I was thinking this was just another marketing initiative, but in the course of this event, listening to the keynotes and talking to a number of IBM execs, it became apparent that this is a substantial cultural shift within IBM itself. Just think about that for a moment – when 400,000 employees all change their direction and focus, their sheer mass is going to make a noticeable difference.

... Magic (Johnson) gave an excellent talk, and reminded the audience that you should do two things no matter what your job or role. First, service starts with knowing your customers – not just who they are, but what they do and what is important to them. And second – always over-deliver. Go that extra step. Exceed expectations. The boost in loyalty, goodwill, and improved customer relationships will be well worth the effort. Good thoughts to keep with us….
If you missed Pulse 2009, perhaps because your company has put a clamp down on travel expenses, you are in luck! IBM is hosting the "Dynamic Infrastructure Forum" March 3-4, 2009, on your computer. This is an IBM Virtual event, no travel required! [Register Today!]
Earlier this week, EMC announced its Symmetrix V-Max, following two trends in the industry:
Using Roman numerals. The "V" here is for FIVE, as this is the successor to the DMX-3 and DMX-4. EMC might have gotten the idea from IBM's success with the XIV (which does refer to the number 14, specifically the 14th class of a Talpiot program in Israel that the founders of XIV graduated from).
Adding "-Max", "-Monkey" or "2.0" at the end of things to make them sound more cool and to appeal to a younger, hipper audience. EMC might have gotten this idea from Pepsi-Max (... a taste of storage for the next generation?)
I took a cue from President Obama and waited a few days to collect my thoughts and do my homework before responding. Special thanks to fellow blogger ChuckH for giving me a [handy list of reactions] to pick and choose from. It appears that the EMC marketing machine feels it is acceptable for its own folks to claim that EMC is doing something first, or that others are catching up to EMC, but when other vendors do likewise, then that is just pathetic or incoherent. Here are a few reactions already from fellow bloggers:
This was a major announcement for EMC, addressing many of the problems, flaws and weaknesses of the earlier DMX-3 and DMX-4 deliverables. Here's my read on this:
Now you can have as many FCP ports (128) as an IBM System Storage DS8300, although the maximum number of FICON ports still falls short, and there is no mention of ESCON support. The Ethernet ports appear to be 1Gb, not the newer 10GbE you might expect.
Support for System z mainframe
V-Max adds some new support to catch up with the DS8000, like Extended Address Volumes (EAV). EMC is still not quite there yet. IBM DS8000 continues to be the best, most feature-rich storage option if you have System z mainframe servers.
Improved performance
Both the IBM DS8000 and HDS USP-V beat the DMX-4 in performance, and in some cases the DMX-4 even lost to the IBM XIV, so EMC had to do something about it. EMC chooses not to participate in industry-standard performance benchmarks like those from the [Storage Performance Council], which limits them to vague comparisons against older EMC gear. I'll give EMC engineers the benefit of the doubt and say that V-Max is now "comparably as fast as HDS and IBM offerings".
Getting "V" in the name
The "V" appears to be for the roman number five, not to be confused with external heterogeneous storage virtualization that HDS USP-V and IBM SVC provide. There is no mention of synergy with EMC's failed "Invista" product, and I see no support for attaching other vendors disk to the back of this thing.
Switch to Intel processor
Apple switched its computers from PowerPC to Intel-based processors, and now EMC follows the same path. There are still some custom ASICs in the V-Max, so it is not as pure a move to commodity hardware as IBM's offerings.
Modular, XIV-like Scale-out Architecture
Actually, the packaging appears to follow the familiar system bays and storage bays of the DMX-4 and DMX-4 950 models, but architecturally offers XIV-like attachment across a common switch network between "engines", EMC's term for interface modules.
Non-disruptive data migration
IBM's SoFS, DR550 and GMAS have this already, as does anything connected behind an IBM SAN Volume Controller.
Fully Automated Storage Tiering (FAST)
A long time ago, IBM had midrange disk storage systems called "FAStT", which stood for Fibre Array Storage Technology, so this might have given EMC the idea for their "Fully Automated Storage Tiering" acronym. The concept appears similar to what IBM introduced back in 2007 with Scale-Out File Services [SoFS], which not only provides policy-based placement, movement and expiration across different disk tiers, but also includes tape tiers for a complete solution. I see nothing in the V-Max announcement to suggest it will support tape anytime soon.
And whatever happened to EMC's Atmos? Wasn't that supposed to be EMC's new direction in storage?
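To make the tiering concept concrete, here is a conceptual sketch of SoFS-style policy-based placement, movement and expiration; the thresholds, tier names and data structures are hypothetical illustrations, not IBM's actual policy engine:

```python
# Conceptual sketch of policy-based tiering: files migrate down the
# tiers as they age. All thresholds, tier names, and the File type are
# hypothetical illustrations.
import time
from dataclasses import dataclass

@dataclass
class File:
    name: str
    last_access: float      # seconds since the epoch
    tier: str = "fast-disk"

# Policy: (maximum age in days, tier the file belongs on)
POLICY = [
    (30, "fast-disk"),       # touched within a month: keep on FC/SAS disk
    (180, "slow-disk"),      # up to six months old: demote to SATA
    (float("inf"), "tape"),  # older still: migrate to the tape tier
]

def target_tier(f: File, now: float) -> str:
    """Return the tier this file belongs on under the policy."""
    age_days = (now - f.last_access) / 86400
    for max_age_days, tier in POLICY:
        if age_days <= max_age_days:
            return tier
    return POLICY[-1][1]     # unreachable, kept as a safety net

def run_policy(files: list) -> None:
    """One pass of the mover: relocate any file on the wrong tier."""
    now = time.time()
    for f in files:
        tier = target_tier(f, now)
        if tier != f.tier:
            print(f"migrating {f.name}: {f.tier} -> {tier}")
            f.tier = tier
```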
Zero-data-loss three-site replication
IBM already calls this Metro/Global Mirror for its IBM DS8000 series, but EMC chose to call it SRDF/EDP for Extended Distance Protection.
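Conceptually, a three-site cascade achieves zero data loss because the host write is acknowledged only after the synchronous metro copy completes, while the long-distance leg trails asynchronously. The sketch below models that topology with invented class and method names; it is not IBM's or EMC's actual implementation:

```python
# Conceptual sketch of zero-data-loss three-site replication in the
# Metro/Global Mirror style: synchronous to the metro site, asynchronous
# beyond it. All names are invented; this models the topology only.
from collections import deque

class Site:
    """A storage site holding a simple key/value copy of the data."""
    def __init__(self, name: str):
        self.name = name
        self.data = {}

    def write(self, key: str, value: bytes) -> None:
        self.data[key] = value

class ThreeSiteCascade:
    def __init__(self, local: Site, metro: Site, remote: Site):
        self.local, self.metro, self.remote = local, metro, remote
        self.pending = deque()   # updates queued for the distant site

    def host_write(self, key: str, value: bytes) -> None:
        self.local.write(key, value)
        self.metro.write(key, value)      # synchronous: must finish first
        self.pending.append((key, value))
        # Only now is the host acknowledged. Losing the local site costs
        # no data, because the metro copy is already current.

    def drain(self) -> None:
        # Background task: the metro site forwards queued updates to the
        # distant site, absorbing long-haul latency away from the hosts.
        while self.pending:
            key, value = self.pending.popleft()
            self.remote.write(key, value)
```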
Ease of Use
The most significant part of the announcement is that EMC is finally focusing on ease-of-use. In addition to reducing the requirement for "Bin File" modifications, this box has a redesigned user interface to address usability issues. For past DMX models, EMC customers had to either hire EMC to do tasks that were just too difficult otherwise, or buy expensive software like EMC Control Center to manage the box themselves. EMC will continue to sell DMX-4 boxes for a while, as they are probably supply-constrained on the V-Max side, but I doubt they will retrofit these new features back to the DMX-3 and DMX-4.
When IBM announced its acquisition of XIV over a year ago now, customers were knocking down our doors to get one. This caught two particular groups looking like a [deer in headlights]:
EMC Symmetrix sales force: Some of the smarter ones left EMC to go sell IBM XIV, leaving EMC short-handed and having to announce they [were hiring during their layoffs]. Obviously, a few of the smart ones stayed behind, to convince their management to build something like the V-Max.
IBM DS8000 sales force: If clients are not happy with their existing EMC Symmetrix, why don't they just buy an IBM DS8000 instead? What does XIV have that DS8000 doesn't?
Let me contrast this with the situation Microsoft Windows is currently facing.
I am often asked by friends to help them pick out laptops and personal computers. I use Linux, Windows and MacOS, so I have personal experience with all three operating systems.
Linux is cheaper, offers the power-user the most options for supporting older, less-powerful equipment, but I wouldn't have my Mom use it. While distributions like Ubuntu are making great strides, it is just too difficult for some people.
MacOS is nice, I like it, it works out of the box with little or no customization and an intuitive interface. However, some of my friends don't make IBM-level salaries, and have to watch their budget.
In their "I'm a PC" campaign, Microsoft is fighting both fronts. Let's examine two commercials:
In the first commercial, a young eight-year-old puts together a video from pictures of toy animals and some background music. The message: "Windows is easier to use than Linux!" If they really wanted to send this message, they should have shown senior citizens instead.
In the second commercial, a young college student is asked to find a laptop with a 17-inch screen, and a variety of other qualifications, for under $1,000 US dollars. The only model at the Apple store below this price had a 13-inch screen, but she finds a Windows-based system that had the larger screen and met all the other qualifications. The message: "Windows-based hardware from a variety of competitors is less expensive than hardware from Apple!"
Both Microsoft and Apple charge a premium for ease-of-use. In the storage world, things are completely the opposite. Vendors don't charge a premium for ease-of-use. In fact, some of the easiest systems to use are also the least expensive.
If you just have Windows and Linux, you can get an entry-level system like the IBM DS3000 series, which has only a few features and can be set up in six simple steps.
Next, if you have a more interesting mix of operating systems, like Linux, Windows and some flavors of UNIX such as IBM AIX, HP-UX or Sun Solaris, then you might want the features and functions of pricier midrange offerings. More options mean that configuration and deployment are more difficult, however.
Finally, if you are a serious Fortune 500 company, running your mission-critical applications on System z or System i centralized systems in a big data center, then you might be willing to pay top dollar for the most feature-rich offerings of an Enterprise-class machine. Thankfully, you have an army of highly-trained staff to handle the highest levels of complexity.
IBM's DS8000, HDS USP-V and EMC's Symmetrix are the key players in the Enterprise-class space. They tried to be ["all things to all people"], er.. perhaps all things to all platforms. All of those features and functions came at a price, not just in dollars, but in complexity and difficulty. You needed highly skilled storage admins using expensive storage management software, or had to hire the storage vendor's premium services to get the job done.
IBM recognized this trend early. IBM's SVC, N series and now XIV all offer ease-of-use with enterprise-class features and functions, at a lower total cost of ownership than traditional enterprise-class systems. IBM is not the only one, of course; smaller storage start-ups like 3PAR, Pillar Data Systems, Compellent, and to some extent Dell's EqualLogic, all recognized this and developed clever offerings as well.
While IBM's XIV may not have been the first to introduce a modular, scale-out architecture using commodity parts managed by sophisticated ease-of-use interfaces, its success might have been the kick-in-the-butt EMC needed to follow the rest of the industry in this direction.