A recent blog by Chris Mellor advances the outlandish conspiracy theory that IBM and HDS copied virtualisation technology from small start-up company DataCore.
(Chris doesn't actually name his source for this claim, nor say whether that someone was employed by any of the parties involved at the time the events occurred, or is currently employed by a competitor like EMC, bitterly jealous of the success IBM and HDS currently enjoy with their offerings.)
As I have already posted about IBM's long history of storage virtualization, SAN Volume Controller was really part of a sequence of major products in this area, following the successful 3850 MSS and 3494 VTS block virtualization products.
In the late 1990s, our research teams in Almaden, California and Hursley, UK were exploring storage technologies that could take advantage of commodity hardware parts and the industry-leading Linux operating system.
As is often the case, while IBM was working on "the perfect product", small start-ups announced "not-yet-perfect" products into the marketplace. A tactical move like partnering with DataCore was smart, for the following reasons:
- Helps identify market segments. Identify which subset of customers would most benefit from disk virtualization. While our 3850 MSS and 3494 VTS were focused on mainframe customers, this new technology was focused on distributed Unix, Windows and Linux servers.
- Helps prioritize market requirements. What are the most appealing features? What drives clients to buy disk virtualization for distributed systems platforms?
- Helps evaluate packaging options. Should we deliver pure software and expect customers to purchase their own servers? Should we offer this as a "service offering" with installation and deployment services included? Should we offer this as hardware with software pre-installed?
The partnership proved worthwhile, not just in showing IBM that this was a market worth entering, but also in showing how "NOT" to package a solution. Specifically, DataCore SANsymphony was software that you had to install on your own Windows-based server. The client was left with the task of ordering a suitable Intel-based server, with the right amount of CPU cycles, RAM and host bus adapter ports, and configuring the Windows operating system and DataCore software.
It didn't go well. Basically, customers were expected to be their own "hardware engineers", having to know way too much about storage hardware and software to design a combination that worked for their workloads. Most clients were disappointed with the amount of effort involved, and with the resulting poor performance.
To fix this, IBM delivered the SAN Volume Controller, with an optimized Linux operating system and internally written software running on IBM System x(tm) server hardware tuned for performance.
I can't speak for HDS, but I suspect they came to similar conclusions that resulted in a similar decision to build their product in-house. I welcome Hu Yoshida to correct me if I am wrong on this.
technorati tags: Chris Mellor, DataCore, SANsymphony, IBM, SVC, HDS, EMC, Invista, disk, storage, virtualization, Hu Yoshida, Windows, Linux
IDC announced that IBM was #1 in storage hardware (disk and tape combined) for 2006. Here are some excerpts from the IBM press release:
The newly released May 2007 report by leading industry analyst firm IDC, "Worldwide Combined Disk and Tape Storage 2006 Market Share Update," shows IBM in the #1 overall position for all disk and tape storage hardware for the full year 2006.
In a total disk and tape storage hardware segment that increased to $28.2 billion in 2006, IBM captured 22.2 percent of the combined revenue for full year 2006, besting HP's 20.9 percent and EMC's 13.2 percent.
Five years ago, IBM was only #3 in this area. Is this new standing the result of IBM doing things better, or of HP and EMC doing things poorly? Probably a little of both, but since it's not polite to point out the flaws of others in a blog, I will focus on what IBM is doing right, and I think our leadership in tape accounts for a good measure of this.
The resurgence of tape comes from a variety of factors:
- The focus on being "green", to conserve energy and reduce power and cooling costs. Tape is the cheapest storage in this regard, as tape cartridges only consume power when read or written.
- Government regulations that require more data to be stored for longer periods of time, such as the Federal Rules of Civil Procedure (FRCP), Sarbanes-Oxley, SEC regulations, and so on.
- The widening gap in dollars per MB. Advancements in tape are outpacing disk. Disk is slowing down to about 25% improvement year over year, while tape continues its 30-40% improvement curve. A solution like Information Lifecycle Management (ILM) that moves older, less valuable data from disk to tape can result in excellent cost savings.
- Exciting "combined storage" solutions like the IBM System Storage DR550 and the IBM Grid Medical Archive Solution (GMAS) that combine disk and tape with internal, policy-based hierarchical storage management of data.
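The widening cost gap described in the third bullet compounds year over year. A minimal sketch of that arithmetic, using the improvement rates from the text (25% for disk, 35% for tape); the starting dollars-per-MB figures are purely illustrative assumptions, not real market prices:

```python
# Sketch: how a 25% yearly $/MB improvement for disk vs. a 35%
# improvement for tape widens the cost gap over time.
# Starting prices are hypothetical, for illustration only.

def cost_after(start_cost_per_mb, yearly_improvement, years):
    """Cost per MB after compounding a yearly fractional improvement."""
    return start_cost_per_mb * (1 - yearly_improvement) ** years

disk_start, tape_start = 0.010, 0.002  # hypothetical $/MB today

for year in range(6):
    disk = cost_after(disk_start, 0.25, year)
    tape = cost_after(tape_start, 0.35, year)
    print(f"year {year}: disk ${disk:.5f}/MB, tape ${tape:.5f}/MB, "
          f"ratio {disk / tape:.1f}x")
```

Even from these made-up starting points, the disk-to-tape price ratio roughly doubles over five years, which is the dynamic that makes ILM migration to tape pay off.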
For more details, see IBM's press release.
technorati tags: IBM, IDC, 2006, 2007, May, report, disk, tape, storage, hardware, green, power, cooling, EMC, HP, FRCP, Sarbanes-Oxley, SEC, DR550, GMAS, grid, medical, archive, solution
NetworkWorld has compiled an interlude of storage videos, a follow-up to last year's Yikes! Exploding Servers.
I've blogged about some of these videos already, but since there are probably a few people out there buying the brand new Apple iPhone and looking for YouTube videos to play on it, these links might provide some entertainment on your new handheld device.
Next week, the "Fourth of July" Independence Day holiday in the USA falls smack in the middle of the week, so I expect the blogosphere to quiet down a bit. Whether you are working next week or not, in the USA or elsewhere, take some time to enjoy your friends and family.
technorati tags: NetworkWorld, storage, videos, HP, IBM, EMC, HDS, Sun, exploding, servers, Apple, iPhone, YouTube
Ian Hughes talks about Web 2.0 in his post Explaining Web 2.0 State of Mind.
Alan Lepofsky posts about The Value Of Social Networking, which points to this same presentation about Web 2.0 concepts and ideas. He also points to an article in the Wall Street Journal titled Playing Well With Others about IBM and its leadership in Web 2.0 technologies, such as those from our Lotus group.
Some quotes from the WSJ article I found interesting:
Some 26,000 IBM workers have registered blogs on the company's internal computer network where they opine on technology and their work.
Social networking is especially important for the 42% of IBM employees who regularly work from their homes or client locations rather than IBM facilities.
At most companies, public-relations managers and the human-resources department tightly control all electronic communications except for email and instant messaging. ... Not at IBM.
"Any employee can have a blog, a wiki or a podcast,..."
IBM owns more than 50 "islands" in Second Life and often uses them for lectures and group discussions.
Two years ago, IBM started Wiki Central to manage wikis for IBM groups. It now has more than 20,000 wikis online with more than 100,000 users.
Interested in learning more about Web 2.0? The last page of the deck above has a good set of links and resources; for example, here are 23 Things to know about Web 2.0 to get you started.
technorati tags: Ian Hughes, eightbar, secondlife, Alan Lepofsky, Lotus, Connections, Quickr, collaboration, social networking, wiki, blog, podcast, islands, work from home
Chuck Hollis makes some excellent points in his post Green Data Center Goes Marketing Mainstream. He does a great job summarizing EMC's strategy in this area:
- Use VMware to virtualize your x86-based servers
- Use more efficient disk media, such as high-capacity SATA disk drives
Both are great recommendations, but why limit yourself to what EMC offers? Your x86-based machines are only a subset of your servers, and disk is only a subset of your storage. IBM takes a more holistic approach, looking at the entire data center.
- VMware is a great product, and IBM is its top reseller. But in addition to VMware, there are other solutions for the x86-based servers, like Xen and Microsoft Virtual Server. IBM's System p, System i, and System z product lines all support logical partitioning.
To compare the energy effectiveness of server virtualization, consider a metric that can apply across platforms. For example, for an e-mail server, consider watts per mailbox. If you have, say, 15,000 users, you can calculate how many watts you are consuming to manage their mailboxes in your current environment, and compare that with running them on VMware, or on logical partitions on other servers. Some people find it surprising that it is often more cost-effective, and more power-efficient, to run workloads on mainframe logical partitions (LPARs) than on a stack of x86 servers running VMware.
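The watts-per-mailbox comparison above is simple division. A minimal sketch with the 15,000-user figure from the text; the server counts and wattages are hypothetical assumptions for illustration only:

```python
# Sketch of the watts-per-mailbox metric described above.
# Power draws and server counts are hypothetical, for illustration only.

def watts_per_mailbox(total_watts, mailboxes):
    """Watts consumed per mailbox hosted in a given environment."""
    return total_watts / mailboxes

mailboxes = 15_000

# Hypothetical: 20 x86 servers at 400 W each carrying the mail workload
x86_stack = watts_per_mailbox(20 * 400, mailboxes)

# Hypothetical: one mainframe LPAR's attributed share, ~3,000 W
mainframe_lpar = watts_per_mailbox(3_000, mailboxes)

print(f"x86 stack:      {x86_stack:.3f} W per mailbox")
print(f"mainframe LPAR: {mainframe_lpar:.3f} W per mailbox")
```

The point of the metric is that it is platform-neutral: any environment that hosts the same 15,000 mailboxes can be scored the same way.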
- More efficient media
- SATA and FATA disks support higher capacities and run at slower RPM speeds, thus using fewer watts per terabyte. A terabyte stored on 73GB high-speed 15K RPM drives consumes more watts than the same terabyte stored on 500GB SATA drives. Chuck correctly identifies that tape is more power-efficient than disk, but then argues that paper is more power-efficient than tape, which is not necessarily the case.
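The watts-per-terabyte claim in the bullet above follows from drive counts: smaller drives mean more spindles per terabyte. A minimal sketch using the 73GB and 500GB capacities from the text; the per-drive wattages are hypothetical assumptions:

```python
# Sketch of the watts-per-terabyte comparison above.
# Capacities come from the text; per-drive power draws are hypothetical.

def watts_per_tb(drive_capacity_gb, drive_watts):
    """Watts consumed per terabyte stored on drives of a given size."""
    drives_per_tb = 1000 / drive_capacity_gb
    return drives_per_tb * drive_watts

# Hypothetical draws: 15K RPM FC drive ~18 W, 7.2K RPM SATA ~12 W
fc_15k = watts_per_tb(73, 18)    # 73 GB high-speed drives
sata   = watts_per_tb(500, 12)   # 500 GB SATA drives

print(f"73GB 15K drives: {fc_15k:.0f} W/TB")
print(f"500GB SATA:      {sata:.0f} W/TB")
```

Under these assumed wattages, the 15K drives come out roughly an order of magnitude worse per terabyte, driven mostly by needing about fourteen drives per TB instead of two.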
ESG analyst Steve Duplessie divides data between Dynamic and Persistent. The best place to put dynamic data is on disk, and this is where the evaluation of FC/SAS versus SATA/FATA comes into play. Persistent data, on the other hand, can be stored on paper, microfiche, optical or tape media. All of these shelf-resident media consume no electricity, nor generate any heat that would require additional cooling.
A study by scientists at the Lawrence Berkeley National Laboratory, titled High-Tech Means High-Efficiency: The Business Case for Energy Management in High-Tech Industries, indicates that data centers consume 15 to 100 times more energy per square foot than traditional office space. Storing persistent data in traditional office space can therefore save a huge amount of energy. Steve Duplessie feels the ratio of dynamic to persistent data is 1:10 today, but is likely to grow to 1:100 in the near future, making energy-efficient storage of persistent data ever more important to our environment.
Data centers consume nearly 5,000 megawatts in the USA alone, and 14,000 megawatts worldwide. To put that in perspective, Hungary, where I was last week, can generate up to 8,000 megawatts for the entire country (and was using 7,400 megawatts last week as a result of its current heat wave, causing grave concern).
Back in the 1990s, one of the insurance companies IBM worked with kept data on paper in manila folders, and armies of young adults on roller skates were dispatched throughout large warehouses of shelves to fetch the appropriate folder in response to customer service inquiries. Digitizing this paper into electronic format greatly reduced the need for warehouse space, and improved the time to retrieve the data.
A typical file storage box (12 inches x 12 inches x 18 inches) of typed pages, single-spaced, double-sided, in 12-point font, could hold perhaps 100MB. The same box could hold a hundred or more LTO or 3592 tape cartridges, each storing hundreds of GB of information. That's roughly a million-to-one improvement in space-efficiency, and on a watts-per-TB basis it translates to substantial savings under standard office air conditioning and lighting conditions.
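The million-to-one figure above can be checked directly. A minimal sketch using the text's rough numbers ("a hundred or more" cartridges, "hundreds of GB" each); the assumption of roughly 1 TB per cartridge with compression is mine, not the text's:

```python
# Sketch of the file-box space-efficiency arithmetic above.
# Per-cartridge capacity (~1 TB with compression) is an assumption;
# the 100 MB of paper and ~100 cartridges per box come from the text.

paper_box_mb = 100             # ~100 MB of typed pages per box
cartridges_per_box = 100
gb_per_cartridge = 1000        # assumed ~1 TB each, with compression

tape_box_mb = cartridges_per_box * gb_per_cartridge * 1000  # GB -> MB
ratio = tape_box_mb / paper_box_mb
print(f"improvement: {ratio:,.0f} to 1")  # prints "improvement: 1,000,000 to 1"
```

With native (uncompressed) capacities of a few hundred GB, the ratio still lands in the hundreds-of-thousands-to-one range, so the order of magnitude in the text holds up.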
To learn more about IBM's Project Big Green, watch this introductory video, which used Second Life for the animation.
technorati tags: IBM, EMC, Chuck Hollis, VMware, FC, SAS, SATA, FATA, disk, storage, logical partition, energy, power, cooling, Steve Duplessie, dynamic, persistent, data, Lawrence Berkeley National Laboratory, megawatt, paper, optical, microfiche, LTO, 3592, Project Big Green, Secondlife