Last July, IBM and EMC traded blog postings over SPC-1 benchmark results. Fellow EMC blogger Chuck Hollis wrote his post [Does Anyone Take The SPC Seriously?]. Here is an excerpt:
I think most storage users have figured this out. We've never done an SPC test, and probably will never do one. Anyone is free, however, to download the SPC code, lash it up to their CLARiiON, and have at it.
I responded with [Getting Under EMC Skin], and then followed up with a series of posts explaining IBM SVC and SPC benchmarks.
So what is the good news? Yesterday, our friends at NetApp took up Chuck's challenge and posted results for their FAS3040 as well as for EMC CLARiiON devices. IBM sells the FAS3040 under the name IBM System Storage N5300 disk system. Knowing that NetApp maintains excellent performance while doing point-in-time copies, NetApp ran both boxes with and without point-in-time copies enabled. I include the DS4700 and DS4800 as well for comparison purposes, but only without FlashCopy running.
| System | Point-in-Time Copy | SPC-1 IOPS |
|---|---|---|
| IBM DS4800 | No FlashCopy | 45,014 |
| NetApp FAS3040 (IBM N5300) | No SnapShot | 30,985 |
| NetApp FAS3040 (IBM N5300) | With SnapShot | 29,958 |
| EMC CLARiiON CX3-40 | No SnapDrive | 24,997 |
| IBM DS4700 Express | No FlashCopy | 17,195 |
| EMC CLARiiON CX3-40 | With SnapDrive | 8,997 |
One would expect some performance degradation when a box runs point-in-time copies at the same time it is reading and writing data, but the NetApp/IBM N5300 does not degrade by much, while EMC's drops a significant amount.
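As a quick sanity check, the degradation can be computed directly from the figures quoted above; this is just arithmetic on the published numbers, not part of the SPC results themselves:

```python
# Percent drop in SPC-1 IOPS when point-in-time copy is enabled,
# using the results quoted in the table above.
results = {
    "NetApp FAS3040 (IBM N5300)": (30_985, 29_958),
    "EMC CLARiiON CX3-40": (24_997, 8_997),
}
for system, (without_copy, with_copy) in results.items():
    drop = (without_copy - with_copy) / without_copy * 100
    print(f"{system}: {drop:.1f}% drop")
```

This works out to roughly a 3 percent drop for the NetApp box versus roughly a 64 percent drop for the CLARiiON.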
So what is the bad news? Last October, I welcomed the HDS USP-V to the [Super High-End Club], but now we need to invite Texas Memory Systems as well. In 2006, I posted [Hybrid, Solid State and the future of RAID], poking fun at Texas Memory Systems for using the slogan "World's Fastest Storage" when, at the time, that honor belonged to the IBM SAN Volume Controller. The VP of Texas Memory Systems, Woody Hutsell, explained that the only reason their solid-state disk system, the RAMSAN-320, didn't post faster results was that they didn't have the fastest IBM server to run against it. It may not surprise you that nearly everyone's SPC benchmarks use IBM servers, because IBM has the fastest servers as well. I didn't have a million-dollar System p UNIX server to send Woody for this, but it looks like they have finally gotten one, and a new RAMSAN-400 device, as they have posted their latest results.
| System | Configuration | SPC-1 IOPS |
|---|---|---|
| Texas Memory Systems RAMSAN-400 | Cache only | 291,208 |
| IBM SAN Volume Controller 4.2 | Cache/External Disk | 272,505 |
| HDS USP-V | Cache/Internal Disk | 200,245 |
EMC doesn't publish numbers for their Symmetrix box, despite their announcement of faster SSD drives. They claim that SSD drives make their overall disk system performance faster, but without SPC benchmarks, we will never know. If you have a Symmetrix, this YouTube video may help you decide where it belongs:
You can read all the [SPC-1 Benchmark Results] on the Storage Performance Council (SPC) website.
technorati tags: IBM, EMC, Chuck Hollis, SPC, SPC-1, NetApp, FAS3040, N5300, CLARiiON, CX3-40, SnapShot, SnapDrive, FlashCopy, DS4800, DS4700, Texas Memory Systems, RAMSAN-320, RAMSAN-400, SSD, Hybrid, RAID, HDS, USP-V, Symmetrix,
IBM came out with their latest "5 in 5". These are five predictions for technologies that will have an impact over the next five years, summarized on 5 pages. Before I give my take on this year's set, here is a quick recap of [Last Year's 5 in 5]:
- Access health care remotely
- Real-time speech translation
- 3-D internet, based on systems like [Second Life]
- Nanotechnology for cleaning up and improving the environment
- "Presence aware" cell phones that learn our preferences and habits
Here's my take on the [Next 5 in 5]:
- 3-D representations of the human body to improve health care
This prediction is based on the idea that most medical mistakes result from lack of information about the patient. A 3-D avatar of the patient would allow the doctor to click on a section of the body, and this would trigger retrieval of patient records, relevant X-rays, MRI images, and so on. For example, the IBM System Storage Grid Medical Archive Solution (GMAS) provides the storage that would allow any doctor to access these records, even if the image was taken at a different facility.
Unfortunately, this prediction only applies to patients who can actually afford to see a doctor. Apparently, no amount of technology, no matter how cool it is, can convince governments to make health care something everyone has access to. Michael Moore has done a good job explaining this in his film documentary [Sicko].
- Digital passport for food
Using RFID tags and second-generation barcodes, you will have access to details of a food's origin, transportation conditions, and impact on the environment. Much of this information is already gathered, just not stored in a database accessible to the consumer.
Last year, the term "locavore" was the 2007 Word of the Year for the Oxford American Dictionary, referring to people who limit what they eat to food produced within a certain radius, from family farms and locally-owned businesses. Here is an excerpt from a [Locavores] website:
Our food now travels an average of 1,500 miles before ending up on our plates. This globalization of the food supply has serious consequences for the environment, our health, our communities and our tastebuds.
Certainly, I am all for selling storage capacity to the food industry to help store the vast amounts of information needed for this, and certainly some people will be able to make smarter decisions based on it. This is not the first time this idea has come up. The U.S. Food and Drug Administration introduced [nutrition labeling requirements] in the hope that people would choose healthier foods. Despite this, people still opt for white bread, iceberg lettuce, and processed meats, so having more information about where food comes from, and how it was transported, may not mean much to some consumers.
- Technology to manage your own carbon footprint
"Smart energy" technologies allow you to walk the talk by managing your own carbon footprint in your home. For example, if you forgot to turn off the heat or air conditioner before leaving the house on your commute to work, your home would call your mobile phone, so that you could turn around, go back, and correct the mistake. Better yet, IBM is working with others to provide web-enabled electric meters that would allow you to turn off systems from work or from a cell phone browser.
Of course, such technology already exists for the data center. IBM Systems Director Active Energy Manager (AEM) allows you to monitor the actual usage of your servers and storage devices, and in some cases make adjustments to control energy consumption. This can feed into the IBM Tivoli Usage and Accounting Manager software to incorporate energy usage as part of the charge-back calculations. See the [IBM Press Release] for more details.
- Cars that drive themselves
Not only will cars that drive themselves reduce the number of drunk-driving accidents, they can also help reduce congestion in big cities by routing traffic in different directions, based on GPS and presence-aware technologies. Stockholm, Sweden has already reduced peak-hour traffic by 20 percent using this approach.
While I admire the concept, cars are perhaps the least energy-efficient mode of transportation. Often, a family can only afford a single vehicle, and it is purchased based on the worst-case scenario. A friend of mine has only two children, but drives a seven-person mini-van that gets only 17 MPG. Why such an energy-inefficient vehicle? Because she occasionally drives her daughter and her friends to soccer practice, and that represents the worst-case scenario, minimizing the parent/child ratio. The other 99 percent of the time, she is driving by herself, or with one child, and consuming a lot of gasoline in the process.
A better approach would be to find technology that connects airports, trains, buses, and light rail for public transportation, greatly reducing the need to drive a car in the first place.
The idea that a family can have only one vehicle plays out in the storage arena as well. Larger companies can afford to have different storage for different workloads: the IBM System Storage DS8000 high-end disk system for their large OLTP and database workloads, an XIV Nextra for their Web 2.0 storage needs, a DR550 to hold their compliance data, and so on. Smaller companies are often tasked with finding a single solution for all their needs, and for them, IBM offers the IBM System Storage N series, providing a "unified storage" platform.
- Increased dependence on cell phones
Before the cell phone, the last don't-leave-home-without-it technology most of us carried was the credit card. Now, IBM predicts that we will become even more dependent on our cell phones, which will become our banker, ticket broker, and shopping buddy. For example, you could use your cell phone to take a picture of a shirt at the mall, and it would then show you what you would look like wearing that shirt, on a 3-D avatar representation of yourself, or perhaps your spouse, and get information on what discounts are available, or where else the shirt is being offered.
None of these examples actually uses the "phone" part of the cell phone; however, the cell phone is the one device that nearly everyone carries, so it becomes the development platform on which all other technologies are based.
The common theme running through these predictions is that it can be helpful to store more information than we do today, provided we make it accessible to the people who need it to make better decisions.
technorati tags: IBM, predictions, health care, nanotechnology, secondlife, speech translation, 3-D, avatar, GMAS, Michael Moore, Sicko, digital passport, food, nutrition labeling, FDA, carbon footprint, AEM, locavore, Tivoli, Usage Accounting Manager, DS8000, XIV, Nextra, DR550, unified storage, cell phones, decisions
Last week, I got the following comment from Bob Swann:
I am looking for the IBM VM Poster or a picture of the IBM VM "Catch the Wave"
Do you know where I might find it?
Well, Bob, I made some phone calls. The company that published these posters no longer exists, but I found a coworker at the Poughkeepsie Briefing Center who still had the poster on his wall, and he was kind enough to take a picture of it for you.
|VM: The Wave of the Future|
(click thumbnail at left to see larger image)
Some may recognize this as a [mash-up] based on the famous Japanese 10-inch by 15-inch block print [The Great Wave off Kanagawa] by artist [Katsushika Hokusai]. I had this as my laptop's wallpaper image until last year, when I was presenting in Kuala Lumpur, Malaysia, and was told that it reminded people of the horrible tsunami caused by the [Indian Ocean earthquake] back in 2004. I was actually scheduled to fly to Jakarta, Indonesia, the last week of December 2004, but at the last minute our client team changed plans. I would have been en route over the Pacific Ocean when the tsunami hit, and probably stranded there for weeks or months until the airports re-opened.
The Wave theme was in part to honor the IBM users group called World Alliance VSE VM and Linux (WAVV), which is holding its next meeting [April 18-22, 2008] in Chattanooga, Tennessee. I presented at this conference back in 1996 in Green Bay, Wisconsin, as part of the IBM Linux for S/390 team. It started on the Sunday that Wisconsin switched their clocks for [Daylight Saving Time], and the few of us from Arizona or other places that don't bother with this all showed up for breakfast an hour early.
When I was in Australia last year, I was told that the wave sports fans do, raising their hands in coordinated sequence, is called the [Mexican Wave] in most other countries. When I was there, Melbourne was trying to outlaw the practice at their cricket matches.
The "wave" represents a powerful metaphor, from the z/VM operating system on System z mainframes to VMware and Xen on Intel-based processor machines, for the direction of virtualization we are heading toward in future data centers. The Mexican wave represents a glimpse of what humans can accomplish with collaboration on a global scale. It can also represent the tidal wave of data arising from nearly 60 percent annual growth in storage capacity. (I had to mention storage eventually, to avoid being completely off-topic in this post!)
I hope this is the graphic you were looking for, Bob. If anyone else has wave-themed posters they would like to contribute, please post a comment below.
technorati tags: Bob Swann, IBM poster, z/VM, Japanese, Great Wave, Kanagawa, Katsushika Hokusai, Kuala Lumpur, Malaysia, Indian Ocean, Jakarta, Indonesia, WAVV, Mexican Wave, storage, capacity, growth, Linux,Melbourne, Australia, VMware, Xen
While EMC bloggers garnered media attention last year pointing out the faulty mathematics from HDS, an astute reader pointed me to EMC's own [DMX-4 specification sheet], updated for its 1TB SATA disk. I've chosen just the RAID-6 data points for the minimum and maximum numbers of drives on non-mainframe platforms:
|RAID level||# drives||500GB SATA||1TB SATA|
In the first two rows, the numbers appear as expected. For example, 96 drives would be 12 sets of 6+2 RAID ranks, meaning 72 drives' worth of data, so nearly 36TB for 500GB drives, and nearly 72TB for 1TB drives. With 14+2 RAID-6, you would have 84 drives' worth of data, so 42TB and 84TB respectively match expectations.
Where EMC appears to have miscalculated is at 20x more drives, as the numbers don't match up. For 1,920 drives in RAID-6, you would expect 20x the usable capacity of the 96-drive configurations. For 6+2 configurations, one would expect 720TB and 1440TB respectively. For 14+2 configurations, one would expect 840TB and 1680TB, respectively.
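The expected figures are easy to verify with a few lines of arithmetic (a sketch only; drive capacities are treated as exact 0.5TB and 1TB, ignoring formatted-capacity overhead):

```python
# Usable RAID-6 capacity = ranks x data drives per rank x drive size.
def usable_tb(total_drives, data_drives, parity_drives, drive_tb):
    ranks = total_drives // (data_drives + parity_drives)
    return ranks * data_drives * drive_tb

# 1,920 drives in 6+2 ranks: 720TB (500GB drives) and 1440TB (1TB drives)
print(usable_tb(1920, 6, 2, 0.5), usable_tb(1920, 6, 2, 1.0))
# 1,920 drives in 14+2 ranks: 840TB and 1680TB
print(usable_tb(1920, 14, 2, 0.5), usable_tb(1920, 14, 2, 1.0))
```

The same function reproduces the 96-drive figures (36TB, 72TB, 42TB, 84TB), so the small configurations check out while the large ones do not.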
Perhaps the EMC DMX-4 can't address more than 600TB for the entire system? Does EMC purposely limit the benefits of these larger drives? It does raise the question of why someone might go from 500GB to 1TB drives if the maximum configuration only gives about 40TB more capacity. Fellow IBM blogger Barry Whyte questioned the use of SATA in an expensive DMX-4 system in his post [One Box Fits All - Or Does It], and now perhaps there are good reasons to question the 1TB drives from a capacity perspective as well.
technorati tags: IBM, EMC, DMX-4, 500GB, 1TB, RAID-6, HDS, SATA
Today is Tuesday, a good day for announcements and good news!
This week I am in Guadalajara, Mexico, and the focus in Mexico is Small and Medium-sized Business (SMB). SmallBusinessComputing.COM put out their [2008 Awards: The Absolute Best in Small Business], and IBM disk and server systems were recognized. Here is an excerpt:
As companies expand, so does the data, and often at an alarming rate. Adding dedicated storage to your network can ease both system performance and efficiency woes, making your work life a bit easier.
This year, 42 percent of our readers cast their lot with the [IBM System Storage DS3400]. The $6,495 system supports 12 hard disk drives for capacity of up to 3.6 terabytes, a good match for tasks such as managing databases, e-mail and Web serving.
Last year's winner, NetApp, takes a very respectable runner-up slot for the NetApp Store Vault S300, a $3,000 storage appliance that offers security, scalability, data protection and simplified management.
Also, IBM's SMB departmental machine, the [System i515 Express] was named runner-up for servers.
technorati tags: IBM, Guadalajara, Mexico, SMB, DS3400, i515, NetApp
This week I'm in beautiful Guadalajara, Mexico, teaching at our [System Storage Portfolio Top Gun class]. We have all of our various routes-to-market represented here, including our direct sales force, our technical teams, our online IBM.COM website sales, as well as IBM Business Partners. Everyone is excited over last week's IBM announcement of [4Q07 and full year 2007 results], which includes double-digit growth in our IBM System Storage business, led by sales of our DS8000, SAN Volume Controller and Tape systems. Obviously, as an IBM employee and stockholder, I am biased, so instead I thought I would provide some excerpts from other bloggers and journalists.
The New York Times [I.B.M. Posts Strong Preliminary Results] said "The fourth quarter usually is the best time of the year for IBM Corp., but rarely does it look this good." When the final results were posted last Thursday, Steve Lohr wrote [IBM - A Separate Reality?]. Here's an excerpt:
But what was striking in the company’s conference call on Thursday afternoon was the unhedged optimism in its outlook for 2008, given the strong whiff of recession fear elsewhere.
The questions from Wall Street analysts in the conference call had a common theme. Why are you so comfortable about the 2008 outlook? Now, that might just be professional churlishness, since so many of them have been so wrong recently about I.B.M. Wall Street had understandably thought, for example, that I.B.M.’s sales to financial services companies — the technology giant’s largest single customer category — would suffer in the fourth quarter, given the way banks have been battered by the mortgage credit crunch.
But Mr. Loughridge said that revenue from financial services customers rose 11 percent in the fourth quarter, to $8 billion. The United States, he noted, accounts for only 25 percent of I.B.M.’s financial services business.
The other thing that seems apparent is how much I.B.M.’s long-term strategy of moving up to higher-profit businesses and increasingly relying on services and software is working. Its huge services business grew 17 percent to $14.9 billion in the quarter. After the currency benefit, the gain was 10 percent, but still impressive. Software sales rose 12 percent to $6.3 billion.
Trade Radar poses the question [IBM Beats -- but is it representative of entire tech sector?]. Here's an excerpt:
Looking at IBM's business segments, it can be seen that they offer far more coverage of the technology space than those of the typical tech company:
IBM is just so big and diversified that there is little comparison between it and most other tech companies. IBM is a member of an elite group of companies like Cisco Systems (CSCO), Microsoft (MSFT), Oracle (ORCL) or Hewlett-Packard (HPQ).
IBM's wide international coverage and deep technological capabilities dwarf those of most tech companies. Not only do they have sales organizations worldwide but they have developers, consultants, R&D workers and supply chain workers in each geographic region. Their product mix runs from custom software to packaged enterprise software, hardware (mainframes and servers), semiconductors, databases, middleware technology, etc., etc. There are few tech companies that even attempt to support that many kinds and variations of products.
As color on the fourth quarter earnings announcement, there are a couple of observations that I would like to make. The first one speaks to IBM's international prowess. The company indicated that growth in the Americas was only 5%. International sales were a primary driver of IBM's good results. As an insight on the difference between IBM and most other tech companies, it is clear that nowadays, a tech company that isn't adept at selling internationally is going to be in trouble.
Sramana Mitra opines [IBM Also Looks Safe]. Here's an excerpt:
Terrific performance in a terrific year - no doubt a result of its strong global model. IBM operates in 170 countries, with about 65% of its employees outside US and about 30% in Asia Pacific. For fiscal 2007, revenues from Americas grew 4% to $41.1 billion (42% of total revenue), [EMEA] grew 14% to $34.7 billion (35%of total revenue), and Asia-Pacific grew by 11% to $19.5 billion (19.7% of total revenue). IBM sees growth prospects not just in [BRIC] but also countries like Malaysia, Poland, South Africa, Peru, and Singapore.
Meanwhile, Dan Farber and Larry Dignan from ZDnet write [IBM's alternate universe: Big Blue sees great 2008]. Here's an excerpt:
Thus far 2008–all two weeks of it–hasn't been pretty for the tech industry. Worries about the economy prevail. And even companies that had relatively good things to say like Intel get clobbered. It's ugly out there–unless you're IBM.
I am sure there will be more write-ups and analyses on this over the coming weeks, and others will probably wait until more tech companies announce their results for comparison.
technorati tags: IBM, Guadalajara, Mexico, Top Gun, 4Q07, results, DS8000, SAN Volume Controller, SVC, Tape, optimism, confidence, Cisco, Microsoft, Oracle, Hewlett-Packard, EMEA, BRIC
Fellow blogger BarryB mentions "chunk size" in his post [Blinded by the light], as it relates to the Symmetrix Virtual Provisioning capability. Here is an excerpt:
I mean, seriously, who else but someone who's already implemented thin provisioning would really understand the implications of "chunk" size enough to care?
For those of you who don't know what the heck "chunk size" means (now listen up you folks over at IBM who have yet to implement thin provisioning on your own storage products), a "chunk" is the term used (and I think even trademarked by 3PAR) to refer to the unit of actual storage capacity that is assigned to a thin device when it receives a write to a previously unallocated region of the device.For reference, Hitachi USP-V uses I think a 42MB chunk, XIV NEXTRA is definitely 1MB, and 3PAR uses 16K or 256K (depending upon how you look at it).
The Thin Provisioning currently offered in the IBM System Storage N series was technically "implemented" by NetApp, and the Thin Provisioning that will be offered in our IBM XIV Nextra systems was acquired from XIV. Let me remind you that many of EMC's products were developed by other companies first and later acquired by EMC, so there is no need to throw rocks from your glass houses in Hopkinton.
"Thin provisioning" was first introduced by StorageTek in the 1990s and sold by IBM under the name RAMAC Virtual Array (RVA). An alternative approach is "Dynamic Volume Expansion" (DVE). Rather than giving the host application a huge 2TB LUN while actually using only 50GB for data, DVE was based on the idea that you give out only the 50GB needed now, but can expand in place as more space is required. This was specifically designed to avoid the biggest problem with Thin Provisioning, which back then was called "Net Capacity Load" on the IBM RVA, but today is referred to as "over-subscription". It gave Storage Administrators greater control over their environment, with no surprises.
In the same manner as Thin Provisioning, DVE requires a "chunk size" to work with. Let's take a look:
- DS4000 series
On the DS4000 series, we use the term "segment size", and the choice of segment size can have some influence on performance in both IOPS and throughput. Smaller segment sizes increase the request rate (IOPS) by allowing multiple disk drives to respond to multiple requests. Larger segment sizes increase the data transfer rate (MBps) by allowing multiple disk drives to participate in one I/O request. The segment size does not change what is stored in cache, just what is stored on the disk itself. In practice, there is no advantage to smaller sizes with RAID-1; only in a few instances does this help with RAID-5, when you can write a full stripe at once and calculate parity on outgoing data. For most business workloads, 64KB or 128KB is recommended. DVE expands by the same number of segments across all disks in the RAID rank, so for example in a 12+P rank using 128KB segment sizes, the chunk size would be thirteen segments, about 1.6MB in size.
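The chunk arithmetic in that 12+P example works out as follows (a sketch of the calculation only, not DS4000 firmware):

```python
# DVE grows a LUN by one segment on every disk in the rank, so the
# expansion chunk is (data disks + parity disks) x segment size.
segment_kb = 128
disks_in_rank = 12 + 1          # 12 data disks + 1 parity disk
chunk_kb = disks_in_rank * segment_kb
print(chunk_kb, "KB =", chunk_kb / 1024, "MB")   # 1664 KB, about 1.6 MB
```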
- SAN Volume Controller
On the SAN Volume Controller, we call this the "extent size" and allow various values from 16MB to 512MB. Initially, IBM only managed four million extents, so this table was used to explain the maximum amount that could be managed by an SVC system (up to 8 nodes), depending on the extent size selected.
| Extent Size | Maximum Addressable |
|---|---|
| 16MB | 64TB |
| 32MB | 128TB |
| 64MB | 256TB |
| 128MB | 512TB |
| 256MB | 1PB |
| 512MB | 2PB |
IBM thought that since we externalized "segment size" on the DS4000, we should do the same for the SAN Volume Controller. As it turned out, SVC is so fast up in cache that we could not measure any noticeable performance difference based on extent size. We did have a few problems, though. First, clients who chose 16MB and then grew beyond the 64TB maximum addressable discovered that perhaps they should have chosen something larger. Second, clients called our help desk to ask what size to choose and how to determine the size that was right for them. Third, we allowed different extent sizes per managed disk group, but that prevents movement or copies between groups; you can only copy between groups that use the same extent size. The general recommendation now is to specify a 256MB size, and use it for all managed disk groups across the data center.
The latest SVC expanded maximum addressability to 8PB, still more than most people have today in their shops.
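The maximum-addressable values follow directly from the extent count: with four million (2^22) extents, capacity is simply extents times extent size. This is my own back-of-envelope reconstruction of the table, not an official formula:

```python
# Maximum addressable capacity = managed extents x extent size.
EXTENTS = 2 ** 22                             # "four million" extents
for size_mb in (16, 32, 64, 128, 256, 512):
    tb = EXTENTS * size_mb // (1024 * 1024)   # MB -> TB
    print(f"{size_mb:>4}MB extents -> {tb}TB")
# 16MB -> 64TB, matching the limit the 16MB clients ran into;
# 512MB -> 2048TB, i.e. 2PB.
```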
- DS8000 series
Getting smarter each time we introduce a new function, we chose 1GB chunks for the DS8000. Coming from a mainframe background, where most CKD volumes are 3GB, 9GB, or 27GB in size, 1GB chunks simplified this approach. Spreading these 1GB chunks across multiple RAID ranks greatly reduces the hot-spots that afflict other RAID-based systems. (Rather than fix the problem by re-designing the architecture, EMC will offer to sell you software to help you manually move data around inside the Symmetrix after a hot-spot is identified.)
Unlike EMC's Virtual Provisioning, IBM DS8000 dynamic volume expansion does work on CKD volumes for our System z mainframe customers.
The trade-off in each case is between granularity and table space. Smaller chunks allow finer control over the exact amount allocated for a LUN or volume, but larger chunks reduce the number of chunks to be managed. With our advanced caching algorithms, changes in chunk size did not noticeably impact performance. It is best to come up with a convenient size, and either configure it as fixed in the architecture, or externalize it as a parameter with a good default value.
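To make the allocate-on-write behavior behind that trade-off concrete, here is a minimal sketch of thin provisioning. This is a hypothetical illustration only; the class, chunk size, and names are my own, not any vendor's implementation:

```python
# Minimal sketch of thin-provisioning allocate-on-write: physical chunks
# are assigned only when a previously untouched region is first written.
# Hypothetical illustration, not any vendor's actual design.

CHUNK_MB = 256  # the granularity trade-off: smaller chunks give finer
                # allocation, larger chunks mean a smaller mapping table

class ThinVolume:
    def __init__(self, virtual_size_mb):
        self.virtual_size_mb = virtual_size_mb
        self.chunk_map = {}      # virtual chunk index -> physical chunk
        self.next_physical = 0   # next free chunk in the backing pool

    def write(self, offset_mb):
        """Record a write; allocate a backing chunk on first touch."""
        index = offset_mb // CHUNK_MB
        if index not in self.chunk_map:
            self.chunk_map[index] = self.next_physical
            self.next_physical += 1
        return self.chunk_map[index]

    def allocated_mb(self):
        return len(self.chunk_map) * CHUNK_MB

vol = ThinVolume(2 * 1024 * 1024)   # a "2TB" LUN presented to the host
vol.write(0)
vol.write(100)                      # lands in the same chunk as offset 0
vol.write(1_000_000)                # far offset: allocates a second chunk
print(vol.allocated_mb())           # 512 -- only two chunks actually backed
```

The host sees a 2TB LUN, but only 512MB of real capacity is consumed, which is exactly the over-subscription risk that DVE was designed to avoid.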
Meanwhile, back at EMC, BarryB indicates that they haven't determined the "optimal" chunk size for their new function. They plan to run tests and experiments to determine which size offers the best performance, and then make that a fixed value configured into the DMX-4. I find this funny coming from the same EMC that won't participate in [standardized SPC benchmarks] because they feel that performance is a personal and private matter between a customer and their trusted storage vendor, that all workloads are different, and you get the idea. Here's another excerpt:
Back at the office, they've taken to calling these "chunks" Thin Device Extents (note the linkage back to EMC's mainframe roots), and the big secret about the actual Extent size is...(wait for it...w.a.i.t...for....it...)...the engineers haven't decided yet!
That's right...being the smart bunch they are, they have implemented Symmetrix Virtual Provisioning in a manner that allows the Extent size to be configured so that they can test the impact on performance and utilization of different sizes with different applications, file systems and databases. Of course, they will choose the optimal setting before the product ships, but until then, there will be a lot of modeling, simulation, and real-world testing to ensure the setting is "optimal."
Finally, BarryB wraps up this section by poking fun at the chunk sizes chosen by other disk manufacturers. I don't know why HDS chose 42MB for their chunk size, but it has a great [Hitchhiker's Guide to the Galaxy] sound to it, answering the ultimate question of life, the universe, and everything. Hitachi probably went to their Deep Thought computer and asked how big the "chunk size" for their USP-V should be, and the computer said: 42. Makes sense to me.
I have to agree that anything smaller than 1MB is probably too small. Here's the last excerpt:
Now, many customers and analysts I've spoken to have in fact noted that Hitachi's "chunk" size is almost ridiculously large; others have suggested that 3PAR's chunks are so small as to create performance problems (I've seen data that supports that theory, by the way).
Well, here's the thing: the "right" chunk size is extremely dependent upon the internal architecture of the implementation, and the intersection of that ideal with the actual write distribution pattern of the host/application/file system/database.
So my suggestion to EMC is: please, please, please take as much time as you need to come up with the perfect "chunk size" for this, one that handles all workloads across a variety of operating systems and applications, from solid-state Flash drives to 1TB SATA disk. Take months or years, as long as it takes. The rest of the world is in no hurry, as thin provisioning or dynamic volume expansion is readily available on most other disk systems today.
Maybe if you ask HDS nicely, they might let you ask their computer.
technorati tags: IBM, thin provisioning, XIV, Nextra, N series, chunk size, BarryB, EMC, Symmetrix, virtual provisioning, 3PAR, Hitachi, HDS, USP-V, StorageTek, RAMAC Virtual Array, RVA, dynamic volume expansion, DVE, 42MB, Hitchhiker's Guide, CKD, System z, mainframe, SATA, DS8000, DS4000, SAN Volume Controller, SVC
This week was the 2008 MacWorld conference. I thought I would reflect on some of the storage-related aspects of the products mentioned by Steve Jobs in his keynote address. Many were updated versions of products introduced at last year's MacWorld. (In case you forgot what those were, here is my post that covered [MacWorld 2007].)
(Disclaimer: IBM has a strong working relationship with Apple, and manufactures technology used in some of Apple's products. I own both an Apple iPod and an Apple G4 Mac Mini. IBM supports its employees using Apple laptops instead of Windows-based ones for work, and IBM has developed software that runs on Apple's OS X. Apple is kind enough to extend its "employee discount prices" to IBM employees.)
- [Apple OS X 10.5 Leopard operating system]
In the first 90 days of its release, Apple sold 5 million copies, representing 19 percent of Mac users. I am still among the 81 percent using 10.4 Tiger, the previous level. My Mac Mini is based on the G4 POWER processor, and upgrading is on my [Someday/Maybe] list. I am not taking sides in the [OS X vs. Windows vs. Linux religious debate]; I use all three.
The key storage-related feature of Leopard is its backup software, Time Machine, and Steve Jobs announced a companion product called Time Capsule that serves as an external backup disk wirelessly, over 802.11n Wi-Fi. For many households, backup is either never done or done rarely, so any help to simplify and relieve the burden is welcome.
Time Capsule comes in 500GB and 1TB SATA disk capacities, which Steve Jobs called "server-grade". What about a 750GB model? It looks like Apple followed EMC's example and went straight to 1TB instead. After EMC failed to deliver the 750GB drives in 2007 that they [promised back in July], EMC blogger Chuck Hollis explained in his post [Enterprise Storage Strikes Back!]:
So there's something in the EMC goodie bag as well for you -- the availability of the new 1TB disk drives you've been hearing about. We skipped the 750GB drive and went right to the 1TB drive.
- Apple iPhone and iPod Touch
In the first 200 days, Apple sold 4 million phones, garnering nearly 20 percent of the smart phone market share. New features include a GPS-like location feature that uses [triangulation] with cell phone towers and Wi-Fi hotspots to determine where you are located.
I covered last year's introduction of the iPhone in my post on [Convergence]. All of the features he presented were software updates to the existing 8GB and 16GB models. No new models with larger storage were introduced.
I am a T-Mobile customer, so I am out of luck until either (a) Apple unlocks their phones from the AT&T network, or (b) Apple signs an agreement with T-Mobile in the USA. I reviewed the various hacks to unlock iPhones last year, but was not interested in losing the official warranty or future software support.
The iPod Touch is an interesting alternative. It is basically an iPhone with the cell-phone features disabled, which gives you Wi-Fi over the Safari browser, music, videos, and so on. Steve Jobs mentioned enhanced software updates for this as well. The iPod Touch comes in the same 8GB and 16GB sizes as the iPhone.
- AppleTV and iTunes
Steve Jobs indicated that Apple has sold over 4 billion songs, 125 million TV shows, and 7 million movies over iTunes. He announced that iTunes would now allow movie rentals: you have 30 days to start watching a rented movie, but once you start, you have 24 hours to finish. I found it interesting that he positioned rentals as a way to reduce space on your hard drive, versus outright purchase of movie content.
In a rare concession, Steve admitted that the original AppleTV misunderstood the marketplace. The original AppleTV allowed you to view pictures and listen to music through your television, but people wanted to view movies. The software upgrade would allow this, using the iTunes rental model above, as well as watching video podcasts and the over 50 million videos posted on YouTube.
Some television-related stats from [z/Journal] were quite timely. The older non-digital TVs could be used with the AppleTV and gaming systems like the Nintendo Wii.
- 33 percent of U.S. households do not know what to do with (their older) TVs after digital switch (Feb 2009)
- 69 percent of Americans think PCs are more entertaining than TV
Rather than try to fight peer-to-peer website piracy, Apple cleverly decided to compete head-to-head against it. This is well summarized in Matt Mason's 6-minute video [The Pirate's Dilemma]. Eleven major movie studios are on board with Apple's movie rental plans, making thousands of movie titles available, with hundreds in High Definition (HD).
I personally have a Tivo, connected wirelessly to a regular non-HD television as well as to my PC, Mac and internet hub. This allows me to view my photos, listen to my iTunes music collection and internet radio stations from [Live365], and rent movies and TV shows from Amazon Unbox, with prices ranging from free to four dollars.
- MacBook Air
The theme of this week was "Something is in the Air", an obvious reference to this product, billed as the world's thinnest laptop. John Windsor on his YouBlog writes in [Making it Memorable] about the use of a standard office envelope to demonstrate how thin this new MacBook Air laptop is. It is 0.16 inches at one end, and 0.76 inches at the other end. Unlike other "ultra-thin" laptops, this one has a full-size back-lit keyboard and a full-size 13.3-inch widescreen. The touchpad supports multi-touch gestures similar to the iPhone and iPod Touch. Intel managed to shrink its Core 2 Duo processor chip by 60 percent to fit inside this machine. The battery is reported to last five hours.
This laptop was designed for wireless access, with 802.11n and Bluetooth enabled. There is no RJ-45 connection for a traditional LAN Ethernet connection, but I guess you can use a USB-to-RJ45 converter.
Storage-wise, you can choose between a 1.8-inch 80GB HDD or a pricey-but-faster 64GB Flash Solid-State Disk (SSD). In a move similar to [getting rid of the 3.5-inch floppy disk in 1998's iMac G3], the MacBook Air got rid of the CD/DVD drive. While Apple offers a USB-attachable SuperDrive as an optional peripheral, Steve Jobs gave alternative methods:
|Watching movies on DVD||Rent or Buy from iTunes instead|
|Burning music CDs for your car stereo||Attach your iPod to your car stereo|
|Taking backups to CD or DVD||Use Time Machine and Time Capsule instead|
|Installing Software from CD||Wirelessly connect to a "Remote Optical Disc" on a Mac or PC, running special Apple-provided software that allows you to make this connection|
Here's a link to the 90-minute [keynote address video]. If you are not a fan of recycling, saving the environment, free speech or democracy, you can safely skip the last 15 minutes, when musical artist Randy Newman performs. For alternative viewpoints on the keynote, see posts from [John Gruber] and [Tara MacKay].
technorati tags: Apple, MacWorld, IBM, OS X, Leopard, Tiger, iPod, Mac Mini, G4, Time Machine, Time Capsule, 500GB, 1TB, SATA, EMC, Chuck Hollis, 750GB, 802.11n, Wi-Fi, iPhone, iPod Touch, T-mobile, unlock, AppleTV, iTunes, movie rentals, Tivo, Amazon, Unbox, Live365, John Windsor, YouBlog, MacBook Air, Flash, SSD, Bluetooth, Remote Disc, CD/DVD drive, iMac, G3, John Gruber, Randy Newman, Tara MacKay, recycling, environment, free speech, democracy, HD, piracy, Matt Mason
In addition to creating the Dilbert cartoon, Scott Adams has a blog, which is sometimes quite serious, and other times quite funny. The anticipated 30x cost of "Flash Drives" for Enterprise disk systems reminded me of one of Scott's articles from November 2007 titled [Urge to Simplify]. Here's an excerpt:
Now the casinos have people trained, like chickens hoping for pellets, to take money from one machine (the ATM), carry it across a room and deposit in another machine (the slot machine). I believe B.F. Skinner would agree with me that there is room for even more efficiency: The ATM and the slot machine need to be the same machine.
The casinos lose a lot of money waiting for the portly gamblers with respiratory issues to waddle from the ATM to the slot machines. A better solution would be for the losers, euphemistically called “players,” to stand at the ATM and watch their funds be transferred to the hotel, while hoping to somehow “win.” The ATM could be redesigned to blink and make exciting sounds, so it seems less like robbery.
I’m sure this is in the five-year plan. Longer term, people will be trained to set up automatic transfers from their banks to the casinos. People will just fly to Vegas, wander around on the tarmac while the casino drains their bank accounts, then board the plane and fly home. The airlines are already in on this concept, and stopped feeding you sandwiches a while ago.
Perhaps EMC can redesign its DMX-4 to "blink and make exciting sounds" as well. The Flash Drives were designed for the financial services industry, so those disk systems could be directly connected to make transfers between the appropriate bank accounts.
technorati tags: Scott Adams, Dilbert, B.F. Skinner, ATM, casinos, EMC, DMX-4
When times are tough, people revert back to their "default programming", and companies search for their "core strengths". The Redwoods Group calls this the [Native Language Theory]. Here's an excerpt:
A young carpenter immigrates to the United States from Italy, unable to speak a word of English. Upon arrival, he moves into a small apartment by himself and begins looking for a job in construction. With some luck and a lot of hard work, he quickly lands a job at a local construction site. Over the coming weeks he learns how to say “hello” and “goodbye” to his English-only coworkers. As time goes on, he is able to learn more complex phrases and commands and is now able to begin taking on jobs that better match his level of expertise.
Several years after the carpenter moved to the US, he now speaks fluent English, has started a family with an American woman, and speaks only English on the job site and at home. One afternoon, while hammering at the framing of a new home, the carpenter strikes his thumb. In what language does he curse? Italian, of course.
We believe that this story illustrates the nature of reacting to difficult, stressful, and, yes, painful situations by reverting to what you know best. This is the reason that coaches ask their players to make certain actions “instinctual” – simply put, when times get tough, we fall back on our native language.
Last September, in my post [Supermarkets and Specialty Shops], I mentioned how Forrester Research identified two kinds of IT vendors selling storage. On one side were the "information infrastructure" companies (IBM, HP, Sun, and Dell) that focus on providing one-stop shopping for clients that want all parts of an IT solution, including servers, storage, software and services. These I compared to "supermarkets".
On the other side were the storage component vendors (EMC, HDS, NetApp, and many others) that focus on specific storage components. These I compared to "specialty shops", like butchers, bakers and candlestick makers. They often appeal to customers with IT staffs big enough, and skilled enough, to do their own system integration. The key difference seems to be that the supermarkets are client-focused and the specialty shops are technology-focused, and different people prefer to do business with one side or the other. This model came in handy last November to explain Dell's acquisition of EqualLogic and to discuss [IBM Entry-Level iSCSI offerings].
Some recent news seems to fit this model, in relation to the Native Language Theory.
Several argued that EMC was in the process of shifting sides, from disk specialty shop over to an everything-but-servers supermarket. Certainly many of its acquisitions in software, services, and VMware would support the notion that perhaps EMC is going through an identity crisis. The immediate beneficiary was HDS, the #2 disk specialty shop, which passed up EMC with innovative features in its USP-V disk system.
However, times are tough, especially in the U.S. economy that many storage vendors are focused on. EMC appears to have found its native language, going back to its roots in the solid state storage systems it started with back in 1979. This week EMC announced [Symmetrix DMX-4 support of Flash drives]. Several bloggers review the technology involved:
Overall, a smart move for EMC to go back to its technology-focused disk specialty shop mode and go head-to-head against the HDS threat. With Web 2.0 workloads moving off these monolithic solutions and onto [clustered storage more appropriate for "cloud computing"], large enterprise-class disk systems like the IBM System Storage DS8000 and EMC DMX-4 can shift focus to what they do best: online transaction processing (OLTP) and large databases. However, I noticed the EMC press release refers to EMC as an "information infrastructure" company, so perhaps they still haven't resolved their identity crisis.
(For the record, IBM shipped [Flash drive-based storage last year], and announced [larger drive models] this week. As we have learned from last year, terms like "First" or "Leader" in corporate press releases should not always be taken literally.)
- Sun Microsystems
After Sun acquired the StorageTek specialty shop, they too had a bit of an identity crisis. Fortunately, they realized their core strengths were on the "supermarket" side, moved storage in with servers in their latest restructuring, changed their NYSE symbol from SUNW to JAVA, and reset their focus on providing end-to-end solutions like IBM. For example, fellow blogger Taylor Allis from Sun mentions their latest in "clustered storage" in his post [IBM Buys XIV - Good Move].
Last August, in my post [Fundamental Changes for Green Data Centers], I mentioned that IBM consolidated 3900 rack-optimized servers onto 33 mainframes, and that this was part of our announcement that [since 1997, IBM has consolidated its strategic worldwide data centers from 155 to seven]. I noticed in Nick Carr's Rough Type blog post [The Network is the Data Center] that HP and Sun have followed suit:
In an ironic twist, some of today's leading manufacturers of server computers are also among the companies moving most aggressively to reduce their need for servers and other hardware components. Hewlett-Packard, for instance, is in the midst of a project to slash the number of data centers it operates from 85 to 6 and to cut the number of servers it uses by 30 percent. Now, Sun Microsystems is upping the stakes. Brian Cinque, the data center architect in Sun's IT department, says the company's goal is to close down all its internal data centers by 2015. "Did I just say 0 data centers?" he writes on his blog. "Yes! Our goal is to reduce our entire data center presence by 2015."
While Nick feels this is ironic for Sun, known for UNIX servers based on their SPARC chip technology, I don't. Sun has shifted from being technology-focused to being client-focused. This is where the marketplace is going, and the supermarket vendors, being client-focused, are best positioned to adapt to this new world. In a sense, Sun found its roots. Nick summarizes this as: "The network, to spin the old Sun slogan, becomes the data center."
So, each move seems to strengthen their respective identities back to their origins, or at least help them communicate that to the market.
technorati tags: core strengths, native language, Forrester Research, supermarket, specialty shops, IBM, HP, Sun, Dell, information infrastructure, client-focused, technology-focused, EqualLogic, EMC, HDS, NetApp, USP-V, DMX-4, Flash, disk, drive, systems, Java, Taylor Allis, UNIX, SPARC, Nick Carr
Christopher Carfi on his Social Customer Manifesto blog has a great post [Let's Look at the Big Picture] that talks about Information as the new form of "money" by looking at how the concept of "money" was first formed 150 years ago. Here's an excerpt:
Lesson 1: "Money" was very fragmented for a very long period of time after the colonization of North America
"Money" as we think of it in the form of cash/paper currency has only been around for about 150 years. Over a period of almost two hundred years both before and after that time, a number of fragmented methods were used to exchange value.
Lesson 2: Everybody needs to win
After the ideas of "cash" and "checks" had taken hold and become widespread, there were still many inefficiencies in the system. Cash is cumbersome, and subject to loss. Checks may bounce. This continued until the mid-1900's.
Enter the credit card.*
The credit card resonated with both customers and vendors because both parties received benefits.
Now, the widespread usage of credit cards was not something that occurred overnight. Instead, it was something that occurred over a generation. In 1970, only 16% of American households had credit cards. However, by 1995, that number had climbed to 65%.
We are now looking at Information in much the same way. It is fragmented, it is used to represent value, it is hoarded by some, shared by others. In much the same way that "brown" is the new "black", does that mean "information" is the new "money"?
A related blog post from Shawn over at Anecdote discusses a panel discussion of Albert Camus' work, The Stranger. Here is an excerpt:
... meaning is not pre-inscribed in the world around us and we are continuously seeking meaning in an inherently meaningless world. I almost toppled off the step machine. Do we live in an inherently meaningless world? On first thought I think the answer is yes. The onus is on us to make sense of our world.
And here is where information, by itself, is not of value unless people place value on it. Just as people valued Wampum and Furs, and could therefore trade them for other goods, people trade information for other items of value. But the onus is on us to make sense of the information, to determine its meaning, and use this to help drive business or other accomplishments.
Are you leveraging information as well as investors leverage other people's money? If not, IBM can help.
technorati tags: Christopher Carfi, Social, Customer, Manifesto, VRM, information, money, cash, paper, currency, wampum, furs, credit card, IBM, meaning
It's already the 11th of January, and I thought I would take a break from technology to focus on my [New Year's Resolutions] from last year, and make some new ones for 2008.
Last Year's Resolutions:
- Blog on a more consistent frequency
In [Data Center Resolutions], I resolved to post one to five entries per week, and I think I made good on this one. When I was assembling my book [Inside System Storage: Volume I], I noticed an evolution month by month since I made this resolution.
- Reduce my waist down to 35 inches
Rather than a target weight, I chose a target waist measurement, but did not quite make this one. I did keep up with my weekly exercise regime, but we recently installed an "ice cream freezer" here at work, and I have failed to resist temptation.
- Reduce, Reuse and Recycle
In my post [Staying on Budget], I resolved to "reduce, reuse and recycle". I have taken measures to de-clutter and simplify my life, and already things are paying off. So I am happy about this one.
- Learn to Better use Lotus Notes and Office 2007 software
In my post [Hone your Tools and Skills], I resolved to learn how to better use Lotus Notes and Office 2007. We never got Office 2007. In a surprise move, IBM put out Lotus Symphony, an Office replacement. Lotus Symphony works on IBM's three approved desktop platforms (Windows XP, Linux and Mac OS X). Here's a collection of [IBM Press Releases about Lotus Symphony].
I did learn how to better use Lotus Notes, thanks to Alan Lepofsky's blog [IBM Lotus Notes Hints, Tips, and Tricks]. Ironically, the best help for dealing with Lotus Notes was not the software itself, but skills in handling email in general. These include:
- Write shorter notes. Down to [five sentences] in some cases.
- Resist the urge to copy the world, and use "bcc" to spare upper management from "reply all" responses.
- Avoid attaching large documents; use URLs to NAS file shares, websites, or [YouSendIt.com] instead. Obviously, the recipient has to have access to whatever you point to, but this greatly reduces total email volume and improves transmission over wireless.
- Delegate. A lot of times I was the "middleman" between someone asking a question, and someone else Iknew had the answer. Now, I just introduce them together and step out of the way.
- Check email only a few times a day. I used to check my email every 5-10 minutes; now I check only 2-4 times per day.
- Laugh More
In my post, [Lighten Up], I resolved to laugh more, stretch more, get enough sleep, and listen to music more. I participated in monthly [Tucson Laughter Club] events, incorporated stretching into my weekly exercise program, have gotten more sleep, and rediscovered some of my older music that I hadn't listened to in a while. Overall, I feel happy I met this one.
My New Year's Resolutions for 2008:
- Improve my writing skills
Going back through my past blog postings, some of my sentences and paragraphs were frightful. I resolve to improve my sentence and paragraph structure, and make better use of HTML tags to improve the layout and formatting.
- Improve my HTML and Web design skills
- Contribute to the OLPC Foundation
Last year, as a "Day 1 Donor", I donated to this important charitable organization to help educate the children of third world nations. This year, I plan to learn Python and other programming languages used on the XO laptop, and see how I can contribute my skills and expertise on the OLPC forums.
- Eat Healthier and Drink more
I think my downfall with last year's resolution was that it was merely a goal (a 35-inch waist) rather than a "call for action". This year, I plan to eat more fish, salads, whole grains and other heart-healthy foods.
While many people resolve to "Quit Drinking", I need to drink more water. My doctor, my personal trainer, and even my interpreter teams have asked me to do so. We live in Tucson, Arizona, during a century of global warming, and dehydration can cause stress on the body.
- Attend more movies and film-making events
Last year, I joined the Tucson Film Society, and produced [my first film], part of which was filmed in Bogota, Colombia. I was invited to see a lot of independent films, premieres, and film-maker events, but did not attend many. I resolve to attend more in 2008.
- Get better Organized
Moving offices from one building to another brought to light that I wasn't well organized. While I have made some efforts to de-clutter my home, I need to step this up at work as well.
I decided to start with something very non-tech, a [Hipster PDA]. I have now met or heard from several people who use this approach successfully, and have decided to give it a try.
Hopefully, this list might inspire you to come up with your own resolutions. Not surprisingly, writing them in a public forum helped me keep most of them, and stick to my resolutions throughout the year.
technorati tags: resolutions, blog frequency, IBM, Lotus Notes, Office 2007, Lotus Symphony, desktop, email, laughter club, writing skills, web design, Bogota, Colombia, Hipster PDA
Whew! I am glad that is over. The BarryB circus has left town, he has decided to [move on to other topics], and I am now left to clean up the ["circus gold"] left behind. I would like to remind everyone that all of these discussions have been about the architecture, not the product. IBM will come out with its own version of a product based on Nextra later in 2008, which may be different from the product that XIV currently sells to its customers.
- RAID-X does not protect against double-drive failures as well as RAID-6, but it's very close
BarryB calls this the "elephant in the room": RAID-6 protects better against double-drive failures, and I don't dispute that. He also credits me with the term "RAID-X", but I got it directly from the XIV guys. It turns out this was already a term used in academic research circles for [distributed RAID environments]. Meanwhile, Jon Toigo feels the term RAID-X sounds like a brand of bug spray in his post [XIV Architecture: What’s Not to Like?]. Perhaps IBM can change this to RAID-5.99 instead.
If you measure the risk of a second drive failing during the rebuild or re-replication process triggered by a first drive failure, you can quantify the exposure by multiplying the amount of GB at risk by the number of hours during which the second failure could occur, resulting in a unit of "GB-hours". Here I list best-case rebuild times; your mileage may vary depending on whether other workloads exist on the system competing for resources. Notice that 8-disk configurations of RAID-10 and RAID-5 for smaller FC disk are in the triple digits, and larger SATA disk in five digits, but with RAID-X it is only single digits. That is orders of magnitude closer to the ideal.
For each RAID type, the risk is proportional to the square of the individual drive size: doubling the drive size makes the risk four times greater. This is not the first time this has been discussed. In [Is RAID-5 Getting Old?], Ramskov quotes NetApp's response in Robin Harris' [NetApp Weighs In On Disks]:
...protecting online data only via RAID 5 today verges on professional malpractice.
As disks get larger, RAID-6 will not be able to protect against 3-drive failures. A chart similar to the one above could show the risk to data after a second drive fails, while both rebuilds are under way, compared to the risk of a third drive failure during this time. The RAID-X scheme protects much better against 3-drive failures than RAID-6.
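To make the GB-hours arithmetic concrete, here is a small sketch in Python. The drive sizes and rebuild times are hypothetical round numbers chosen for illustration, not measured figures from the table above.

```python
def gb_hours_exposure(gb_at_risk, rebuild_hours):
    """Exposure window: GB that could be lost, times the hours
    during which a second (or third) drive failure could strike."""
    return gb_at_risk * rebuild_hours

# Hypothetical numbers: doubling the drive size roughly doubles both
# the data at risk and the rebuild time, so the exposure grows fourfold.
small = gb_hours_exposure(250, 2.0)   # 250GB drive, 2-hour rebuild
large = gb_hours_exposure(500, 4.0)   # 500GB drive, 4-hour rebuild
print(large / small)  # 4.0
```

This quadratic growth is why ever-larger drive capacities make single-failure protection schemes look increasingly thin.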
- Nothing in the Nextra architecture prevents a RAID-6, Triple-copy, or other blob-level scheme
In much the same way that EMC Centera is RAID-5 based for its blobs, there is nothing in the Nextra architecture that prevents taking additional steps to provide even better protection: using a RAID-6 scheme, making three copies of the data instead of two, or something even more advanced. The current two-copy RAID-X scheme is better than all the RAID-5 and RAID-10 systems out in the marketplace today.
- Mirrored Cache won't protect against Cosmic rays, but ECC detection/correction does
BarryB incorrectly states that because some implementations of cache are non-mirrored, they are unprotected against Cosmic rays. Mirroring does not protect against bit-flips unless both copies are compared for differences, and even if you compared them, the best you can do is detect that they differ; there is no way of knowing which version is correct. Mirroring cache is normally done to protect uncommitted writes. Reads in cache are expendable copies of data already written to disk, so ECC detection/correction schemes are adequate protection. ECC is like RAID for DRAM memory: a single bit-flip can be corrected, and multiple bit-flips can be detected. In the case of detection, the cache copy is discarded and read fresh again from disk. IBM DS8000, XIV and probably most other major vendor offerings use ECC of some kind. BarryB is correct that some cheaper entry-level and midrange offerings from other vendors might cut corners in this area. I don't doubt BarryB's assertion that the ECC method used in EMC products may be implemented differently from the ECC in the IBM DS8000, but that doesn't mean the IBM DS8000's ECC implementation is flawed.
ECC protection is important for all RAID systems that perform rebuilds, and all the more important the larger the GB-hours figure listed in the table above.
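To illustrate the "RAID for DRAM" analogy, here is a textbook Hamming(7,4) sketch in Python: it corrects any single bit-flip by computing a syndrome that points at the flipped position. This is purely illustrative, not the ECC scheme actually used in the DS8000 or any other product; real SECDED memory adds an extra overall parity bit so that double flips can also be detected.

```python
def hamming74_encode(d):
    """Encode 4 data bits into a 7-bit codeword (positions 1..7,
    with parity bits at positions 1, 2 and 4)."""
    c = [0] * 8                      # c[0] unused; positions 1..7
    c[3], c[5], c[6], c[7] = d
    c[1] = c[3] ^ c[5] ^ c[7]        # parity over positions 1,3,5,7
    c[2] = c[3] ^ c[6] ^ c[7]        # parity over positions 2,3,6,7
    c[4] = c[5] ^ c[6] ^ c[7]        # parity over positions 4,5,6,7
    return c[1:]

def hamming74_decode(code):
    """Return (data bits, syndrome); a nonzero syndrome is the 1-based
    position of a single flipped bit, which gets flipped back."""
    c = [0] + list(code)
    s = ((c[1] ^ c[3] ^ c[5] ^ c[7]) * 1 +
         (c[2] ^ c[3] ^ c[6] ^ c[7]) * 2 +
         (c[4] ^ c[5] ^ c[6] ^ c[7]) * 4)
    if s:
        c[s] ^= 1                    # correct the erroneous bit
    return [c[3], c[5], c[6], c[7]], s

word = hamming74_encode([1, 0, 1, 1])
word[4] ^= 1                         # a cosmic ray flips bit position 5
data, syndrome = hamming74_decode(word)
print(data, syndrome)  # [1, 0, 1, 1] 5
```

The same parity idea, scaled up to wider words, is what lets a cache controller repair a flipped bit in place rather than discard the data.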
- XIV is designed for high-utilization, not less than 50 percent
I mentioned that the typical Linux, UNIX or Windows LUN is only 30-50 percent full, and perhaps BarryB thought I was referring to the typical "XIV customer". This average is for all disk storage systems connected to these operating systems, based on IBM market research and analyst reports. The XIV is expected to run at much higher utilization rates, and offers features like "thin provisioning" and "differential snapshot" to make this simple to implement in practice.
- Pre-emptive Self-Repair
Most often, disks don't fail without warning. Usually, they give out temporary errors first, and then fail permanently. The XIV architecture allows for pre-emptive self-repair, initiating the re-replication process after detecting temporary errors, rather than waiting for a complete drive failure.
I had mentioned that this process used "spare capacity, not spare drives" but I was notified that there are three spare drives per system to ensure that there is enough spare capacity, so I stand corrected.
Replacement drives don't have to match the speed/capacity of the drives they replace, so three to five years from now, when it might be hard to find a matching 500GB SATA drive, you won't have to.
- No RAID scheme eliminates backups or Business Continuity Planning
The XIV supports both synchronous and asynchronous disk mirroring to remote locations. Backup software will be able to back up data from the XIV to tape. A double drive failure would require a "recovery action", either from the disk mirror or from tape, for the few GB of data that need to be recovered.
A third alternative is to let end-users receive backups of their own user-generated content. For example, I have over 15,000 photos uploaded over the past six years to Kodak Photo Gallery, which I use to share with my friends and family. For about $180 US dollars, they will cut DVDs containing all of my uploaded files and send them to me, so that I do not have to worry about Kodak losing my photos. In many cases, if a company or product fails to deliver on its promises, the most you will get is your money back; but for "free services" like HotMail, FreeDrive, FlickR and others, you didn't pay anything in the first place, and they may point to this limitation of liability in their "terms of service".
- XIV can be used for databases and other online transaction processing
The XIV will have FCP and iSCSI interfaces, and systems can use these to store any kind of data you want. I mentioned that the design was intended for large volumes of unstructured digital content, but there is nothing to prevent other workloads. In today's Wall Street Journal article [To Get Back Into the Storage Game, IBM Calls In an Old Foe]:
Today, XIV's Nextra system is used by Bank Leumi, a large Israeli bank, and a few other customers for traditional data-storage tasks such as recording hundreds of transactions a minute.
BarryB, thanks for calling the truce. I look forward to talking about other topics myself. These past two weeks have been exhausting!
technorati tags: IBM, XIV, RAID-X, RAID-5.99, RAID-5, RAID-10, RAID-6, EMC, BarryB, Risk, GB-hours, NetApp, Ramskov, Robin+Harris, StorageMojo, elephant, circus gold, Wall Street Journal, WSJ, Bank Leumi, traditional workloads, digital content, unstructured data, HotMail, FreeDrive, FlickR, KodakGallery, online, photos
In my post yesterday [Spreading out the Re-Replication process], fellow blogger BarryB [aka The Storage Anarchist] raises some interesting points and questions in the comments section about the new IBM XIV Nextra architecture. I answer these below, not just for the benefit of my friends at EMC, but also for my own colleagues within IBM, IBM Business Partners, analysts and clients who might have similar questions.
- If RAID 5/6 makes sense on every other platform, why not so on the Web 2.0 platform?
Your attempt to justify the expense of Mirrored vs. RAID 5 makes no sense to me. Buying two drives for every one drive's worth of usable capacity is expensive, even with SATA drives. Isn't that why you offer RAID 5 and RAID 6 on the storage arrays that you sell with SATA drives?
And if RAID 5/6 makes sense on every other platform, why not so on the (extremely cost-sensitive) Web 2.0 platform? Is faster rebuild really worth the cost of 40+% more spindles? Or is the overhead of RAID 6 really too much for those low-cost commodity servers to handle?
Let's take a look at various disk configurations, for example 3TB of usable capacity on 750GB SATA drives:
- JBOD: 4 drives
- JBOD here is industry slang for "Just a Bunch of Disks" and was invented as the term for "non-RAID". Each drive would be accessible independently, at native single-drive speed, with no data protection. Putting four drives in a single cabinet like this provides simplicity and convenience only over four separate drives in their own enclosures.
- RAID-10: 8 drives
- RAID-10 is a combination of RAID-1 (mirroring) and RAID-0 (striping). In a 4x2 configuration, data is striped across disks 1-4, then mirrored across to disks 5-8. You get a performance improvement and protection against a single drive failure.
- RAID-5: 5 drives
- This would be a 4+P configuration, where there would be four drives' worth of data scattered across five drives. This gives you almost the same performance improvement as RAID-10 and similar protection against a single drive failure, but with fewer drives per usable TB of capacity.
- RAID-6: 6 drives
- This would be a 4+2P configuration, where the first P represents linear parity, and the second represents diagonal parity. Similar in performance to RAID-5, but protects against single and double drive failures, and still better than RAID-10 in terms of drives per TB of usable capacity.
For all the RAID configurations, rebuild would require a spare drive, but often spares are shared among multiple RAID ranks, not dedicated to a single rank. To this end, you often have to have several spares per I/O loop, and a different set of spares for each kind of speed and capacity. If you had a mix of 15K/73GB, 10K/146GB, and 7200/500GB drives, then you would have three sets of spares to match.
In contrast, IBM XIV's innovative RAID-X approach doesn't require any spare drives, just spare capacity on the existing drives that hold data. The objects can be mirrored between any two types of drives, so there is no need to match drive types.
All of these RAID levels represent a trade-off between cost, protection and performance, and IBM offers each of them on various disk system platforms. Calculating parity is more complicated than making mirrored copies, but this can be done with specialized chips in cache memory to minimize the performance impact. IBM generally recommends RAID-5 for high-performance FC disk, and RAID-6 for slower, large-capacity SATA disk.
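The drive counts in the list above follow from simple arithmetic. This Python sketch tallies them for the 3TB-on-750GB-drives example; the function name and labels are mine, and the RAID-5/6 counts assume a single N+P or N+2P array:

```python
import math

def drives_needed(usable_tb, drive_tb=0.75):
    """Drives required for a given usable capacity under each scheme."""
    n = math.ceil(usable_tb / drive_tb)  # drives' worth of data capacity
    return {
        "JBOD": n,               # no protection at all
        "RAID-10": 2 * n,        # every data drive is mirrored
        "RAID-5 (N+P)": n + 1,   # one extra drive of parity
        "RAID-6 (N+2P)": n + 2,  # two extra drives of parity
    }

print(drives_needed(3.0))
# {'JBOD': 4, 'RAID-10': 8, 'RAID-5 (N+P)': 5, 'RAID-6 (N+2P)': 6}
```

Mirroring's 100 percent capacity overhead, versus parity's 25-50 percent here, is the heart of BarryB's cost objection.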
However, the question assumes that the drive cost is a large portion of the overall "disk system" cost. It isn't. For example, Jon Toigo discusses the cost of EMC's new AX4 disk system in his post [National Storage Rip-Off Day]:
- EMC is releasing its low end Clariion AX4 SAS/SATA array with 3TB capacity for $8600. It ships with four 750GB SATA drives (which you and I could buy at list for $239 per unit). So, if the disk drives cost $956 (presumably far less for EMC), that means buyers of the EMC wares are paying about $7700 for a tin case, a controller/backplane, and a 4Gbps iSCSI or FC connector. Hmm.
- Dell is offering EMC's AX4-5 in the same configuration for $13,000, adding a 24/7 warranty.
(Note: I checked these numbers. $8599 is the list price that EMC has on its own website. External 750GB drives available at my local Circuit City ranged from $189 to $329 list price. I could not find anything on Dell's own website, but found [The Register] to confirm the $13,000 with 24x7 warranty figure.)
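Toigo's arithmetic is easy to verify. Using the list prices from the quote above:

```python
# Rough check of the cost breakdown quoted above (list prices from the quote).
system_list_price = 8600      # EMC AX4 with 3TB, per EMC's website
drive_list_price = 239        # one 750GB SATA drive at retail
drives = 4

drive_cost = drives * drive_list_price              # $956
everything_else = system_list_price - drive_cost    # case, controller, connectors
print(drive_cost, everything_else)                  # 956 7644, i.e. "about $7700"
```

In other words, roughly 89 percent of the list price is for things other than the raw drives, which is the point of the next paragraph.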
Disk capacity is a shrinking portion of the total cost of ownership (TCO). In addition to capacity, you are paying for the cache, microcode and electronics of the system itself, along with the software and services included in the mix, and your own storage administrators to deal with configuration and management. For more on this, see [XIV storage - Low Total Cost of Ownership].
- EMC Centera has been doing this exact type of blob striping and protection since 2002
As I've noted before, there's nothing "magic" about it - Centera has been employing the same type of object-level replication for years. Only EMC's engineers have figured out how to do RAID protection instead of mirroring to keep the hardware costs low while not sacrificing availability.
I agree that IBM XIV was not the first to do an object-level architecture, but it was one of the first to apply object-level technologies to the particular "use case" and "intended workload" of Web 2.0 applications.
RAID-5 based EMC Centera was designed instead to hold fixed-content data that needs to be protected for a specific period of time, such as to meet government regulatory compliance requirements. This is data that you will most likely never look at again unless you are hit with a lawsuit or investigation. For this reason, it is important to put it on the cheapest storage configuration possible. Before EMC Centera, customers stored this data on WORM tape and optical media, so EMC came up with a disk-only alternative offering. IBM System Storage DR550 offers disk-level access for the most recent archives, with the ability to migrate to much less expensive tape for long-term retention. The end result is that storing on a blended disk-plus-tape solution can reduce the cost by a factor of 5x to 7x, making the RAID level discussion meaningless in this environment. For more on this, see my post [Optimizing Data Retention and Archiving].
While both the Centera and DR550 are based on SATA, neither is designed for Web 2.0 platforms. When EMC comes out with its own "me, too" version, it will probably make a similar argument.
- IBM XIV Nextra is not a DS8000 replacement
Nextra is anything but Enterprise-class storage, much less a DS8000 replacement. How silly of all those folks to suggest such a thing.
I did searches on the Web and could not find anybody, other than EMC employees, who suggested that the IBM XIV Nextra architecture represented a replacement for the IBM System Storage DS8000. The IBM XIV press release does not mention or imply this, and certainly nobody I know at IBM has suggested this.
The DS8000 is designed for a different "use case" and set of "intended workloads" than what the IBM XIV was designed for. The DS8000 is the most popular disk system for our IBM System z mainframe platform, for activities like Online Transaction Processing (OLTP) and large databases, supporting ESCON and FICON attachment to high-speed 15K RPM FC drives. Web 2.0 customers that might choose IBM XIV Nextra for their digital content might run their financial operations or metadata search indexes on DS8000. Different storage for different purposes.
As for the opinion that this is not "enterprise class", there are a variety of definitions for this phrase. Some analysts look at the "price band" of units that cost over $300,000 US dollars. Other analysts define it as being attachable to mainframe servers via ESCON or FICON. Others use the term to refer to five-nines reliability, meaning less than 5 minutes of downtime per year. In this regard, based on the past two years' experience at 40 customer locations, I would argue that it meets this last definition, with non-disruptive upgrades, microcode updates and hot-swappable components.
By comparison, when EMC introduced its object-level Centera architecture, nobody suggested it was the replacement for their Symmetrix or CLARiiON devices. Was it supposed to be?
- Given drive growth rates have slowed, improving utilization is mandatory to keep up with 60-70 percent CAGR
Look around you, Tony- all of your competitors are implementing thin provisioning specifically to drive physical utilization upwards towards 60-80%, and that's on top of RAID 5/RAID 6 storage and not RAID 1. Given that disk drive growth rates and $/GB cost savings have slowed significantly, improving utilization is mandatory just to keep up with the 60-70% CAGR of information growth.
Disk drive capacities have slowed for FC disk because much of the attention and investment has been redirected to ATA technology. Dollar-per-GB price reduction is slowing for disks in general, as researchers hit physical limitations on the number of bits they can pack per square inch of disk media, and is now around 25 percent per year. The 60-70 percent Compound Annual Growth Rate (CAGR) is real, and can be growing even faster for Web 2.0 providers. While hardware costs drop, the big-ticket items to watch will be software, services and storage administrator labor costs.
To this end, IBM XIV Nextra offers thin provisioning and differential space-efficient snapshots. It is designed for 60-90 percent utilization, and can be expanded to larger capacities non-disruptively in a very scalable manner.
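For readers new to thin provisioning, the core idea is simple: the host sees the full declared volume size, but physical capacity is consumed only when a block is actually written. Here is a toy illustration of that concept (not XIV's actual implementation; the class and block granularity are made up for clarity):

```python
# Toy illustration of thin provisioning: physical space is allocated
# on first write, not when the volume is declared.
class ThinVolume:
    def __init__(self, logical_gb):
        self.logical_gb = logical_gb   # size the host sees
        self.allocated = {}            # block number -> data actually written

    def write(self, block, data):
        self.allocated[block] = data   # physical capacity consumed here

    def physical_gb(self, gb_per_block=1):
        return len(self.allocated) * gb_per_block

vol = ThinVolume(logical_gb=1000)      # looks like 1TB to the host
vol.write(0, b"data")
vol.write(1, b"more")
print(vol.logical_gb, vol.physical_gb())   # 1000 2
```

The gap between logical and physical consumption is exactly what lets a system drive utilization toward the 60-90 percent range mentioned above.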
Well, I hope that helps clear some things up.
technorati tags: IBM, XIV, Nextra, EMC, BarryB, RAID-0, RAID-1, RAID-5, RAID-6, RAID-10, RAID-X, AX4, Dell, AX4-5, FC, SAS, SATA, iSCSI, TCO, blob, object-level, disk, storage, system, Centera, ESCON, FICON, Symmetrix, CLARiiON, ATA, CAGR, Web2.0
On his The Storage Architect blog, Chris Evans wrote [Two for the Price of One]. He asks: why use RAID-1 compared to, say, a 14+2 RAID-6 configuration, which would be much cheaper in terms of disk cost?
Perhaps without realizing it, he answers it himself with his post today [XIV part II]:
So, as a drive fails, all drives could be copying to all drives in an attempt to ensure the recreated lost mirrors are well distributed across the subsystem. If this is true, all drives would become busy for read/writes for the rebuild time, rather than rebuild overhead being isolated to just one RAID group.
Let me try to explain. (Note: This is an oversimplification of the actual algorithm, in an effort to make it more accessible to most readers, based on written materials I have been provided as part of the acquisition.)
In a typical RAID environment, say 7+P RAID-5, you might have to read 7 drives to rebuild one drive, and in the case of 14+2 RAID-6, read 15 drives to rebuild one drive. It turns out the performance bottleneck is the one drive being written: today's systems can rebuild faster Fibre Channel (FC) drives at about 50-55 MB/sec, and slower ATA disk at around 40-42 MB/sec. At these rates, a 750GB SATA rebuild would take at least 5 hours.
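The 5-hour figure follows directly from capacity divided by the write rate of the single target drive:

```python
# Back-of-envelope rebuild time: the bottleneck is the one drive being
# written, so elapsed time ~= capacity / sequential write rate.
def rebuild_hours(capacity_gb, write_mb_per_sec):
    return capacity_gb * 1000 / write_mb_per_sec / 3600

print(round(rebuild_hours(750, 40), 1))   # 5.2 hours for 750GB SATA at 40 MB/s
print(round(rebuild_hours(146, 50), 1))   # 0.8 hours for 146GB FC at 50 MB/s
```

Note that adding more source drives does not help: the target drive's write throughput caps the whole rebuild.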
In the IBM XIV Nextra architecture, let's say we have 100 drives. We lose drive 13, and we need to re-replicate any at-risk 1MB objects. An object is at-risk if it is the last and only remaining copy on the system. A 750GB drive that is 90 percent full would have 700,000 or so at-risk object re-replications to manage. These can be sorted by drive: drive 1 might have about 7,000 objects that need re-replication, drive 2 might have slightly more or slightly less, and so on, up to drive 100. The re-replication of objects on these other 99 drives goes through three waves.
- Wave 1
Select 49 drives as "source volumes", and pair each randomly with a "destination volume". For example, drive 1 mapped to drive 87, drive 2 to drive 59, and so on. Initiate 49 tasks in parallel; each will re-replicate the blocks that need to be copied from its source volume to its destination volume.
- Wave 2
50 drives are left. Select another 49 as "source volumes", and pair each with a "destination volume". For example, drive 87 mapped to drive 15, drive 59 to drive 42, and so on. Initiate another 49 tasks in parallel, each re-replicating the blocks that need to be copied from its source volume to its destination volume.
- Wave 3
Only one drive is left. We select this last volume as the source volume, pair it off with a random destination volume, and complete the process.
Each wave can take as little as 3-5 minutes. The actual algorithm is more complicated than this: as tasks complete early, their source and destination drives become available for re-assignment to another task, but you get the idea. XIV has demonstrated that the entire process (identifying all at-risk objects, sorting them by drive location, randomly selecting drive pairs, and then performing most of these tasks in parallel) can be done in 15-20 minutes. Over 40 customers have been using this architecture over the past 2 years, and by now all have probably experienced at least a drive failure to validate this methodology.
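The three-wave pairing above can be sketched in a few lines. This is a simplified model based on the description in this post: each surviving drive acts as a source exactly once, destinations are chosen at random, and the real algorithm additionally avoids destination collisions and re-assigns drives as tasks finish early.

```python
import random

# Simplified sketch of the three-wave re-replication pairing:
# 99 surviving drives, up to 49 (source, destination) copy tasks per wave.
def plan_waves(drives, wave_size=49):
    drives = list(drives)
    sources = list(drives)
    random.shuffle(sources)
    waves = []
    for i in range(0, len(sources), wave_size):
        wave = [(src, random.choice([d for d in drives if d != src]))
                for src in sources[i:i + wave_size]]
        waves.append(wave)
    return waves

waves = plan_waves(range(1, 100))     # drives 1..99 survive the failure
print([len(w) for w in waves])        # [49, 49, 1]
```

With ~7,000 objects of 1MB per source drive and 49 copies running concurrently, each wave moves roughly 7GB per drive in parallel, which is why a wave finishes in minutes rather than hours.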
In the unlikely event that a second drive fails during this short window, only one of the 99 tasks fails. The other 98 tasks continue to help protect the data. By comparison, in a RAID-5 rebuild, no data is protected until all of the blocks are copied.
As for requiring spare capacity on each drive to handle this case, even the fullest disks in production environments are typically only 85-90 percent full, leaving plenty of spare capacity to handle the re-replication process. On average, Linux, UNIX and Windows systems tend to fill disks only 30 to 50 percent full, so the fear that there is not enough spare capacity should not be an issue.
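Even in the worst case from this scenario, the arithmetic works out comfortably. Using the figures already given (99 surviving 750GB drives, 90 percent full):

```python
# Spare-capacity check: can the surviving drives absorb the at-risk
# contents of one failed drive? Figures come from the scenario above.
DRIVE_GB, SURVIVORS, FILL = 750, 99, 0.90

free_gb = SURVIVORS * DRIVE_GB * (1 - FILL)   # about 7,425 GB spread across 99 drives
needed_gb = DRIVE_GB * FILL                   # about 675 GB to re-replicate
print(free_gb >= needed_gb)                   # True, with roughly 10x headroom
```

Each surviving drive needs to absorb only about 7GB, or roughly 1 percent of its capacity.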
The difference in cost between RAID-1 and RAID-5 becomes minimal as hardware gets cheaper and cheaper. For every $1 you spend on storage hardware, you spend $5 to $8 managing the environment. As hardware gets cheaper still, it might even be worth making three copies of every 1MB object; the parallel process to perform re-replications would be the same. This could be done with policy-based management: some data gets triple-copied and other data only double-copied, based on whether the user selected "premium" or "basic" service.
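Such a policy could be as simple as a lookup from service class to replica count. The class names and defaults below are hypothetical, purely to illustrate the idea:

```python
# Hypothetical policy-based replica counts: "premium" data is kept in
# triplicate, "basic" data in duplicate. Names are illustrative only.
POLICY_COPIES = {"premium": 3, "basic": 2}

def copies_for(service_class):
    # Unknown classes fall back to the basic two-copy protection.
    return POLICY_COPIES.get(service_class, 2)

print(copies_for("premium"), copies_for("basic"))   # 3 2
```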
The beauty of this approach is that it works with 100 drives, 1000 drives, or even a million drives. Parallel processing is how supercomputers are able to perform amazing feats of mathematical computation so quickly, and how Web 2.0 services like Google and Yahoo can perform web searches so quickly. Spreading the re-replication process across many drives in parallel, rather than performing it serially onto a single drive, is just one of the many unique features of this new architecture.
technorati tags: Chris Evans, RAID-1, RAID-5, RAID-6, performance, bottleneck, FC, SATA, disk, system, IBM, XIV, Nextra, objects, re-replication, spare capacity