Inside System Storage -- by Tony Pearson

Tony Pearson is a Master Inventor and Senior IT Specialist for the IBM System Storage product line at the IBM Executive Briefing Center in Tucson, Arizona, and a featured contributor to IBM's developerWorks. In 2011, Tony celebrated his 25th anniversary with IBM Storage on the same day as IBM's Centennial. He is the author of the Inside System Storage series of books. This blog is for the open exchange of ideas relating to storage and storage networking hardware, software and services. You can also follow him on Twitter @az990tony.
(Short URL for this blog: ibm.co/Pearson )

Comments (4)

1 localhost commented Trackback

Again, Tony, if you would stick to the facts, you'd have had a pretty good post, even though you are probably now in full violation of your claimed restrictions against engaging in "blogger wars" a couple of posts back.

But still it seems you just can't resist twisting facts to fit your distorted perspective on the world of Flash. And you make my point again that you really don't understand flash drive technology.

First mistake: given the significantly higher MTBPR of EFDs vs. HDDs and the faster rebuild times, EMC recommends using RAID-5 for the flash drives we sell, not RAID-10 (nor RAID-6). For you to suggest otherwise is obviously an intentional attempt to mislead your readers. Of course, most readers will recognize that RAID-10 would in fact obviate the benefit of the larger capacity (as you demonstrate). EMC's recommended best practice for EFDs is to use RAID-5, and thus those same eight 200GB drives will provide 1,400GB of usable capacity, vs. the paltry 1,022GB on the DS8K with the 146GB drives. Thus, customers get about 37% more usable capacity from EMC for the same STEC drive, providing a lower $/GB usable.

Second mistake: as you point out so clearly, IBM is not selling the STEC ZeusIOPS drive with 512GB of raw capacity - neither in its 300GB format nor its 400GB format (EMC sells the 400GB). Technically, the 512GB ZeusIOPS drive is both "newer" and "different," as it is the latest capacity point added to the ZeusIOPS product line.
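[Editor's note: the capacity figures quoted above work out as follows. This is a quick sketch using the drive counts and RAID layouts as stated in the comment; the helper function and its name are illustrative, not either vendor's tooling.]

```python
# Usable capacity per 8-drive rank under the RAID layouts discussed above.
def usable_gb(drives, drive_gb, raid):
    """Usable capacity for a simple single-rank layout."""
    if raid == "raid5":   # 7+1: one drive's worth of parity
        return (drives - 1) * drive_gb
    if raid == "raid10":  # mirrored pairs: half the raw capacity
        return drives // 2 * drive_gb
    if raid == "raid6":   # 6+2: two drives' worth of parity
        return (drives - 2) * drive_gb
    raise ValueError(raid)

emc = usable_gb(8, 200, "raid5")  # 1,400 GB usable
ibm = usable_gb(8, 146, "raid5")  # 1,022 GB usable
print(emc, ibm, round((emc - ibm) / ibm * 100))  # -> 1400 1022 37
```

The 37% figure in the comment is simply the ratio of the two RAID-5 usable capacities; a RAID-10 layout of the same 200GB drives would yield only 800GB, which is the "obviated benefit" the commenter refers to.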
So no, my statement is truthful: IBM isn't selling the identical drives; we only have the 256GB (raw) device in common. And EMC customers alone can take advantage of the ADDITIONAL cost benefits of the larger capacity, while IBM customers are stuck requiring more than twice the number of flash drives to get to the same usable capacity as, say, eight 400GB EFDs in RAID-5 7+1. More capacity with fewer drives = lower acquisition AND operating costs.

Third: you denigrate EMC's use of "EFD" (Enterprise Flash Drive), but it is far from a marketing term. Just as the industry refers to enterprise disk drives to summarize the technical features of HDDs optimized for the 24xforever workloads our storage arrays support, EFD identifies SSDs that meet the requirements of our storage platforms as well. In both cases, the term "enterprise" serves to differentiate these devices from those designed for less demanding environments.

Noting that IBM eventually selected the same STEC drives that EMC calls "enterprise" would seem to underscore your agreement that these drives alone meet the requirements of a storage array - especially since there are cheaper alternatives in the market. If you didn't care about the "enterprise" features, I'm sure you would have chosen to resell/OEM a lesser drive.

BTW - EFD is a descriptor that IBM should support rather than attack... it is in our entire industry's best interests to expand the set of drives that meet the more stringent requirements of the enterprise. It is not a term to be used lightly...

And fourth: you still haven't shown us the math that justifies IBM's conservative approach.
I've shown that EFDs formatted to 200/400 will easily meet the life expectancy requirements of the warranty in any real-world application environment - when used in an EMC array, that is. You and BarryW, on the other hand, offer nothing other than hand-waving, innuendo and unsubstantiated FUD in response.

So I'll ask once again: SHOW US THE MATH that explains why customers have to pay for so much unusable capacity to use flash drives on the DS8K. GIVE US THE MATH that shows the drive would suddenly fail under ANY projected real-world workload if IBM were to be less conservative.

I honestly don't expect you to do so, because you would then have to admit one or more of the following is true:

a) That the DS8K can't really generate sufficient write workload to exceed the P/E life of the drive under any conditions;

b) That the drive's performance characteristics under mixed read/write workloads mean that it will never actually experience the pathological write load you assert puts data at risk;

c) That the DS8K's paltry write cache is insufficient to delay writes long enough to minimize P/E wear on flash drives;

d) That the DS8K's I/O block sizes and cache deficiencies actually exacerbate the "write amplification" factor and therefore significantly accelerate P/E wear, even under relatively light workloads;

e) That the DS8K does not do ANY SSD monitoring to detect or protect against premature wear-out, and updating the code of the soon-to-be-retired dinosaur to do so is out of the question.
Thus the only option is to be conservative (and make your customers pay more to cover for the inadequacies of the DS8K platform);

f) That you never actually DID the math;

g) That your DS8K developers were in fact totally unaware that the capacity was tunable on the ZeusIOPS until the day EMC announced the new larger capacities;

h) That you realize now that if your engineers had actually done their homework, you wouldn't be caught at the competitive disadvantage of selling smaller, less space-efficient SSDs - costing customers more in $/GB, $/IOPS and $/power+cooling;

i) That IBM not only doesn't get flash, it is incapable of admitting or correcting its misleading white papers on the subject. (Last I checked, Clod Barrera's Feb 2009 white paper STILL claimed that SLC NAND would tolerate 1,000,000 P/E cycles - now some three months later, and the facts in that paper are still wrong.)

j) All of the above.

I truly suspect that the answer is j), and I'm sure most of our readers do as well. You've given them no reason to think otherwise. And your misrepresentations of the facts make it clear you are trying to hide something. Or you just don't know. Either way, Tony - cut out the baloney, and just DO THE MATH!!!
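[Editor's note: the "do the math" challenge above is about program/erase (P/E) endurance. A first-order wear-out estimate looks like the sketch below. Every number in it is an assumed, illustrative value - not a figure published by EMC, IBM, or STEC - and real lifetimes depend heavily on the controller's wear leveling and the actual workload mix.]

```python
# First-order SSD wear-out estimate:
#   lifetime = (raw capacity * rated P/E cycles) / (host write rate * WAF)
def wear_out_years(raw_gb, pe_cycles, write_mb_s, waf):
    """Years until the rated program/erase budget is exhausted."""
    total_write_budget_gb = raw_gb * pe_cycles            # GB that can ever be programmed
    gb_per_year = write_mb_s / 1024 * 86400 * 365 * waf   # host writes, amplified in flash
    return total_write_budget_gb / gb_per_year

# Assumed values: 256 GB raw SLC, 100,000 P/E cycles, a sustained
# 50 MB/s of host writes, write amplification factor of 2.
years = wear_out_years(256, 100_000, 50, 2)
print(f"~{years:.1f} years to exhaust the P/E budget")
```

Under these (generous to the skeptic) assumptions the rated budget lasts roughly eight years, which is the shape of the argument the commenter is demanding: whether any sustainable DS8K write workload can exhaust the budget inside the warranty period.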

2 localhost commented Permalink

Just my 2 cents, guys: I'd just like to say I appreciate these debates. The information being presented by both sides merits research on my own part to actually get the facts straight between the two vendors. While we are IBM-locked (which I have no problem with, as IBM has been a fantastic partner to my organization), Barry's comments, while strangely overpassionate, are relevant and interesting. A tip to Barry: try to present your information a little more professionally, because I have a tough time taking you seriously when you harp on a single phrase and finish various statements with triple exclamation points like my sister does on Facebook.

It does make me curious, though: why is IBM so cautious on the formatting of such drives? I would tend to say it's mostly for CYA purposes if they really are behind on understanding SSDs, which is not a bad thing; being a second mover on the format issue does insulate them from any catastrophic issues that may occur with EMC's offering (although the chance of that is most likely slim).

Anyway, I look forward to reading more from the Barrys and Tony. Thanks, guys.

3 localhost commented Trackback

Khue, thanks for your two cents. Both IBM and EMC offer the latest technology from STEC, and both will stand behind drive replacements if/when they occur. The key difference, I believe, is philosophy.

EMC prefers that entire LUNs be migrated from HDD to SSD, and as such, offering the largest capacities and providing more usable TB per drive with more aggressive formatting serves their purpose well. While a bigger drive reduces the $/GB, it increases the $/IOPS, as each drive can only do so many IOPS regardless of capacity or formatting.

IBM prefers smart placement of the data sets, files and individual database tables that make the most use of SSD's unique capabilities. IBM has shown that with only 10 percent of the SSD capacity, you might get as much as 90 percent of the performance benefit. IBM offers a variety of tools to help identify the right data to move, and tools to perform the migration.

Where EMC wants its disk systems to work harder for you, IBM wants them to work smarter.

-- Tony
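[Editor's note: the $/GB vs. $/IOPS tradeoff Tony describes can be made concrete. The prices and IOPS rating below are made up for illustration - neither vendor's list prices nor the drives' exact ratings appear in this thread - but the direction of the result holds whenever a larger format costs more per drive while delivering the same IOPS.]

```python
# Two cost metrics for the same drive at two formatted capacities.
def cost_metrics(price, usable_gb, iops):
    """Return ($/GB, $/IOPS) for one drive."""
    return price / usable_gb, price / iops

# Assumed: same ~45k IOPS per drive at either capacity (IOPS are set by
# the controller, not the formatted size); higher price for the big format.
small = cost_metrics(price=10_000, usable_gb=146, iops=45_000)
large = cost_metrics(price=18_000, usable_gb=400, iops=45_000)

print(f"146 GB: ${small[0]:.2f}/GB, ${small[1]:.3f}/IOPS")
print(f"400 GB: ${large[0]:.2f}/GB, ${large[1]:.3f}/IOPS")
```

The larger format comes out cheaper per GB but costlier per IOPS, which is exactly why whole-LUN migration favors big drives while selective placement of hot data favors buying IOPS in smaller increments.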

4 localhost commented Trackback

BarryB, you make a good argument for why RAID-5 should be the best choice for balancing protection and performance on SSD. I am glad to see both IBM and EMC recommend RAID-5 for SSD. This brings up two questions in my mind:

1) Given that RAID-5 is the best choice for SSD, why did EMC invest additional development effort to also offer RAID-10 and RAID-6, which are not recommended for use with SSD?

2) Why would you poke fun at the fact that IBM only offers RAID-5 on SSD when even EMC suggests this is the best recommendation?

--- Tony
