Inside System Storage -- by Tony Pearson

Tony Pearson is a Master Inventor and Senior IT Specialist for the IBM System Storage product line at the IBM Executive Briefing Center in Tucson, Arizona, and a featured contributor to IBM's developerWorks. In 2011, Tony celebrated his 25th anniversary with IBM Storage on the same day as IBM's Centennial. He is the author of the Inside System Storage series of books. This blog is for the open exchange of ideas relating to storage and storage networking hardware, software and services. You can also follow him on Twitter @az990tony.
(Short URL for this blog: ibm.co/Pearson )

Comments (7)

1 localhost commented Permalink

Chris, it's true. The SAN Volume Controller (SVC) uses SDRAM with battery backup. The tested configuration runs an 8-node cluster, and each node has dual Xeon processors and 8GB of SDRAM memory. The latest SPC-1 benchmark result was 155,519 IOPS for a 12TB configuration. By comparison, Texas Memory Systems tested their RAMsan-320, which came in at 112,491 IOPS for a small 68GB configuration.

As for SPC-2, the latest SVC benchmark is 4,544 MB per second. This workload tests large file transfers, large database queries and Video on Demand workloads. SPC-2 doesn't show any results from Texas Memory Systems, and I suspect they may not have a large enough configuration to run it.

I don't know the details of the technology inside the RAMsan-320, but it might have fewer processors, slower memory, fewer host adapters, or slower ports.

The SVC has been around since 2003, is now in its fourth generation, has over 2,200 customers, and was engineered for optimal performance.
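As a side note on the figures above: SPC-1 is reported in IOPS and SPC-2 in MB per second, so the two results measure different things. A minimal back-of-the-envelope sketch, assuming a hypothetical 4KB average request size (not the actual SPC-1 transfer mix), shows why the numbers are not directly comparable:

```python
# Rough comparison of the two benchmark figures quoted above. SPC-1 counts
# small, mostly random I/Os per second; SPC-2 measures large sequential
# transfers in MB/s. The 4KB average request size below is an assumption
# for illustration only, not the actual SPC-1 transfer mix.

SPC1_IOPS = 155_519        # SVC SPC-1 result quoted above
ASSUMED_IO_SIZE_KB = 4     # hypothetical average request size
SPC2_MBPS = 4_544          # SVC SPC-2 result quoted above

spc1_equivalent_mbps = SPC1_IOPS * ASSUMED_IO_SIZE_KB / 1024

print(f"SPC-1: ~{spc1_equivalent_mbps:,.0f} MB/s of small random I/O")
print(f"SPC-2: {SPC2_MBPS:,} MB/s of large sequential transfers")
# Different workloads stress different parts of the system (random-access
# latency and cache vs. streaming bandwidth), so the two results are not
# interchangeable.
```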

2 localhost commented Permalink

Tony, you write: "The fastest disk system remains the IBM SAN Volume Controller, with SPC-1 and SPC-2 benchmarks in excess of those published by solid-state-disk manufacturer Texas Memory Systems, Inc."

That seems contrary to common sense. Could you explain, please? Thanks, Chris.

3 localhost commented Permalink

Tony,

A few comments:

1. I think your post, insofar as it refers to hybrid drives (a mix of flash memory with hard drives), is accurate.

2. IBM does indeed have the top SPC-1 results (by the way, I am delighted to just see SPC discussed in a blog). I do appreciate your thoughts on why the Texas Memory Systems results are lower than IBM's, but I think you missed the real reason: the type of servers used in the testing. If we could possibly borrow the server that IBM used in its testing, I think we could make that fine IBM server look even better than your also excellent storage makes it look. Can you work that out for us?

3. Not all solid state disks have write performance issues or write wear issues. Systems made with DDR RAM, such as ours, have dramatically better write performance than disk or flash and do not have wear issues like flash memory.

4. Your first comment, to max out memory in the server, has two possible weaknesses. First, server memory is worthless where write performance is the number one bottleneck. Second, server memory in capacities similar to what solid state disks such as ours can provide can be more expensive per capacity, cannot be easily shared among multiple servers simultaneously, and almost always has to be thrown out when the server is replaced. An external solid state disk often has a lower cost per capacity than large memory in servers, can work in a SAN connected to multiple servers simultaneously, and does not have to be replaced if the server is upgraded.

5. Regarding your second point, adding performance tiers for some businesses equals complexity. For other businesses, adding tiers equals performance and cost-effectiveness. The whole promise of ILM is that we don't have to be stuck with just three tiers of performance (a rough illustration of tier placement follows this comment). In fact, I think your SVC is an excellent way to open up the enterprise to multiple storage tiers and yet manage this complexity from a single appliance.

Thanks again for the discussion.

Woody Hutsell
Texas Memory Systems
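On the tiering discussion in point 5, here is a minimal sketch of what heat-based tier placement under ILM can look like. The tier names, thresholds, and access-count heuristic are made-up assumptions for illustration, not any product's actual policy:

```python
# Minimal sketch of heat-based tier placement, as discussed in point 5
# above. Tier names, thresholds, and the access-count heuristic are
# illustrative assumptions, not any vendor's actual ILM algorithm.

TIERS = [
    # (tier name, minimum accesses per day to qualify)
    ("solid-state", 1000),
    ("enterprise-disk", 10),
    ("nearline-disk", 0),
]

def place(accesses_per_day: int) -> str:
    """Return the hypothetical tier a volume's data would land on."""
    for name, threshold in TIERS:
        if accesses_per_day >= threshold:
            return name
    return TIERS[-1][0]

if __name__ == "__main__":
    for load in (5000, 250, 2):
        print(f"{load:>5} accesses/day -> {place(load)}")
```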

4 localhost commented Trackback

Thanks, Tony. Very creative. Nicest Christmas present I'll get this year.

A couple of points:

1) Chris Evans says it much better, and more knowledgeably than I, on his blog at http://storagearchitect.blogspot.com/2006/12/new-raid.html:

"The problem is, the drive is not involved in the rebuild process - it dumbly responds to the request from the controller to re-read all the data. What we need are more intelligent drives combined with more intelligent controllers; for example, why not have multiple interfaces to a single HDD? Use a hybrid drive with more onboard memory to cache reads while the heads are moving to obtain real data requests. Store that data in memory on the drive to be used for drive rebuilds. Secondly, why do we need to store all the data for all rebuilds across all drives? Why with a disk array of 16 drives can't we run multiple instances of 6+2 RAID across different sections of the drive?"

I would like to see that discussion joined; a rough sketch of that layout idea follows this comment.

2) I made this post at StorageMojo, http://storagemojo.com/?p=336, but to make it easier on anyone reading this comment I will repeat it here:

"Why ZFS?" (Monday, 18 December 2006), http://www.c0t0d0s0.org/archives/2410-Why-ZFS.html
[Article excerpt] "It's really really cool that ZFS will be integrated into Leopard. When you read forums like digg or slashdot there seems to be an utter absence of creativity or knowledge about the advantages someone gets by using ZFS in client operating systems. To get a view of the benefits, I will describe my workflows on hahdafang, my primary mac@home:"

"Economics of ZFS", from the same blog, http://www.c0t0d0s0.org/archives/2413-Economics-of-ZFS.html
[Article referenced in the above post] "Paul Murphy posted a good article about ZFS economics in his blog."

[Source article] "ZFS, HW RAID, and expensive mis-apprehensions", posted by Paul Murphy @ 12:15 am, http://blogs.zdnet.com/Murphy/?p=759
[Article excerpt] "Solaris 10 now ships with ZFS - and ZFS obsoletes both PC style RAID controllers and the external RAID controllers used with bigger systems."

From my limited perspective, it appears that ZFS bypasses RAID and rebuild problems?

I once predicted that IBM would buy Sun. It's not too late.

Here's my Christmas present to you. I worked for years implementing IT environments that used what I called "Rolling Down the Storage Hill". "Rolling Up" was possible but didn't happen very often. I based the "Rolling" on the ROI/TCO ratio. This information is still really hard to get. One day I was pitching my "Rolling" solution to a client and he looked me in the eye and said, "That sounds an awful lot like ILM". I have never used "Rolling" again. Same solution, much nicer name.

I worked for years in obscurity developing what I knew would be the "Storage Revolution". I called it E2EIoD (End-to-End Information on Demand). I was pitching this to some knowledgeable people and one of them said, "That sounds an awful lot like SOA". Sure enough, IBM had beaten me to the finish and done a better job.

I've still got my Amiga running OS/2.
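On the 16-drive question in the Chris Evans excerpt above, here is a minimal sketch of the layout idea: several 6+2 stripe groups rotated across a 16-drive array so each group sits in a different section of each drive. The group count and rotation stride are arbitrary choices for illustration, not any real controller's algorithm:

```python
# Illustrative sketch of the layout idea in the Chris Evans excerpt above:
# several 6+2 (double-parity) stripe groups rotated across a 16-drive array,
# so each group occupies a different section of each drive and a rebuild
# only touches part of the array. Purely an example; the group count and
# rotation stride are arbitrary, not any controller's actual algorithm.

DRIVES = 16
GROUP_WIDTH = 6 + 2   # 6 data strips + 2 parity strips per stripe group
GROUPS = 4            # independent stripe groups, one per drive "section"

def layout():
    """Map each (group, member) strip to a drive, rotating each group."""
    placement = {}
    for group in range(GROUPS):
        for member in range(GROUP_WIDTH):
            # Offset each group so the groups do not all start on drive 0.
            placement[(group, member)] = (group * 3 + member) % DRIVES
    return placement

if __name__ == "__main__":
    for (group, member), drive in sorted(layout().items()):
        kind = "parity" if member >= 6 else "data"
        print(f"group {group} strip {member} ({kind}) -> drive {drive}")
```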

5 localhost commented Trackback

Woody, thank you for the clarification and additional information. I have edited my post to include links to the SPC benchmark numbers so readers can make their own comparisons, and for clarity.
