Inside System Storage -- by Tony Pearson

Tony Pearson is a Master Inventor and Senior IT Specialist for the IBM System Storage product line at the IBM Executive Briefing Center in Tucson, Arizona, and a featured contributor to IBM's developerWorks. In 2011, Tony celebrated his 25th anniversary with IBM Storage on the same day as IBM's Centennial. He is the author of the Inside System Storage series of books. This blog is for the open exchange of ideas relating to storage and storage networking hardware, software and services. You can also follow him on Twitter @az990tony.
(Short URL for this blog: ibm.co/Pearson )

Comments (2)

1 Andrew_Larmour commented

Hi Tony, interesting presentation - thanks for posting. I've been wondering for a while about replacing spinning disks with SSDs in applications where very high I/O rates are needed. For instance, when I size some software products, I have to specify large RAID 1+0 arrays in order to achieve the database I/O needed to sustain high transaction rates. The capacity of the disks in these arrays is almost irrelevant - if 1GB disks were still available, their combined capacity would easily suffice in almost all applications. These arrays are often 40 to 100 disks in size, while the space required would typically be less than 5GB. As you can see, by the time we put in the smallest disk available (136GB(?) these days) in an array of (say) 40 disks, the available capacity far outweighs the required space.

I've been thinking that a smaller RAID 1+0 array of SSDs that takes advantage of the I/O speed increase could potentially meet the performance requirements - not to mention the green aspect of reduced power requirements. Obviously the array controller would need to be able to handle the increased per-drive throughput of the SSDs, and I am guessing that those don't exist yet. The other thing I am not sure about is the write performance of SSDs and how that would affect the overall equation.

What do you think?
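[Editor's note: the trade-off Andrew describes, sizing an array for IOPS rather than capacity, can be sketched with a rough back-of-the-envelope calculation. The per-drive IOPS figures below are illustrative assumptions, not measurements of any particular product, and the model ignores RAID write penalties and controller limits.]

```python
import math

def drives_needed(target_iops, per_drive_iops, raid10=True):
    """Rough count of drives needed to sustain target_iops.

    Assumes IOPS scale linearly with spindle/device count and that
    RAID 1+0 doubles the drive count for mirroring. Illustrative only.
    """
    data_drives = math.ceil(target_iops / per_drive_iops)
    return data_drives * 2 if raid10 else data_drives

# Illustrative figures: ~180 IOPS for a 15K RPM HDD, ~20,000 for an SSD.
hdds = drives_needed(10_000, 180)     # 112 drives, capacity mostly wasted
ssds = drives_needed(10_000, 20_000)  # 2 drives (a single mirrored pair)
print(hdds, ssds)
```

With these assumed numbers, a 10,000 IOPS workload needs over a hundred short-stroked HDDs but only a mirrored pair of SSDs, which is exactly why the HDD array's capacity "far outweighs the required space."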

2 TonyPearson commented

Andrew, yes, putting a little data across many drives is known as "short-stroking", a way to increase performance by having as many physical arms and spindles as possible available to access the data.

So, IBM offers three ways to take advantage of SSD. First, we offer SSD inside our SVC, DS5000 and DS8000 as resident storage. This has the advantage that multiple hosts connected to these systems can take advantage of the faster IOPS. You can easily move data onto SSD as needed, and back down to spinning disk to make room for something else. The SVC allows you to RAID-1 between SSD and spinning disk, which greatly reduces the cost of protecting your data against unexpected hardware loss. In most other cases, clients find RAID-5 adequate protection for SSD, as SSDs have no moving parts and do not fail as often as spinning disks.

Second, we offer SSD as non-volatile cache in our N series, in the form of Performance Accelerator Modules (PAM). SSD is slower than the DRAM used for cache, but it is less expensive, so we can offer much larger capacities than most cache in systems today. This allows SSD to be simply part of the caching algorithm, eliminating the tough decisions of what goes on SSD and what doesn't.

Third, we offer SSD in our various servers. This limits the SSD to that individual machine, but allows substantially better performance at PCIe bus speeds rather than being concerned about SAN fabric links and/or controller bandwidth capacities. This can improve speed for boots and reboots, provide SSD across several VMs when using VMware or Hyper-V, and improve IOPS for specific applications.

-- Tony
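[Editor's note: the "move data onto SSD as needed, and back down to spinning disk" idea can be illustrated with a toy hot-extent tracker. The class and names below are hypothetical, a minimal sketch of access-frequency-based tiering, not the actual algorithm used by SVC, DS8000 or PAM.]

```python
from collections import Counter

class SimpleTierer:
    """Toy tiering planner: count I/Os per extent, promote the hottest
    extents to a limited SSD tier. Illustrative only."""

    def __init__(self, ssd_extents):
        self.ssd_extents = ssd_extents  # how many extents fit on SSD
        self.heat = Counter()           # I/O count per extent

    def record_io(self, extent):
        self.heat[extent] += 1

    def plan(self):
        # Extents not in this set stay on (or demote back to) spinning disk.
        return {e for e, _ in self.heat.most_common(self.ssd_extents)}

tierer = SimpleTierer(ssd_extents=2)
for extent, ios in [("A", 5), ("B", 1), ("C", 3)]:
    for _ in range(ios):
        tierer.record_io(extent)
print(tierer.plan())  # the two hottest extents, A and C
```

Real implementations add hysteresis and migration cost estimates so extents don't ping-pong between tiers, but the core idea is the same: the busiest data earns its place on the fast, expensive medium.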
