SSDs are becoming a reality
orbist
I've been following the Solid State Disk (SSD) evolution over the last year or two with a selfish eye. Running one of the largest test stands we have in the Hursley SVC lab (from a number-of-spindles point of view), I too suffer the pain of drive fall-out. It's more of an annoyance to me, though, as it means the last performance run I have just done is invalid because the arrays were degraded - and even using RAID-10 this can make a noticeable difference. Just one degraded array out of many hundreds can throw my benchmarks out of whack - the joys of ageing 15K RPM disks (especially the way these ones have been thrashed over their life)... Now I guess the only plus side is that, unlike a real storage administrator, who has users calling up and complaining about performance, it's only me and my team who suffer and lose a few hours' worth of data.
As our labs have grown in footprint and consumption over the last few years, we are probably one of the biggest power and cooling users on site. Hence my interest in faster-performing, lower-consuming SSDs.
There has been a lot of hype and discussion about SSDs over the last year or two, and if you believe some of the NAND manufacturers, by 2011 the storage market will be split 50:50 between rotating magnetic platters and NAND- or HLNAND-based SSDs.
I too feel that we are on the brink of something quite special in the storage industry, if the drawbacks of today's SSDs can be overcome. And it would appear they can.
It would also seem that some of the big players in the magnetic disk market are lagging behind. When investigating what was around in the market a few months ago, I was interested in the latest Samsung offering: the technical specs showed over 10,000 read IO/s, but I had to make sure I had correctly read the 13, yes thirteen, write IO/s. OK, so that's not going to do what I want.
We did a bit more exploring and came across STEC, producers of what they call the ZeusIOPS SSD. I must admit to initially being skeptical of their claims. Their 2Gb Fibre Channel drive claims 50,000 read IO/s and almost 19,000 write IO/s, and comes with a 5-year guarantee, which equates to a tested 2+ million writes per cell. They very kindly offered a sample drive for me to play with, to see if it would meet my needs.
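To get a feel for what that 2+ million writes-per-cell figure means in practice, here is a back-of-envelope sketch. The 4KB transfer size and the assumption of ideal wear levelling are mine, not STEC's; the point is only the order of magnitude.

```python
# Back-of-envelope endurance check for a 73 GB SSD rated at 2M write
# cycles per cell. Assumes ideal wear levelling and a 4 KB transfer
# size (both my assumptions, for illustration only).
CAPACITY_BYTES = 73e9           # drive capacity
CYCLES_PER_CELL = 2_000_000     # claimed tested write endurance
WRITE_IOPS = 19_000             # the drive's claimed sustained write rate
IO_SIZE = 4096                  # assumed transfer size in bytes

write_rate = WRITE_IOPS * IO_SIZE                  # bytes/s sustained
secs_per_full_write = CAPACITY_BYTES / write_rate  # one full overwrite
full_writes_per_day = 86_400 / secs_per_full_write

years_to_wear_out = CYCLES_PER_CELL / (full_writes_per_day * 365)
print(f"{full_writes_per_day:.0f} full overwrites/day at max write rate")
print(f"roughly {years_to_wear_out:.0f} years to exhaust the rated cycles")
```

Even hammering the drive flat out around the clock, under these idealised assumptions the rated endurance works out to decades rather than years, which makes the 5-year guarantee look plausible.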
The test stand I use daily is made up of tens of IBM DS4000 series controllers, usually behind just a pair of SVC nodes, driven by some high-end pSeries hosts. I merrily fitted the Zeus into one of the carriers, removed all the standard drives and arrays, created a JBOD from the Zeus and presented it to SVC. So far so good. However, the controller, despite its best efforts, saturated some way short of the Zeus's spec'd limits. I needed a plan B.
The drive supported public loop attach, so a few days later, when the interposer FC-to-SFP converter board arrived, I tried again. The switch saw the loop and SVC saw the drive. A quick blast from my performance tools and, sure enough, the host was seeing close to 50,000 read ops and just shy of 19,000 write ops! Over 100MB/s, and closer to 200MB/s with SVC's sequential pre-fetch. Compared with the speeds and feeds I have discussed previously, that's just plain amazing!
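The relationship between those IO/s figures and the MB/s figures is just block-size arithmetic. A minimal sketch (the 4KB transfer size is my assumption for illustration; the actual test used my tooling's own transfer sizes):

```python
def throughput_mb_s(iops: int, io_size_bytes: int) -> float:
    """Throughput implied by an IO rate at a given transfer size."""
    return iops * io_size_bytes / 1e6

# Assumed 4 KB transfers -- these are ceilings, not measured figures.
print(throughput_mb_s(50_000, 4096))  # reads: 204.8 MB/s
print(throughput_mb_s(19_000, 4096))  # writes: ~77.8 MB/s
```

Larger sequential transfers (as SVC's pre-fetch coalesces reads) push the MB/s figure up even when the IO/s figure drops, which is why the sequential numbers land closer to the 200MB/s mark.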
Now this really is quite interesting, but as discussed by Tony, and in my comment, standing back and looking at where we are, this has the chance of challenging the general construction rules for a storage controller, a data center and even the applications themselves.
Controller caches: will we still need them? If the drive can surface the data in sub-millisecond timings, does a cache actually give you much benefit? Does the SSD become the cache itself, or does it enter, as most suggest, as a new Tier 0? And what happens in a few years when the price is right? Not only Tier 0, but Tiers 1 and 2 will be available at such speeds. This is going to be a very disruptive technology. One question would be: can we make use of such IO/s numbers in a relatively small capacity? The drive I tested was 73GB, with 146GB already available, and no doubt larger capacities are not far away. A very IO-hungry core database or application may be able to squeeze into a fairly small capacity (now that performance is not proportional to spindles), but does your average application need such performance?
I'd therefore propose that SSDs are becoming a reality, Robin, and that it's not going to be long before the major players can match the power of STEC's drive and the price comes rocketing down. After all, it's not the NAND that costs the money today; we are paying for the IP in the drive that overcomes the reliability and performance characteristics of writing to flash.