
Comments (28)

Comment 1

This comment will surely surprise you:

CONGRATULATIONS!

This is an extremely important achievement of the sort I personally envisioned back when we started on our path to flash. And there will undoubtedly be several more milestones as solid-state persistent storage technologies make their way into commercial storage solutions. You and the Almaden team are to be commended.

But this really isn't an arms race, or even an inter-vendor battle. Practical use cases must align the cost of the technology with both the performance and capacity requirements. I suspect there's lots more work for us all to do before we have just the right balance of RAM, NAND and spinning rust that will be required for mass-market appeal.

One note about my presentation, though - "EFD" doesn't stand for EMC flash drive - the "E" is for "Enterprise." True, many of our field folk have taken to referring to the STEC drives as EMC flash drives, but that's more because EMC is still the only place you can buy them in an array.

Oh - and those mixed workload issues you not-so-subtly hinted at... there are ways to mitigate those with a little bit of old-fashioned innovation and some integration between the drive and the array microcode. You guys might not have figured that out yet.

But honestly, congrats on the accomplishment, and thanks for joining in the efforts to make flash a commercial reality!

Comment 2

I thought that the SVC 4.x code was rated at something under 300,000 IO/s with an 8-node cluster? How did you get past that?

Comment 3

BarryB,

THANKS! Look out for more news soon. And thanks for the correction re EFD.

OSSG,

As with all things performance, it is never that simple.

The 8-node SPC-1 cluster did achieve just under 300K SPC-1 IOPS. However, SPC-1 is closer to an 8K 40/60 read/write workload.

In my internal benchmarking, a 70/30 4K all-miss workload will achieve around 120K IOPS per node pair. This is with the cache enabled, so writes are being mirrored. If we run a pure read-miss workload, then we get just over 200K IOPS per node pair.

As the FusionIO cards give excellent response time, we could disable the cache in SVC for these tests. This reduces the work each node has to do, and the traffic on the fabric, as there is no write mirroring in progress. It pushes the 70/30 4K number to almost that of a read-miss workload - assuming the backend storage can cope, which in this case it could.

The SVC cluster is running a subtly modified version of the SVC code which removes some of the configuration limits. As I've said before, the cluster code is designed for >64 node clusters, but our official GA test and support statement is for up to 8 nodes...
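To put those per-node-pair numbers in perspective, here is a rough back-of-the-envelope sketch in Python. It assumes scaling from one node pair to an 8-node (four-pair) cluster is roughly linear, which is an illustrative assumption rather than a measured figure:

    # Rough scaling of the per-node-pair figures quoted above to an 8-node
    # (four node-pair) cluster. Linear scaling is assumed for illustration only.
    per_pair_iops = {
        "70/30 4K all-miss, cache enabled (writes mirrored)": 120_000,
        "pure 4K read miss, cache enabled": 200_000,
    }

    node_pairs = 4  # 8-node cluster

    for workload, iops in per_pair_iops.items():
        print(f"{workload}: roughly {iops * node_pairs:,} IOPS cluster-wide")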

Comment 4

Which begs the question: if SVC is essentially in complete pass-through mode, where's the value add? Wouldn't a company be able to just buy the same number of SSDs and attach them directly to hosts to get the same performance?

Comment 5

So this particular config had only SSD behind it, but that's not going to be a real-life config for some years. You forget all the benefits that SVC brings: you can attach normal HDD-based controllers too, and migrate hot / not-hot data between HDD and SSD and back; you can FlashCopy, Mirror, Thin Provision etc.; and you can still use the cache for the HDD products, since we provide per-vdisk control of caching. Not to mention the single point of management and provisioning from Tier 0 through Tier 3, depending on the needs of the application / host.

Comment 6

PS. One other thing we had to spend a lot of time tuning was the optimal data rate / queue depth to be maintained at the flash devices themselves. You want to keep the flash busy enough to get the best out of the available channels, while not overloading it and causing potential issues with the algorithms performing garbage collection, wear leveling etc. This work has been done, and the SVC backend queuing algorithms are configured to sustain workloads within the optimal ranges (as we do for all storage controllers we support). Thus SVC is performing this work for you, and you don't need to tune each host in turn for the workload required - it's handled by SVC.
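For anyone unfamiliar with the idea, the sketch below shows what per-device queue-depth limiting looks like in principle. This is purely illustrative Python, not the actual SVC backend code, and the target depth of 32 is an invented number:

    import threading

    class BackendFlashDevice:
        """Illustrative per-device queue-depth limiter (not SVC microcode).

        The idea: cap the number of I/Os outstanding to each backend device so
        the flash stays busy enough to use its channels, without being swamped
        while it performs garbage collection and wear leveling in the background.
        """

        def __init__(self, name, target_queue_depth=32):  # 32 is an assumed value
            self.name = name
            self._slots = threading.Semaphore(target_queue_depth)

        def submit(self, io_fn, *args):
            self._slots.acquire()      # block if the device already has enough in flight
            try:
                return io_fn(*args)    # issue the I/O to the device
            finally:
                self._slots.release()  # free the slot once the I/O completes

The point being that this throttling lives in the virtualization layer, so individual hosts do not have to be tuned for it.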

Comment 7

Yeah, that's cool of course. But how are you going to replace the PCI flash card if it fails? And, by the way, you haven't implemented any RAID on those cards?

Comment 8

True, the low-level card does not support RAID; however, SVC 4.3.0 introduced Virtual Disk Mirroring, so you can mirror across two flash controllers. That not only protects against controller failure, but also allows you to replace a card should it fail, while maintaining online access.
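In case the mechanics aren't obvious, here is a conceptual sketch of what the mirroring gives you - plain illustrative Python, not SVC code: every write goes to both copies, reads can be served from whichever copy is online, and a failed card is simply one copy marked offline until it is replaced:

    class Copy:
        """Stand-in for one backend flash controller (illustrative only)."""
        def __init__(self):
            self.online = True
            self._blocks = {}

        def write(self, lba, data):
            self._blocks[lba] = data

        def read(self, lba):
            return self._blocks[lba]

    class MirroredVdisk:
        """Conceptual virtual-disk mirror across two backend copies."""
        def __init__(self, copy_a, copy_b):
            self.copies = [copy_a, copy_b]

        def write(self, lba, data):
            for copy in self.copies:
                if copy.online:
                    copy.write(lba, data)   # keep every online copy in sync

        def read(self, lba):
            for copy in self.copies:
                if copy.online:
                    return copy.read(lba)   # any online copy can satisfy the read
            raise IOError("no online copy available")

Replacing a failed card amounts to marking that copy offline, swapping the hardware, and letting the copies resynchronise (not shown), with the vdisk staying online throughout.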

Comment 9

I have been to the DS5000 announcement today - a great future is ahead of us! The solid state disk story will now receive a boost now that the first results are in...

Can't wait to be able to test it myself!!!

greetings

ps: SVC is not only able to mirror the VDisks, you can also do RAID 0 with it. With these two functions you can do a sort of RAID 1+0 across the SSD disks.
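As a small illustration of that RAID 1+0 idea - striping across mirrored pairs - here is a toy mapping of a logical byte address to a mirrored pair and offset. The 256 KiB stripe size and the four-pair layout are assumptions made up for the example:

    # Toy RAID 1+0 mapping: RAID 0 striping across mirrored pairs, so every
    # logical block lives on both SSDs of exactly one pair.
    STRIPE_SIZE = 256 * 1024   # assumed 256 KiB stripe size
    PAIRS = 4                  # assumed four mirrored pairs of SSDs

    def locate(logical_byte):
        stripe = logical_byte // STRIPE_SIZE
        pair = stripe % PAIRS                       # RAID 0: round-robin across pairs
        offset = (stripe // PAIRS) * STRIPE_SIZE + logical_byte % STRIPE_SIZE
        return pair, offset                         # data is on both SSDs of that pair

    print(locate(5 * 1024 * 1024))                  # which pair holds byte 5 MiB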

Comment 10

Barry, thanks for your comment correcting my error in saying this was an SPC-1 benchmark. My bad.