Benchmarks can prove something
orbist
At the risk of regurgitating recent Storage benchmarketing scars and rumblings, it's probably escaped most people that we just published another Storwize V7000 SPC-1 benchmark.
Why another, I hear you ask? Well, the first one we published was back in November when the product first GA'd.
However, we did initially release the product with a limited number of enclosure attachments, due to some SAS-related trauma we'd found during test. It has to be noted that such trauma was actually nothing to do with the V7000 product itself, nor our SAS interfaces. I am not at liberty to point the finger of blame, but anyway, all's well that ends well, and the full 240-drive configuration is available for all customers after upgrading to the latest software release.
Part of the SPC-1 ruleset is that the tested configuration has to be available for anyone to purchase within 90 days of publication. Therefore, when we knew we were going to limit the initial attachment to 120 drives, we realised the configuration we'd already benchmarked (240 drives) could not be published at GA.
After some parallel thinking we came up with the idea of showcasing not only the internal disk, but also the external virtualization ability that comes courtesy of the V7000 running the same binary install image as SVC.
The end result actually showcases both SVC and V7000 virtualization, and also FINALLY PROVES without question that virtualizing storage ACTUALLY ENHANCES PERFORMANCE.
Now let's get all the usual arguments out of the way... benchmarks are synthetic, not real life, are aimed at negating cache, don't mean anything to end users... Whatever you're thinking, can we all agree that the SPC-1 benchmark is the same for everyone that runs it, and can be used as a yardstick?
For example, you can look at historical SVC releases, where 3.1.0 managed xxx and 4.2.0 managed yyy with the new hardware, and so on. In addition, vendors are always going to submit the most optimised solution.
Since SPC-1 reports a $/IOP metric, you want to get the highest IOPS number you can, while at the same time keeping the $/IOP as low as possible. For example, the $/IOP wouldn't stack up if you submitted a system that was, oh, say, full of SSD flash drives... everyone could do that...
So let's look at the two V7000 submissions:
A00097 - using 120x 10K RPM SAS drives internal to the V7000, and virtualizing a DS5020 with 80x 15K RPM drives, achieved 56.5K IOPS
A00101 - using 240x 10K RPM SAS drives internal to the V7000 achieved 53K IOPS
Let's do some simple maths, stay with me here... 240x 10K drives in the V7000 got 53K IOPS; assuming throughput scales roughly linearly with spindle count, half that number of drives (120) would get 53/2 = 26.5K.
If we subtract this 26.5K (corresponding to the 120 internal drives in the combined submission) from the 56.5K, we are left with what the V7000 managed to get from the external virtualized DS5020 - i.e. 30K.
Let's look at the DS5020 on its own:
A00081 - a DS5020 using 80x 15K FC drives achieved 26K IOPS. (Which by no coincidence is identical to the config we virtualized.)
So the DS5020 on its own did 26K, but when virtualized we got 30K - so by some simple maths we have shown that by virtualizing the DS5020 we have added 4K IOPS. Some 15% increase...
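The back-of-envelope arithmetic above can be sketched as a quick script. The linear-scaling-with-spindle-count assumption is the one made in the text, and all figures are the rounded IOPS values quoted from the submissions, not the exact audited results:

```python
# Rounded SPC-1 results quoted above (IOPS).
v7000_240_drives = 53_000   # A00101: 240x 10K SAS internal to V7000
v7000_combined   = 56_500   # A00097: 120 internal drives + virtualized DS5020
ds5020_alone     = 26_000   # A00081: DS5020 with 80x 15K FC drives

# Assumption: IOPS scale roughly linearly with spindle count,
# so 120 internal drives contribute about half the 240-drive result.
internal_120 = v7000_240_drives / 2            # 26,500

# The remainder of the combined result is attributed to the
# externally virtualized DS5020.
ds5020_virtualized = v7000_combined - internal_120

gain = ds5020_virtualized - ds5020_alone
pct = 100 * gain / ds5020_alone

print(f"DS5020 behind V7000: {ds5020_virtualized:,.0f} IOPS")
print(f"Uplift from virtualization: {gain:,.0f} IOPS ({pct:.1f}%)")
```

Running it reproduces the numbers in the post: the virtualized DS5020 accounts for 30K IOPS, a 4K (roughly 15%) uplift over the standalone result.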
I've always said that SVC (and now V7000) can actually enhance throughput, despite what FUD others may spread. Once and for all, this proves it, with an independent benchmark, which we agreed above is an apples-for-apples yardstick. Do the maths in your own way, but you'll see that it all stacks up. Anyone looking to better utilise an asset, or simply sweat it for that little bit longer, will find that both SVC and V7000 can ease the pain of migration and improve performance at the same time.