
Comments (6)

Comment 1

Storage Anarchist makes an important, but unsurprising, point: vendors throw hardware and unrealistic configurations at benchmarks like the SPC-1. As he says, most of the SPC-1 results are hopelessly short-stroked with 15k drives, implicitly telling you that good performance in these arrays requires you to buy premium-priced drives and then agree to access 1/3rd or less of their capacity.

But the unmade point is that some, though not all, of the benchmarks offer full disclosure. With a little patience and reading, Storage Anarchist and others can decide for themselves whether a given benchmark configuration has relevance in their environment. They can see, for example, what the implied capacity utilization is. With a little diligence, I found that Barry's #2 outlier might be better explained by the fact that the result achieved over 90% disk utilization and used (common) 10k drives.

My favorite full-disclosure fun, if there is such a thing, is to check the appendices of the SPC-1 results to see the relative level of expertise required to configure the arrays and hosts. Then you know what you need to know, or pay for, in order to achieve the results. Systems can vary widely in this regard, I've observed.

So, I agree partly with Storage Anarchist, and I think Barry may be using incomplete 'science' to sell everyone a Shark.
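
For readers who want to do the implied-utilization arithmetic themselves, here is a minimal sketch of the calculation from a full-disclosure report; the drive count, drive size, and ASU capacity below are made-up illustrative values, not figures from any actual SPC-1 filing.

    # Implied capacity utilization from an SPC-1 full-disclosure report:
    # tested (ASU) capacity divided by total physical capacity.
    # A low ratio indicates a heavily short-stroked configuration.
    drive_count = 384           # hypothetical number of drives in the tested config
    drive_capacity_gb = 146.0   # hypothetical formatted capacity per drive (GB)
    asu_capacity_gb = 18_000.0  # hypothetical Application Storage Unit capacity (GB)

    physical_capacity_gb = drive_count * drive_capacity_gb
    utilization = asu_capacity_gb / physical_capacity_gb

    print(f"Physical capacity: {physical_capacity_gb:,.0f} GB")
    print(f"Implied capacity utilization: {utilization:.0%}")  # ~32% for these numbers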

Comment 2

Hi Barry,

Thanks for the gracious welcome to the blogosphere. I have followed many of your discussions on your blog, and enjoy them immensely. I posted a response to your post on my blog at http://dotConnector.typepad.com. Looking forward to discussing it further.

Cheers, K.

Comment 3

Chris,

Fear has nothing to do with it. Hopefully it is clear now that there is a direct proportionality between spindle count and the benchmark results, and that the correlation is linear to very high accuracy.

If EMC also decided to participate, all we would prove is that we are on par with the SVC and HDS platforms, which would sell the DMX short of its true capabilities and mislead our customers into thinking that there are no differences between these architectures.

That is just not true. We prefer to work with our customers on a one-on-one basis and model specific real workloads to establish that. The DMX is not in the same league as the rest of the offerings in the industry. We are more than happy to prove it to individual customers - in the form of performance guarantees and bakeoffs, if necessary - but subjecting ourselves to a benchmark that cannot tell the difference between platforms is a disservice to our intellectual capital and our customers.

EMC has always had critics, and silencing them by agreeing with them is not a very fruitful way to answer them.

When a benchmark can discriminate between sophisticated systems, I am sure we will be having a different conversation.

Thanks, Cheers, Kartik.
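
As an illustration of the spindle-count correlation Kartik describes, a few lines of Python can fit a straight line to (spindle count, reported IOPS) pairs from published results; the data points below are invented placeholders, not actual SPC-1 submissions.

    # Fit reported SPC-1 IOPS against spindle count and report the correlation.
    # If the correlation coefficient is very close to 1, the benchmark result is
    # effectively a measurement of spindle count rather than controller design.
    import numpy as np

    spindles = np.array([500, 800, 1024, 1152, 1600])    # hypothetical drive counts
    iops = np.array([98e3, 155e3, 200e3, 228e3, 312e3])  # hypothetical SPC-1 IOPS

    slope, intercept = np.polyfit(spindles, iops, 1)
    r = np.corrcoef(spindles, iops)[0, 1]

    print(f"~{slope:.0f} IOPS per spindle, correlation r = {r:.4f}")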

Comment 4

Hi Kartik,

When you say "Hopefully it is clear now that there is a direct proportionality between spindle count and the benchmark results, and that the correlation is linear to very high accuracy," I feel I must point out that this is not always true. Some storage devices cannot saturate their disks beyond a certain number. The only way to find out which devices have this limitation, and where the performance levels off, is with benchmarks.

Yes, SPC-1 is not perfect, but at least it is cache-hostile, so it lets readers see how the device they're considering will perform when it has to do random reads.
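
As a rough sketch of the "where does it level off" question, the snippet below compares per-spindle throughput as drives are added and flags the point where extra spindles stop buying proportional IOPS; the measurements are hypothetical, purely to show the shape of the analysis.

    # Flag the spindle count at which per-drive throughput drops noticeably,
    # i.e. where the controller, rather than the disks, becomes the bottleneck.
    measurements = [            # hypothetical (spindle count, measured IOPS) pairs
        (256, 51_000),
        (512, 101_000),
        (768, 149_000),
        (1024, 168_000),
        (1280, 172_000),
    ]

    baseline = measurements[0][1] / measurements[0][0]   # IOPS per spindle at the low end
    for spindles, iops in measurements:
        per_spindle = iops / spindles
        status = "scaling" if per_spindle >= 0.9 * baseline else "controller-bound"
        print(f"{spindles:5d} spindles: {per_spindle:6.1f} IOPS/spindle ({status})")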

Comment 5

Hi OSSG,

You know I have never disagreed with you on this point.

Yes - if we test to the point where scaling fails, there is useful information we can extract from the benchmark. But in the regime that has been tested, the 500-1600 drive range, and given that at least the USP V has shown no signs of breaking scaling at full capacity, I have no reason to believe that any of the modern systems (with large cache, large cache bandwidth, fast processors, optimized microcode, fast front-ends) will choke before the spindles do.

On one hand, one might argue that we should "prove" that the DMX with 1600+ drive capacities does not violate scaling. On the other hand, this would also "dumb" the DMX down and send the message that architecture does not matter.

If I had to choose between critics and customers, I'd weather any amount of criticism before I misled my customers. Having worked with Symmetrix family performance for 8 years, I know architecture DOES matter. I know I am asking you to take my word for it - and I understand if you decline to. But if you were my customer, I believe I could establish the veracity of my claim in much more detail.

So, with all due respect, while your point is theoretically correct, I don't think we should participate in the SPC.

Thanks, Cheers, Kartik.

Comment 6

I have a comment, but I'm going to hold it in until I see your next blog post, Kartik :)