
Comments (30)

1 localhost commented (Trackback)

> Who's going to be the first to 'sponsor' a DMX SPC-1 test

I thought it was going to be IBM. We knew it was going to happen eventually, hell we talked about it last year when this first kicked off, and indeed I thought IBM would be the first to submit SPC numbers for EMC gear.

I'VE LOST A BET BY BACKING IBM ONCE AGAIN! ;-p

2 localhost commented (Trackback)

> I'VE LOST A BET BY BACKING IBM ONCE AGAIN! ;-p

Though now I know what it's like to be a DS8000 owner..

Boom! Boom! (Basil Brush)

3 localhost commented (Trackback)

Despite your whole-hearted attempt to guess the most likely "excuse", you missed two:

#6. NetApp boxes perform quite well when utilizing less than 10% of the total capacity. But that performance does not scale with utilization - run the same benchmark at 70+% capacity utilization on a NetApp box, and everyone knows that response time goes to heck in a handbasket.

The benchmark is thus totally useless as a predictor of "real world" performance, much less as a tool to compare different systems.

And then there's #7: All of the Above.

In most cases, I'm mildly bemused by the absurdity of using the SPC-1 to compare how different systems will actually perform outside of the fictitious world of Second Life.

But with NetApp, I'll sleep well tonight, peaceful in the knowledge that anyone who uses their results to justify purchase of NetApp gear will be inextricably caught in their Venus Flytrap of degraded performance as utilization increases, inflated upgrade costs (hardware and software), and software licensing lock-ins that eliminate any possible resale value once they figure out how badly they've been had.

Bad Mojo.

But as the quote falsely attributed to P.T. Barnum says, "There's a sucker born every minute...and two to take 'em."

Ultimately, folly such as the SPC is self-correcting - those that live by the benchmark will ultimately suffer their demise by the benchmark.

In the meantime, I can't help but notice that overall storage market share seems inversely proportionate to the number of posted SPC results. Especially after EMC's double-digit growth in every reported aspect of its business last quarter and last year - no SPCs whatsoever, and yet still #1 in every market segment they compete in.

Go Figure...

4 localhost commented (Permalink)

I'm no marketeer, and I've long since known that statistics can tell you anything you like...

"still #1 in every market segment they compete in."

If you exclude Storage Virtualization surely...

5 localhost commented (Trackback)

Alas, dear namesake, don't flatter yourself in your quest for relevance. "Storage Virtualization" is neither a definable market nor a market segment of another market. It is merely a technology - just like "memory virtualization" and "CPU virtualization."

The reality is that people don't shop for "storage virtualization"; they buy "storage" of different flavors: disk, tape, WORM, RAM, Flash, etc. This "storage" comes wrapped in various sized packages offering different features and capabilities to support a range of storage requirements. Virtualizing the storage is one such feature, as are replication, mirrors, RAID, snapshots, encryption, de-dupe, MAID, ad infinitum.

Fact is (and you can't deny it), every external RAID storage device implements "storage virtualization" to various degrees - by definition! Yes, even Symmetrix and CLARiiON provide "storage virtualization." So, since each of EMC's storage platforms is #1 in market share in its respective market segment, and since EMC has products in virtually every named market segment except the (contracting) tape market and the (emerging) cloud storage segment, I'll contend that yes, indeed, if you want to measure which vendor has collected the most revenue for products that can virtualize storage, then EMC is #1 there too.

You should probably just stick with benchmarketing.

6 localhost commented (Trackback)

Barry -

How 'bout we call HP and Hitachi and all chip in for a DMX? ;-)

In all honesty, I'd love to see DMX in there too...

7 localhost commented (Permalink)

BarryB, statistics, semantics... way back I explained that while you can claim some degree of virtualization even with RAID, that's not what I'm talking about.

You know as well as I do that I was referring to Invista.

Anyway, Taylor, yeah, we have one somewhere... It's one of the things with SVC: we get to play with competitors' products and truly see what they do when we plug them behind SVC. Which is why I almost fell off my chair reading Chuck's reply about the CX3 really flying... We have to work out how much sustained concurrent I/O each vendor's product can handle coming out of SVC so we can set optimal queue depths (one of the reasons we have to qualify controllers), and well, I'm not going to say any more.

As for your comment on Chuck's blog: "Maybe there is something specific to SPC that EMC doesn't like...I truly don't know."

I've heard so many excuses, but I think it's because it tries to simulate a real workload where the re-referenced data set does not always come from the same blocks - i.e. it makes the cache algorithms work for their money. The EMC bloggers out there have commented in the past that they see this as cache negation. It may well be, but what you are left with is how well the actual drive subsystems / loops etc. can actually sustain an I/O rate. IMHO this can only mean that EMC have tried SPC and don't get that great a result - otherwise they would be submitting, even if it was only to get some comments for their marketing pages, as they seem to think that's what it's for.

Seriously though, I can't believe that they haven't got hold of the test suite and tried it. Can you? And if they have and the results were good, we wouldn't all be having this discussion, would we?!
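
The queue-depth tuning described here is, at its core, Little's Law: outstanding I/Os = throughput x latency. A minimal sketch under assumed, illustrative numbers (none of these are actual SVC qualification values):

```python
# Little's Law sketch: the queue depth needed to keep a back-end
# controller busy is (target IOPS) x (per-I/O service time).
# All figures are illustrative, not actual SVC qualification values.

def required_queue_depth(target_iops: float, service_time_ms: float) -> int:
    """Outstanding I/Os = throughput * latency (Little's Law)."""
    return max(1, round(target_iops * service_time_ms / 1000.0))

# A controller sustaining 20,000 IOPS at 5 ms per I/O needs roughly
# 100 I/Os in flight; a deeper queue just adds latency, a shallower
# one leaves the back end idle.
print(required_queue_depth(20_000, 5.0))  # -> 100
```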

8 localhost commented (Trackback)

I can tell you first hand and beyond any doubt that EMC has not acquired, configured or tested the SPC benchmarks on any of their storage platforms (or anyone else's, for that matter). Nor have they paid someone else to do it for them so that they can learn about it. Never had it, never paid for it, never got it, never run it.

Don't need to in order to know that it doesn't represent the workloads EMC's systems typically are asked to support.

Especially knowing first-hand that the SPC's founders made it clear from the start that the specific intent was to define benchmarks that were cache-hostile, specifically so as to disadvantage Symmetrix. By design intent, not accident (learned a lot of things in the acquisition of a prior "competitor", don't you know).

Would you be surprised to know that the original SPC founders also designed what eventually became the TPCs, back when they worked at DEC? And that the TPCs were specifically designed to demonstrate that a "cheap departmental computer" could beat the IBM mainframe at OLTP applications?

A couple of smart ones, those guys.

So, do you think it's any accident that ALL of the SPC workloads focus on sequential I/O and/or random I/O patterns that are totally inconsistent with any production database engine in use today?

By specific design, these tests measure the efficiency of serving I/O off of disk, and (by design) attempt to ensure that cache is not only unable to help, but is actually an expensive overhead that slows things down.

Yet, out there in the real world, Symmetrix systems run with uncannily high cache hit rates, apparently "guessing" which blocks will be needed next without even a hint from the host applications, delivering I/O response times unattainable with simple raw disk I/O (at least until Flash Drives came along...but that's "Post SPC" talk, so we'll skip that, for now).

BarryW - call up Moshe, and ask him to explain it to you...I'm sure he'll be happy to share what he knows about the magic in those cache and prefetch algorithms that were designed and field-proven on his watch (and tell him I said Hi, while you're at it).

And just FYI - when you measure raw, uncached I/O, you're missing several dimensions of the picture. Consistently delivering 1ms write response times on disk drives that average 6-12ms at best is a tremendous competitive advantage vs. uncached storage...SVC's added overhead may be only 65 microseconds, but even you have to admit 1ms is remarkable for a hard disk subsystem.

Fact is that SPC-1, SPC-2 and SPC-3 were (and are) designed to demonstrate how well a system operates under synthetically "pure" conditions on sparsely utilized disks and WITHOUT THE BENEFITS OF LARGE CACHE - conditions that bear little relationship to the chaotic workloads today's enterprise arrays handle.

But having the shortest code path to underutilized disk doesn't necessarily mean that that system will perform best under chaotic workloads on near-full drives. If nothing else, as capacity utilization increases, so does access density, but each spindle can still deliver only a fixed number of IOPS. It takes foresight and creativity to mitigate the physical limitations of a disk drive, and the SPC benchmarks actually STOP MEASURING when the disk drives get saturated.

Not surprisingly, out in the real world, it is in fact where the SPC STOPS taking measurements that Symmetrix STARTS to show its differentiated value. "Predictable performance under unpredictable workloads," as it were.

You can be sure that the SPC founders and designers have intentionally cut the benchmark off where they did with calculated forethought and design.

This is why, after 10 years of existence, the Storage Performance Council still has not developed a benchmark that models the random I/O workloads of typical database engines and applications, much less the chaotic workloads of 1000s of servers converging upon a single array with seemingly random I/O requests...

They know this is where Symmetrix remains unchallenged.

And I can assure you it's not because the SPC members don't understand databases - in fact, it's where they STARTED. That they still don't offer an OLTP benchmark can only mean that they know this is where Symmetrix will (still) beat all comers.

Just ask Moshe.
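
The access-density argument above reduces to simple queueing arithmetic. A rough sketch using the textbook M/M/1 approximation (every figure here is assumed for illustration, not taken from any vendor's spec):

```python
# Rough sketch of the access-density argument: a spindle delivers a
# fixed number of IOPS, so filling more of its capacity at a constant
# I/O density (IOPS per GB stored) raises utilization -- and queueing
# delay climbs steeply. Textbook M/M/1 approximation; all figures
# below are assumed for illustration only.

SPINDLE_IOPS = 180                   # assumed 15k RPM drive throughput
CAPACITY_GB = 300                    # assumed per-spindle capacity
ACCESS_DENSITY = 0.5                 # assumed application IOPS per GB
SERVICE_MS = 1000.0 / SPINDLE_IOPS   # ~5.6 ms per I/O at the platter

for used_fraction in (0.10, 0.50, 0.70, 0.90):
    offered_iops = CAPACITY_GB * used_fraction * ACCESS_DENSITY
    utilization = offered_iops / SPINDLE_IOPS
    # M/M/1: response time = service time / (1 - utilization)
    response_ms = SERVICE_MS / (1.0 - utilization)
    print(f"{used_fraction:4.0%} full: {offered_iops:5.1f} IOPS offered, "
          f"util {utilization:4.0%}, response ~{response_ms:4.1f} ms")
```

Under these assumptions, a drive at 10% full idles near its raw ~5.6 ms service time, while at 90% full the same per-GB access rate roughly quadruples the response time - approximately the regime the comment argues the benchmark stops measuring.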

9 localhost commented (Trackback)

Disclosure: I'm a NetApp employee.

StorageAnarchist, have you read the benchmark? You said:

> Despite your whole-hearted attempt to guess the most likely "excuse", you missed two:
>
> #6. NetApp boxes perform quite well when utilizing less than 10% of the total capacity. But that performance does not scale with utilization - run the same benchmark at 70+% capacity utilization on a NetApp box, and everyone knows that response time goes to heck in a handbasket.

As an example, the NetApp test with snapshots ran 12.5TB of data out of 20TB RAW, and 12.5TB out of 15.5TB USABLE. Pick either figure; it's way higher than your figure of 10%. In fact, for raw it's nearly the 70% figure you quote; more than 70% if based on usable.

I'm guessing that you're whole-heartedly guessing too. What's next? Attacking Walter Baker of SPC directly? Oh wait, Chuck's already done that...
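
For the record, the utilization arithmetic works out as follows, using only the figures quoted in the comment above:

```python
# Utilization check using only the figures quoted above:
# 12.5 TB of data on 20 TB raw / 15.5 TB usable capacity.
data_tb, raw_tb, usable_tb = 12.5, 20.0, 15.5
print(f"raw utilization:    {data_tb / raw_tb:.1%}")     # -> 62.5%
print(f"usable utilization: {data_tb / usable_tb:.1%}")  # -> 80.6%
```

62.5% of raw is indeed "nearly 70%", and 80.6% of usable clears it comfortably.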

10 localhost commented (Trackback)

I have not yet read the benchmark, but even without reading the methodology, I'm fairly confident that if NetApp had any opportunity to configure the devices to show theirs in a positive light, they probably took it.

That said, the perfect rebuttal would be another benchmark from EMC, configuring their box the way they want to see it. If they don't like the workload from SPC-1, all they need to do is use another one and make sure that anyone else who cares to verify their work has access to the same workload data.

11 localhost commented (Trackback)

Gear tests are like NASCAR, showing only how fast you can go to the left.

I chatted with the NetApp guys on Friday, and they are not purporting to show folks what mileage they will actually get from either box. (My position in the matter is "a plague on both your houses.")

I just like how Chuck Hollis disses SPC by saying that it is universally panned. Clicking the link on his blog that purportedly demonstrates the universal panning of the testing spec takes you to another entry...by Chuck Hollis. Good to know that EMC is the honest broker in all of this.

12 commented (Trackback)

[Trackback] Can't wait for the excuses (Barry Whyte - An exchange and discussion of Storage Virtualization)
