
Comments (5)

Comment 1

I can suggest a few reasons why you might not want to relegate your SATA storage to some cheap external array:

1) Availability. The vast majority of SATA drives today are not designed for the data center use case, and many models will exhibit failure rates far greater than their Fibre Channel or SAS "enterprise drive" equivalents. To mitigate this, your storage controllers need to be designed to provide data integrity checks and to frequently verify that all the bits on the drives stay written as they were originally written. As the good folks at CERN found out, many low-cost SATA arrays don't provide anything other than simple RAID 5, without actual ECC to verify the data; few will even check to make sure that the data still matches the RAID 5 parity (a rough sketch of this kind of check follows after this comment).

(Shameless plug: all EMC arrays provide data integrity protection, validation and correction for SATA drives; on Symm and CLARiiON, SATA drive reliability is roughly equivalent to that of 10K rpm FC drives. This is not necessarily the case on any other storage platform.)

2) Reliable performance (and this one isn't the one BarryW often considers). Look no further than the viral video of the Fishworks JBOD to see a perfect example of what I mean: an inexpensive SATA array demonstrates a 100+x magnification of response times when someone YELLS at a drive! If the SATA array hasn't been designed to protect drives from rotational vibration interference (caused by the spinning rotors and voice-coil head actuators), and to isolate drives from each other and from the rest of the frame (so bumps don't disturb the heads), then the Fishworks demo could be a BEST-CASE experience; worst case, your performance could suffer so much that data is corrupted (possibly in a manner that is undetectable - see #1 above).

Of course, the added latency caused by regenerating a cache-miss I/O through the SVC or USP-V is also significant, but BarryW always gets defensive when I mention that ;).

3) Slot power, cooling and cost. Despite BarryW's continuous assertions to the contrary, it is frequently less expensive to add SATA drives to an existing array (that supports them) than to add a separate dedicated SATA array. And yes, Barry, I mean including ALL costs - cache, slots, floor tiles, software, etc. - and I mean both acquisition cost and TCO. Pricing of SATA drives for Symm & CLARiiON has changed significantly since the last time you looked (as has flash drive pricing, by the way), making it even more economical to tier within a single array than it might be to add even "cheap" SATA-only arrays.

4) Scale. SVC is severely limited in the number of actual LUNs it can manage, and even more limited in its ability to migrate LUNs across the separate 2-node pairs. In fact, it would take at least 4 and as many as 8 separate SVC domains, each with 4 node pairs, just to virtualize all the LUNs a single DMX4 can export - and even then, you could not non-disruptively relocate the presentation of any LUN to any SVC port: you are limited to relocating ONLY within each node pair. For customers whose needs scale beyond the paltry 2000 LUNs per node that SVC supports, an in-the-box approach is not only more cost-effective, it is also simpler and easier to manage, with any-to-any migration within the array.

There are more practical reasons for putting SATA in-the-box instead of in a separate array, but I'll stop there for now.
I do think considerations like these are really what Chuck is trying to express.
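
As a rough illustration of the kind of check point 1 above describes - a per-block checksum to catch silent corruption, plus a background scrub that re-verifies RAID 5 parity - here is a minimal sketch in Python. The block size, names and layout are illustrative only, not any vendor's actual implementation:

    # Minimal sketch (illustrative only) of two integrity checks:
    # a per-block checksum to catch silent corruption, and a scrub pass
    # that confirms the data still matches its RAID-5 parity.
    import hashlib
    from typing import List

    BLOCK_SIZE = 16  # toy block size in bytes

    def checksum(block: bytes) -> bytes:
        """Per-block checksum stored alongside the data (stand-in for real ECC/DIF)."""
        return hashlib.sha256(block).digest()[:8]

    def raid5_parity(stripe: List[bytes]) -> bytes:
        """XOR parity across the data blocks of one stripe."""
        parity = bytearray(BLOCK_SIZE)
        for block in stripe:
            for i, b in enumerate(block):
                parity[i] ^= b
        return bytes(parity)

    def scrub_stripe(stripe: List[bytes], checksums: List[bytes], parity: bytes) -> List[str]:
        """Background scrub: verify every block's checksum and the stripe parity."""
        problems = []
        for idx, (block, stored_sum) in enumerate(zip(stripe, checksums)):
            if checksum(block) != stored_sum:
                problems.append(f"block {idx}: checksum mismatch (silent corruption)")
        if raid5_parity(stripe) != parity:
            problems.append("stripe: data no longer matches RAID-5 parity")
        return problems

    if __name__ == "__main__":
        # Write path: record checksums and parity at write time.
        stripe = [bytes([d] * BLOCK_SIZE) for d in (1, 2, 3, 4)]
        sums = [checksum(b) for b in stripe]
        parity = raid5_parity(stripe)

        # Simulate a flipped bit that a plain RAID-5 read would never notice.
        corrupted = stripe.copy()
        corrupted[2] = bytes([stripe[2][0] ^ 0x01]) + stripe[2][1:]

        for problem in scrub_stripe(corrupted, sums, parity):
            print(problem)

A plain RAID 5 array typically reads parity only during rebuilds or explicit verify passes, so a flipped bit like the one above can sit undetected until the data is actually needed - which is the gap the CERN findings mentioned in the comment point at.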

Comment 2

See what I mean, more nibbling...

Thanks for the input Barry... no mention of Invista in all of that, I see.

I'll come back to your points later; off out for the night - have a good weekend, folks.

Barry

Comment 3

You don't really have to respond - I'm just laying out some factual reasons why customers might indeed want to use SATA inside their Tier 1 array. And I'm not even arguing that it's the only way to do it - it's you that's trying to make this a black OR white discussion. Heck, I know of customers who do BOTH... and they LIKE "grey"... and in my experience, many of the arguments you raise against external-only don't really apply.

As to Invista, I'll step up and say that it is alive and well, and running some pretty large enterprises and their storage environments. Thanks for asking.

But when you swing back at me, please remember - I'm the SYMM dude, so be gentle :)

Comment 4

I agree. And reading back, I can see that my original post did come across as suggesting it's the only way to do it.

That's not what I intended. I was trying to say that there are reasons why some people prefer to use an external SATA controller, or a midrange controller, for SATA. At the same time, others that already have a big enterprise box may well want to put SATA in there if they already have the box and spare drive slots, for example.

As you say, this is more of a grey issue - and I guess that's why I responded to Chuck, as his post came across as black and white as well - in the opposite direction.

Barry

Comment 5

Peace ;)