
Comments (4)

Comment 1

Zimmer,

Thanks for taking the time to share your thoughts.

1. SVC node failure. So yes, the caching pairs in SVC mean two nodes service each vdisk. A node failure doesn't actually cut throughput in half: because the surviving node goes into write-through mode it has less cache work to do, and so can handle more I/O than it did before.

2. Single set of storage tiers. You are getting het up about vdisks and I/O groups. I'm talking about 'storage pools', i.e. managed disks and managed disk groups. These are accessible to all nodes in the cluster, so any node can own a vdisk created from that 'single set of storage tiers' - which to me sounds very much like a cluster-accessible resource. It's not an I/O group island; the entire cluster can access the backend disk storage. You can create a vdisk in IOG 0 from pool Z and map it to your host, then create another vdisk from pool Z in IOG 1 and map it to the same host. All nodes (and all hosts) therefore have access to the same single set of storage. The vdisks are virtual, remember; the mdisks are real.

3. If you do need to move a vdisk between IOGs then we do have to quiesce the I/O to ensure the cache flushes out to disk. But unless the IOG is seriously overloaded (and you've not been monitoring its usage for the last n months), why would you need to move it?

4. I think you missed my point here entirely. When we talk about migration, failover etc., it's the backend disks that are key, because we don't 'own' any storage capacity - that's the critical thing. The point of the cluster is the storage pools: the ability to have all 8 nodes access the same set of storage. You can non-disruptively move data from one set of mdisks to another, anywhere in the cluster. A vdisk being owned by two of those nodes isn't really an issue, and is the same as all other controllers. The ability for any node to use any storage (tier), and for those tiers to be managed under a single entity (the cluster), is what's different from all other controller models.

5. Comparing USP with a single IOG isn't realistic either (don't believe your 'how to sell against SVC' FUD PDF). SVC uses the same set of storage at the backend and so provides 8 nodes of processing power to one set of disks. I'd point you to the SPC results, where on SPC-1 the USP sustains less than 73% of the throughput of SVC. In my book that doesn't match the scaling numbers you quote.

Barry
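To make point 2 concrete, here is a minimal Python sketch of the relationship described above (a toy model, not SVC code; the names Pool_Z, vdisk_A, vdisk_B and host1 are illustrative): vdisks owned by different I/O groups can be carved from the same cluster-wide pool and mapped to the same host.

    # Toy model: storage pools are cluster-wide, vdisk ownership is per I/O group.
    from dataclasses import dataclass

    @dataclass
    class StoragePool:          # a managed disk group, visible to every node
        name: str

    @dataclass
    class Vdisk:
        name: str
        pool: StoragePool       # capacity comes from the cluster-wide pool
        io_group: int           # cache/ownership sits with one node pair

    pool_z = StoragePool("Pool_Z")

    # Two vdisks from the same pool, owned by different I/O groups,
    # both mapped to the same host; the pool itself is not an "island".
    vdisk_a = Vdisk("vdisk_A", pool=pool_z, io_group=0)
    vdisk_b = Vdisk("vdisk_B", pool=pool_z, io_group=1)
    host_map = {"host1": [vdisk_a, vdisk_b]}

    assert vdisk_a.pool is vdisk_b.pool          # same backend storage tier
    assert vdisk_a.io_group != vdisk_b.io_group  # different caching node pairs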

Comment 2

I guess we all expected something much bigger from HDS yesterday - an HA solution using mirroring between USPs... Not exactly the kind of cluster I was describing above.

You have of course been able to do this with SVC for almost a year with split-cluster configs, which give you the ability to split the node pairs across two power domains and mirror a single vdisk across storage in the two domains using Vdisk Mirroring. Sounds like it's worth a post in its own right.
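A minimal sketch of the split-cluster idea described above, assuming two power domains called domain_a and domain_b (illustrative names, not SVC syntax): a single vdisk keeps one copy in each domain, so losing a domain still leaves one usable copy.

    # Illustrative sketch of a mirrored vdisk in a split-cluster layout.
    # "domain_a" / "domain_b" stand in for the two power domains.
    mirrored_vdisk = {
        "name": "vdisk_M",
        "copies": [
            {"pool": "Pool_DomainA", "site": "domain_a"},
            {"pool": "Pool_DomainB", "site": "domain_b"},
        ],
    }

    def surviving_copies(vdisk, failed_site):
        """Copies still available if one power domain is lost."""
        return [c for c in vdisk["copies"] if c["site"] != failed_site]

    # Lose either domain and one copy of the vdisk remains.
    assert len(surviving_copies(mirrored_vdisk, "domain_a")) == 1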

Comment 3

Barry, the point is that the USP V will be the first enterprise storage solution to offer a clustering capability for the Open Systems environment - similar to what is currently available for the Mainframe environment with IBM and HDS' HyperSwap and EMC's AutoSwap - not the first midrange storage virtualization platform to offer an advanced clustering capability. Please read the press release.

Also, Barry, I don't think anyone is questioning the viability of SVC's clustering capabilities from an 8-way node state (cluster and agent) and configuration perspective. You are correct: the SVC platform was designed around these cluster-based attributes.

You are also correct to state, and I quote, "An n-way caching model has much more overheads on the node to node links to ensure that consistency is maintained across all nodes".

However, it is the 2-way node cluster employed by SVC (the SVC I/O Group) that limits SVC's scalability and introduces "islands" of virtualization. Case in point: should an SVC vdisk or SVC node go offline, it can only be dynamically and non-disruptively failed over to its partnered node, not to any other node in the cluster, cutting an I/O Group's throughput capability in half - sounds like a 2-way controller to me!

Furthermore, you say, and I quote, "But the concept of SVC was to provide a single management point, with a generic access to a single set of storage pools (tiers)." A single set of storage pools sounds contradictory to me; the reality is that each I/O Group (2 SVC nodes) owns its own pool (island) of storage. By definition (from your SVC configuration guide), if a vdisk from one SVC I/O Group is to be moved to another SVC I/O Group during a manual workload-balancing exercise, it is disruptive to the application - the application must be quiesced - do a search on "quiesce", 5th one down I believe. Limits the scope of a heterogeneous tiered storage implementation, wouldn't you say?

So to recap: SVC does provide clustering capabilities for state and configuration, but must rely on good old 2-way controller techniques for LUN (vdisk) ownership, dynamic failover and non-disruptive migration, limiting the SVC platform's scalability to the SVC I/O Group level. True, a USP V, USP VM or DMX can also be considered islands; however, in the case of the USP V it is an island that offers 30x more scalability (on average) compared to an SVC I/O Group, and the USP VM offers 8x the scalability.
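As a toy illustration of the failover limitation argued above (a sketch assuming a 4-node, 2-I/O-group cluster; node names are made up and this is not complete SVC logic): a vdisk cached by one node pair can only fail over to the partner node in that pair, never to a node in another I/O group.

    # Toy model: failover stays inside the vdisk's I/O group.
    io_groups = {
        0: ["node1", "node2"],
        1: ["node3", "node4"],
    }

    def failover_target(vdisk_io_group, failed_node):
        """Only the partner node in the same I/O group can take over."""
        pair = io_groups[vdisk_io_group]
        if failed_node not in pair:
            return None              # failure elsewhere; this vdisk is unaffected
        return next(n for n in pair if n != failed_node)

    # A vdisk in I/O group 0 can only fail over to node2, never node3 or node4.
    assert failover_target(0, "node1") == "node2"
    assert failover_target(0, "node3") is None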

Comment 4

Barry, my response to your numbered comments:

1. Throughput cut in half, maybe not, but I would imagine performance would degrade under the cache write-through policy. Any workload with any percentage of writes would most definitely suffer additional latency.

2. Understood - see point #4.

3. So we agree: changing a vdisk from IOG "A" to IOG "B" requires an application outage, but migrating a vdisk's data from one mdisk to another, or to another mdisk group, does not. I would think that with application performance and workload requirements continuously changing, changing the IOG of a vdisk might happen more often than you imply.

4. OK - but the bottom line is that applications can sometimes (and more often these days) have many associated "data" volumes, whose vdisks would more often than not be owned by the same IOG. Would not the limited resources of an IOG become constrained when many volumes needed to be migrated (a scalability issue, no?).

5. Tough to compare apples to oranges. The SVC SPC-1 benchmark used a maximum 15 ms response time and the USP V used 5 ms, so find the 5 ms intersection point on the SVC SPC-1 curve for a more apples-to-apples comparison. Not to mention that the SVC SPC-1 benchmark was carried out with 16 x DS4700 modular arrays behind the SVC cluster versus a single USP V. So I guess there is no apples-to-apples comparison here.

And finally, I could go into the merits of how the USP V leverages the same software array code base for both internal and external storage, and how this enables an "intelligent" tiered storage management solution that supports application mobility (rather than just vanilla volume mobility), but I'll save that for another day.

Harry
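A back-of-the-envelope illustration of the write-latency concern in point 1, using entirely hypothetical numbers: with both nodes up, a write is acknowledged once it is mirrored in the partner node's cache; while the surviving node runs in write-through mode, it is acknowledged only after the backend disk completes it.

    # Hypothetical latencies (ms) to illustrate point 1; real numbers depend
    # entirely on the hardware, the backend arrays and the workload.
    CACHE_MIRROR_ACK_MS = 0.3   # write acked once mirrored to the partner's cache
    BACKEND_WRITE_MS = 5.0      # write acked only after the backend disk completes

    def write_latency_ms(write_through: bool) -> float:
        return BACKEND_WRITE_MS if write_through else CACHE_MIRROR_ACK_MS

    print("both nodes up: ", write_latency_ms(False), "ms")
    print("write-through: ", write_latency_ms(True), "ms")
    # Throughput may not be halved, but every write sees backend latency
    # while an I/O group runs in write-through mode.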