
Comments (6)

1 Piw commented

That's why I wish terms like active-active and dual-active weren't used as synonyms.

Active-active should be reserved for storage systems that can serve data from the same volume through both controllers simultaneously (round-robin within a LUN), with mirrored writes destaged by the preferred node.

Dual-active would describe storage systems where a specific LUN is owned by one controller, which serves all reads and writes to it; of course the two controllers own different LUNs, so both serve data at the same time.

So it looks like Storwize systems only behave like dual-active by using ALUA to load-balance between nodes, but can in fact be active-active when MPIO calls for it? But because the preferred node is chosen based purely on the number of LUNs it already "controls", the resulting load balance may not be optimal (say node1 gets all the low-traffic LUNs). What are the reasons to go ALUA-style dual-active rather than full active-active when the storage is capable of it?
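A minimal sketch of that imbalance, assuming the preferred node is picked purely by counting the LUNs already assigned to each node (as orbist confirms below); the node names, LUN creation order, and IOPS figures are invented for illustration:

```python
# A made-up illustration: preferred-node assignment by LUN count alone
# can balance the counts while the actual IOPS end up lopsided.

def assign_preferred_node(lun_counts):
    """Pick whichever node currently 'controls' fewer LUNs."""
    return min(lun_counts, key=lun_counts.get)

lun_counts = {"node1": 0, "node2": 0}
workload = {"node1": 0, "node2": 0}              # hypothetical IOPS

# Eight LUNs created in alternating light/heavy order, so the
# count-based choice happens to send every heavy LUN to node2.
luns = [("lun%d" % i, 5000 if i % 2 else 50) for i in range(8)]

for name, iops in luns:
    node = assign_preferred_node(lun_counts)
    lun_counts[node] += 1
    workload[node] += iops

print(lun_counts)   # {'node1': 4, 'node2': 4}  -- counts look balanced
print(workload)     # {'node1': 200, 'node2': 20000}  -- load is not
```

By the count metric the two nodes look perfectly balanced, yet one node carries a hundred times the I/O of the other, which is exactly the scenario the question raises.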

2 Piw commented

Bah, I just reread my post and it is a little confusing. So let's clarify the types of storage systems:

a) Dual-active: DS4k style. A LUN is "owned" by one controller and only that controller can read/write to it; paths from hosts are active/passive.

b) Active-active with ALUA: Storwize or VNX style. A LUN has a "preferred" controller; the second controller can do "proxy" reads through the preferred node, writes are destaged by one node, and paths from hosts are preferred active/active (ALUA).

c) Symmetric (true) active-active: most monolithic storage systems. LUNs have no owner or preferred controller (as seen from outside), both controllers serve reads directly, and paths from hosts are active/active with round robin.

So I guess the question was: is the SWv3700 the third type?
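A rough sketch of how a host-side multipath driver would treat paths under each of the three models above; this is a hypothetical illustration of the taxonomy, not any vendor's actual MPIO code (the path names and the `usable_paths` helper are made up):

```python
# Hypothetical sketch of the a/b/c path models above. Two controllers,
# two ports each; "owner" doubles as the ALUA preferred controller.
PATHS = ["ctlA-p0", "ctlA-p1", "ctlB-p0", "ctlB-p1"]

def usable_paths(model, owner="ctlA"):
    """Paths a host would actually send I/O down for one LUN."""
    if model == "dual-active":
        # a) only the owning controller's paths; the rest are passive
        return [p for p in PATHS if p.startswith(owner)]
    if model == "alua":
        # b) every path is active, but the preferred controller's paths
        # report "optimized"; the others are kept as fallback and only
        # used if the optimized group disappears.
        optimized = [p for p in PATHS if p.startswith(owner)]
        non_optimized = [p for p in PATHS if not p.startswith(owner)]
        return optimized if optimized else non_optimized
    if model == "symmetric":
        # c) no owner: every path is equal, round-robin across all four
        return PATHS
    raise ValueError("unknown model: " + model)

for model in ("dual-active", "alua", "symmetric"):
    print(model, "->", usable_paths(model))
```

In the steady state a) and b) both send I/O down the same two paths; the difference shows on failure, where ALUA can fail over to the still-active non-optimized paths without a LUN ownership transfer, while a dual-active array needs ownership to move to the surviving controller.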

3 orbist commented

Hi, all of the SVC and Storwize family run the same code, so all are active/active ALUA.

The preference is historical, and is just an attempt at load balancing and a better chance of read cache hits if the reads all go to the same node (since the read cache isn't mirrored).

As you say, the load balancing is a bit arbitrary, given that it's done via a count rather than actual workload.

4 Shaun@AU commented

When using V3700 iSCSI with Windows 2008 native MPIO, optimal and non-optimal paths get detected. Is the non-optimal path the non-preferred controller? Is this behavior correct?

5 Aleksandr_Moscow commented

Performance degraded with a V3700 connected to the SAN by iSCSI. Configuration: 3 enclosures (controller + 2 expansions) with 36 disks, 3 TB 7.2K 6 Gb SAS. High CPU usage (>50%), high latency (50 ms) ...

6 jasonistre commented

Without ALUA being available or supported for VMware for SVC, V7000, and V3700, I understand the read cache miss issue with round robin, but doesn't that also mean that the read cache for a LUN is being built on both nodes, thus consuming more space, some of it duplicated and some not? If the entire environment is round robin without ALUA, isn't up to 50% of the potential read cache space lost because a read cache is built for every LUN on both nodes? So not only are you getting misses when the data is in cache on the other node, you are also building a second read cache for the same LUN and consuming more space?
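A toy simulation of that effect, assuming two unmirrored per-node LRU read caches and a host that either sends all reads for a LUN to one node or alternates round-robin; the `LRUCache` class, cache size, and access pattern are made-up illustrations, not the actual Storwize cache design:

```python
from collections import OrderedDict

class LRUCache:
    """Tiny LRU stand-in for one node's (unmirrored) read cache."""
    def __init__(self, slots):
        self.slots, self.data = slots, OrderedDict()

    def read(self, block):
        hit = block in self.data
        if hit:
            self.data.move_to_end(block)
        else:
            self.data[block] = True              # stage the block on a miss
            if len(self.data) > self.slots:
                self.data.popitem(last=False)    # evict least-recently-used
        return hit

def run(policy, reads, slots=64):
    nodes = [LRUCache(slots), LRUCache(slots)]
    hits = 0
    for i, block in enumerate(reads):
        # round-robin alternates nodes; "preferred" pins the LUN to node 0
        node = nodes[i % 2] if policy == "round-robin" else nodes[0]
        hits += node.read(block)
    duplicates = len(nodes[0].data.keys() & nodes[1].data.keys())
    return hits, duplicates

# One LUN with a 49-block working set, re-read five times (245 reads).
# An odd-sized set, so strict alternation moves blocks between nodes.
reads = [b for _ in range(5) for b in range(49)]
for policy in ("preferred-node", "round-robin"):
    hits, duplicates = run(policy, reads)
    print(policy, "hits:", hits, "blocks cached on both nodes:", duplicates)
```

With these made-up numbers the preferred-node policy scores 196 hits with zero overlap between the caches, while round robin scores 147 hits and ends with the entire 49-block working set cached on both nodes: fewer hits, plus the duplicated read-cache space the comment asks about.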