
Comments (7)

1 Piw commented

That's why I wish terms like active-active and dual-active weren't used as synonyms.

Active-active should be reserved for storage that can serve data from the same volume through both controllers simultaneously (round-robin within a LUN), with destaging of mirrored writes from the preferred node.

Dual-active would describe storage that holds ownership of a specific LUN on one controller and serves reads/writes only from it - of course both controllers own different LUNs, so both serve data at the same time.

So it looks like Storwize storage only behaves like dual-active by incorporating ALUA to load balance between nodes, but in fact can be active-active when MPIO calls for it? But because the preferred node is chosen based purely on the number of LUNs it already "controls", such load balancing may not be optimal (let's say node1 gets all the low-traffic LUNs). What are the reasons to go ALUA-like dual-active and not full active-active when the storage is capable of it?

2 Piw commented

Bah, I just read my post and it is a little confusing. So let's clarify the types of storage:

a) dual-active: DS4k style - a LUN is "owned" by one controller, only that controller can read/write to it, and paths from hosts are active/passive.

b) active-active with ALUA: Storwize or VNX style - a LUN has a "preferred" controller, the second controller can do "proxy" reads through the preferred node, writes are destaged by one node, and paths from hosts are preferred active/active (ALUA).

c) symmetric (true) active-active: most monolithic storage. LUNs have no owner or preferred controller (as seen from outside), reads go directly through both controllers, and paths from hosts are active/active with round robin.

So I guess the question was: is the SWv3700 the 3rd type?
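The three models above differ mainly in the SCSI path states a host's multipath driver sees for one LUN. A minimal sketch of that difference (the model names and state strings here are illustrative labels, not vendor API values):

```python
# Illustrative sketch: path states a multipath driver might report for one
# LUN that is owned/preferred on controller A, under each storage model.

def path_states(model: str) -> dict:
    """Return the per-controller path state for one LUN (hypothetical labels)."""
    if model == "dual-active":              # a) DS4k style: owner-only access
        return {"ctrl_A": "active", "ctrl_B": "passive"}
    if model == "active-active-alua":       # b) Storwize/VNX style: preferred node
        return {"ctrl_A": "active-optimized", "ctrl_B": "active-non-optimized"}
    if model == "symmetric-active-active":  # c) monolithic arrays: no preference
        return {"ctrl_A": "active-optimized", "ctrl_B": "active-optimized"}
    raise ValueError(f"unknown model: {model}")

print(path_states("active-active-alua"))
# {'ctrl_A': 'active-optimized', 'ctrl_B': 'active-non-optimized'}
```

Only in model c) can the driver round-robin reads across both controllers with no penalty; in model b) the non-optimized path works but incurs proxy overhead.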

3 orbist commented

Hi, all of the SVC and Storwize family run the same code, so all are active/active ALUA.

The preference is historical, and is just an attempt at load balancing / a better chance of read cache hits if the reads all go to the same node (since read cache isn't mirrored).

As you say, the load balancing is a bit arbitrary, given that it's done via a count rather than actual workload.
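The count-based assignment described above can be sketched as follows. This is a simplified model of the idea, not SVC's actual placement code; the function and node names are illustrative. It shows how an even split by count can still be uneven by workload:

```python
# Sketch of count-based preferred-node assignment: each new volume is
# preferred to whichever node currently has fewer volumes, regardless of
# how busy those volumes actually are. Names are illustrative, not SVC API.

def assign_preferred(volumes):
    counts = {"node1": 0, "node2": 0}
    placement = {}
    for vol in volumes:
        node = min(counts, key=counts.get)  # ties go to node1
        placement[vol] = node
        counts[node] += 1
    return placement

# Four volumes end up evenly split by count (2 per node), even if, say,
# the two volumes on node1 are idle and the two on node2 carry all the IOPS.
print(assign_preferred(["vol0", "vol1", "vol2", "vol3"]))
# {'vol0': 'node1', 'vol1': 'node2', 'vol2': 'node1', 'vol3': 'node2'}
```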

4 Shaun@AU commented

When using v3700 iSCSI with Windows 2008 native MPIO, optimal and non-optimal paths get detected. Non-optimal being the non-preferred controller? Is this behavior correct?

5 Aleksandr_Moscow commented

Performance degraded with a v3700 connected to the SAN by iSCSI. Configuration: 3 enclosures (controller + 2 expansions) with 36 disks, 3TB 7.2K 6Gb SAS. High CPU usage (>50%), high latency (50 ms) ... I

6 jasonistre commented

Without ALUA being available or supported in VMware for SVC, V7000, and V3700, I understand the read cache miss issue with RR, but doesn't that also mean that read cache for a LUN is being built on both nodes, thus consuming more space, some duplicate, some not? If the entire environment is RR without ALUA, then isn't 50% of potential read cache space lost due to a read cache being built for all LUNs on both nodes? So not only are you getting misses when it's in cache on the other node, but you are also creating another read cache for the same LUN and consuming more space?

7 Tomasko commented

ALUA summary for SVC, V7000 ..

1) Active-active controllers share the same memory area.

2) Active-active ALUA controllers have their own memory, which is not directly accessible from the second node.

3) If the driver is ALUA capable, then all IO requests (reads, writes) are handled by the preferred node. If all preferred paths fail, the multipath driver will redirect all IO to the non-preferred paths and thus to the non-preferred controller. In the case where the preferred controller still works, all IO will be redirected from the non-preferred controller to the preferred controller. The controllers will not change the ownership of the LUNs if this condition lasts less than 5 minutes. After 5 minutes the non-preferred controller stops redirecting IO to the preferred controller and takes ownership of the LUNs.

4) If the driver is not ALUA capable (round-robin policy between active-"passive" paths), then when an IO request is sent to the non-preferred controller, that controller has to transfer the request (proxy) to the managing controller for the LUN by copying the I/O request into the managing controller's cache. The preferred controller issues the I/O request to the LUN and caches the response data. The managing controller then transfers this response data to the non-preferred controller's cache so that the response data can be returned to the host through the controller/host ports to which the host initially sent the request. This is why a proxy read (a read through the non-preferred controller) has additional processing overhead. It is important to note that write requests are not affected by proxy processing overhead, because write requests are automatically mirrored to both controllers' caches.

5) Of course, with an ALUA capable driver you can use a load-balancing policy between the active paths.
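The read routing in points 3) and 4) can be sketched roughly as follows. This is a simplified model counting inter-controller transfers, under the assumption that a proxy read costs two cache-to-cache copies (request in, response data back) as described above; node names and the function are illustrative, not SVC internals:

```python
# Simplified model of read routing from points 3) and 4) above:
# count the extra inter-controller cache transfers one read incurs.

def proxy_transfers(sent_to: str, preferred: str, alua_driver: bool) -> int:
    """Extra cache-to-cache copies for one read (illustrative model)."""
    if alua_driver:
        # An ALUA-aware driver sends reads to the preferred controller,
        # falling back to non-preferred paths only if all preferred fail.
        return 0
    if sent_to == preferred:
        return 0
    # Proxy read: the request is copied to the managing controller's cache,
    # and the response data is copied back to the non-preferred controller.
    return 2

print(proxy_transfers("nodeB", preferred="nodeA", alua_driver=False))
# 2  <- the proxy-read overhead; reads landing on the preferred node cost 0
```

Writes never pay this penalty in either case, since write data is mirrored to both controllers' caches regardless of which controller received it.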