It's a given today that SAN implementations have redundancy by design.
When we sell SAN switches we always urge clients to use dual fabrics. It's accepted industry best practice.
By having hosts with dual HBAs, each host can attach to both fabrics and survive the failure of a SAN switch.
Equally when we sell SAN Volume Controller (SVC) we always sell the SVC nodes in pairs.
Each SVC node can run an SVC I/O group stand-alone, which again allows us to survive a node failure and do things like concurrent firmware updates.
So far so good.
But common practice appears to be that when installing redundant hardware, we place all of it into one rack.
In fact I routinely look in a rack and see Switch One directly mounted above Switch Two.
I see the same with SVC clusters, all the nodes often jammed into the same rack, mounted one on top of the other.
Is this a good idea?
I suspect 99.9% of the time it makes no major difference. But it's the 0.1% that can cause great pain.
Imagine going to work on a failed switch and then accidentally powering down the working one.
If each switch were in a separate rack, the likelihood of doing this would be significantly reduced.
I recently visited a major Australian bank where their Fibre Channel directors were located at opposite ends of a long row of racks, several metres apart.
That physical separation meant a localised event in one rack, such as an accidental power-down or a cabling mishap, could not affect the other.
Good design... by design. I liked it.
There is no reason you cannot install SVC nodes in the same way, each node in a separate rack.
Just don't place them too far apart: when servicing a node, you want to be able to see what the front panel of the other node is displaying.
Having said that, SVC split cluster (where we separate the nodes across rooms or between buildings) truly separates the nodes for the highest levels of availability.