SVC Split Cluster - How it Works
orbist
I thought it was worth spending a few minutes describing a feature that SVC has been supporting for just over a year now. Call it what you want: "Split Cluster", "Split I/O Group", "Hyper-swap", "Pork" (sorry, HAM). After HDS made such a wet fish splash about the wonders of clustered storage - which turned out to be just a method to mirror between two USP-V controllers, with some host software that lets you fail over between the two should one vanish (hardly my idea of a cluster) - a few of them had a bite at SVC, trying to argue that SVC isn't itself a cluster. Maybe I'm missing something; maybe the 5,000 production clusters are missing something...
Back around this time last year I was filling you all in on the two major new I/O functions that SVC added in the 4.3 software release: Space-Efficient Virtual Disks (SEV), otherwise known in the industry as thin provisioning, and Virtual Disk Mirroring (VDM) - think of LVM mirroring in the SAN. The latter is a mechanism by which a single Virtual Disk can have a copy in each of two Managed Disk Groups (storage pools, storage controllers). This feature had a few 'as if by magic' side-effects.
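To make the VDM idea concrete, here is a toy sketch in Python. This is not SVC code and the names (`VDiskCopy`, `MirroredVDisk`, the pool names) are purely illustrative; it just models "one virtual disk, one copy in each of two managed disk groups, both kept in lock step".

```python
# Toy model of Virtual Disk Mirroring (VDM): one virtual disk with
# a copy in each of two managed disk groups. Illustrative only -
# real SVC internals are nothing like this simple.

class VDiskCopy:
    def __init__(self, mdisk_group):
        self.mdisk_group = mdisk_group
        self.blocks = {}              # LBA -> data

class MirroredVDisk:
    """A single virtual disk backed by two synchronised copies."""
    def __init__(self, group_a, group_b):
        self.copies = [VDiskCopy(group_a), VDiskCopy(group_b)]

    def write(self, lba, data):
        # A host write completes only once BOTH copies are updated,
        # which is what keeps them in lock step.
        for copy in self.copies:
            copy.blocks[lba] = data

    def read(self, lba, failed_groups=()):
        # Any surviving, in-sync copy can satisfy a read.
        for copy in self.copies:
            if copy.mdisk_group not in failed_groups:
                return copy.blocks.get(lba)
        raise IOError("no surviving copy")

vd = MirroredVDisk("pool_site1", "pool_site2")
vd.write(0, b"hello")
assert vd.read(0) == b"hello"
# Lose the site-1 controller: reads carry on from the other copy.
assert vd.read(0, failed_groups={"pool_site1"}) == b"hello"
```

The point of the sketch is the write path: because completion is only reported after both copies are updated, either copy on its own is always a valid, current image of the disk.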
Another side-effect of VDM is the ability to 'split' the cluster while maintaining access for clustered servers and applications. Imagine you have two servers acting as a cluster for a given application. These two servers are in different machine rooms, on different power domains, and attached to different fabrics. You also have two storage controllers, one in each machine room. You want to mirror data between the controllers and, at the same time, keep end users in service should you lose power or SAN attachment in one of the machine rooms. Now you can, with no disruption at the time of failure. OK, so Hyper-swap has been around for some time on System z, but in open systems this usually means buying enterprise-class storage systems.
Time for a picture...
Basically, the two nodes in an I/O group are 'split' across two sites, and a copy of the Virtual Disk is stored at each site. This means you can lose either the SAN or the power at the first site. In the latter case, you need the clustering software at the application and server layer to fail over and use the server at the second site. Because the SVC VDM function keeps both copies of the storage in lock step, and the SVC cache is mirrored between both nodes, the loss of one site causes no disruption at the second.
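The cache mirroring is the piece that makes the node split safe, and it can be sketched the same toy way. Again, this is an illustration of the principle only - `Node`, `IOGroup` and the site names are made up, not SVC structures:

```python
# Toy model of write-cache mirroring across the two nodes of an
# I/O group. Illustrative only; real SVC cache code differs.

class Node:
    def __init__(self, name):
        self.name = name
        self.cache = {}               # LBA -> data awaiting destage

class IOGroup:
    def __init__(self, node_a, node_b):
        self.nodes = [node_a, node_b]

    def host_write(self, lba, data):
        # The write is held in BOTH node caches before the host sees
        # completion, so either node alone can still destage it.
        for node in self.nodes:
            node.cache[lba] = data
        return "ack"

    def lose_node(self, name):
        # A node (or its whole site) drops out of the I/O group.
        self.nodes = [n for n in self.nodes if n.name != name]

iog = IOGroup(Node("site1"), Node("site2"))
iog.host_write(7, b"data")
iog.lose_node("site1")                # e.g. power loss at site 1
assert iog.nodes[0].name == "site2"
assert iog.nodes[0].cache[7] == b"data"   # nothing cached was lost
```

Because every acknowledged write already sits in the surviving node's cache, losing one node (or one whole site) costs availability of that node only, not data.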
As with any split cluster - sometimes called a split-brain solution - a tie-break is needed. SVC already has this requirement: should an SVC cluster be split evenly, we need a tie-break in the form of our quorum disks. Usually this is almost transparent to SVC users; we pick three quorum disks from the Managed Disks attached to the cluster. In this case you need to modify the quorum disk assignment to ensure the active quorum disk sits in a third power domain, so that either site can take ownership of the quorum tie-break and continue should the other site fail.
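The tie-break rule itself is simple enough to sketch. This toy version (my own illustration, not SVC's algorithm) captures the two cases: a half with a clear node majority continues on its own, while an even split is resolved by whichever half reserves the quorum disk first:

```python
# Toy model of a split-brain tie-break via a quorum disk.
# Illustrative only - not the real SVC quorum protocol.

class QuorumDisk:
    def __init__(self):
        self.owner = None

    def try_reserve(self, half):
        # First half to reserve the quorum disk wins the tie-break.
        if self.owner is None:
            self.owner = half
        return self.owner is half

def resolve_split(half_a, half_b, quorum):
    total = len(half_a) + len(half_b)
    survivors = []
    for half in (half_a, half_b):
        if len(half) > total / 2:
            survivors.append(half)      # clear majority, no tie-break
        elif len(half) == total / 2:
            if quorum.try_reserve(half):
                survivors.append(half)  # won the race to the quorum disk
    return survivors

# Two nodes at each site, even split: exactly one side carries on.
site1, site2 = ["node1", "node2"], ["node3", "node4"]
alive = resolve_split(site1, site2, QuorumDisk())
assert alive == [site1]
```

This is also why the active quorum disk has to live in a third power domain: if it sat at either site, a failure of that site would take the tie-breaker down with it, leaving the surviving site unable to win the race.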
At present we ask that any users wishing to implement such a solution submit an RPQ. The links between the fabrics at the two sites have certain requirements that need to be validated. Look out for updates to the configuration guide and rules soon; once these are published, full support can be provided without an RPQ, as long as you stay within the rules.
One of our blogger friends over at HDS made the wild claim that nobody else could offer this function. That clearly isn't true: IBM offer it with both DS8000 and SVC, EMC offer it, and now HDS do too. In any case, SVC is likely to be the most flexible solution, especially as you can use the storage controller of your choice at each of the three sites - and of course with SVC they could be from three different vendors at three different price points... Best of all, this is all possible under the base SVC virtualization licence - no additional charge.