SVC : HiW : part 2 - Import your data
orbist
In part 2 of this "HiW" - How it Works - discussion of SVC, I look at how you can take an existing SAN infrastructure and virtualize it.
Import your Data
It's likely that today you have a SAN of some kind. It may be a couple of 32 port switches, or it may be a pair (or more) of enterprise class Directors with many hundreds of ports each.
Should you be in the position where you have more than one disk controller, then SVC could be the answer to your admin headaches. You could be lucky and have one vendor providing all the storage controllers in your SAN. Even then, from which one do you provision today's request? Where do you have free capacity? Where are you not going to cause a performance issue if you 'carve them a LUN'? You could be in a position where you have multiple vendors' storage devices, maybe due to an acquisition, consolidation or other cost cutting initiative. Maybe even the guy (or gal) that understood that device is no longer with you...
Anyway, all of those situations cry out for a virtual storage environment.
Let's keep it simple and you can scale out from there. Taking today's SAN, you have some host servers attached to the SAN and they are using LUNs that are provisioned from device X. Device X has a mapping, masking or other such scheme that basically says: present LUN Y to host Z. The host is likely to be just a logical object, a bucket containing some number of FibreChannel HBA WWPNs.
You have just installed your nice shiny new SVC nodes, be they the existing 8G4 model nodes or the new Entry Edition 8A4 nodes. You've created a cluster from the nodes by selecting one as the 'boss' and inputting the cluster IP address via the front panel buttons, added an SSH key to enable secure access via a terminal or browser, and added at least one more node to the cluster. Now you can get down to business.
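As a sketch of that last step, once the first node owns the cluster IP you can finish building the cluster from the CLI. The commands below are from the SVC 4.x-era command set (exact parameters vary by code level), and the panel name is a made-up example:

```shell
# List the nodes visible on the fabric that are candidates to join the cluster
svcinfo lsnodecandidate

# Add a second node to the first I/O group ("000279" is a hypothetical panel name)
svctask addnode -panelname 000279 -iogrp io_grp0

# Confirm both nodes show as online
svcinfo lsnode
```

These run over the SSH session you enabled with the key, so there is nothing to install on the nodes themselves.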
SVC requires at least two fabric zones. Where previously you would zone your disk controller devices with all the host systems (WWPNs) that needed access to the disks, you now create at least two zones. Taking device X above, you zone half the WWPNs from the disk controller (split evenly between the redundant WWPNs in the controller) with, say, half the SVC ports on each node (two zones minimum). Run a discovery command on SVC and it will find the controller devices you have mapped.
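As a rough sketch of what one of those controller-to-SVC zones looks like on a Brocade Fabric OS switch (other switch vendors use different syntax, and every WWPN and name below is a placeholder, not a real value):

```shell
# Zone half of controller X's ports with half of the SVC node ports on fabric A
zonecreate "CTLRX_SVC_A", "50:05:07:68:01:40:aa:aa; 50:05:07:68:01:40:bb:bb; 20:04:00:a0:b8:cc:cc:cc"

# Add the zone to the active zoning configuration and enable it
cfgadd "prod_cfg", "CTLRX_SVC_A"
cfgsave
cfgenable "prod_cfg"
```

You build the matching zone on the second fabric the same way, using the other half of the ports, so a fabric outage never takes out all paths.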
Depending on the storage controller you have, you may need to explicitly present the existing LUNs to the SVC "host". Again you may need to scan the "host" devices on the controller. You should see the SVC node WWPNs, and again you may need to create "pseudo-host" objects for each SVC node (half, pair, cluster etc...) that you want to map LUNs to.
Going back to SVC, running discovery again (which may not be needed if the controller presents LUNs nicely onto the fabric) you will now see some number of mdisks. These are the LUNs that the controller is presenting to SVC. Each mdisk has a UUID, a controller ID, and a LUN ID. These correspond directly with the UUID of the storage LUN, and the LUN ID is the same as the SCSI ID by which the LUN was presented to the "host".
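That discovery-and-check step is a two-liner on the SVC CLI. Again this assumes the 4.x-era command names:

```shell
# Rescan the fabric for newly mapped controller LUNs
svctask detectmdisk

# List what was found -- each unmanaged mdisk is one controller LUN,
# shown with its UUID, controller ID and LUN ID
svcinfo lsmdisk

# And confirm the back-end controller itself was recognised
svcinfo lscontroller
</imports>
```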
You now have some number of mdisks that are equivalent to what your host server was previously accessing. Just as a storage controller needs to know which WWPNs it should present (map, mask) a LUN to, so does SVC. So the same set of actual host WWPNs need to be grouped into an SVC "host object". Once that is created you can map the mdisk to the host object. To do so, you create a virtual disk that corresponds directly with the mdisk. In SVC parlance, this is an "image mode vdisk".
Once you have created the vdisk = mdisk = controller LUN, you link the vdisk with the SVC "host object", and now the exact same LUN that used to be presented to the host directly through the storage controller and SAN is presented to the host via SVC and the SAN.
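Pulling those steps together, the whole host-object / image-mode-vdisk / mapping sequence looks something like the following. The object names and WWPNs are hypothetical, and the exact flags again depend on your code level:

```shell
# Group the host's real HBA WWPNs into an SVC host object (placeholder WWPNs)
svctask mkhost -name appserver1 -hbawwpn 210000E08B000001:210000E08B000002

# An image mode vdisk needs a managed disk group to live in
svctask mkmdiskgrp -name img_grp -ext 256

# Create the image mode vdisk: a one-to-one pass-through of the existing LUN (mdisk0)
svctask mkvdisk -mdiskgrp img_grp -iogrp io_grp0 -vtype image -mdisk mdisk0 -name appserver1_lun0

# Present it back to the host -- same blocks, now arriving via SVC
svctask mkvdiskhostmap -host appserver1 appserver1_lun0
```

Because the vdisk is image mode, not a byte of data moves during any of this; only the path the I/O takes has changed.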
Now... There are a couple of key points here. With SVC this is a once in a lifetime operation. You've taken a "direct attached LUN" - by direct I mean disk to host via SAN - and inserted SVC into the middle of the picture. This is once in a lifetime because any subsequent upgrade, of the storage controller or of SVC, results in no interruption to disk I/O. Some other products require this interruption EVERY time you upgrade the virtualization device itself. I'd question this BIG TIME. One of the promises of virtualization is that you need never suffer an upgrade outage again... SVC can guarantee this... can other such products?
The second key point is virtualization at its best: migration. You now have a single LUN that was carved from some bit of an existing RAID array. You can now migrate this. As soon as you move just one extent from that image mode vdisk, it's a fully virtualized vdisk. You can move it from one storage controller to another, or re-stripe it across the same storage controller... you now have the ability to change the performance characteristics of that LUN (vdisk) without any interruption to service. That has to be the ultimate beauty of virtualization. The abstraction that SVC provides means that where the actual blocks or extents live has no bearing on what the host system sees. You could move the data from solid state disks to SATA disks (or vice versa) without any change in access. I/O will continue without interruption - your users may complain of course if you do move from SSD to SATA, but the choice is yours.
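That online migration is itself a single command; a sketch, reusing the hypothetical names from earlier and assuming a second managed disk group (new_grp) already exists on the target controller:

```shell
# Move the vdisk's extents onto another managed disk group, while the host keeps doing I/O
svctask migratevdisk -vdisk appserver1_lun0 -mdiskgrp new_grp -threads 4

# Watch the migration progress; it runs in the background
svcinfo lsmigrate
```

Once the first extent lands in new_grp the vdisk is no longer image mode, and from then on you can stripe, move and re-balance it freely.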
Anyway... I've discussed in detail how you'd import data into a virtual storage environment. And yes, there is a one time disruption with SVC. You need to insert the device into the data path, so it needs to know who to talk to, upstream and downstream. But this is a one time only change with SVC (not like USP - once per generation), and now you can move data around without interruption, choosing where it's best to live today... or where the system decides it's best to live tomorrow...