Comments (18)

1 skyron commented

What would be the advantage of adding a Storwize V7000 behind an SVC? Would it not be a better idea to replace the SVC with the Storwize, since it has the same functionality and code?

2 anthonyv commented

Great question. Both paths are valid. Existing SVC clients may want to get more use out of the license they have already purchased (rather than buy external virtualization on the Storwize V7000). The SVC also has more cache and more CPU, so it can handle more workload. By choosing an SVC you also keep the virtualization layer separate from the disk layer, meaning IBM could be the virtualization layer and another vendor could be the disk layer.

3 Khue commented

How about customers (like myself) who are currently trying to alleviate the pressures of the SVC licensing model by migrating to a V7000? Currently I have a V7000 and an SVC. Most of my volumes are VMFS, which are quite easy to deal with as far as migrating data/content, but the SVC currently serves three production volumes that are not VMFS. They are NTFS volumes attached directly to Win2k3 hosts. If I wanted to migrate those volumes from the SVC (backed by the 4700) to the V7000, how would I accomplish this? Is there a way I can simply migrate the volumes from the SVC to the V7000? It would seem that if you placed the V7000 behind the SVC, you would still hit the SVC licensing issue, because the V7000 would be treated like a disk subsystem rather than a virtualization engine and subsystem combined.

4 anthonyv commented

The only way to avoid an SVC license issue is to create a new LUN on the DS4800, use it to migrate the existing volume to image mode, and then map that DS4800 LUN directly to the Storwize V7000.

My suggestion, however, is to create volumes on the Storwize V7000 that exactly match the SVC VDisks, map them to the SVC, and use them as migrate-to-image targets. You will need to increase your SVC license limits, but only for a very short period (basically just the time it takes to migrate to image, then take an outage to unmap the volume from the SVC and re-map it to the host).
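
For reference, that migrate-to-image sequence on the CLI looks roughly like the sketch below. All volume, pool, and host names are made up, and you should check the exact VDisk capacity with lsvdisk -bytes before creating the matching V7000 volume.

    # On the Storwize V7000: create a volume exactly the same size as the SVC VDisk
    svctask mkvdisk -mdiskgrp V7000_Pool0 -iogrp 0 -size 1099511627776 -unit b -name img_target1
    svctask mkvdiskhostmap -host SVC_Cluster img_target1

    # On the SVC: discover the new MDisk and migrate the VDisk onto it in image mode
    svctask detectmdisk
    svctask migratetoimage -vdisk ntfs_vol1 -mdisk mdisk12 -mdiskgrp Image_Pool
    svcinfo lsmigrate                                    # repeat until the migration finishes

    # During the outage: remove the image-mode VDisk from the SVC (the data stays
    # on the MDisk), then remap the V7000 volume directly to the Windows host
    svctask rmvdiskhostmap -host win2k3_host ntfs_vol1
    svctask rmvdisk ntfs_vol1
    svctask rmvdiskhostmap -host SVC_Cluster img_target1      # on the V7000
    svctask mkvdiskhostmap -host win2k3_host img_target1      # on the V7000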

5 TUBS commented

Hi,

Great work :-) I would like to use our SVC (two CG8 nodes, currently in a standard cluster configuration) to add failover functionality to our two V7000s, following the "Guideline for Configuring SAN Volume Controller Split I/O Group Clustering", Version 6.3.0, Nov 18, 2011, by Dr. Axel Koester (http://www-01.ibm.com/support/docview.wss?&uid=ssg1S7003701).

I have a question about step 4 of your description: "Define the SVC to the Storwize V7000 as a host (as described above) and map all volumes to the SVC."

Shall I use all four WWNs (two per SVC node) and map them to one named host (SVC_1)? Or shall I define two hosts (one for each node)?

Regards
Henrik

6 anthonyv commented

Hi Henrik,

Use only one host definition, not two. There is no need to define it twice (once for each node); if you do, you then have to be sure to map the same volumes to both hosts in the same LUN order. If you use one host containing all the SVC node ports, there is less risk of human error.
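
On the V7000 CLI that single host definition would look something like the following sketch; the host name and WWPNs are placeholders, so substitute the four port WWPNs of your two SVC nodes.

    # One host object holding all SVC node ports, so each volume is mapped only once
    svctask mkhost -name SVC_1 -hbawwpn 5005076801100001:5005076801200001:5005076801100002:5005076801200002
    svctask mkvdiskhostmap -host SVC_1 vol_for_svc0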

7 Romingw commented

I have a question about the two controllers showing up on the SVC: will MDisks show up on each of the controllers? Should I create one pool on the SVC for the MDisks coming from both controllers?

8 anthonyv commented

Great question... yes, each controller will present roughly half the MDisks.
But you always use a single pool to bring them together.
Don't worry that they appear to come from two controllers; failover and redistribution work just fine.
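
As a quick sketch of what that looks like on the SVC CLI (the controller, pool, and MDisk names are examples only):

    # The V7000 node canisters appear as two controller objects
    svcinfo lsmdisk -filtervalue controller_name=V7000_ctrl0    # roughly half the MDisks
    svcinfo lsmdisk -filtervalue controller_name=V7000_ctrl1    # and the other half

    # Put all of them into one pool, regardless of which controller they list under
    svctask mkmdiskgrp -name V7000_Pool -ext 256
    svctask addmdisk -mdisk mdisk0:mdisk1:mdisk2:mdisk3 V7000_Pool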

9 riv_luis commented

I have an 8-node SVC and I am virtualizing one Storwize. I have found that I can only add 16 WWNs to a host object in the Storwize, but 8 nodes means 32 WWNs, so I have to create two host objects in the Storwize, each with 16 WWNs. Is that correct? Are there any additional considerations?

Thanks in advance.

10 anthonyv commented

With the GUI you can add 16 ports at a time.
If there are more than 16 ports you need to add the first 16, then from the GUI you can add 16 more, and so on.
This works fine (I have done it).

If you want, you can also use the CLI to add host ports to the host definition.

Do not define two hosts.
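
A minimal CLI version of that, with placeholder WWPNs, would be to create the host with the first group of ports and then repeat addhostport until all 32 SVC node ports are in the one host object:

    # Create the host with the first ports, then append the remainder
    svctask mkhost -name SVC_8node -hbawwpn 5005076801100001:5005076801200001
    svctask addhostport -hbawwpn 5005076801100003:5005076801200003 SVC_8node
    svcinfo lshost SVC_8node      # verify all ports are present and logged in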

11 jsgaww commented

Anthony, have you ever used the balance.pl script for rebalancing VDisks after adding storage to an existing storage system?

12 martingr75 commented

Hi Anthony, is there a reason why my SVC appears degraded on the V7000? When I create the host it shows online, but as soon as I map any volume, the SVC host goes degraded. I'm using only two ports on each V7000 node but 16 ports for a 4-node SVC. I saw a zoning recommendation not to use more than 8 paths to a host on the V7000; does that apply to the SVC when it acts as a host as well? This is for a split cluster configuration. The SVC sees the volume from the V7000 for quorum purposes and it works fine. Just the degraded SVC host on the V7000 bothers me. Hope you can help me.
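
One way to narrow down why the V7000 marks the host degraded is the detailed host view on the V7000 CLI (the host name here is a placeholder):

    # Each WWPN is listed with its state and a node_logged_in_count; ports that are
    # defined in the host object but not logged in to both node canisters are a
    # common reason a host shows as degraded
    svcinfo lshost SVC_4node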

13 Mallah commented

Hi Anthony,

First of all, thanks for your excellent article. I have two open questions.

1. After zoning a V3700/V7000 to an SVC split cluster (code 6.4.1.2, the latest one) and mapping the LUNs to all SVC ports (one host with 8 WWPNs), I see two storage controllers, one for each node canister, as you describe above... but only one controller has all my mapped LUNs from the V3700/V7000, and NOT "roughly half the mdisks" per controller as you said in comment 8. Is that correct so far?

2. Many Redbooks about SVC split cluster recommend choosing the right preferred SVC node for a volume (VDisk), so that the volume and the SVC node reside in the same location (domain) to prevent unnecessary round trips. But I am not able to change the preferred SVC node for a volume (VDisk) after creating the LUN. While creating it, it is possible to choose the preferred node, but not later; is this right? So what should I do to fulfill the recommendation? Is there a way to change the preferred SVC node for a LUN after creating it?

Regards
Melih
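
For the preferred-node part of the question, create-time selection on the CLI looks roughly like the sketch below (the pool, volume name, and node ID are invented examples). On 6.4 and later code there is also a movevdisk command that can change the preferred node of an existing volume, but check the command reference for your exact release.

    # Pick the preferred node when the volume is created
    svctask mkvdisk -mdiskgrp SiteA_Pool -iogrp 0 -node 1 -size 100 -unit gb -name vol_siteA
    svcinfo lsvdisk vol_siteA      # the preferred_node_id field confirms the choice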

14 bio1975 commented

After reading this post, and needing to add a new enclosure to a V7000 used as external storage by an SVC, I started to have doubts about my current configuration.
I have 42 disks (two enclosures) configured as 5 MDisks (four of 8 drives and one of 6), all 5 MDisks in RAID 10 (the first mistake I see is that they are not configured to survive the failure of an enclosure), plus 3 spare drives.
Now I want to add an enclosure with 24 disks (same size and model as the others).
Initially I thought of creating 4 MDisks of 8 disks each...
Is it possible to delete the whole configuration of the V7000? What configuration would you suggest?
One more consideration: the MDisks are all in a single storage pool, from which 9 volumes of 1 TB and one of 700 GB are created. All volumes are presented to the SVC.
Thanks
Fabio

15 avandewerdt commented

You can absolutely remove the existing configuration: just delete all your VDisks, then MDisks, then pools.
As for the best config, if you're passing all the storage to SVC, why not create 8+P RAID 5 arrays and one volume per array? Is there a specific reason why you are using RAID 10?
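
As a rough sketch of that teardown and rebuild on the V7000 CLI (object names, drive IDs, and sizes are invented; repeat the commands per volume, array, and pool as needed):

    # Tear down: volumes first, then the arrays (MDisks), then the pool
    svctask rmvdisk vdisk0
    svctask rmarray -mdisk mdisk0 Pool0
    svctask rmmdiskgrp Pool0

    # Rebuild: an 8+P RAID 5 array and one volume sized to use its capacity
    svctask mkmdiskgrp -name Pool_new0 -ext 256
    svctask mkarray -level raid5 -drive 0:1:2:3:4:5:6:7:8 Pool_new0
    svctask mkvdisk -mdiskgrp Pool_new0 -iogrp 0 -size 2000 -unit gb -name svc_vol0
    # (size the volume to the free capacity reported by 'lsmdiskgrp Pool_new0')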