Comments (9)

1. wharton commented:

Hello, Nagger, I've read your presentation about SSP3, and you listed its limitations. Now I have a question on SSP3: can only the LPARs created in the SSP be captured by VMControl? That is, we can't capture the traditional LPARs that are outside the SSP. Am I right? Thanks.

2. nagger commented:

Wharton, no, you have that wrong. Systems Director with VMControl can deal with various storage pool types:

1) VIOS local disks in a volume group, served to client LPARs via vSCSI
2) NPIV LUNs via the VIOS
3) Shared Storage Pools

But if you Capture an LPAR into a VMControl Appliance, then you have to Deploy the Appliance to the same disk technology (pool type). In fact, SSP support is the new pool type; the others have been around for a couple of years. Cheers, Nigel
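If you want to check which pool type a client's disks use from the VIOS side, lsmap is one way. A rough sketch, not from the original thread; run as padmin on the VIOS:

    # vSCSI mappings: for VG/LV-backed clients the "Backing device"
    # is a logical volume; for SSP-backed clients it is an SSP LU
    lsmap -all

    # NPIV (virtual Fibre Channel) mappings are listed separately
    lsmap -all -npiv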

3. wharton commented:

Thanks for your quick reply, nagger. This is what I wanted to know: if you Capture an LPAR into a VMControl Appliance, then you have to Deploy the Appliance to the same disk technology (pool type). So, if I have 8 LPARs with LVs from a VIOS VG and I want to migrate them to SSP, should I: 1) LPM one LPAR from the VIOS VG to the SSP; 2) capture that migrated LPAR into an appliance; 3) deploy 7 LPARs to the SSP from that appliance? Or do I have to LPM them one by one? Am I right? Thank you.

4. nagger commented:

Wharton, I am starting to think you have not watched the movies. You can't LPM from VIOS VG/LV-based client disks to SSP disks, so your step 1) is impossible. How could you possibly think it would somehow copy, say, 1 TB of disk space between the machines? It makes no sense. Take a master client LPAR with VIOS VG/LV disks and migrate it (just like in the video) to SSP disks. Then Capture it as an SSP Appliance and Deploy the Appliance 8 times to SSP disks; you then have 8 copies of the original (but with unique IP address, hostname, security certificates, etc.). If you currently have 8 different LPARs using VIOS VG/LV and want to keep the setup & data of all of them, then you need to migrate each of them to SSP (just like in the video). Whichever way you go, you will end up with client LPARs ready for LPM. Cheers, Nigel Griffiths.
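If you later want to sanity-check that a client really is ready for LPM, the HMC command line offers a validate-only run. A hedged sketch; the managed system and partition names here are placeholders, not from this thread:

    # HMC CLI: validate (not perform) a live migration of lpar1
    # from managed system srcsys to managed system tgtsys
    migrlpar -o v -m srcsys -t tgtsys -p lpar1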

5. wharton commented:

Thank you, Nigel.

6. radam commented:

Hello Nagger, does the cluster I create on my VIOSs to hold the Shared Storage Pool have any notion of quorum? If I have a cluster with 2 VIOS nodes and an SSP serving up rootvg volumes to some client LPARs, and one of my nodes goes down, will the SSP and clients continue to be served by the remaining node? Thanks.

7. nagger commented:

If a VIOS fails, or if you take it down for servicing, then its client virtual machines can't talk to their disks via that VIOS - this is true for SSP disks or any other, because only that VIOS has access to the server end of the vSCSI connection. Other VIOSs in the SSP cluster will carry on as normal. I am not aware of any quorum. This may not be true in the future as new functions get added.
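For reference, you can ask the cluster which VIOS nodes it currently sees as up or down. A quick sketch; the cluster name is a placeholder:

    # Run as padmin on any VIOS in the cluster; lists each node
    # and its state (e.g. OK or DOWN)
    cluster -status -clustername mycluster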

8. radam commented:

Nagger, let me know if what I'm seeing is what you would expect.

We have the 2 redundant VIOSs, each seeing all of the vSCSI disks used for client rootvg disks. They also see the large LUN used for the SSP and the small, 2 GB LUN used for the SSP repository disk.

1) We create the cluster and SSP using the 'cluster -create' command on ioa0065.
2) We execute the 'cluster -addnode' command, on ioa0065, to add ioa0066 as the second node.
3) We then execute, on ioa0065, several mkbdsp commands to carve out 64 GB LUNs to use as rootvgs on client LPARs.

So when I execute the lsmap command on ioa0065, I see the vdisk entries from the mkbdsp commands. When I execute the lsmap command on the second node, ioa0066, I do not see any vdisk entries. Is this correct?

When we failed the ioa0065 VIOS to test redundancy, the client LPARs using vSCSI disks from the VIOSs stayed up, but the client LPARs using vdisks from the SSP as rootvgs came down. Should the mkbdsp commands run on ioa0065 have created entries on ioa0066? Inquiring minds want to know... As always, thanks.

9. nagger commented:

Hi Radam, reading between the lines, these two VIOSs are on the same machine. If you want dual paths via the two VIO Servers, then you have missed out a step. The initial mkbdsp command will: 1) create the LU of the requested size (please, please, please call these SSP devices either a LU or an SSP disk; it is nothing like a LUN); 2) assign it to a vSCSI slot (i.e. link it to your VIO client); 3) give it a user-sensible name. You have to go to the other VIOS and run the same mkbdsp command BUT don't include the size. This forces SSP to look for a LU of the same name and link that to the vSCSI slot for the 2nd path. To list the LUs, you are using the wrong command IMHO; try lssp -clustername XXX -sp YYY -bd. The answer to 'Is this correct?' is yes, it is - without the second mkbdsp command. And yes, I would expect it to fail as you describe without that second mkbdsp command.
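A minimal sketch of the sequence described above; the cluster, pool, LU and adapter names are placeholders, and the commands run as padmin:

    # On the first VIOS (e.g. ioa0065): create a 64 GB LU, give it a
    # sensible name and map it to the client's vSCSI adapter in one step
    mkbdsp -clustername mycluster -sp mysp 64G -bd lpar1_rootvg -vadapter vhost0

    # On the second VIOS (e.g. ioa0066): same command WITHOUT the size,
    # so SSP finds the existing LU of that name and just maps it,
    # giving the client its second path
    mkbdsp -clustername mycluster -sp mysp -bd lpar1_rootvg -vadapter vhost0

    # List the LUs in the pool (rather than lsmap)
    lssp -clustername mycluster -sp mysp -bd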