Comments (7)

1 Jacco_Blaauw commented:

Great article Nigel!

A great Xmas present from IBM, with SSP3 on the VIOS and IBM Systems Director 6.3.2 supporting it as well.
Already talking to local architects, and hopefully they will finally start listening and agree to these kinds of (needed) implementations in 2013! Can't find a reason yet why not. So if they agree I will be having a lot of fun next year!
Talking with IBM development as well to agree on upgrading our 6.3.1 server to 6.3.2 as soon as possible. I want to have more fun soon!

Grz Jacco

2 eric_marchal commented:

Hi Nigel,

Thanks for this article, as marvellous as usual! I was as excited as you were. Here is my first q&d ROE: I updated the ISD server to 6.3.2 and updated the VIOS agents and subagents on my VIOS pair.

First good thing: I can deploy an AIX .ovf directly from ISD using a NIM image repository; provisioning is done automatically from the SSP3 pool.

Unfortunately there are some other problems I encounter:
- First problem: the cimserver (updated to version 2.9.1.50 by ISD 6.3.2) core dumps, say, every hour (I've opened a PMR on this).
- Second problem: during the VMControl capture process there is an error when the process tries to allocate a LUN on SSP3 for the capture, as you can see hereafter:

December 21, 2012 11:24:05 AM CET-Level:150-MEID:0--MSG: DNZLOP411I Capturing virtual server aix4380c to virtual appliance AIX721 in repository vios14a.
December 21, 2012 11:24:05 AM CET-Level:150-MEID:0--MSG: DNZLOP912I Disk group to be captured: DG_12.18.2012-10:55:16:580
December 21, 2012 11:24:05 AM CET-Level:150-MEID:0--MSG: DNZLOP900I Requesting SAN volume(s)
December 21, 2012 11:24:06 AM CET-Level:200-MEID:0--MSG: Subtask activation status changed to "Complete with errors".
December 21, 2012 11:24:06 AM CET-Level:1-MEID:0--MSG: Job activation status changed to "Complete with errors".
December 21, 2012 11:24:06 AM CET-Level:50-MEID:0--MSG: CreateVMD failed to allocate and assign a volume. Reason: java.lang.Exception: CreateVMD failed to allocate and assign a volume. Reason: NULL
December 21, 2012 11:24:06 AM CET-Level:200-MEID:0--MSG: Subtask activation status changed to "Complete with errors".

I will continue experimenting.
Merry X-mas to you Nigel and any reader here.

Eric Marchal
AIX Senior Systems Engineer
IBM Belgium
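[Editor's note: for anyone hitting the same DNZLOP900I/CreateVMD allocation failure, a quick SSP-side sanity check from the VIOS padmin shell is worth doing before (or while) a PMR is open. This is only a sketch; the cluster and pool names are placeholders, not Eric's actual configuration.]

Check that every VIOS node in the cluster is up:
  $ cluster -status -clustername my_cluster
Check the pool size and remaining free space:
  $ lssp -clustername my_cluster
List the logical units already carved out of the pool:
  $ lssp -clustername my_cluster -sp my_pool -bd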

3 matekamoris commented:

Hi Nigel,

I have tested Shared Storage Pool v3 with VMControl integration; deploying a new virtual appliance (AIX 6.1 TL08) takes me 31 seconds (no joke, see the output below).

----------------------------------------------------------------------------------------------------------------------------------------------------------------
# smcli deployva -v -g 0x57e20 -V 0x5a540 -m -1099510068992607402_01 -a deploy_new -A "poolstorages=326512,product.vs0.com.ibm.ovf.vmcontrol.system.networking.hostname=schist,product.vs0.com.ibm.ovf.vmcontrol.adapter.networking.ipv4addresses.5=10.10.10.200,product.vs0.com.ibm.ovf.vmcontrol.adapter.networking.ipv4netmasks.5=255.255.255.0,product.vs0.com.ibm.ovf.vmcontrol.system.networking.ipv4defaultgateway=10.10.10.254,product.vs0.com.ibm.ovf.vmcontrol.system.networking.dnsIPaddresses=10.20.20.200 10.20.2.251,product.vs0.com.ibm.ovf.vmcontrol.system.networking.domainname=domain.test"
Mon Jan 28 17:53:38 CET 2013 deployva Operation started.
Attempt to get the default customization data for deploy_new.
Attempt to get the deploy_new customization data.
Update collection with user entered attributes.
Attempt to validate the deploy request for 369,984.
Attempt to deploy new.
Workload carbon-linked_clones-virtual_appliance_32287 was created.
Virtual server schist added to workload carbon-linked_clones-virtual_appliance_32287.
Workload carbon-linked_clones-virtual_appliance_32287 is stopped.
DNZIMC094I Deployed Virtual Appliance carbon-linked_clones-virtual_appliance to new Server schist hosted by system .
Mon Jan 28 17:54:10 CET 2013 deployva Operation took 31 seconds.
---------------------------------------------------------------------------------------------------------------------------------------------------------------

I'm going to write a post about linked clones. If you are interested, you can have a look at the previous one; it explains how to use VMControl over SSP (http://chmod666.org/index.php/adventures-in-ibm-systems-director-in-system-p-environment-part-5-vmcontrol-and-shared-storage-pool/). Just one word to tell you what linked clones are like: AWESOME!

Benoît - chmod666.org
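[Editor's note: the numeric object IDs in a deployva command like Benoît's (the -g group, -V virtual appliance and -m host values) are normally looked up from the Systems Director CLI first. The sketch below assumes the base lssys command and the VMControl lsva command are available at your ISD level; the -o flag (show object IDs) follows the usual smcli convention, so verify it against your installation.]

List discovered systems/hosts with their object IDs:
  # smcli lssys -o
List the virtual appliances VMControl knows about:
  # smcli lsva -o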

4 Jack Jiang commented:

Hi nagger, I'm a VMC tester and spent a couple of months on VMC SSP testing last year. Currently some customers are asking me what SSP does during capture/deploy: only a snapshot, or a complete copy in the background on the VIOS? If it only takes a snapshot, will it slow down virtual server performance once you capture the VS and then deploy many VAs from it?
One more thing: of the following combinations, which one is best for a customer?

SSP thick provisioning and disk subsystem thick provisioning
SSP thick provisioning and disk subsystem thin provisioning
SSP thin provisioning and disk subsystem thin provisioning
SSP thin provisioning and disk subsystem thick provisioning

5 mikosrt commented:

Hello Nigel, We are starting to look at SSP in our environment and I was wondering if you or anyone else has a recommendation on how we should implement it. We have six Power 780 machines in our data centre and all the LPARs are fully virtualized; each 780 has a pair of VIOS and roughly 60 to 100 LPARs. My question is: should I create a second pair of VIOS for SSP on each 780, or should we use the same pair of VIOS that we already have on each 780, which are already providing SEA, vSCSI and NPIV adapters to the client LPARs?
Thanks, Luis

6 nagger commented:

Hi mikosrt,
Shared Storage Pools are just a regular VIOS feature (assuming you are running the current level, which is 2.2.2.2 at the moment). There is no need to use different VIOS pairs. I have not had problems, but then I don't have large production workloads. Nor have I heard of problems, and I would expect to hear if customers were running into issues. I guess I should recommend you run a prototype machine/VIOS to check it out and gain familiarity with the operations and commands, but it is all pretty simple.
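[Editor's note: for anyone taking up Nigel's prototype suggestion, a minimal test pool on a VIOS pair is only a few padmin commands. This is a sketch only; the cluster, pool, disk and host names are placeholders, and the repository and pool disks must be SAN LUNs visible to both VIOS.]

Confirm the VIOS level first:
  $ ioslevel
Create the cluster and pool on the first VIOS (hdisk2 = repository disk, hdisk3/hdisk4 = pool disks):
  $ cluster -create -clustername test_cl -repopvs hdisk2 -spname test_pool -sppvs hdisk3 hdisk4 -hostname vios1
Add the second VIOS of the pair to the cluster:
  $ cluster -addnode -clustername test_cl -hostname vios2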

7 nagger commented:

Hi Jackyby,
On Capture/Deploy:
I am told the Capture makes a complete copy of the master LPAR's disks. On Deploy the disks are cloned, i.e. a new data structure pointing to the captured disk blocks. Then, when a block is updated/written from the new LPAR, that block is copied so the new LPAR has its own copy of it. This is not expected to cause any performance issue. Remember SSP uses 64 MB blocks, so it will not happen often. For example, initially there are writes to /etc and /tmp, but after that the updates to the AIX space are small and the bulk of the read-only AIX image (roughly 2.5 GB) is never written.

On Thin/Thick SSP3 provisioning and Thin/Thick disk provisioning:
For non-vital, non-production LPAR workloads there is no problem with Thin + Thin, as most disk provisioning happens in much smaller "chunk sizes".
The golden rule is never, ever run out of free blocks in the pool - that is a ghastly place to go.
For vital production workloads, and if you are worried that you might not notice and react in time to running out of free space, then Thick + Thick might be a good idea - but then you might also fail to notice free space getting low from the LPAR OS point of view.
I guess the other combinations are all up to you. They all work, and you have to balance efficient, reduced-cost disk space use against systems administration rigour.
Cheers Nigel
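[Editor's note: to stay on the right side of that golden rule, the pool's free space can be checked, and a threshold alert set, from any VIOS in the cluster. The cluster/pool names and the 75 value below are illustrative, and the exact threshold semantics should be verified against the VIOS command reference.]

Show pool size and free space:
  $ lssp -clustername test_cl
Set a free-space threshold alert on the pool (value is a percentage):
  $ alert -set -clustername test_cl -spname test_pool -type threshold -value 75
Confirm the alert is in place:
  $ alert -list -clustername test_cl -spname test_pool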