IBM Support

Using NIM Alternate Disk Migration (NIMADM)

Question & Answer


Question

How to use NIMADM

Answer

Using NIM Alternate Disk Migration (NIMADM)

What this document will cover:
What is NIMADM
Preparing for a NIMADM
Create a copy of rootvg to a free disk (or disks) and simultaneously migrate it to a new version or release level of AIX
Using a copy of rootvg, create a new NIM mksysb resource that has been migrated to a new version or release level of AIX
Using a NIM mksysb resource, create a new NIM mksysb resource that has been migrated to a new version or release level of AIX
Using a NIM mksysb resource, restore to a free disk (or disks) and simultaneously migrate to a new version or release level of AIX
Waking up and putting to sleep the migrated disk
Using a post migration script with nimadm
Logs used during the nimadm process and sample entries
Debug techniques if nimadm fails




What this document will not cover:
The phases of the nimadm process
Flags used with the nimadm command
Requirements for nimadm
Limitations for nimadm
Note: The above information can be found in the man page at
http://publib.boulder.ibm.com/infocenter/aix/v6r1/index.jsp?topic=/com.ibm.aix.cmds/doc/aixcmds4/nimadm.htm



What is NIMADM

NIMADM stands for Network Install Manager Alternate Disk Migration.

The nimadm command is a utility that allows the system administrator to do the following:

· Create a copy of rootvg to a free disk (or disks) and simultaneously migrate it to a new version or release level of AIX.

· Using a copy of rootvg, create a new NIM mksysb resource that has been migrated to a new version or release level of AIX.

· Using a NIM mksysb resource, create a new NIM mksysb resource that has been migrated to a new version or release level of AIX.

· Using a NIM mksysb resource, restore to a free disk (or disks) and simultaneously migrate to a new version or release level of AIX.

The nimadm command uses NIM resources to perform these functions.


Preparing for a NIMADM

There are a few requirements that must be met before attempting to use nimadm. I'll mention just some of these here.

· The NIM master must have the bos.alt_disk_install.rte fileset installed in its own rootvg and in the SPOT that will be used for the migration. Both need to be at the same level. It is not necessary to install the alternate disk utilities on the client.



· The lpp_source and SPOT NIM resources that have been selected for the migration MUST match the AIX level to which you are migrating.

· The NIM master (as always) should be at the same or higher AIX level than the level you are migrating to on the client.

· The target client must be registered with the NIM master as a standalone NIM client.
· You will need a working connection between the master and the client, using either rsh or nimsh.

· If using rsh, the NIM master must be able to execute remote commands on the client; rsh must be working in order for nimadm to work.
· Verify with the following on the client:
· # lssrc -ls inetd
· The exec, login, and shell subservers need to be active.
· If they are not all active, edit /etc/inetd.conf and make sure the corresponding lines are not commented out. If a line has a # sign in front, remove the #, save the file, and run refresh -s inetd, then verify again with lssrc -ls inetd.

· Ensure the NIM client has a spare disk (not allocated to a volume group) large enough to contain a complete copy of its rootvg. If rootvg is mirrored, break the mirror and use one of the disks for the migration.

· Ensure the client's NIM master has a volume group (for example, nimadmvg) with enough free space for a complete copy of the client's rootvg. If AIX migrations are occurring for multiple NIM clients at the same time, make sure there is capacity for a copy of each client's rootvg.
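As a sketch of the free-disk check above, the following filters lspv output for disks not assigned to any volume group; the sample output and disk names here are hypothetical:

```shell
# lspv prints: disk name, PVID, volume group, state.
# A disk that is free for the migration shows "None" in the VG column.
lspv_sample='hdisk0  00f6050a2c9f2b5d  rootvg  active
hdisk1  00f6050a3d1e4c7e  datavg  active
hdisk2  00f6050a4e2f5d8f  None'

# On a live system you would pipe `lspv` itself into the same filter.
echo "$lspv_sample" | awk '$3 == "None" { print $1 }'
# -> hdisk2
```

On the NIM master, `lsvg nimadmvg` (using whatever volume group you chose) shows the FREE PPs available for the copy of the client's rootvg.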



Create a copy of rootvg to a free disk (or disks) and simultaneously migrate it to a new version or release level of AIX

Creating a migrated copy of rootvg on another disk is probably the most common and straightforward use of nimadm. It dramatically reduces the amount of downtime compared to a normal NIM migration. All you need is a free disk (or disks) large enough to hold a migrated copy of rootvg, and a NIM master at the same level as, or higher than, the level you want to migrate to. Once the operation completes, you simply reboot from the migrated disk and the system comes up at the migrated level. If you discover any problems at the new level, change the boot list back to the previous disk and reboot.

To perform a nimadm via SMIT do the following on the nim master:
# smitty nimadm
Perform NIM Alternate Disk Migration

Perform NIM Alternate Disk Migration

Type or select values in entry fields.
Press Enter AFTER making all desired changes.

[TOP] [Entry Fields]
* Target NIM Client [select your client]
* NIM LPP_SOURCE resource [select the lpp_source]
* NIM SPOT resource [select the spot]
* Target Disk(s) to install [select the disk(s) you want to migrate to]
DISK CACHE volume group name []

NIM IMAGE_DATA resource []
NIM BOSINST_DATA resource []
NIM EXCLUDE_FILES resource []
NIM INSTALLP_BUNDLE resource []
NIM PRE-MIGRATION SCRIPT resource []
NIM POST-MIGRATION SCRIPT resource []

Phase to execute [all]
NFS mounting options []
Set Client bootlist to alternate disk? yes
Reboot NIM Client when complete? no
Verbose output? no
Debug output? no

ACCEPT new license agreements? No <-- change this to yes

Nothing else is required, but it is recommended to enter a volume group name for disk caching in order to avoid NFS issues.

If you prefer to use command line, the command would be:
# nimadm -c <client hostname> -l <lpp_source> -s <spot> -d <target disk(s)> -Y

In order to also include a volume group for disk caching you would use the following command:
# nimadm -c <client hostname> -l <lpp_source> -s <spot> -j <VG name> -d <target disk(s)> -Y
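For example, with hypothetical names (client lpar01, resources lpp_source_7100 and spot_7100, cache volume group nimadmvg, target disk hdisk1), the full command might look like:

```shell
# Migrate NIM client lpar01 to the level of lpp_source_7100/spot_7100,
# cloning rootvg to hdisk1 and caching file systems in volume group
# nimadmvg on the master. -Y accepts license agreements.
# All names in this example are assumptions; substitute your own.
nimadm -c lpar01 -l lpp_source_7100 -s spot_7100 -j nimadmvg -d hdisk1 -Y
```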





Using a copy of rootvg, create a new NIM mksysb resource that has been migrated to a new version or release level of AIX

If you need a mksysb of a NIM client that can be installed on another system requiring a higher version or release level than the client is currently running, this operation will accomplish that.

There isn't currently a way to do a client-to-mksysb migration via SMIT; you will need to use the command line on the NIM master.

To accomplish a client-to-mksysb migration via the command line, run:

# nimadm -c <client hostname> -O <path for migrated mksysb file> -s <spot> -l <lpp_source> -j <VG name> -Y -N <new NIM mksysb name>
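A concrete sketch, using the same hypothetical names as before plus an assumed image path and resource name:

```shell
# Create a migrated mksysb of client lpar01 without touching its disks:
# the migrated image is written to the given path on the master and
# registered as a new NIM mksysb resource. All names are assumptions.
nimadm -c lpar01 -O /export/mksysb/lpar01_71.img -s spot_7100 \
       -l lpp_source_7100 -j nimadmvg -Y -N lpar01_71_mksysb
```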






Using a NIM mksysb resource, create a new NIM mksysb resource that has been migrated to a new version or release level of AIX

If you have a mksysb of a nim client that you would like to install on another system which requires a higher version or release level you could do a mksysb to mksysb migration and then install the migrated mksysb on the new system.

There isn't currently a way to do a mksysb-to-mksysb migration via SMIT; you will need to use the command line on the NIM master.

To accomplish a mksysb-to-mksysb migration via the command line, run:

# nimadm -T <the current mksysb resource> -O <full path and filename of the new mksysb image file> -s <SPOT name> -l <lpp_source name> -j <volume group for temporary file systems> -Y -N <new mksysb resource name>
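A concrete sketch with hypothetical resource names:

```shell
# Take the existing mksysb resource lpar01_61_mksysb, migrate its contents
# to the level of the given lpp_source/SPOT, write the migrated image to
# the path given with -O, and define it as the new NIM mksysb resource
# lpar01_71_mksysb. All names are assumptions.
nimadm -T lpar01_61_mksysb -O /export/mksysb/lpar01_71.img -s spot_7100 \
       -l lpp_source_7100 -j nimadmvg -Y -N lpar01_71_mksysb
```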



Using a NIM mksysb resource, restore to a free disk (or disks) and simultaneously migrate to a new version or release level of AIX



There may be times when you have a mksysb of a NIM client that you want to restore to a free disk on another client while also migrating it to a new version or release level. This method is especially useful when the target system needs (or you would like it to have) a newer version or release level, but you cannot take the system down to install a migrated mksysb.
Once the mksysb-to-client migration completes, you just need to boot from the migrated disk and the system comes up at the migrated level. If there are any problems at the new level, change the bootlist back to the previous disk and reboot.

There isn't currently a way to do a mksysb-to-client migration via SMIT; you will need to use the command line on the NIM master.

To accomplish a mksysb-to-client migration via the command line, run:

# nimadm -T <the existing mksysb resource> -c <client hostname> -s <SPOT name> -l <lpp_source name> -d <target disk(s)> -j <VG name> -Y

Note: No changes are made to the existing mksysb resource.
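A concrete sketch, assuming the mksysb came from lpar01 and is being restored and migrated onto a free disk of client lpar02 (all names hypothetical):

```shell
# Restore mksysb resource lpar01_61_mksysb to hdisk1 on NIM client lpar02,
# migrating it to the level of lpp_source_7100/spot_7100 as it is restored.
# The source mksysb resource itself is not modified.
nimadm -T lpar01_61_mksysb -c lpar02 -s spot_7100 -l lpp_source_7100 \
       -d hdisk1 -j nimadmvg -Y
```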



Waking up and putting to sleep the migrated disk

There may be times when you want to wake up the migrated disk, for example to view or change files on it before rebooting.
This should be done from the nim server, not the client, using the following command:

# nimadm -W -c <nim client> -s <spot> -d <target disk>

After waking it up, be sure to put it back to sleep before rebooting.
That can be done with the following command:

# nimadm -S -c <nim client> -s <spot>
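A typical wake/sleep sequence, run from the NIM master with the same hypothetical names as earlier, might look like:

```shell
# Wake up the migrated disk: activates the altinst_rootvg copy and mounts
# its file systems so they can be inspected or modified.
nimadm -W -c lpar01 -s spot_7100 -d hdisk1

# ... inspect or edit files in the woken-up alt_inst tree as needed ...

# Put the disk back to sleep before rebooting the client.
nimadm -S -c lpar01 -s spot_7100
```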



Using a post-migration script with nimadm

There may be times when you want to remove and/or install some other filesets after the migration completes. You can do that with a post-migration NIM script.

The nimadm utility can perform both pre- and post-migration tasks. This is accomplished by running NIM scripts either before or after the migration. The command accepts the following flags for pre- and post-migration script resources:

-a PreMigrationScript Specifies the pre-migration NIM script resource.
-z PostMigrationScript Specifies the post-migration NIM script resource.

pre-migration

This script resource is run on the NIM master, but in the environment of the client's alt_inst file system that is mounted on the master (this is done using the chroot command). It is run before the migration begins.


post-migration

This script resource is similar to the pre-migration script, but it is executed after the migration is complete.

I will give an example of a post-migration only, although the configuration is the same for both.

In this example I will show you how to uninstall and install a fileset.

You will first need to collect the filesets that you want to install after the migration. Then place them into a local directory on your NIM master. Along with the software, also place a copy of a NIM script in the same directory on the NIM master. The script name for this example is XYZpost.ksh.

root@<nim_master>: /usr/local/XYZ # ls -ltr
total 544
-r-xr-xr-x 1 root system 51200 May 1 11:43 devices.pciex.xyz.rte
-r-xr-xr-x 1 root system 715 Nov 1 16:57 XYZpost.ksh
-rw-r--r-- 1 root system 2310 Nov 2 14:57 .toc

The contents of the script are simple. This script will de-install the old device fileset and then immediately install the latest version of the XYZ device fileset.

#!/usr/bin/ksh

echo "Uninstalling XYZ fileset: devices.pci.xyz.rte."

installp -u devices.pci.xyz.rte

echo "Return code is: $?"

echo "Installing XYZ fileset: devices.pciex.xyz.rte."

cd /usr/local/XYZ/

installp -aXd . devices.pciex.xyz.rte

echo "Return code is: $?"

cd /

At this point you would copy the same directory and all of its contents to the NIM client.

root@<nim_master>: /usr/local # scp -pr XYZ lparaix01:/usr/local/

…etc…

<client_alt> : /usr/local/XYZ # ls -ltr

total 0

-r-xr-xr-x 1 root system 51200 May 1 11:43 devices.pciex.xyz.rte


-r-xr-xr-x 1 root system 715 Nov 1 16:57 XYZpost.ksh
-rw-r--r-- 1 root system 2310 Nov 2 14:57 .toc

NOTE: Make sure that any script you write for use with nimadm starts with an appropriate shebang line (for example, #!/usr/bin/ksh) to announce that it is a shell script and which shell must be used to execute it. If you forget to do this, nimadm will fail to execute your script and will report an error message similar to the following:

+-----------------------------------------------------------------------------+

Executing nimadm phase 7.

+-----------------------------------------------------------------------------+

Executing user chroot script /usr/local/XYZ/XYZpost.ksh.

/<client_alt>/alt_inst/tmp/.alt_mig_chroot_script.11731036: Cannot run a file that does not have a valid format.

The next step is to define the script as a NIM resource so that nimadm can call the resource during the migration process. For this example the new NIM resource will be called XYZPOST.

This is easily achieved using smit nim_mkres:

root@<nim_master>: / # smit nim_mkres

| script = an executable file which is executed on a client |

Define a Resource

Type or select values in entry fields.

Press Enter AFTER making all desired changes.

[Entry Fields]

* Resource Name [XYZPOST]

* Resource Type script

* Server of Resource [master] +

* Location of Resource [/usr/local/XYZ/XYZpost.ksh] /
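The same script resource can also be defined from the command line with the nim command; the resource name and location below match the SMIT example above:

```shell
# Define XYZpost.ksh as a NIM script resource named XYZPOST,
# served by the NIM master.
nim -o define -t script -a server=master \
    -a location=/usr/local/XYZ/XYZpost.ksh XYZPOST
```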

We can confirm that the NIM script resource is now available using the lsnim command.

root@<nim_master> : / # lsnim -t script

XYZPOST resources script

root@<nim_master> : / # lsnim -l XYZPOST

XYZPOST:

class = resources

type = script

Rstate = ready for use

prev_state = unavailable for use

location = /usr/local/XYZ/XYZpost.ksh

alloc_count = 0

server = master

Now that the script is in place and defined to NIM, it is ready to use. We will migrate the system from AIX 5.3 to AIX 6.1 using nimadm. Once the migration phase is complete (phases 1 to 6), the post-migration script is executed in the NIM client's nimadm (chroot) environment on the NIM master. Once this is finished, the NIM client's data is synced back to the NIM client's alternate disk and the boot image is created. The migration process is then complete.

We add the -z flag to our nimadm command line options to specify the post-migration script resource.

You can have nimadm run all phases in sequence with the following command.

root@<nim_master> : / # nimadm -j nimadmvg -c <client_alt> -s spotaix610605 -l lpp_sourceaix610605 -d hdisk2 -z XYZPOST -Y

Once the nimadm operation is finished, the NIM client is rebooted. We verify that it is now running AIX 6.1 and that the correct version of the device fileset is installed.

<client_alt> : / # oslevel -s

6100-06-05-1115

<client_alt> : / # lslpp -l devices.pciex.xyz.rte

Fileset Level State Description

----------------------------------------------------------------------------

Path: /usr/lib/objrepos

devices.pciex.xyz.rte

6.1.5.0 COMMITTED AIX Support for XYZ Device

And there you have it, an example of using a post-migration script with nimadm.



Logs used during the nimadm process and sample entries


The following logs are created during the nimadm process:

On the NIM master:
/var/adm/ras/alt_mig/<client hostname>_alt_mig.log
Note: the following output also appears in smit.log if the operation is run through SMIT.

NIM MASTER DATE: Mon Feb 15 08:41:18 CST 2010
NIM CLIENT DATE: Mon Feb 15 08:41:17 CST 2010
NIMADM PARAMETERS: -H -ccooliobso -l6100_04_lpp -s6100_04_spot -dhdisk1 -V -Y
Starting Alternate Disk Migration.

+-------------------------------------------------------------------+
Executing nimadm phase 1.
+-------------------------------------------------------------------+
Cloning altinst_rootvg on client, Phase 1.
Client alt_disk_install command: alt_disk_copy -V -M 6.1 -P1 -d "hdisk1"
Calling mkszfile to create new /image.data file.
Checking disk sizes.
Creating cloned rootvg volume group and associated logical volumes.
Creating logical volume alt_hd5.
Creating logical volume alt_hd6.
Creating logical volume alt_hd8.
Creating logical volume alt_hd4.
Creating logical volume alt_hd2.
Creating logical volume alt_hd9var.
Creating logical volume alt_hd3.
Creating logical volume alt_hd1.
Creating logical volume alt_hd10opt.
Creating logical volume alt_lg_dumplv.
Creating logical volume alt_hd11admin.
Creating /alt_inst/ file system.
Creating /alt_inst/admin file system.
Creating /alt_inst/home file system.
Creating /alt_inst/opt file system.
Creating /alt_inst/tmp file system.
Creating /alt_inst/usr file system.
Creating /alt_inst/var file system.
Generating a list of files
for backup and restore into the alternate file system...
Backing-up the rootvg files and restoring them to the alternate file system...
x 0 ./
x 0 ./.SPOT
x 0 ./.SPOT/usr
x 0 ./.SPOT/usr/sys
x 0 ./.SPOT/usr/sys/inst.images
x 0 ./.image.data.removetag
x 29 ./.rhosts
x 702 ./.sh_history
x 0 ./530images
x 0 ./ALT_MIG_IMAGES
x 0 ./ALT_MIG_SPOT
x 0 ./audit
x 8 ./bin
x 0 ./dev
x 0 ./dev/.SRC-unix
x 0 ./dev/IPL_rootvg
x 0 ./dev/__vg10
x 0 ./dev/__vg42
x 0 ./dev/alt_hd1
x 0 ./dev/alt_hd10opt
x 0 ./dev/alt_hd11admin
x 0 ./dev/alt_hd2
x 0 ./dev/alt_hd3
x 0 ./dev/alt_hd4
x 0 ./dev/alt_hd5
x 0 ./dev/alt_hd6
x 0 ./dev/alt_hd8
x 0 ./dev/alt_hd9var
x 0 ./dev/alt_lg_dumplv
x 0 ./dev/altinst_rootvg
x 0 ./dev/audit
x 0 ./dev/cd0
.
.
.
. <deleted numerous lines for brevity>
.
.
.
Phase 1 complete.

+-------------------------------------------------------------------+
Executing nimadm phase 2.
+-------------------------------------------------------------------+
Exporting alt_inst filesystems from client cooliobso.austin.ibm.com
to NIM master lucidbso.austin.ibm.com:
Exporting /alt_inst from client.
Exporting /alt_inst/admin from client.
Exporting /alt_inst/home from client.
Exporting /alt_inst/opt from client.
Exporting /alt_inst/tmp from client.
Exporting /alt_inst/usr from client.
Exporting /alt_inst/var from client.

+-------------------------------------------------------------------+
Executing nimadm phase 3.
+-------------------------------------------------------------------+
NFS mounting client's alt_inst filesystems on the NIM master:
Mounting cooliobso.austin.ibm.com:/alt_inst.
Mounting cooliobso.austin.ibm.com:/alt_inst/admin.
Mounting cooliobso.austin.ibm.com:/alt_inst/home.
Mounting cooliobso.austin.ibm.com:/alt_inst/opt.
Mounting cooliobso.austin.ibm.com:/alt_inst/tmp.
Mounting cooliobso.austin.ibm.com:/alt_inst/usr.
Mounting cooliobso.austin.ibm.com:/alt_inst/var.

+-------------------------------------------------------------------+
Executing nimadm phase 4.
+-------------------------------------------------------------------+
nimadm: There is no user customization script specified for this phase.

+-------------------------------------------------------------------+
Executing nimadm phase 5.
+-------------------------------------------------------------------+
Saving system configuration files.
Checking for initial required migration space.
Setting up for base operating system restore.
/tmp
Restoring base operating system.
New volume on /export/lpp_source/6100_04_lpp/installp/ppc/bos:
Cluster 51200 bytes (100 blocks).
Volume number 1
Date of backup: Wed Sep 23 15:05:49 2009
Files backed up by name
User BUILD
x 0 ./
x 6176 ./bosinst.data
x 15079 ./image.data
x 8102 ./lpp_name
x 0 ./usr
x 0 ./usr/lpp
x 0 ./usr/lpp/bos
x 652832 ./usr/lpp/bos/liblpp.a
.
.
.
. <deleted numerous lines for brevity>
.
.
.

x 24 ./usr/lib/libmlsenc.a
x 155417 ./usr/ccs/lib/libmls.a
x 21 ./usr/lib/libmls.a
total size: 121131532
files restored: 1832
Merging system configuration files.
Running migration merge method: ODM_merge Config_Rules.
Running migration merge method: ODM_merge SRCextmeth.
Running migration merge method: ODM_merge SRCsubsys.
Running migration merge method: ODM_merge SWservAt.
Running migration merge method: ODM_merge pse.conf.
Running migration merge method: ODM_merge vfs.
Running migration merge method: ODM_merge xtiso.conf.
Running migration merge method: ODM_merge PdAtXtd.
Running migration merge method: ODM_merge PdDv.
Running migration merge method: convert_errnotify.
Running migration merge method: passwd_mig.
Running migration merge method: login_mig.
Running migration merge method: user_mrg.
Running migration merge method: secur_mig.
Running migration merge method: RoleMerge.
Running migration merge method: methods_mig.
Running migration merge method: mkusr_mig.
Running migration merge method: group_mig.
Running migration merge method: ldapcfg_mig.
Running migration merge method: ldapmap_mig.
Running migration merge method: convert_errlog.
Running migration merge method: ODM_merge GAI.
Running migration merge method: ODM_merge PdAt.
Running migration merge method: merge_smit_db.
Running migration merge method: ODM_merge fix.
Running migration merge method: merge_swvpds.
Running migration merge method: SysckMerge.

+-------------------------------------------------------------------+
Executing nimadm phase 6.
+-------------------------------------------------------------------+
Installing and migrating software.
Checking space requirements for installp install.
Expanding /alt_inst/opt client filesystem.
Filesystem size changed to 262144
Expanding /alt_inst/usr client filesystem.
Filesystem size changed to 3276800
Expanding /alt_inst/var client filesystem.
Filesystem size changed to 262144
Installing software with the installp installer.
+-------------------------------------------------------------------+
Pre-installation Verification...
+-------------------------------------------------------------------+
Verifying selections...done
Verifying requisites...done
Results...

WARNINGS
--------
Problems described in this section are not likely to be the source of any
immediate or serious failures, but further actions may be necessary or
desired.

Already Installed
-----------------
The number of selected filesets that are either already installed
or effectively installed through superseding filesets is 43. See
the summaries at the end of this installation for details.

NOTE: Base level filesets may be reinstalled using the "Force"
option (-F flag), or they may be removed, using the deinstall or
"Remove Software Products" facility (-u flag), and then reinstalled.

<< End of Warning Section >>

SUCCESSES
---------
Filesets listed in this section passed pre-installation verification
and will be installed.
-- Filesets are listed in the order in which they will be installed.
-- The reason for installing each fileset is indicated with a keyword
in parentheses and explained by a "Success Key" following this list.
-- If a fileset has requisites they are listed (indented)
beneath the fileset.

xlC.sup.aix50.rte 9.0.0.1 (Selected)
XL C/C++ Runtime for AIX 5.2
Requisites:
bos.rte 6.1.4.0 (INSTALLED)
bos.rte.Dt 6.1.2.0 (INSTALLED)
bos.rte.ILS 6.1.4.0 (INSTALLED)
bos.rte.SRC 6.1.4.0 (INSTALLED)
bos.rte.X11 6.1.0.0 (INSTALLED)
bos.rte.aio 6.1.4.0 (INSTALLED)
bos.rte.archive 6.1.4.0 (INSTALLED)
bos.rte.bind_cmds 6.1.4.0 (INSTALLED)
bos.rte.boot 6.1.4.0 (INSTALLED)
bos.rte.bosinst 6.1.4.0 (INSTALLED)
bos.rte.commands 6.1.4.0 (INSTALLED)
bos.rte.compare 6.1.4.0 (INSTALLED)
bos.rte.console 6.1.4.0 (INSTALLED)
bos.rte.control 6.1.4.0 (INSTALLED)
bos.rte.cron 6.1.4.0 (INSTALLED)
bos.rte.date 6.1.4.0 (INSTALLED)
bos.rte.devices 6.1.4.0 (INSTALLED)
bos.rte.devices_msg 6.1.4.0 (INSTALLED)
bos.rte.diag 6.1.4.0 (INSTALLED)
bos.rte.edit 6.1.4.0 (INSTALLED)
bos.rte.filesystem 6.1.4.0 (INSTALLED)
bos.rte.iconv 6.1.4.0 (INSTALLED)
bos.rte.ifor_ls 6.1.1.0 (INSTALLED)
bos.rte.im 6.1.0.0 (INSTALLED)
bos.rte.install 6.1.4.0 (INSTALLED)
bos.rte.jfscomp 6.1.4.0 (INSTALLED)
bos.rte.libc 6.1.4.0 (INSTALLED)
bos.rte.libcfg 6.1.4.0 (INSTALLED)
bos.rte.libcur 6.1.4.0 (INSTALLED)
bos.rte.libdbm 6.1.0.0 (INSTALLED)
bos.rte.libnetsvc 6.1.0.0 (INSTALLED)
bos.rte.libpthreads 6.1.4.0 (INSTALLED)
bos.rte.libqb 6.1.4.0 (INSTALLED)
bos.rte.libs 6.1.0.0 (INSTALLED)
bos.rte.loc 6.1.4.0 (INSTALLED)
bos.rte.lvm 6.1.4.0 (INSTALLED)
bos.rte.man 6.1.4.0 (INSTALLED)
bos.rte.methods 6.1.4.0 (INSTALLED)
bos.rte.misc_cmds 6.1.4.0 (INSTALLED)
bos.rte.mlslib 6.1.4.0 (INSTALLED)
bos.rte.net 6.1.4.0 (INSTALLED)
bos.rte.odm 6.1.4.0 (INSTALLED)
bos.rte.printers 6.1.4.0 (INSTALLED)
bos.rte.security 6.1.4.0 (INSTALLED)
bos.rte.serv_aid 6.1.4.0 (INSTALLED)
bos.rte.shell 6.1.4.0 (INSTALLED)
bos.rte.streams 6.1.4.0 (INSTALLED)
bos.rte.tty 6.1.4.0 (INSTALLED)
xlC.aix61.rte 10.1.0.2 (TO BE INSTALLED)
xlC.rte 10.1.0.2 (TO BE INSTALLED)
.
.
.
. <deleted numerous lines for brevity>
.
.
.
Success Key:
Selected -- Explicitly selected by user for installation.
Maintenance -- Maintenance Level fileset update; being installed
automatically to enable the level of the system to be
tracked.
Mandatory -- Considered to be important to the system; will always
be installed when detected on the installation media.
Requisite -- Requisite of other filesets being installed.
P_Requisite -- Previously installed fileset's requisite; being installed
automatically now to ensure system's consistency. (Only
installed automatically when "auto-install" (-g flag)
is specified.)
Supersedes -- Superseding fileset update; not selected, chosen instead
of an older, selected update. (Only chosen in this fashion
when "auto-install" is specified (-g flag)).

<< End of Success Section >>

FILESET STATISTICS
------------------
590 Selected to be installed, of which:
547 Passed pre-installation verification
43 Already installed (directly or via superseding filesets)
5 Additional requisites to be automatically installed
----
552 Total to be installed

+-------------------------------------------------------------------+
Installing Software...
+-------------------------------------------------------------------+

installp: APPLYING software for:
xlC.sup.aix50.rte 9.0.0.1


. . . . . << Copyright notice for xlC.sup.aix50.rte >> . . . . . . .
Licensed Materials - Property of IBM

5724S7100
Copyright IBM Corp. 1991, 2007.
Copyright AT&T 1984, 1985, 1986, 1987, 1988, 1989.
Copyright Unix System Labs, Inc., a subsidiary of Novell, Inc. 1993.
All Rights Reserved.
IBM is a registered trademark of IBM Corp. in the U.S.,
other countries or both.
US Government Users Restricted Rights - Use, duplication or disclosure
restricted by GSA ADP Schedule Contract with IBM Corp.
. . . . . << End of copyright notice for xlC.sup.aix50.rte >>. . . .

Filesets processed: 1 of 552 (Total time: 21 secs).
.
.
.
. <deleted numerous lines for brevity>
.
.
.
Finished processing all filesets. (Total time: 55 mins 39 secs).

+-------------------------------------------------------------------+
Pre-commit Verification...
+-------------------------------------------------------------------+
Verifying requisites...done
Results...

SUCCESSES
---------
Filesets listed in this section passed pre-commit verification
and will be committed.
-- Filesets are listed in the order in which they will be committed.
-- The reason for committing each fileset is indicated with a keyword
in parentheses and explained by a "Success Key" following this list.
-- If a fileset has requisites they are listed (indented)
beneath the fileset.

bos.alt_disk_install.rte 6.1.4.2 (Selected)
Alternate Disk Installation Runtime

devices.common.IBM.ml 1.4.0.1 (Selected)
Multi Link Interface Runtime

Success Key:
Selected -- Explicitly selected by user to be committed.
Maintenance -- Maintenance Level fileset update; being committed
automatically to preserve the ability to track the level
of the system (prevents rejecting).
Requisite -- Requisite of filesets being committed; requisites are always
committed when "auto-commit" (-g flag) is specified.

<< End of Success Section >>

+-------------------------------------------------------------------+
Committing Software...
+-------------------------------------------------------------------+

installp: COMMITTING software for:
bos.alt_disk_install.rte 6.1.4.2

Filesets processed: 1 of 2 (Total time: 55 mins 39 secs).

installp: COMMITTING software for:
devices.common.IBM.ml 1.4.0.1

Finished processing all filesets. (Total time: 55 mins 40 secs).

Some configuration files could not be automatically merged into the system
during the installation. The previous versions of these files have been
saved in a configuration directory as listed below. Compare the saved files
and the newly installed files to determine if you need to recover
configuration data. Consult product documentation to determine how to
merge the data.

Configuration files which were saved in /lpp/save.config:
/etc/3270.keys
/etc/3270keys.hft
/etc/aixmibd.conf
/etc/bootptab
/etc/hostmibd.conf
/etc/inetd.conf
/etc/mail/sendmail.cf
/etc/map3270
/etc/mh/MailAliases
/etc/mh/components
/etc/mh/digestcomps
/etc/mh/distcomps
/etc/mh/forwcomps
/etc/mh/maildelivery
/etc/mh/mhl.digest
/etc/mh/mhl.format
/etc/mh/mhl.forward
/etc/mh/mhl.reply
/etc/mh/mtstailor
/etc/mh/rcvdistcomps
/etc/mh/repl.filter
/etc/mh/replcomps
/etc/mh/scan.size
/etc/mh/scan.time
/etc/mh/scan.timely
/etc/mh/x400.comp
/etc/mib.defs
/etc/nfs.clean
/etc/ntp.conf
/etc/rc.bsdnet
/etc/rc.net
/etc/rc.nfs
/etc/rc.tcpip
/etc/rpc
/etc/services
/etc/slip.hosts
/etc/slp.conf
/etc/snmpd.conf
/etc/snmpd.peers
/etc/snmpmibd.conf
/etc/syslog.conf
/etc/telnet.conf

Configuration files which were saved in /usr/lpp/save.config:
/usr/lpp/X11/bin/dynamic_ext
/usr/lpp/X11/defaults/xinitrc
/usr/lpp/X11/defaults/xserverrc
/usr/lpp/X11/lib/X11/XpConfig/C/print/models/HPDJ1600C/model-config
/usr/lpp/X11/lib/X11/XpConfig/C/print/models/HPLJ4family/model-config
/usr/lpp/X11/lib/X11/XpConfig/C/print/models/PSdefault/model-config
/usr/lpp/X11/lib/X11/app-defaults/Msmit
/usr/lpp/X11/lib/X11/app-defaults/XTerm
/usr/lpp/X11/lib/X11/xdm/Xresources
/usr/lpp/X11/lib/X11/xdm/Xsession
/usr/lpp/X11/lib/X11/xdm/xdm-config

Please wait...

/usr/sbin/rsct/install/bin/ctposti
0513-071 The ctcas Subsystem has been added.
0513-071 The ctrmc Subsystem has been added.
done
+-------------------------------------------------------------------+
Summaries:
+-------------------------------------------------------------------+

Pre-installation Failure/Warning Summary
----------------------------------------
Name Level Pre-installation Failure/Warning
----------------------------------------------------------------------
invscout.ldb 2.2.0.2 Already installed
bos.rte 6.1.4.0 Already installed
bos.rte.Dt 6.1.2.0 Already installed
bos.rte.ILS 6.1.4.0 Already installed
bos.rte.SRC 6.1.4.0 Already installed
bos.rte.aio 6.1.4.0 Already installed
bos.rte.archive 6.1.4.0 Already installed
bos.rte.bind_cmds 6.1.4.0 Already installed
bos.rte.boot 6.1.4.0 Already installed
bos.rte.bosinst 6.1.4.0 Already installed
bos.rte.commands 6.1.4.0 Already installed
bos.rte.compare 6.1.4.0 Already installed
bos.rte.console 6.1.4.0 Already installed
bos.rte.control 6.1.4.0 Already installed
bos.rte.cron 6.1.4.0 Already installed
bos.rte.date 6.1.4.0 Already installed
bos.rte.devices 6.1.4.0 Already installed
bos.rte.devices_msg 6.1.4.0 Already installed
bos.rte.diag 6.1.4.0 Already installed
bos.rte.edit 6.1.4.0 Already installed
bos.rte.filesystem 6.1.4.0 Already installed
bos.rte.iconv 6.1.4.0 Already installed
bos.rte.ifor_ls 6.1.1.0 Already installed
bos.rte.install 6.1.4.0 Already installed
bos.rte.jfscomp 6.1.4.0 Already installed
bos.rte.libc 6.1.4.0 Already installed
bos.rte.libcfg 6.1.4.0 Already installed
bos.rte.libcur 6.1.4.0 Already installed
bos.rte.libpthreads 6.1.4.0 Already installed
bos.rte.loc 6.1.4.0 Already installed
bos.rte.lvm 6.1.4.0 Already installed
bos.rte.man 6.1.4.0 Already installed
bos.rte.methods 6.1.4.0 Already installed
bos.rte.misc_cmds 6.1.4.0 Already installed
bos.rte.net 6.1.4.0 Already installed
bos.rte.odm 6.1.4.0 Already installed
bos.rte.printers 6.1.4.0 Already installed
bos.rte.security 6.1.4.0 Already installed
bos.rte.serv_aid 6.1.4.0 Already installed
bos.rte.shell 6.1.4.0 Already installed
bos.rte.streams 6.1.4.0 Already installed
bos.rte.tty 6.1.4.0 Already installed


Installation Summary
--------------------
Name Level Part Event Result
-----------------------------------------------------------------------
xlC.sup.aix50.rte 9.0.0.1 USR APPLY SUCCESS
xlC.aix61.rte 10.1.0.2 USR APPLY SUCCESS
wio.fcp 6.1.4.0 USR APPLY SUCCESS
.
.
.
. <deleted numerous lines for brevity>
.
.
.
devices.common.IBM.ml 1.4.0.1 USR COMMIT SUCCESS
devices.common.IBM.ml 1.4.0.1 ROOT COMMIT SUCCESS
rm: Directory /tmp/inutmpU6u5ia is not empty.

Checking space requirements for rpm install.
Installing software with the rpm installer.
package mkisofs-1.13-4 is already installed
install_all_updates: Initializing system parameters.
install_all_updates: Log file is /var/adm/ras/install_all_updates.log
install_all_updates: Checking for updated install utilities on media.
install_all_updates: Processing media.
install_all_updates: Generating list of updatable installp filesets.
#---------------------------------------------------------------------
# No filesets on the media could be used to update the currently
# installed software.
#
# Either the software is already at the same level as on the media, or
# the media contains only filesets which are not currently installed.
#---------------------------------------------------------------------

install_all_updates: Generating list of updatable rpm packages.
install_all_updates: No updatable rpm packages found.

install_all_updates: Checking for recommended maintenance level 6100-04.
install_all_updates: Executing /usr/bin/oslevel -rf, Result = 6100-04
install_all_updates: Verification completed.
install_all_updates: Log file is /var/adm/ras/install_all_updates.log
install_all_updates: Result = SUCCESS
Restoring device ODM database.

+-------------------------------------------------------------------+
Executing nimadm phase 7.
+-------------------------------------------------------------------+
nimadm: There is no user customization script specified for this phase.

+-------------------------------------------------------------------+
Executing nimadm phase 8.
+-------------------------------------------------------------------+
Creating client boot image.
bosboot: Boot image is 42633 512 byte blocks.
Writing boot image to client's alternate boot disk hdisk1.

+-------------------------------------------------------------------+
Executing nimadm phase 9.
+-------------------------------------------------------------------+
Unmounting client mounts on the NIM master.
forced unmount of /cooliobso_alt/alt_inst/var
forced unmount of /cooliobso_alt/alt_inst/usr
forced unmount of /cooliobso_alt/alt_inst/tmp
forced unmount of /cooliobso_alt/alt_inst/opt
forced unmount of /cooliobso_alt/alt_inst/home
forced unmount of /cooliobso_alt/alt_inst/admin
forced unmount of /cooliobso_alt/alt_inst

+-------------------------------------------------------------------+
Executing nimadm phase 10.
+-------------------------------------------------------------------+
Unexporting alt_inst filesystems on client cooliobso.austin.ibm.com:

+-------------------------------------------------------------------+
Executing nimadm phase 11.
+-------------------------------------------------------------------+
Cloning altinst_rootvg on client, Phase 3.
Client alt_disk_install command: alt_disk_copy -V -M 6.1 -P3 -d "hdisk1"
## Phase 3 ###################
Verifying altinst_rootvg...
Modifying ODM on cloned disk.
forced unmount of /alt_inst/var
forced unmount of /alt_inst/usr
forced unmount of /alt_inst/tmp
forced unmount of /alt_inst/opt
forced unmount of /alt_inst/home
forced unmount of /alt_inst/admin
forced unmount of /alt_inst
Changing logical volume names in volume group descriptor area.
Fixing LV control blocks...
Fixing file system superblocks...
0505-122 Warning: alt_blvset failed.
Bootlist is set to the boot disk: hdisk1 blv=hd5

+-------------------------------------------------------------------+
Executing nimadm phase 12.
+-------------------------------------------------------------------+
Cleaning up alt_disk_migration on the NIM master.
Cleaning up alt_disk_migration on client cooliobso.




The log on the NIM client is at
/var/adm/ras/alt_disk_inst.log

#############################################
Mon Feb 15 08:41:17 CST 2010
cmd: /ALT_MIG_SPOT/sbin/alt_disk_copy -V -M 6.1 -P1 -d hdisk1
Calling mkszfile to create new /image.data file.
Checking disk sizes.
Creating cloned rootvg volume group and associated logical volumes.
Creating logical volume alt_hd5.
Creating logical volume alt_hd6.
Creating logical volume alt_hd8.
Creating logical volume alt_hd4.
Creating logical volume alt_hd2.
Creating logical volume alt_hd9var.
Creating logical volume alt_hd3.
Creating logical volume alt_hd1.
Creating logical volume alt_hd10opt.
Creating logical volume alt_lg_dumplv.
Creating logical volume alt_hd11admin.
Creating /alt_inst/ file system.
Creating /alt_inst/admin file system.
Creating /alt_inst/home file system.
Creating /alt_inst/opt file system.
Creating /alt_inst/tmp file system.
Creating /alt_inst/usr file system.
Creating /alt_inst/var file system.
Generating a list of files
for backup and restore into the alternate file system...
Backing-up the rootvg files and restoring them to the alternate file system...
Phase 1 complete.
## Phase 3 ###################
##################################################
Mon Feb 15 10:12:14 CST 2010
cmd: /ALT_MIG_SPOT/sbin/alt_disk_copy -V -M 6.1 -P3 -d hdisk1
Verifying altinst_rootvg...
Modifying ODM on cloned disk.
Changing logical volume names in volume group descriptor area.
Fixing LV control blocks...
Fixing file system superblocks...
0505-122 Warning: alt_blvset failed.
Bootlist is set to the boot disk: hdisk1 blv=hd5



Debug techniques if nimadm fails

If nimadm fails, first look at the logs to determine the cause of the failure. If the logs do not contain enough information to identify the problem, it may be necessary to run nimadm in debug mode.
Debug mode can be enabled with the -D option on the command line, or by setting Debug output? to yes in smitty nimadm.

Many nimadm problems can be resolved by using disk caching, which eliminates issues caused by slow networks and/or NFS. Enable disk caching with the -j option, specifying the volume group on the NIM master that nimadm should use. If you are using smitty nimadm, specify the volume group in the DISK CACHE volume group name field.
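As a sketch, a cache-enabled debug run can be assembled like this; the client, SPOT, lpp_source, and volume group names below are hypothetical placeholders, not values from this document:

```shell
# All names here are hypothetical placeholders -- substitute your own.
CLIENT=myclient            # NIM client to migrate
SPOT=spot_6100_04          # SPOT at the target AIX level
LPP=lpp_6100_04            # lpp_source at the target AIX level
CACHE_VG=nimadmvg          # volume group on the NIM master for disk caching (-j)

# -D enables debug output; drop it for a normal run.
# Echoed here as a dry run -- remove 'echo' to execute on a real NIM master.
echo nimadm -j "$CACHE_VG" -c "$CLIENT" -s "$SPOT" -l "$LPP" -d hdisk1 -Y -D
```

The -Y flag agrees to required software license agreements during the migration; -d names the free target disk on the client.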

Because this is a migration, verify that the system is in a consistent state before performing the nimadm.
On the client, run the following commands:
# oslevel -s (reports the level the system is currently at)
# oslevel -sq (the highest level in the output should match the oslevel -s output above)
# lppchk -v (should return to the prompt with no output)
# lsvg rootvg | grep SIZE (verify the size shown is at least 32 megabytes)
# bosboot -ad /dev/ipldevice (verify the boot image is created and there are no errors)
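The oslevel comparison above can be sketched as a small helper; the levels fed in here are illustrative sample strings standing in for the oslevel outputs, so the logic can be followed without an AIX system:

```shell
# Compare 'oslevel -s' output with the highest line of 'oslevel -sq'.
# On a real client you would capture these from the commands above;
# the sample levels below are illustrative only.
check_levels() {
    # $1 = output of 'oslevel -s', $2 = highest line of 'oslevel -sq'
    if [ "$1" = "$2" ]; then
        echo MATCH
    else
        echo MISMATCH
    fi
}

check_levels "6100-04-01-0944" "6100-04-01-0944"   # system is at the latest known level
check_levels "6100-03-05-0921" "6100-04-01-0944"   # updates are incomplete
```

A MISMATCH result means updates should be completed before attempting the migration.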

For additional checks you may want to perform before the nimadm, review the Preparing to Migrate document at
http://www-01.ibm.com/support/docview.wss?uid=isg3T1011431

If you have a pre-migration script resource defined on your NIM master, you can specify it to be run; it is executed during phase 4 of the nimadm process.

If you get an error similar to the following:
0505-205 nimadm: The level of bos.alt_disk_install.rte installed in SPOT <spot> (0.0.0.0) does not match the NIM master's level
You will need to verify that the level of bos.alt_disk_install.rte on the master matches the level in the SPOT.
To check the master:
# lslpp -l bos.alt_disk_install.rte
To check the SPOT:
# nim -o showres <spot> | grep bos.alt_disk_install.rte
If the levels are not the same (or if the SPOT does not contain the fileset), install or update it with smitty nim_inst_all.
Select the SPOT and an lpp_source that contains the same fileset level as the master, then for Software to Install enter bos.alt_disk_install.rte.
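The comparison of the two outputs amounts to a simple string check; the levels below are sample values, and on a live system you would capture them from the two commands above:

```shell
# Sample levels standing in for the 'Level' column of the two commands above
# -- these are illustrative values, not taken from a real system.
master_level="6.1.4.0"     # from: lslpp -l bos.alt_disk_install.rte
spot_level="6.1.4.0"       # from: nim -o showres <spot> | grep bos.alt_disk_install.rte

if [ "$master_level" = "$spot_level" ]; then
    echo "bos.alt_disk_install.rte levels match ($master_level)"
else
    echo "Level mismatch: master $master_level vs SPOT $spot_level -- update the SPOT"
fi
```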

Although many different errors are possible with nimadm, having the correct level of bos.alt_disk_install.rte in the SPOT, performing the pre-migration checks, and using the cache option will eliminate most of them. For anything else, debug output will probably be needed for further analysis.

Document Information

More support for:
AIX

Software version:
5.3, 6.1, 7.1

Operating system(s):
AIX

Document number:
670795

Modified date:
17 June 2018

UID

isg3T1012571