PACKAGE: Update Release 3.1.1.10
IOSLEVEL: 3.1.1.10

| VIOS level is           | The AIX level of the NIM Master must be equal to or higher than |
| Update Release 3.1.1.10 | AIX 7200-04-01                                                  |
In June
2015, VIOS introduced the minipack as a new service stream delivery vehicle as
well as a change to the VIOS fix level numbering scheme. Please refer to the
VIOS Maintenance Strategy here
for more details regarding the change to the VIOS release numbering scheme.
Be sure to
heed all minimum space requirements before installing.
Review the list of fixes included
in Update Release 3.1.1.10.
To take full
advantage of all the functions available in the VIOS, it may be necessary to be
at the latest system firmware level. If a system firmware update is necessary,
it is recommended that the firmware be updated before you update the VIOS to
Update Release 3.1.1.10.
Microcode or system firmware
downloads for Power Systems
Update
Release 3.1.1.10
updates your VIOS partition to ioslevel 3.1.1.10.
To determine if Update Release 3.1.1.10
is already installed, run the following command from the VIOS command line.
$
ioslevel
If Update
Release 3.1.1.10
is installed, the command output is 3.1.1.10.
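The check above is easy to script. The following is a minimal sketch; the hard-coded current value stands in for real ioslevel output, since ioslevel only exists on a VIOS:

```shell
# Compare the running ioslevel against the target Update Release.
# On a real VIOS you would capture this with: current=$(ioslevel)
target="3.1.1.10"
current="3.1.1.10"   # sample value standing in for `ioslevel` output

if [ "$current" = "$target" ]; then
    echo "Update Release $target is already installed"
else
    echo "Update needed: running $current, target $target"
fi
```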
VIOS 3.1.1.10 adds support for multiple IP addresses and disk-based
internode communication for Shared Storage Pool. Multiple IP addresses and disk communication
add interfaces that allow the cluster to be more resilient in communicating
with other nodes.
Note: The
primary IP address and hostname are required for making any cluster
configuration changes such as cluster create, add node, remove node, etc. To
change the primary hostname or IP address, the user must remove
the node first, configure that node with the new primary hostname or IP
address, and then add that node back into the cluster.
In the
upgrade from 2.X.X.X to 3.X.X.X, a change was made to the SSP database
manager. To ensure a non-disruptive
upgrade of an SSP cluster from a 2.X.X.X version to a 3.X.X.X version of VIOS,
the user must first upgrade all nodes in the cluster to the latest version of
2.2.6.X, and then wait until the rolling upgrade completes. At that point, users can install version
3.1.1.X (or any 3.X.X.X version of VIOS) and continue to use their cluster as
normal.
Note: A 3.X.X.X node cannot join an SSP cluster that contains any node
below level 2.2.6.31. Even in a cluster whose nodes are all at 2.2.6.31 or
higher, 3.X.X.X nodes cannot join until the rolling upgrade completes and
the nodes report that they are ON_LEVEL.
Customers
who do not use Shared Storage Pools are unaffected by this change.
VIOS
3.1.1.10 can run on any of the following Power Systems:
POWER7 or
later.
The following requirements
and limitations apply to Shared Storage Pool (SSP) features and any associated
virtual storage enhancements.
Software Installation
SSP
Configuration
| Feature                                      | Min    | Max     | Special* |
|----------------------------------------------|--------|---------|----------|
| Number of VIOS Nodes in Cluster              | 1      | 16      | 24       |
| Number of Physical Disks in Pool             | 1      | 1024    |          |
| Number of Virtual Disk (LU) Mappings in Pool | 1      | 8192    |          |
| Number of Client LPARs per VIOS Node         | 1      | 250     | 400      |
| Capacity of Physical Disks in Pool           | 10 GB  | 16 TB   |          |
| Storage Capacity of Storage Pool             | 10 GB  | 512 TB  |          |
| Capacity of a Virtual Disk (LU) in Pool      | 1 GB   | 4 TB    |          |
| Number of Repository Disks                   | 1      | 1       |          |
| Capacity of Repository Disk                  | 512 MB | 1016 GB |          |
| Number of Client LPARs per Cluster           | 1      | 2000    |          |
Network Configuration
Storage Configuration
Shared Storage Pool capabilities and limitations
Please ensure
that your rootvg contains at least 30 GB and
that there is at least 4 GB of free space before you attempt to update to
Update Release 3.1.1.10. Run the lsvg rootvg command, and then confirm that
there is enough free space.
Example:
$ lsvg rootvg
VOLUME GROUP:       rootvg                   VG IDENTIFIER:  00f6004600004c000000014306a3db3d
VG STATE:           active                   PP SIZE:        64 megabyte(s)
VG PERMISSION:      read/write               TOTAL PPs:      511 (32704 megabytes)
MAX LVs:            256                      FREE PPs:       64 (4096 megabytes)
LVs:                14                       USED PPs:       447 (28608 megabytes)
OPEN LVs:           12                       QUORUM:         2 (Enabled)
TOTAL PVs:          1                        VG DESCRIPTORS: 2
STALE PVs:          0                        STALE PPs:      0
ACTIVE PVs:         1                        AUTO ON:        yes
MAX PPs per VG:     32512
MAX PPs per PV:     1016                     MAX PVs:        32
LTG size (Dynamic): 256 kilobyte(s)          AUTO SYNC:      no
HOT SPARE:          no                       BB POLICY:      relocatable
PV RESTRICTION:     none                     INFINITE RETRY: no
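The free-space check can be automated by parsing the lsvg output; a sketch follows, with a sample line from the example standing in for the real command output (on a VIOS you would pipe `lsvg rootvg` directly into awk):

```shell
# Extract the free megabytes reported by `lsvg rootvg` and compare against
# the 4 GB (4096 MB) minimum. The here-doc stands in for real output.
free_mb=$(awk -F'[()]' '/FREE PPs:/ {split($2, a, " "); print a[1]}' <<'EOF'
MAX LVs:            256                      FREE PPs:       64 (4096 megabytes)
EOF
)
if [ "$free_mb" -ge 4096 ]; then
    echo "OK: ${free_mb} MB free in rootvg"
else
    echo "Not enough free space: ${free_mb} MB"
fi
```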
A single, merged lpp_source is not supported
for a VIOS that uses SDDPCM. However, if you use SDDPCM, you can still enable a
single boot update by using the alternate method described at the following
location:
SDD and SDDPCM migration
procedures when migrating VIOS from version 1.x to version 2.x
Virtual I/O Server support for Power Systems
VIOS Update Release 3.1.1.10 may be applied directly to any VIOS
at level 3.1.1.00.
The VIOS must first be upgraded to 3.1.1.00 before the 3.1.1.10
service pack update can be applied. To
learn more about how to do that, please read the information provided here.
Warning: The update may fail if
there is a loaded media repository.
To check for a loaded media repository, and then unload it,
follow these steps.
1.
To check for loaded images, run the following command:
$ lsvopt
The Media column lists any loaded media.
2.
To unload media images, run the following commands on all
Virtual Target Devices that have loaded images.
$ unloadopt -vtd
<file-backed_virtual_optical_device>
3.
To verify that all media are unloaded, run the following command
again.
$ lsvopt
The command output should show No
Media for all VTDs.
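Steps 1 and 2 can be combined into a small helper that turns lsvopt output into the unloadopt commands to run. A sketch using sample output (the device and image names are made up for illustration):

```shell
# From sample `lsvopt` output, list the virtual target devices that still
# have media loaded and print the matching unloadopt command for each.
# On a real VIOS you would pipe `lsvopt` into this awk program.
awk 'NR > 1 && $2 != "No" { print "unloadopt -vtd " $1 }' <<'EOF'
VTD             Media                   Size(mb)
vtopt0          vios_update.iso             640
vtopt1          No Media                    n/a
EOF
```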
The Virtual I/O Server (VIOS) Version 2.2.2.1 or later supports
rolling updates for SSP clusters. The VIOS can be updated to Update Release
3.1.1.10 using rolling updates.
A non-disruptive rolling update to VIOS 3.1 requires all SSP
nodes to be at VIOS 2.2.6.31 or later. See the detailed instructions in the
VIOS 3.1 documentation.
The rolling updates enhancement allows the user to apply Update
Release 3.1.1.10 to the VIOS logical partitions in the cluster individually
without causing an outage in the entire cluster. The updated VIOS logical
partitions cannot use the new SSP capabilities until all VIOS logical
partitions in the cluster are updated.
To upgrade the VIOS logical partitions to use the new SSP
capabilities, ensure that the following conditions are met:
·
All VIOS logical partitions must have VIOS Update Release
version 2.2.6.31 or later installed.
·
All VIOS logical partitions must be running. If any VIOS logical
partition in the cluster is not running, the cluster cannot be upgraded to use
the new SSP capabilities.
Instructions: Verify the cluster is
running at the same level as your node.
1.
Run the following command:
$ cluster -status -verbose
2.
Check the Node Upgrade Status field; you should see one of
the following terms:
UP_LEVEL: This means that the software level of the logical partition is higher
than the software level the cluster is running at.
ON_LEVEL: This means the software level of the logical partition and the
cluster are the same.
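Checking every node's status by eye is error-prone on larger clusters; the following sketch counts nodes that are not yet ON_LEVEL, with sample cluster -status -verbose output standing in for the real command (the node names are made up):

```shell
# Count nodes whose Node Upgrade Status is not ON_LEVEL. On a real VIOS
# you would pipe `cluster -status -verbose` into this awk program.
pending=$(awk -F': *' '/Node Upgrade Status/ && $2 != "ON_LEVEL" {n++} END {print n+0}' <<'EOF'
Node Name: viosA
Node Upgrade Status: ON_LEVEL
Node Name: viosB
Node Upgrade Status: UP_LEVEL
EOF
)
echo "Nodes not yet ON_LEVEL: $pending"
```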
There is now a method to verify the VIOS update files before
installation. This process requires that the 'padmin' user has access to
openssl, which can be arranged by creating a link.
Instructions: Verifying VIOS update files.
To verify the VIOS update files, follow these steps:
1.
Escape the restricted shell:
$ oem_setup_env
2.
Create a link to openssl:
# ln -s /usr/bin/openssl /usr/ios/utils/openssl
3.
Verify that the link to openssl was created:
# ls -alL /usr/bin/openssl /usr/ios/utils/openssl
4.
Verify that both files display similar owner and size.
5.
Return to the restricted shell:
# exit
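Once the link exists, openssl can be used to compare an update file's digest against a published value. A sketch of the comparison; here the "published" digest is computed from a second copy of the file purely to exercise the logic, and the file names are made up:

```shell
# Compute the SHA-256 digest of a downloaded update file and compare it
# with the expected value (normally taken from the download site).
printf 'sample update payload' > update.part1
cp update.part1 update.part1.copy        # stands in for the published digest source

d1=$(openssl dgst -sha256 -r update.part1 | awk '{print $1}')
d2=$(openssl dgst -sha256 -r update.part1.copy | awk '{print $1}')

if [ "$d1" = "$d2" ]; then
    echo "digest OK"
else
    echo "digest MISMATCH - do not install"
fi
rm -f update.part1 update.part1.copy
```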
Use one of the following methods to install the latest VIOS
Service Release. As with all maintenance, you should create a VIOS backup
before making changes.
If you are running a Shared Storage Pool configuration, you must
follow the steps in Migrate Shared Storage Pool Configuration.
Note: While running 'updateios' in the
following steps, you may see accessauth messages,
but these messages can safely be ignored.
Version Specific Warning: Version 2.2.2.1, 2.2.2.2, 2.2.2.3, or
2.2.3.1
You must run the updateios command twice to fix the
bos.alt_disk_install.boot_images fileset update problem.
After the "$ updateios -accept -install -dev <directory_name>" step
completes, run the following command:
$ updateios -accept -dev <directory_name>
Depending on the VIOS level, one or more of the LPPs below may
be reported as "Missing Requisites", and they may be ignored.
MISSING REQUISITES:
X11.loc.fr_FR.base.lib 4.3.0.0  # Base Level Fileset
bos.INed 6.1.6.0                # Base Level Fileset
bos.loc.pc.Ja_JP 6.1.0.0        # Base Level Fileset
bos.loc.utf.EN_US 6.1.0.0       # Base Level Fileset
bos.mls.rte 6.1.x.x             # Base Level Fileset
Warning: If VIOS rules have been deployed
During an update, there have been occasional issues with VIOS rules files
getting overwritten and/or system settings getting reset to their default
values.
To ensure that this does not affect you, we recommend making a backup of
the current rules file, which is located here:
/home/padmin/rules/vios_current_rules.xml
First, to capture your current system settings, run this command:
$ rules -o capture
Then, either copy the file to a
backup location, or save off a list of your current rules:
$ rules
-o list > rules_list.txt
After this is complete, proceed to
update as normal. When your update is
complete, check your current rules and ensure that they still match what is
desired. If not, either overwrite the
original rules file with your backup, or proceed to use the ‘rules -o modify’
and/or ‘rules -o add’ commands to change the rules to match what is in your
backup file.
Finally, if you’ve failed to back up
your rules, and are not sure what the rules should be, you can deploy the
recommended VIOS rules by using the following command:
$ rules
-o deploy -d
Then, if you wish to copy these new
VIOS recommended rules to your current rules file, just run:
$ rules
-o capture
Note:
This will overwrite any customized rules in the current rules file.
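The backup-and-compare workflow above can be sketched as follows; a scratch file in the working directory stands in for /home/padmin/rules/vios_current_rules.xml so the steps can be shown end to end:

```shell
# Back up the rules file before the update, then compare afterwards.
rules_file=vios_current_rules.xml   # stands in for /home/padmin/rules/vios_current_rules.xml
printf '<rules><rule name="example"/></rules>\n' > "$rules_file"

cp "$rules_file" "${rules_file}.pre-update"   # backup before updating

# ... the VIOS update runs here; afterwards, compare against the backup ...
if diff -q "$rules_file" "${rules_file}.pre-update" >/dev/null; then
    echo "rules unchanged"
else
    echo "rules differ - restore the backup or re-apply with rules -o modify/add"
fi
rm -f "$rules_file" "${rules_file}.pre-update"
```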
Applying Updates
Warning:
If the target node to be updated is part of a redundant VIOS
pair, the VIOS partner node must be fully operational before beginning to
update the target node.
Note:
For VIOS nodes that are part of an SSP cluster, the partner node
must be shown in 'cluster -status '
output as having a cluster status of OK and a pool status of OK. If the target
node is updated before its VIOS partner is fully operational, client LPARs may
crash.
Instructions:
Applying updates to a VIOS.
1.
Create a directory on the VIOS to hold the update files.
2.
Using ftp, transfer the update file(s) to the directory you
created.
To apply updates from a remotely mounted file system, when the remote file
system is mounted read-only, follow these steps:
The update release can be burned onto a CD by using the ISO
image file(s). To apply updates from the CD/DVD drive, follow these steps:
$ shutdown -restart
Note: If the shutdown -restart command fails, run swrole -PAdmin so that
padmin can set authorization and establish access to the shutdown command
properly.
$ clstartstop -start -n <cluster_name> -m <hostname>
$ ioslevel
Instructions: Checking for an incomplete installation caused by a loaded media
repository.
After installing an Update Release, you can use this method to
determine whether you have encountered the problem of a loaded media
repository.
Check the Media Repository by running this command:
$ lsrep
If the command reports: "Unable to retrieve repository data
due to incomplete repository structure," then you have likely encountered
this problem during the installation. The media images have not been lost and
are still present in the file system of the virtual media library.
Running the lsvopt command
should show the media images.
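Detection of this symptom can be scripted by checking the lsrep output for the error text quoted above; a sketch, with a sample string standing in for real command output:

```shell
# Check captured `lsrep` output for the incomplete-repository error.
# On a real VIOS you would capture this with: lsrep_out=$(lsrep 2>&1)
lsrep_out="Unable to retrieve repository data due to incomplete repository structure"

case "$lsrep_out" in
  *"incomplete repository structure"*)
      echo "Media repository is damaged - follow the recovery steps" ;;
  *)
      echo "Media repository looks OK" ;;
esac
```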
Instructions: Recovering from an incomplete
installation caused by a loaded media repository.
To recover from this type of installation failure, unload any media
repository images, and then reinstall the ios.cli.rte package.
Follow these steps:
1.
Unload any media images
$ unloadopt -vtd
<file-backed_virtual_optical_device>
2.
Reinstall the ios.cli.rte fileset by running the following commands.
To escape the restricted shell:
$ oem_setup_env
To install the failed fileset:
# installp -Or -agX ios.cli.rte -d <device/directory>
To return to the restricted
shell:
# exit
3.
Restart the VIOS.
$ shutdown -restart
4.
Verify that the Media Repository is operational by running this
command:
$ lsrep
| APAR    | Description                                                |
|---------|------------------------------------------------------------|
| IJ20543 | POWER7FTWARE ERROR OR KDB RUNNING DIAG ON EN0H ADAPTER ON  |
| IJ20544 | diag -cvd hung after eeh on bell                           |
| IJ20547 | Kernel crash in net_free() after cpu removal               |
| IJ20548 | AIO may hang with affinity turned off                      |
| IJ20549 | Disable affinity                                           |
| IJ20552 | max_xfer_size negotiation with host                        |
| IJ20624 | AIX may crash with dsi during LPM                          |
| IJ20650 | TEST.OSTIC DISPLAYS RANDOM SFP SPEED TEST AFTER WRAP-PLUG  |
| IJ20765 | installp reinvoke fails with Trusted Execution error       |