Fix Readme
Abstract
Release notes for the 3.1.4.60 VIOS Fix Pack release
Content
VIOS 3.1.4.60 Release Notes
Package Information
PACKAGE: Update Release 3.1.4.60
IOSLEVEL: 3.1.4.60
VIOS level              | The AIX level of the NIM Master must be equal to or higher than
Update Release 3.1.4.60 | AIX 7300-03-01
General package notes
Be sure to heed all minimum space requirements before installing.
Review the list of fixes included in Update Release 3.1.4.60
To take full advantage of all the functions available in the VIOS, it may be necessary to be at the latest system firmware level. If a system firmware update is necessary, it is recommended that the firmware be updated before you update the VIOS to Update Release 3.1.4.60.
Microcode or system firmware downloads for Power Systems
If the VIOS being updated has filesets installed from the VIOS Expansion Pack, be sure to update those filesets with the latest VIOS Expansion Pack if updates are available.
Update Release 3.1.4.60 updates your VIOS partition to ioslevel 3.1.4.60. To determine if Update Release 3.1.4.60 is already installed, run the following command from the VIOS command line.
$ ioslevel
If Update Release 3.1.4.60 is installed, the command output is 3.1.4.60.
Note: The VIOS installation DVDs and the level of VIOS preinstalled on new systems might not contain the latest fixes available. It is highly recommended that customers who receive the physical GA level (for example, 3.1.4.30) update to the electronic GA level (for example, 3.1.4.60) as soon as possible. Missing fixes might be critical to the proper operation of your system. Update these systems to a current service pack level from Fix Central.
For Customers using NVMe Over Fabric (SAN) as their Boot Disk
Booting from an NVMeoF disk may fail if certain fabric errors are returned; therefore, a boot disk that is set up with multiple paths is recommended. If the boot fails, exiting the SMS menu may allow the boot process to continue. Another potential workaround is to discover the boot LUNs from the SMS menu and then retry the boot.
For Customers Using Third Party Java-based Software
This only applies to customers who both use third-party Java-based software and have run updateios -remove_outdated_filesets to remove Java 7 from their system.
To prevent errant behavior, updateios does not modify the /etc/environment file when it runs. If you use software that depends on Java and on having the Java path in the PATH environment variable, make the following edit so that programs that use the PATH environment variable can locate Java 8.
In the /etc/environment file, customers should see:
PATH=[various directories]:/usr/java7_64/jre/bin:/usr/java7_64/bin
To address a potential issue with Java-dependent third party software, this should be converted to:
PATH=[various directories]:/usr/java8_64/jre/bin:/usr/java8_64/bin
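If you prefer to make this change from the command line, the following is a minimal sketch. It assumes Java 8 is installed in /usr/java8_64 and that a backup copy named /etc/environment.bak is acceptable; run it as root after oem_setup_env:
# cp /etc/environment /etc/environment.bak
# sed 's/java7_64/java8_64/g' /etc/environment.bak > /etc/environment
# grep "^PATH=" /etc/environment
Confirm that the PATH line now references /usr/java8_64 before you log out.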
ITM Agents Software
ITM (IBM Tivoli Monitoring) filesets continue to be pre-installed as part of VIOS 3.x. The agents can be updated using one of the methods below:
- To update the agent and shared components (e.g., JRE, GSKit) to the latest levels, download the latest image included in the IBM Tivoli Monitoring System P Agents 6.22 Fix Pack 4 bundle. Here is a link to the readme file that contains information about how to obtain the image and installation instructions.
- To update just the agent shared components (e.g., JRE, GSKit), install the latest ITM service pack.
LDAP fileset updates
For VIOS partitions originally installed at 3.1.4.30 or later, errors updating the idsldap 6.4 filesets can be safely ignored because version 10.0 of the idsldap filesets is already present on the system.
Please ignore the errors below:
installp: APPLYING software for:
idsldap.license64.rte 6.4.0.25
…
…
Error: IBM Security Directory Server License not detected. Install cannot continue.
installp: Failed while executing the idsldap.license64.rte.pre_i script.
0503-464 installp: The installation has FAILED for the "usr" part
of the following filesets:
idsldap.license64.rte 6.4.0.25
installp: Cleaning up software for:
idsldap.license64.rte 6.4.0.25
Please ignore the following entries in the “Installation Summary”:
idsldap.license64.rte 6.4.0.25 USR APPLY FAILED
idsldap.license64.rte 6.4.0.25 USR CLEANUP SUCCESS
idsldap.cltbase64.rte 6.4.0.25 USR APPLY CANCELED
idsldap.cltbase64.adt 6.4.0.25 USR APPLY CANCELED
idsldap.clt64bit64.rte 6.4.0.25 USR APPLY CANCELED
idsldap.clt32bit64.rte 6.4.0.25 USR APPLY CANCELED
Note
- If the Virtual I/O Servers are installed on POWER10 systems and are configured with the 32Gb PCIe4 2-Port FC Adapter (Feature Codes EN1J and EN1K), you must update the adapter microcode to level 7710812214105106.070115 before you update the Virtual I/O Server to the 3.1.4.60 level.
Please refer to the release notes at this link
- HIPER issue: If you are upgrading to VIOS 3.1.4.60, or to version 4.1.1.x or later, please consult this link
3.1.4.60 New Features
VIOS 3.1.4.60 adds the following new features:
viosupgrade enhancements
The major enhancements to viosupgrade in this release are as follows (an illustrative invocation appears after the notes below):
- Enables upgrades on systems that have a mirrored rootvg disk.
- Includes new options, such as the -noprompt flag to answer user prompts in advance in the following scenarios:
- Prompts you to confirm whether to skip logical volumes that cannot be migrated.
- Detects third-party Multipath I/O (MPIO) software and prompts you to confirm whether to proceed with the VIOS upgrade.
- Allows upgrading VIOS by using ISO images in a single step. For example, the -i flag can accept both mksysb and ISO images.
- Retains user account configurations and the home directories automatically.
- Supports all the options available in the standalone version of VIOS in the Network Installation Management (NIM) version.
- Preserves the following security configurations, by default:
- User accounts and groups with login attributes and password.
- Role based access control (RBAC) configurations, such as user roles, authorizations, privileged commands database (privcmds), privileged device database (privdevs), privileged files (privfiles) and domains.
- Login and authentication configurations.
- AIX audit sub system configurations.
- Trusted execution policies and databases.
- IPSec filters and tunnel configurations.
- Lightweight directory access protocol (LDAP) client configurations.
- OpenSSL, OpenSSH and Kerberos configurations.
Notes:
- Some configurations are restored from the previous version, some new configurations are activated on the newer version, and the remaining configurations are merged from the old version into the new version. Configurations for which neither the previous nor the current version is used are saved in the /usr/ios/security/saveconf/viosbr directory. You can review those configuration files and merge them manually.
- These security configurations will be preserved only if you upgrade from VIOS 3.1.4.60 to VIOS 4.1.1.10.
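For illustration only, a local upgrade that uses an ISO image and answers prompts in advance might be invoked as shown below. The image path and target disk are placeholders, and the flags that apply to your environment should be confirmed in the viosupgrade documentation for your level:
$ viosupgrade -l -i /home/padmin/VIOS_install.iso -a hdisk1 -noprompt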
VIOS Shared Storage Pool Logging Enhancements
The two major enhancements for VIOS Shared Storage Pool in this release are as follows:
- The creation of the dbn.log file within a Shared Storage Pool (SSP).
This file tracks all elections and relinquishments of the Database Node (DBN) role, which makes DBN-related problems easier to debug.
- The compression and storage of vio_daemon logs.
The number of logs that can be retained is increased by 15 times with no impact on storage capacity. This is done by compressing old VIOS logs and tagging them with appropriate date and time information, which reduces the risk of logs that contain critical information being overwritten by newer logs.
N_Port ID Virtualization (NPIV) Enhancements: NVMeoF Protocol Support
The NPIV is a standardized method for virtualizing a physical Fibre Channel (FC) port. An NPIV-capable FC host bus adapter (HBA) can have multiple N_Ports, each with a unique identity. The NPIV, coupled with the adapter-sharing capabilities of the Virtual I/O Server (VIOS), allows a physical Fibre Channel HBA to be shared across multiple guest operating systems. The PowerVM implementation of NPIV enables POWER® logical partitions (LPARs) to have virtual fibre channel host bus adapters, each with a dedicated
worldwide port name. Each virtual Fibre Channel HBA has a unique storage area network (SAN) identity similar to that of a dedicated physical HBA.
The Non-Volatile Memory Express over Fabrics (NVMeoF) protocol in the NPIV stack is supported in Virtual I/O Server Version 3.1.4.0. A single virtual adapter provides access to both Small Computer Systems Interface (SCSI) and NVMeoF protocols if the physical adapter can support them. The application, which is running on the client partition and capable of handling the NPIV-NVMeoF protocol, can send I/Os in parallel to SCSI and NVMeoF disks that are coming from a single virtual adapter. The hardware and software requirements for NVMeoF protocol enablement in the NPIV stack are as follows:
- VIOS Version 3.1.4.0, or later
- NPIV-NVMeoF capable client (currently AIX® Version 7.3 Technology Level 01, or later)
- POWER 10 system with firmware version FW 1030, or later
- 32 or 64 GB FC adapters with physical NVMeoF support
VIOS Operating System Monitoring Enhancement
This release adds support for monitoring the VIOS operating system state by the POWER Hypervisor. If the VIOS partition is not responsive (due to certain conditions), the hypervisor restarts the VIOS partition and takes a system dump for debugging purposes. This helps to recover the VIOS partition from errors, for example, when the CPU is held by a highest-priority interrupt and system progress stops. The ioscli viososmon command is added to show the hang detection interval and the action that is taken when a hang is detected. This support requires POWER firmware version FW 1030, or later, and VIOS Version 3.1.4.0, or later.
Support for NFSv4 Mounts on VIOS
The ioscli mount command, which previously supported only the AIX NFSv3 mount by default, is updated to support NFSv4 mounts. The changes allow the VIOS to invoke commands for the following actions:
- Setting Network File System (NFS) domain using chnfsdom from command line interface (CLI)
The setting of the NFS domain is accomplished by adding a Role-based access control (RBAC) support for the chnfsdom command.
- Invoking the NFSv4 mounting
The ioscli mount command is updated to support invoking NFSv4 mounts. The current ioscli mount command defaults to NFSv3. You can invoke the “-o vers=4” behavior with the new “-nfsvers <version> <Node>:<Directory> <Directory>” option that is added to the ioscli mount command. The supported values for the version are 3 and 4 (see the example after this list).
Note: The ioscli mount command supports NFS versions that are supported by the AIX mount command.
- Starting the nfsrgyd daemon
For version 4, if the mount is successful, a check is made to determine whether the nfsrgyd daemon is already running. If it is not, the nfsrgyd daemon is started.
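For illustration, the commands described above might be used as follows (the domain name, server, and directories are placeholders):
$ chnfsdom mycompany.example.com
$ mount -nfsvers 4 nfsserver.example.com:/export/images /mnt
If the NFSv4 mount succeeds and the nfsrgyd daemon is not yet running, the daemon is started automatically, as noted above.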
Hardware Requirements
VIOS 3.1.4.60 can run on any of the following Power Systems:
POWER 8 or later.
Known Capabilities and Limitations
The following requirements and limitations apply to Shared Storage Pool (SSP) features and any associated virtual storage enhancements.
Requirements for Shared Storage Pool
- Platforms: POWER 8 and later (includes Blades), IBM PureFlex Systems (Power Compute Nodes only)
- System requirements per SSP node:
- Minimum CPU: 1 CPU of guaranteed entitlement
- Minimum memory: 4GB
- Storage requirements per SSP cluster (minimum): 1 Fibre Channel attached disk for the repository, 1 GB
- At least 1 Fibre Channel attached disk for data, 10 GB
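A quick way to confirm the CPU and memory minimums on a node is shown below (illustrative; the exact output format may vary by level):
$ lparstat -i | grep "Entitled Capacity"
$ lparstat -i | grep "Online Memory"
Confirm that the entitled capacity is at least 1.00 and that the online memory is at least 4 GB.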
Limitations for Shared Storage Pool
Software Installation
- When you install updates for VIOS Update Release 3.1.4.60 on a node that participates in a Shared Storage Pool, the Shared Storage Pool services must be stopped on the node that is being updated.
SSP Configuration
Feature                                        | Min    | Max
Number of VIOS Nodes in Cluster                | 1      | 16*
Number of Physical Disks in Pool               | 1      | 1024
Number of Virtual Disks (LUs) Mappings in Pool | 1      | 8192
Number of Client LPARs per VIOS node           | 1      | 250*
Capacity of Physical Disks in Pool             | 10GB   | 16TB
Storage Capacity of Storage Pool               | 10GB   | 512TB
Capacity of a Virtual Disk (LU) in Pool        | 1GB    | 4TB
Number of Repository Disks                     | 1      | 1
Capacity of Repository Disk                    | 512MB  | 1016GB
Number of Client LPARs per Cluster             | 1      | 2000
*Support for additional VIOS Nodes and LPAR Mappings:
Prerequisites for expanded support:
- Over 16 VIOS Nodes requires that the SYSTEM (metadata) tier contains only SSD storage.
- Over 250 Client LPARs per VIOS requires that each VIOS has at least 4 CPUs and 8 GB of memory.
Here are the new maximum values for each of these configuration options, if the associated hardware specification has been met:
Feature                              | Default Max | High Spec Max
Number of VIOS Nodes in Cluster      | 16          | 24
Number of Client LPARs per VIOS node | 250         | 400
Other notes:
- Maximum number of physical volumes that can be added to or replaced from a pool at one time: 64
- The Shared Storage Pool cluster name must be less than 63 characters long.
- The Shared Storage Pool pool name must be less than 127 characters long.
- The maximum supported LU size is 4TB; however, for high I/O workloads it is recommended to use multiple smaller LUs because this improves performance. For example, 16 separate 16GB LUs would yield better performance than a single 256GB LU for applications that perform reads and writes to a variety of storage locations concurrently.
- The size of the /var file system should be greater than or equal to 3GB to ensure proper logging; a quick way to check is shown below.
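One way to confirm the size from the root shell (a minimal sketch; run oem_setup_env first, and note that df -g reports sizes in gigabytes):
# df -g /var
If the reported size is less than 3 GB, consider extending /var before you update.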
Network Configuration
- Uninterrupted network connectivity is required for operation; that is, the network interface that is used for the Shared Storage Pool configuration must be on a highly reliable network that is not congested.
- A Shared Storage Pool configuration can use IPv4 or IPv6, but not a combination of both.
- A Shared Storage Pool configuration should configure the TCP/IP resolver routine for name resolution to resolve host names locally first, and then use the DNS. For step by step instructions, refer to the TCP/IP name resolution documentation in the IBM Knowledge Center.
- The forward and reverse lookup should resolve to the IP address/hostname that is used for Shared Storage Pool configuration.
- It is recommended that the VIOSs that are part of the Shared Storage Pool configuration keep their clocks synchronized.
Storage Configuration
- Virtual SCSI devices provisioned from the Shared Storage Pool may drive higher CPU utilization than the classic Virtual SCSI devices.
- Using SCSI reservations (SCSI Reserve/Release and SCSI-3 Reserve) for fencing physical disks in the Shared Storage pool is not supported.
- SANCOM is not supported in a Shared Storage Pool environment.
Shared Storage Pool capabilities and limitations
- On the client LPAR, a virtual SCSI disk is the only peripheral device type supported by SSP at this time.
- When creating Virtual SCSI Adapters for VIOS LPARs, the option "Any client partition can connect" is not supported.
- VIOSs configured for SSP require that Shared Ethernet Adapters (SEAs) be set up for Threaded mode (the default mode). SEA in Interrupt mode is not supported within SSP.
- VIOSs configured for SSP can be used as a Paging Space Partition (PSP), but the storage for the PSP paging spaces must come from logical devices not within a Shared Storage Pool. Using a VIOS SSP logical unit (LU) as an Active Memory Sharing (AMS) paging space or as the suspend/resume file is not supported.
- LPAR clients are not supported if they use JFS as their filesystem. If JFS is used, there is a risk of data corruption in the event of a network outage. JFS2 and other file systems are unaffected by this issue.
Installation Information
Pre-installation Information and Instructions
Please ensure that your rootvg contains at least 30 GB and that there is at least 4 GB of free space before you attempt to update to Update Release 3.1.4.60. Run the lsvg rootvg command, and then ensure that there is enough free space.
Example:
$ lsvg rootvg
VOLUME GROUP:       rootvg                   VG IDENTIFIER:   00f6004600004c000000014306a3db3d
VG STATE:           active                   PP SIZE:         64 megabyte(s)
VG PERMISSION:      read/write               TOTAL PPs:       511 (32704 megabytes)
MAX LVs:            256                      FREE PPs:        64 (4096 megabytes)
LVs:                14                       USED PPs:        447 (28608 megabytes)
OPEN LVs:           12                       QUORUM:          2 (Enabled)
TOTAL PVs:          1                        VG DESCRIPTORS:  2
STALE PVs:          0                        STALE PPs:       0
ACTIVE PVs:         1                        AUTO ON:         yes
MAX PPs per VG:     32512
MAX PPs per PV:     1016                     MAX PVs:         32
LTG size (Dynamic): 256 kilobyte(s)          AUTO SYNC:       no
HOT SPARE:          no                       BB POLICY:       relocatable
PV RESTRICTION:     none                     INFINITE RETRY:  no
VIOS upgrades with SDDPCM
A single, merged lpp_source is not supported for VIOS that uses SDDPCM. However, if you use SDDPCM, you can still enable a single boot update by using the alternate method described at the following location:
SDD and SDDPCM migration procedures when migrating VIOS from version 1.x to version 2.x
Virtual I/O Server support for Power Systems
Updating from VIOS version 3.1.0.00
VIOS Update Release 3.1.4.60 may be applied directly to any VIOS at level 3.1.0.00.
Upgrading from VIOS version 2.2.4 and above
The VIOS must first be upgraded to 3.1.0.00 before the 3.1.4.60 update can be applied. To learn more about how to do that, please read the information provided here.
Before installing the VIOS Update Release 3.1.4.60
Warning: The update may fail if there is a loaded media repository.
Instructions: Checking for a loaded media repository
To check for a loaded media repository, and then unload it, follow these steps.
- To check for loaded images, run the following command:
$ lsvopt
The Media column lists any loaded media.
- To unload media images, run the following commands on all Virtual Target Devices that have loaded images.
$ unloadopt -vtd <file-backed_virtual_optical_device>
- To verify that all media are unloaded, run the following command again.
$ lsvopt
The command output should show No Media for all VTDs.
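For reference, output similar to the following indicates that nothing is loaded (the device name and sizes are illustrative):
VTD                     Media                    Size(mb)
vtopt0                  No Media                      n/a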
Instructions: Migrate Shared Storage Pool Configuration
The Virtual I/O Server (VIOS) Version 2.2.2.1, or later, supports rolling updates for SSP clusters. The VIOS can be updated to Update Release 3.1.4.60 by using rolling updates.
A non-disruptive rolling update to VIOS 3.1 requires all SSP nodes to be at VIOS 2.2.6.31 or later. See the detailed instructions in the VIOS 3.1 documentation.
The rolling updates enhancement allows the user to apply Update Release 3.1.4.60 to the VIOS logical partitions in the cluster individually without causing an outage in the entire cluster. The updated VIOS logical partitions cannot use the new SSP capabilities until all VIOS logical partitions in the cluster are updated.
To upgrade the VIOS logical partitions to use the new SSP capabilities, ensure that the following conditions are met:
- All VIOS logical partitions must have VIOS Update Release version 2.2.6.31 or later installed.
- All VIOS logical partitions must be running. If any VIOS logical partition in the cluster is not running, the cluster cannot be upgraded to use the new SSP capabilities.
Instructions: Verify the cluster is running at the same level as your node.
- Run the following command:
$ cluster -status -verbose
- Check the Node Upgrade Status field, and you should see one of the following terms:
UP_LEVEL: This means that the software level of the logical partition is higher than the software level the cluster is running at.
ON_LEVEL: This means the software level of the logical partition and the cluster are the same.
Installing the Update Release
There is now a method to verify the VIOS update files before installation. This process requires access to openssl by the padmin user, which can be accomplished by creating a link.
Instructions: Verifying VIOS update files.
To verify the VIOS update files, follow these steps:
- Escape the restricted shell:
$ oem_setup_env
- Create a link to openssl:
# ln -s /usr/bin/openssl /usr/ios/utils/openssl
- Verify that the link to openssl was created and that both files display a similar owner and size:
# ls -alL /usr/bin/openssl /usr/ios/utils/openssl
- Return to the restricted shell:
# exit
Use one of the following methods to install the latest VIOS Service Release. As with all maintenance, you should create a VIOS backup before making changes.
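For example, a full VIOS backup can be written to a local file with the backupios command; the target file name below is a placeholder, and the -mksysb flag (which produces an mksysb image instead of a nim_resources.tar package) is optional:
$ backupios -file /home/padmin/vios_backup.mksysb -mksysb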
If you are running a Shared Storage Pool configuration, you must follow the steps in Migrate Shared Storage Pool Configuration.
Note: While running 'updateios' in the following steps, you may see accessauth messages, but these messages can safely be ignored.
Version Specific Warning: Version 2.2.2.1, 2.2.2.2, 2.2.2.3, or 2.2.3.1
You must run the updateios command twice to fix the bos.alt_disk_install.boot_images fileset update problem.
Run the following command after the "$ updateios -accept -install -dev <directory_name>" step completes.
$ updateios -accept -dev <directory_name>
Depending on the VIOS level, one or more of the LPPs below may be reported as "Missing Requisites", and they may be ignored.
MISSING REQUISITES:
X11.loc.fr_FR.base.lib 4.3.0.0 # Base Level Fileset
bos.INed 6.1.6.0 # Base Level Fileset
bos.loc.pc.Ja_JP 6.1.0.0 # Base Level Fileset
bos.loc.utf.EN_US 6.1.0.0 # Base Level Fileset
bos.mls.rte 6.1.x.x # Base Level Fileset
bos.mls.rte 7.2.0.0 # Base Level Fileset
bos.svprint.rte 7.2.0.0 # Base Level Fileset
Warning: If VIOS rules have been deployed.
During update, there have been occasional issues with VIOS Rules files getting overwritten and/or system settings getting reset to their default values.
To ensure that this doesn’t affect you, we recommend making a backup of the current rules file. This file is located here:
/home/padmin/rules/vios_current_rules.xml
First, to capture your current system settings, run this command:
$ rules -o capture
Then, either copy the file to a backup location, or save off a list of your current rules:
$ rules -o list > rules_list.txt
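Alternatively, you can keep a copy of the rules file itself (the backup file name is illustrative):
$ cp /home/padmin/rules/vios_current_rules.xml /home/padmin/rules/vios_current_rules.xml.bak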
After this is complete, proceed to update as normal. When your update is complete, check your current rules and ensure that they still match what is desired. If not, either overwrite the original rules file with your backup, or proceed to use the ‘rules -o modify’ and/or ‘rules -o add’ commands to change the rules to match what is in your backup file.
Finally, if you’ve failed to back up your rules, and are not sure what the rules should be, you can deploy the recommended VIOS rules by using the following command:
$ rules -o deploy -d
Then, if you wish to copy these new VIOS recommended rules to your current rules file, just run:
$ rules -o capture
Note: This will overwrite any customized rules in the current rules file.
Applying Updates
Warning:
If the target node to be updated is part of a redundant VIOS pair, the VIOS partner node must be fully operational before beginning to update the target node.
Note:
For VIOS nodes that are part of an SSP cluster, the partner node must be shown in 'cluster -status' output as having a cluster status of OK and a pool status of OK. If the target node is updated before its VIOS partner is fully operational, client LPARs may crash.
Warning:
The update to 3.1.4.60 updates the ios.database.rte fileset, which might produce the errors below; they can be ignored:
sysck: 3001-022 The file /usr/ios/db/postgres13/share/postgresql/timezone/America/Argentina was not found.
sysck: 3001-022 The file /usr/ios/db/postgres13/share/postgresql/timezone/Atlantic was not found.
sysck: 3001-022 The file /usr/ios/db/postgres13/share/postgresql/timezone/America/Indiana was not found.
sysck: 3001-022 The file /usr/ios/db/postgres13/share/postgresql/timezone/Chile was not found.
sysck: 3001-022 The file /usr/ios/db/postgres13/share/postgresql/timezone/Brazil was not found.
sysck: 3001-022 The file /usr/ios/db/postgres13/share/postgresql/timezone/Pacific was not found.
sysck: 3001-022 The file /usr/ios/db/postgres13/share/postgresql/timezone/Mexico was not found.
sysck: 3001-022 The file /usr/ios/db/postgres13/share/postgresql/timezone/America/Kentucky was not found.
sysck: 3001-022 The file /usr/ios/db/postgres13/share/postgresql/timezone/Arctic was not found.
sysck: 3001-022 The file /usr/ios/db/postgres13/share/postgresql/timezone/Europe was not found.
sysck: 3001-022 The file /usr/ios/db/postgres13/share/postgresql/timezone/Indian was not found.
sysck: 3001-022 The file /usr/ios/db/postgres13/share/postgresql/timezonesets was not found.
sysck: 3001-022 The file /usr/ios/db/postgres13/share/postgresql/timezone/America/North_Dakota was not found.
sysck: 3001-022 The file /usr/ios/db/postgres13/share/postgresql/timezone/US was not found.
sysck: 3001-022 The file /usr/ios/db/postgres13/share/postgresql/timezone/Africa was not found.
sysck: 3001-022 The file /usr/ios/db/postgres13/share/postgresql/timezone/Canada was not found.
sysck: 3001-022 The file /usr/ios/db/postgres13/share/postgresql/timezone was not found.
sysck: 3001-022 The file /usr/ios/db/postgres13/share/postgresql/timezone/Antarctica was not found.
sysck: 3001-022 The file /usr/ios/db/postgres13/share/postgresql/timezone/Etc was not found.
sysck: 3001-022 The file /usr/ios/db/postgres13/share/postgresql/timezone/Asia was not found.
sysck: 3001-022 The file /usr/ios/db/postgres13/share/postgresql/timezone/Australia was not found.
sysck: 3001-022 The file /usr/ios/db/postgres13/share/postgresql/extension was not found.
3001-408 The user "vpgadmin" has an invalid lastupdate attribute.
Instructions: Applying updates to a VIOS.
- Log in to the VIOS as the user padmin.
- If you use one or more File Backed Optical Media Repositories, you need to unload media images before you apply the Update Release. See details here.
- If you use Shared Storage Pools, then Shared Storage Pool Services must be stopped.
$ clstartstop -stop -n <cluster_name> -m <hostname>
- To apply updates from a directory on your local hard disk, follow these steps:
- Create a directory on the Virtual I/O Server.
$ mkdir <directory_name>
- Using ftp, transfer the update file(s) to the directory you created.
- To apply updates from a remotely mounted file system that is mounted read-only, follow these steps:
- Mount the remote directory onto the Virtual I/O Server:
$ mount remote_machine_name:directory /mnt
- To apply updates from the CD/DVD drive, follow these steps. The update release can be burned onto a CD by using the ISO image file(s).
- Place the CD-ROM into the drive assigned to the VIOS.
- Commit previous updates by running the updateios command:
$ updateios -commit
- Verify the update files that were copied. This step can be performed only if the link to openssl was created.
$ cp <directory_path>/ck_sum.bff /home/padmin
$ chmod 755 /home/padmin/ck_sum.bff
$ ck_sum.bff <directory_path>
If there are missing updates or incomplete downloads, an error message is displayed.
To see how to create a link to openssl, click here.
- Apply the update by running the updateios command:
$ updateios -accept -install -dev <directory_name>
- To load all changes, reboot the VIOS as user padmin.
$ shutdown -restart
Note: If the shutdown -restart command fails, run swrole -PAdmin as padmin to set the authorization and establish proper access to the shutdown command.
- If cluster services were stopped in step 3, restart cluster services.
$ clstartstop -start -n <cluster_name> -m <hostname>
- Verify that the update was successful by checking the results of the updateios command and by running the ioslevel command, which should indicate that the ioslevel is now 3.1.4.60.
$ ioslevel
Post-installation Information and Instructions
Instructions: Checking for an incomplete installation caused by a loaded media repository.
After installing an Update Release, you can use this method to determine if you have encountered the problem of a loaded media library.
Check the Media Repository by running this command:
$ lsrep
If the command reports: "Unable to retrieve repository data due to incomplete repository structure," then you have likely encountered this problem during the installation. The media images have not been lost and are still present in the file system of the virtual media library.
Running the lsvopt command should show the media images.
Instructions: Recovering from an incomplete installation caused by a loaded media repository.
To recover from this type of installation failure, unload any media repository images, and then reinstall the ios.cli.rte package. Follow these steps:
- Unload any media images
$ unloadopt -vtd <file-backed_virtual_optical_device>
- Reinstall the ios.cli.rte fileset by running the following commands.
To escape the restricted shell:
$ oem_setup_env
To install the failed fileset:
# installp -Or -agX ios.cli.rte -d <device/directory>
To return to the restricted shell:
# exit
- Restart the VIOS.
$ shutdown -restart
- Verify that the Media Repository is operational by running this command:
$ lsrep
Fixes included in this release
APAR    | Description
IJ41826 | SYSTEM CRASH RAS_KRPC_SRV_REGISTER / KRPC_RTEC_ERROR_HANDLER
IJ41832 | Install images for bos.loc.utf.JA_JP
IJ48895 | RARE CRASH WHEN STARTING SYSTEM TRACE IN PERFPMR
IJ50324 | POST VIOSUPGRADE 4.1, 3 FILES HAVE WRONG GROUP/MODE (TRUSTCHK)
IJ50417 | APP HANG DUE TO RACE CONDITION BETWEEN SOCKET READ AND CLOSE
IJ50472 | CRASH IN BUILD_MAP_LIST AT BOOT SRC 888-102-300-0C5
IJ50768 | INCOMPLETE ARRAY TYPE FREE_SOCK_HASH_TABLE
IJ50975 | ADD SANITY CHECKS TO NFS READDIR
IJ51151 | ADD VNODE-RELATED TRACING
IJ51371 | Incomplete output by "malloc allocation size==X" command in dbx
IJ51416 | DEFRAGFS -F ON A LARGE JFS2 FILESYSTEM MAY CAUSE CORRUPTION
IJ51602 | SEA_TRACE LOCKING CONTENTION
IJ51805 | LPAR HUNG REQUESTING A SNAP OR DUMP ANALYSE
IJ51867 | POSSIBLE DSI IN ISCSISW_NEXT_WAITING_DEV()
IJ52124 | SECLDAPCLNTD MEMORY LEAK USING STARTTLS
IJ52195 | TEMPORARY AUTH / PASSWDEXPIREDX FAILURES FOR LDAP
IJ52314 | SUMA FAILS WITH CWPKI0022E AND CWPKI0040I ERRORS
IJ52592 | VNICSTAT NOT WORKING FOR PADMIN AFTER UPDATE
IJ52767 | PASSIVE VG FAILS TO VARYON AFTER LKU WITH POWERHA
IJ52776 | mallocdebug log fails to log stacks in monothreaded
IJ52861 | RMDEV COMMAND MAY HANG WHEN REMOVING FC ADAPTER AFTER EEH EVENT
IJ52871 | DUPLICATE UDP SOCKETS CAN GET CREATED CAUSING CONNECTION ISSUES
IJ52874 | INSTFIX FAILS TO LIST EFIXES WITH ABSTRACTS LONGER THAN 64
IJ52908 | NMON VOLUME-GROUP MISLEADING WHEN AIX HAS VARYOFF VG & RAW DISKS
IJ52965 | SECLDAPCLNTD HANG WHEN USING NETCD
IJ52989 | DR CPU REMOVAL FAILURE WITH EBUSY
IJ53018 | EMGR_SEC_PATCH MAY FAIL TO INSTALL IFIX
IJ53071 | SYSTEM CRASH IN SELPOLL()
IJ53107 | Lock contention issue during relogin to an iSCSI target device
IJ53134 | DURING HBA FW DOWNLOAD VIOS LOGS FCA_ERR4 WITH 0X3E
IJ53148 | POTENTIAL CRASH IN BERKLEY PACKET FILTER (BPF) DRIVER
IJ53165 | FILENAME NOT DISPLAYED IF FULLPATH=ON AND FILE_OPEN EVENT FAILS
IJ53200 | LDAP FAILED TO SEARCH ERROR WITH CHARACTER
IJ53211 | REPLACE 'IBM' WITH 'IPS' IN AIX COMMAND OUTPUTS ON IPS HARDWARE
IJ53302 | SHUTDOWN REPORTS FAILURES DUE TO /DEV/CONSOLE NOT AVAILABLE
IJ53303 | SGID NOT PRESERVED WHEN IFIX IS INSTALLED IN NIMADM
IJ53304 | REORGVG TERMINATION MAY HANG THE PASSIVE NODE DURING FAILOVER
IJ53305 | rest7 testcase failure
IJ53307 | Debug info to print the process tree in case of checkpoint fail
IJ53308 | FORCE UNMOUNT OF A J2 FILESYSTEM MAY TAKE LONGER UNDER NFS ENV
IJ53310 | PORT SPEED IS UNKNOWN FOR FC ADAPTER
IJ53311 | ADAPTER RATE REPORTING INCORRECT INFO
IJ53312 | VEA scalibality issue due to lock not being cache aligned
IJ53313 | aix vscsi client may hang in rare case.
IJ53314 | VIOS may crash at npiv_init_cmd
IJ53317 | host -n should give error mesage for new bind.rte
IJ53318 | ioslevel change for 72Z to 3.1.4.60 and for 73F to 4.1.1.10
IJ53320 | LNC2ENTDD NEEDS LSO SECURITY CHECK IN VNIC PATH
IJ53337 | FC SCSI DRIVER IDENTIFIES ITSELF AS TARGET
IJ53394 | CRASH IN VFC_FREE_SUB_CMDQ_ELM AFTER ADAPTER RESET
IJ53404 | FUSER WITH -D OPTION MAY NOT OUTPUT ALL TEMPORARY FILES
IJ53410 | ISST:Observed crash -> Kernel abend_trap for check_free sta
IJ53411 | Tampering cert. file doesn't fail emgr_download_ifix execution
IJ53444 | iso_addr() is not returning proper value.
IJ53445 | AIX not using drive reported ANATT values
IJ53446 | COMMANDS LIKE LSNPORTS, LSMAP, FCSTAT, SNAP HANG ON VIOS
IJ53476 | SYSTEM CRASH IN FASTLO_SETUP
IJ53479 | epkg not generating secfiles
IJ53486 | DSI IN DISPATCH() DUE TO INCONSISTENCIES IN THE THREAD STRUCTURE
IJ53510 | KERNEL EXTENSIONS CODE AND DATA NOT ACCESSIBLE IN DUMPS
IJ53530 | Random path failure and require lpar reboot
IJ53531 | failed to download CRL
IJ53533 | handling return code value of emgr* script
IJ53551 | CRASH DUE TO INVALID PAGE FAULT IN NFS4_ANY_LOCK
IJ53573 | Signature verification failed for /usr/sbin/snmpd
IJ53574 | du command in baselib_sh.sh throws error with XPG_SUS_ENV=ON
IJ53583 | 2025a timezone database update
IJ53632 | IO MAY FAIL AFTER DYNAMIC TRACKING FAILURE
IJ53634 | NIMADM TO 7300-03 MAY LOOSE USER FS FROM /ETC/FILESYSTEMS
IJ53645 | IMPROPER INVOLUNTARY CONTEXT SWITCH COUNT
IJ53663 | LPPCHK -F REPORTS FILE SIZE AS 0 INCORRECTLY IN AIX 7.3 TL3
IJ53688 | Update kernel copyright notice for 2025
IJ53692 | Poor network performance because of shared global variable
IJ53708 | Handle port login failures for ESTALE
IJ53757 | A potential security issue exists
IJ53811 | A potential security issue exists
IJ53812 | A potential security issue exists
IJ53813 | A potential security issue exists
IJ53851 | EMGR_CHECK_IFIXES FAILS WITH CRLFILE_NAME CANNOT CREATE
IJ53852 | NIM ADAPTER_DEF OPERATION FAILS WITH NFSV4
IJ53865 | FREELIST_KPROC() CONSUMING HIGH AMOUNT OF CPU
IJ53872 | NIM TAKEOVER FAILS FOR CLIENT WITH SIT INTERFACE CONFIGURED
IJ53889 | CONFIG_CONN_PATH DOES NOT SUPPORT JAVA8
IJ53896 | SYSTEM CRASH IN RTFREE() WITH CACHED_ROUTE NO OPTION ENABLED
IJ53914 | A potential security issue exists
IJ53915 | A potential security issue exists
IJ54003 | IMPROVE CONFIGURATION TIME FOR NVME ADAPTERS
IJ54017 | Link Speed status shows "Unknown" for SRIOV VF port
IJ54019 | FOC73F_SP1:FVTR-DEV:IO failed on RDX after EEH recovery
IJ54031 | LISTING NIM FILESETS ON MASTER FROM NIM CLIENT BE SLOW OR HANG
IJ54041 | emgr_check_ifixes doesnot recognize VIOS OS correctly
IJ54055 | OpenxlC fails while compiling code with -D_LINUX_SOURCE_COMPAT
IJ54057 | ALT_DISK_COPY "NO SUCH DEVICE OR ADDRESS" WARNING MESSAGE
IJ54061 | A potential security issue exists
IJ54144 | dumpcheck command fails in any locals except C
IJ54267 | NIM MKSYSB MAY FAIL WITH LANG=EN_US.UTF-8
IJ54274 | SYNCVG -Q MAY CORE DUMP IF MULTIPLE SYNCVG IS RUN CONCURRENTLY
IJ54282 | Ports become unavailble for EC3M because of EEH
IJ54348 | NFSV4 GROUP IS SHOWING AS NOBODY IF IT HAS MANY USERS
IJ54351 | lgamma setting ERANGE for NaN
IJ54430 | odmget errors for dsc_key, dsc_keystore in bos.rte.install updt
IJ54443 | RoCE adapter's NIC VF's uses extra 128M though not required
IJ54449 | Ctrl+X to dconsole read mode terminates HMC console
IJ54490 | 'snap' command hangs during lvm data collection if VG is locked
IJ54492 | _LARGE_FILES macros cause conflicts with C++ open
IJ54532 | UNEXPECTED "USAGE:" MESSAGE FOR 'LSATTR' WHEN COLLECTING 'SNAP'
IJ54539 | GZIP QOS CREDIT ADD FAILS DUE TO DRSLOT_CHRP_ACC TIMEOUT
IJ54540 | EFS ENABLEMENT FOR MKSYSB ON TAPE
IJ54612 | Crash with diag update of invalid secured firmware image
IJ54613 | NIMADM FAILS MOUNT FILE TYPE RESOURCES WITH NFSV4
IJ54670 | MPSTAT REPORTING FAR DISPATCHES WITH SINGLE SRAD
IJ54675 | Probable system crash at iodone().
IJ54679 | A potential security issue exists
IJ54814 | LKU for AIX TL upgrade failed.
IJ54820 | ALT_DISK_COPY NOT PRESERVING EFS MOUNT POINTS.
IJ54885 | error when one Crypt device is Defined and is Available
IJ54923 | emgr_check_ifixes fails
IJ54998 | viosupgrade from 72X to 73F might fail due to viosbr restore
Document Information
Modified date:
21 July 2025
UID
ibm17240045