Fix Readme
Abstract
Release notes for the 4.1.1.10 VIOS Fix Pack release
Content
VIOS 4.1.1.10 Release Notes
Package Information
PACKAGE: Update Release 4.1.1.10
IOSLEVEL: 4.1.1.10
| VIOS level is | The AIX level of the NIM Master must be equal to or higher than |
| Update Release 4.1.1.10 | AIX 7300-03-01 |
General package notes
Be sure to heed all minimum space requirements before installing.
Review the list of fixes included in Update Release 4.1.1.10
To take full advantage of all the functions available in the VIOS, it may be necessary to be at the latest system firmware level. If a system firmware update is necessary, it is recommended that the firmware be updated before you update the VIOS to Update Release 4.1.1.10.
Microcode or system firmware downloads for Power Systems
If the VIOS being updated has filesets installed from the VIOS Expansion Pack, be sure to update those filesets with the latest VIOS Expansion Pack if updates are available.
Update Release 4.1.1.10 updates your VIOS partition to ioslevel 4.1.1.10. To determine if Update Release 4.1.1.10 is already installed, run the following command from the VIOS command line.
$ ioslevel
If Update Release 4.1.1.10 is installed, the command output is 4.1.1.10.
Upgrade to VIOS 4.1.x
- Existing VIOS systems at supported 3.1.x.y versions can be upgraded to VIOS version 4.1.1.0 (DVD image) or 4.1.1.10 (Flash image) using the viosupgrade tool. It is recommended to be at VIOS 3.1.4.30 or a later SP level before upgrading to a VIOS 4.1.x level.
- VIOS systems with SSP configuration must be on 3.1.3.x or later level before upgrading to 4.1.x level or adding 4.1.x nodes into cluster.
- If Active Memory Sharing (AMS) is configured on the VIOS, it should be unconfigured before upgrading. Refer to the link on how to unconfigure it.
- Before upgrading, you may want to read the viosupgrade blog in the PowerVM Community, which explains various scenarios.
Note: In an SSP cluster environment, if you're upgrading from version 3.1.4.50 or later, you must upgrade only to version 4.1.1.00 or later. Additionally, ensure that all nodes in the cluster are running version 3.1.4.50 or later before proceeding with the upgrade.
For Customers using NVMe Over Fabric (SAN) as their Boot Disk
Booting from an NVMeoF disk may fail if certain fabric errors are returned; a boot disk set up with multiple paths is therefore recommended. If a boot failure occurs, the boot process may continue after you exit the SMS menu. Another potential workaround is to discover the boot LUNs from the SMS menu and then retry the boot.
Note
If the Virtual I/O Servers are installed on POWER10 systems and configured with the "32Gb PCIe4 2-Port FC Adapter, Feature Code(s) EN1J and EN1K", the adapter microcode must be updated to level 7710812214105106.070115 before updating the Virtual I/O Server to the 4.1.1.10 level.
Please refer to the release notes at this link
HIPER issue: If you are upgrading to VIOS 3.1.4.50, or to version 4.1.1.x or above, please consult this link
4.1.1.10 New Features
VIOS 4.1.1.10 adds the following new features:
viosupgrade enhancements
The major viosupgrade enhancements in this release are as follows.
- By default, viosupgrade now preserves device names for vfchost/vhost adapter devices, fcnvme, nvme, fscsi, and iSCSI devices, network adapters, hdisks, and so on. A new -skipdevname flag is introduced to skip this device name preservation.
- Enables upgrades on systems that have a mirrored rootvg disk.
- Includes new options, such as the -noprompt flag to answer user prompts in advance in the following scenarios:
- Prompts you to confirm whether to skip logical volumes that cannot be migrated.
- Detects third-party Multi Path input output (MPIO) software and prompts you to confirm whether to proceed with the VIOS upgrade.
- Allows upgrading VIOS by using ISO images in a single step. For example, the -i flag can accept both mksysb and ISO images.
- Retains user account configurations and the home directories automatically.
- Supports, in the Network Installation Management (NIM) version, all the options available in the standalone VIOS version.
- Preserves the following security configurations, by default:
- User accounts and groups with login attributes and password.
- Role based access control (RBAC) configurations, such as user roles, authorizations, privileged commands database (privcmds), privileged device database (privdevs), privileged files (privfiles) and domains.
- Login and authentication configurations.
- AIX audit sub system configurations.
- Trusted execution policies and databases.
- IPSec filters and tunnel configurations.
- Lightweight directory access protocol (LDAP) client configurations.
- OpenSSL, OpenSSH and Kerberos configurations.
Notes:
- Some configurations are restored from the previous version, some new configurations are activated on the newer version, and the remaining configurations are merged from the old version into the new one. Configurations that are used by neither the previous nor the current version are saved in the /usr/ios/security/saveconf/viosbr directory. You can review these configuration files and merge them manually.
- These security configurations are preserved only if you upgrade from VIOS 3.1.4.60.
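The minimum-level prerequisites called out above (for example, being at VIOS 3.1.4.30 or later before upgrading) can be sanity-checked by comparing dotted version strings. Below is a minimal sketch using standard shell tools; `version_ge` is a hypothetical helper, not a VIOS command, and it relies on `sort -V` being available:

```shell
#!/bin/sh
# Hypothetical pre-upgrade check: succeed (exit 0) if the first
# dotted version string is greater than or equal to the second.
version_ge() {
    # sort -V orders dotted versions; the required level must sort
    # first (or be equal) for the current level to satisfy it.
    [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

current_level="3.1.4.30"    # e.g. the output of: ioslevel
required_level="3.1.4.30"

if version_ge "$current_level" "$required_level"; then
    echo "OK: $current_level meets the $required_level minimum"
else
    echo "TOO OLD: update to $required_level or later first"
fi
```

On the VIOS itself, the current level would come from the ioslevel command rather than a hard-coded string.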
viosbr enhancements
In this release, a new -skip flag is introduced, which skips restoration of the specified configurations.
Use "-skip security_config" to skip restoring all of the security configurations.
VFC enhancements
- I/O command timeout improvements in the VFC stack on the VIOS. These handle starvation conditions for certain large I/O workloads (256 KB and above), and introduce two new attributes (num_local_cmds and bufs_per_cmd) at the VFC host adapter device level on the VIOS to better manage resources such as memory.
- Improved Virtual FC Performance Diagnostics add support for VFC client partitions ("IBM i" in this release) for collecting I/O response times from the VIOS as a whole subsystem, including the timestamps recorded at different layers of the NPIV stack on the VIOS and FC adapters.
NFS mounted ISO images support in Virtual Media Library
This release adds support for NFS-mounted ISOs in the Virtual Media Library, which allows ISO images to be loaded from a centralized NFS server, eliminating the need to repeatedly copy ISO images across multiple VIOS partitions. This saves storage space and time, maintains consistent images, and supports both NFS V3 and V4, allowing multiple images to be linked into the repository.
The 'mkvopt' command is enhanced to support the new option -nfslink, which creates a symbolic link to the specified NFS ISO file in the repository.
$ mkvopt -name <image_name_in_VML_repository> -file /mnt/<mounted_ISO_file.iso> -nfslink -ro
Refer to the PowerVM Community blog for details.
Remove unwanted language message filesets
The updateios command is extended to support the following options to remove unwanted language message filesets.
- -listlang : Lists all the installed language message filesets.
- -rmlang : Removes language message filesets.
- -preserve : Must be used along with -rmlang to specify the languages to preserve. The remaining language message filesets are removed.
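The -preserve semantics (remove every language message fileset whose locale is not in the preserve list) can be illustrated with standard shell tools. This is a sketch of the filtering logic only, not the updateios implementation; the fileset names are taken from the lists later in this document:

```shell
#!/bin/sh
# Illustrative sketch of -rmlang/-preserve: given installed language
# message filesets, keep the preserved locales and print the rest as
# removal candidates.
preserve="en_US"    # comma-separated locales to keep, as with -preserve

installed="bos.msg.en_US.mls.rte
bos.msg.de_DE.mls.rte
bos.msg.fr_FR.svprint"

# Turn the comma-separated preserve list into a grep alternation.
keep_pattern=$(printf '%s' "$preserve" | tr ',' '|')

# Print filesets that do NOT contain a preserved locale component.
printf '%s\n' "$installed" | grep -Ev "\.($keep_pattern)\."
```

With preserve="en_US", the sketch prints the de_DE and fr_FR filesets as candidates for removal.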
Shared Storage Pool enhancements
The cluster -status -verbose command is enhanced to display additional storage pool specific information.
The clutil command is introduced with the -o pingcheck option to verify whether network communication among the SSP cluster nodes is in working state.
Live Partition Mobility (LPM) Validation enhancement for NPIV storage
Discrepancies in LUN IDs for disks between source and destination hosts can arise from misconfigured LUN masking at the storage level. For instance, the active and inactive WWPNs of a client's virtual FC adapter may be configured as separate hosts at the storage system, which can then assign the same LUNs to these hosts in different orders. This results in varying LUN IDs associated with each WWPN.
Since such configuration can lead to LUN access issues after LPAR migration, the validation algorithms have been enhanced to detect such misconfigurations and report to the user.
VIOS 4.1.1.0 has a virtual Ethernet multi-queue feature
This helps drive more network traffic through the virtual Ethernet trunk adapter. By default, there are 12 transmit queues and zero receive queues (legacy receive mode). To increase bandwidth, you can increase the number of receive queues on the virtual Ethernet adapter in the VIOS by tuning the queues_rx attribute of the virtual Ethernet adapter. However, using multiple receive queues to drive more traffic requires more CPU resources.
Other enhancements
- Enhancement in the alt_root_vg command to run it in phases. This allows the alt_root_vg command to separate the cloning phase from the update phase.
- Enhancement in the chdev command: a new -dynupdate option is added to support dynamic updates of supported device attributes.
Hardware Requirements
Please check this link for supported hardware.
Known Capabilities and Limitations
The following requirements and limitations apply to Shared Storage Pool (SSP) features and any associated virtual storage enhancements.
Requirements for Shared Storage Pool
- System requirements per SSP node:
- Minimum CPU: 1 CPU of guaranteed entitlement
- Minimum memory: 4GB
- Storage requirements per SSP cluster (minimum): 1 Fibre Channel-attached disk of at least 10 GB for the repository
- At least 1 Fibre Channel-attached disk of at least 10 GB for data
Limitations for Shared Storage Pool
Software Installation
- When installing Update Release 4.1.1.10 on a VIOS participating in a Shared Storage Pool, the Shared Storage Pool services must be stopped on the node being updated.
SSP Configuration
| Feature | Min | Max |
| Number of VIOS Nodes in Cluster | 1 | 16* |
| Number of Physical Disks in Pool | 1 | 1024 |
| Number of Virtual Disks (LUs) Mappings in Pool | 1 | 8192 |
| Number of Client LPARs per VIOS node | 1 | 250* |
| Capacity of Physical Disks in Pool | 10 GB | 16 TB |
| Storage Capacity of Storage Pool | 10 GB | 512 TB |
| Capacity of a Virtual Disk (LU) in Pool | 1 GB | 4 TB |
| Number of Repository Disks | 1 | 1 |
| Capacity of Repository Disk | 10 GB | 1016 GB |
| Number of Client LPARs per Cluster | 1 | 2000 |
*Support for additional VIOS Nodes and LPAR Mappings:
Prerequisites for expanded support:
- Over 16 VIOS Nodes requires that the SYSTEM (metadata) tier contains only SSD storage.
- Over 250 Client LPARs per VIOS requires that each VIOS has at least 4 CPUs and 8 GB of memory.
Here are the new maximum values for each of these configuration options, if the associated hardware specification has been met:
| Feature | Default Max | High Spec Max |
| Number of VIOS Nodes in Cluster | 16 | 24 |
| Number of Client LPARs per VIOS node | 250 | 400 |
Other notes:
- Maximum number of physical volumes that can be added to or replaced from a pool at one time: 64
- The Shared Storage Pool cluster name must be less than 63 characters long.
- The Shared Storage Pool pool name must be less than 127 characters long.
- The maximum supported LU size is 4 TB; however, for high I/O workloads, the recommendation is to use multiple smaller LUs, as this improves performance. For example, using 16 separate 16 GB LUs would yield better performance than a single 256 GB LU for applications that perform reads and writes to a variety of storage locations concurrently.
- The /var file system should be at least 3 GB in size to ensure proper logging.
Network Configuration
- Uninterrupted network connectivity is required for operation; that is, the network interface used for the Shared Storage Pool configuration must be on a highly reliable network that is not congested.
- A Shared Storage Pool configuration can use IPV4 or IPV6, but not a combination of both.
- A Shared Storage Pool configuration should configure the TCP/IP resolver routine for name resolution to resolve host names locally first, and then use the DNS. For step by step instructions, refer to the TCP/IP name resolution documentation in the IBM Knowledge Center.
- The forward and reverse lookup should resolve to the IP address/hostname that is used for Shared Storage Pool configuration.
- It is recommended that the VIOSs that are part of the Shared Storage Pool configuration keep their clocks synchronized.
Storage Configuration
- Virtual SCSI devices provisioned from the Shared Storage Pool may drive higher CPU utilization than the classic Virtual SCSI devices.
- Using SCSI reservations (SCSI Reserve/Release and SCSI-3 Reserve) for fencing physical disks in the Shared Storage pool is not supported.
- SANCOM is not supported in a Shared Storage Pool environment.
Shared Storage Pool capabilities and limitations
- On the client LPAR, a virtual SCSI disk is the only peripheral device type supported by SSP at this time.
- When creating Virtual SCSI Adapters for VIOS LPARs, the option "Any client partition can connect" is not supported.
- VIOSs configured for SSP require that Shared Ethernet Adapters (SEAs) be set up for threaded mode (the default mode). SEA in interrupt mode is not supported with SSP.
- Client LPARs are not supported if they use JFS as their filesystem. If JFS is used, there is a risk of data corruption in the event of a network outage. JFS2 and other file systems are unaffected by this issue.
Installation Information
Pre-installation Information and Instructions
Please ensure that your rootvg is at least 30 GB in size and has at least 4 GB of free space before you attempt to update to Update Release 4.1.1.10. Run the lsvg rootvg command, and then verify that there is enough free space.
Example:
$ lsvg rootvg
VOLUME GROUP:       rootvg                   VG IDENTIFIER:   00f6004600004c000000014306a3db3d
VG STATE:           active                   PP SIZE:         64 megabyte(s)
VG PERMISSION:      read/write               TOTAL PPs:       511 (32704 megabytes)
MAX LVs:            256                      FREE PPs:        64 (4096 megabytes)
LVs:                14                       USED PPs:        447 (28608 megabytes)
OPEN LVs:           12                       QUORUM:          2 (Enabled)
TOTAL PVs:          1                        VG DESCRIPTORS:  2
STALE PVs:          0                        STALE PPs:       0
ACTIVE PVs:         1                        AUTO ON:         yes
MAX PPs per VG:     32512
MAX PPs per PV:     1016                     MAX PVs:         32
LTG size (Dynamic): 256 kilobyte(s)          AUTO SYNC:       no
HOT SPARE:          no                       BB POLICY:       relocatable
PV RESTRICTION:     none                     INFINITE RETRY:  no
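The free-space check can be done mechanically by parsing the FREE PPs field out of the lsvg output. Below is a minimal sketch over the sample field shown above; on a real VIOS the text would come from running lsvg rootvg rather than a hard-coded string:

```shell
#!/bin/sh
# Parse the megabyte value from an lsvg "FREE PPs: 64 (4096 megabytes)"
# field and compare it against the 4 GB free-space requirement.
# The sample line below reproduces the example output in this document.
lsvg_output="MAX LVs:            256                      FREE PPs:        64 (4096 megabytes)"

free_mb=$(printf '%s\n' "$lsvg_output" |
    sed -n 's/.*FREE PPs:.*(\([0-9]*\) megabytes).*/\1/p')

if [ "$free_mb" -ge 4096 ]; then
    echo "enough free space: ${free_mb} MB"
else
    echo "insufficient free space: ${free_mb} MB (need 4096 MB)"
fi
```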
VIOS upgrades with Third Party Software
When you upgrade from 3.1.x.y to level 4.1.0.00 or above, third-party software is not packaged with the IBM-supplied mksysb image. You need to install the respective third-party software after the upgrade is complete and run viosupgrade -o rerun to restore the respective devices.
Updating from VIOS version 4.1.0.00
VIOS Update Release 4.1.1.10 may be applied directly to any VIOS at level 4.1.0.00.
Before installing the VIOS Update Release 4.1.1.10
Warning: The update may fail if there is a loaded media repository.
Instructions: Checking for a loaded media repository
To check for a loaded media repository, and then unload it, follow these steps.
- To check for loaded images, run the following command:
$ lsvopt
The Media column lists any loaded media.
- To unload media images, run the following commands on all Virtual Target Devices that have loaded images.
$ unloadopt -vtd <file-backed_virtual_optical_device >
- To verify that all media are unloaded, run the following command again.
$ lsvopt
The command output should show No Media for all VTDs.
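Checking for loaded media can be scripted by scanning the Media column of the lsvopt output. The sketch below runs over hypothetical sample text; the exact column layout of lsvopt output is an assumption here:

```shell
#!/bin/sh
# Scan lsvopt-style output and report VTDs whose Media column is not
# "No Media". The sample lines below are hypothetical.
lsvopt_output="VTD             Media                   Size(mb)
vtopt0          aix_install.iso         4500
vtopt1          No Media                n/a"

# Skip the header line; the second field is the loaded media name.
printf '%s\n' "$lsvopt_output" |
    awk 'NR > 1 && $2 != "No" { print $1 " still has media loaded: " $2 }'
```

Any VTD the sketch reports would need an unloadopt before applying the Update Release.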
Instructions: Migrate Shared Storage Pool Configuration
The Virtual I/O Server (VIOS) Version 3.1.x.y or later supports rolling updates to release 4.1.1.10 for SSP clusters.
The rolling updates enhancement allows the user to apply Update Release 4.1.1.10 to the VIOS logical partitions in the cluster individually without causing an outage in the entire cluster. The updated VIOS logical partitions cannot use the new SSP capabilities until all VIOS logical partitions in the cluster are updated.
To upgrade the VIOS logical partitions to use the new SSP capabilities, ensure that the following conditions are met:
- All VIOS logical partitions must have VIOS Update Release version 3.1.x.y or later installed.
- All VIOS logical partitions must be running. If any VIOS logical partition in the cluster is not running, the cluster cannot be upgraded to use the new SSP capabilities.
Instructions: Verify the cluster is running at the same level as your node.
- Run the following command:
$ cluster -status -verbose
- Check the Node Upgrade Status field; you should see one of the following terms:
UP_LEVEL: This means that the software level of the logical partition is higher than the software level the cluster is running at.
ON_LEVEL: This means the software level of the logical partition and the cluster are the same.
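Pulling the Node Upgrade Status field out of the verbose cluster output can be scripted as follows. The sample text is hypothetical; the exact layout of cluster -status -verbose output is an assumption:

```shell
#!/bin/sh
# Extract the "Node Upgrade Status" value from cluster -status -verbose
# style output. The sample text below is hypothetical.
status_output="Node Name:            vios1
Node Upgrade Status:  ON_LEVEL
Node Roles:           DBN"

# Print only the value after the field label.
printf '%s\n' "$status_output" |
    sed -n 's/^Node Upgrade Status:[[:space:]]*//p'
```

A value of ON_LEVEL (or UP_LEVEL, per the definitions above) indicates where the node stands relative to the cluster.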
Installing the Update Release
There is a method to verify the VIOS update files before installation. This process requires access to openssl by the 'padmin' User, which can be accomplished by creating a link.
Instructions: Verifying VIOS update files.
To verify the VIOS update files, follow these steps:
- Escape the restricted shell:
$ oem_setup_env
- Create a link to openssl if required:
# ln -s /usr/bin/openssl /usr/ios/utils/openssl
- Verify the link to openssl was created, and that both files display a similar owner and size:
# ls -alL /usr/bin/openssl /usr/ios/utils/openssl
- Return to the restricted shell:
# exit
Use one of the following methods to install the latest VIOS Service Release. As with all maintenance, you should create a VIOS backup before making changes.
If you are running a Shared Storage Pool configuration, you must follow the steps in Migrate Shared Storage Pool Configuration.
Note: While running 'updateios' in the following steps, you may see accessauth messages, but these messages can safely be ignored.
Warning: If VIOS rules have been deployed.
During update, there have been occasional issues with VIOS Rules files getting overwritten and/or system settings getting reset to their default values.
To ensure that this doesn’t affect you, we recommend making a backup of the current rules file. This file is located here:
/home/padmin/rules/vios_current_rules.xml
First, to capture your current system settings, run this command:
$ rules -o capture
Then, either copy the file to a backup location, or save off a list of your current rules:
Note: The "padmin" user is restricted to redirect command output to a file, you must be in the root shell. Use "oem_setup_env" to become a root user.
$ oem_setup_env
# rules -o list > rules_list.txt
After this is complete, proceed to update as normal. When your update is complete, check your current rules and ensure that they still match what is desired. If not, either overwrite the original rules file with your backup, or proceed to use the ‘rules -o modify’ and/or ‘rules -o add’ commands to change the rules to match what is in your backup file.
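The post-update comparison described above is a plain text diff between the saved list and a freshly captured one. A minimal sketch with hypothetical sample rule entries (the rule-name format here is an assumption, not the actual rules file syntax):

```shell
#!/bin/sh
# Compare a pre-update rules list against the current one and show any
# settings that drifted. File contents are hypothetical samples.
printf 'vios.device.attr.a:1\nvios.device.attr.b:2\n' > rules_before.txt
printf 'vios.device.attr.a:1\nvios.device.attr.b:9\n' > rules_after.txt

if diff rules_before.txt rules_after.txt > rules.diff; then
    echo "rules unchanged"
else
    echo "rules drifted after update; review rules.diff"
    cat rules.diff
fi
```

On a real system, rules_before.txt would be the saved rules_list.txt and rules_after.txt a fresh 'rules -o list' capture.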
Finally, if you’ve failed to back up your rules, and are not sure what the rules should be, you can deploy the recommended VIOS rules by using the following command:
$ rules -o deploy -d
Then, if you wish to copy these new VIOS recommended rules to your current rules file, just run:
$ rules -o capture
Note: This will overwrite any customized rules in the current rules file.
Applying Updates
Warning:
If the target node to be updated is part of a redundant VIOS pair, the VIOS partner node must be fully operational before beginning to update the target node.
Note 1:
For VIOS nodes that are part of an SSP cluster, the partner node must be shown in 'cluster -status' output as having a cluster status of OK and a pool status of OK. If the target node is updated before its VIOS partner is fully operational, client LPARs may crash.
Note 2:
It is suggested to run the updateios command with the "-install" option. This option installs new base-level filesets that are shipped along with the existing fileset updates, for example, enabling the resize command in VIOS 4.1.1.10 onwards.
Running the updateios command with the "-install" flag may lead to a few fileset update error messages that can be ignored. The filesets for which the error message can be ignored are listed below:
X11.apps.msmit 7.3.3.0 # AIXwindows msmit Application
bos.msg.ca_ES.mls.rte 7.3.3.0 # Trusted AIX Messages - Catalan
bos.msg.ca_ES.svprint 7.3.3.0 # System V Print Subsystem Msg...
bos.msg.de_DE.mls.rte 7.3.3.0 # Trusted AIX Messages - German
bos.msg.de_DE.svprint 7.3.3.0 # System V Print Subsystem Msg...
bos.msg.en_US.mls.rte 7.3.3.0 # Trusted AIX Messages - U.S. ...
bos.msg.es_ES.mls.rte 7.3.3.0 # Trusted AIX Messages - Spanish
bos.msg.es_ES.svprint 7.3.3.0 # System V Print Subsystem Msg...
bos.msg.fr_FR.mls.rte 7.3.3.0 # Trusted AIX Messages - French
bos.msg.fr_FR.svprint 7.3.3.0 # System V Print Subsystem Msg...
bos.msg.it_IT.mls.rte 7.3.3.0 # Trusted AIX Messages - Italian
bos.msg.it_IT.svprint 7.3.3.0 # System V Print Subsystem Msg...
bos.msg.pt_BR.mls.rte 7.3.3.0 # Trusted AIX Messages - Brazi...
bos.msg.pt_BR.svprint 7.3.3.0 # System V Print Subsystem Msg...
bos.net.tcp.rcmd 7.3.3.0 # TCP/IP Remote Command Client...
bos.net.tcp.rcmd_server 7.3.3.0 # TCP/IP Remote Command Server...
Also, in case you choose to run updateios without the "-install" flag, you will not see any of the error messages above, but you will miss the installation of the resize command and the "RSCT Software Resource Manager", along with a number of language message filesets for the locales "it_IT, fr_FR, ca_ES, pt_BR, de_DE, es_ES". Below is the list of filesets that you will miss:
bos.msg.it_IT.txt.tfs - Text Formatting Services Msgs - Italian
bos.msg.it_IT.net.ipsec - IP Security Messages - Italian
bos.msg.it_IT.alt_disk_inst - Alternate Disk Install Msgs -Italian
bos.msg.it_IT.diag.rte - Hardware Diagnostics Messages - Italian
(Same filesets as above for fr_FR, ca_ES, pt_BR, de_DE, es_ES)
X11.apps.xterm - AIXwindows xterm Application
X11.apps.xdm - AIXwindows xdm Application
X11.apps.custom - AIXwindows Runtime Common Directories
X11.apps.config - AIXwindows Configuration Applications
X11.apps.clients - AIXwindows Client Applications
rsct.msg.EN_US.basic.rte - RSCT Basic Msgs - U.S. English (UTF)
rsct.opt.softwarerm - RSCT Software Resource Manager
rsct.msg.en_US.opt.software - RSCT Software RM Msgs - U.S. English
rsct.msg.EN_US.opt.software - RSCT Software RM Msgs - U.S. English (UTF)
Warning:
Updating to VIOS 4.1.1.10 updates the ios.database.rte fileset, which might produce the error below; it can be ignored:
3001-408 The user "vpgadmin" has an invalid lastupdate attribute.
Instructions: Applying updates to a VIOS.
- Log in to the VIOS as the user padmin.
- If you use one or more File Backed Optical Media Repositories, you need to unload media images before you apply the Update Release. See details here.
- If you use Shared Storage Pools, then Shared Storage Pool Services must be stopped.
$ clstartstop -stop -n <cluster_name > -m <hostname >
- To apply updates from a directory on your local hard disk, follow these steps:
- Create a directory on the Virtual I/O Server.
$ mkdir <directory_name >
- Using ftp, transfer the update file(s) to the directory you created.
- To apply updates from a remotely mounted file system, where the remote file system is mounted read-only, follow these steps:
- Mount the remote directory onto the Virtual I/O Server:
$ mount remote_machine_name:directory /mnt
- The update release can be burned onto a CD by using the ISO image file(s). To apply updates from the CD/DVD drive, place the CD-ROM into the drive assigned to the VIOS.
- Commit previous updates by running the updateios command:
$ updateios -commit
- Verify the update files that were copied. This step can only be performed if the link to openssl was created.
$ cp <directory_path >/ck_sum.bff /home/padmin
$ chmod 755 /home/padmin/ck_sum.bff
$ ck_sum.bff <directory_path >
If there are missing updates or incomplete downloads, an error message is displayed.
To see how to create a link to openssl, click here.
- Apply the update by running the updateios command
$ updateios -accept -install -dev <directory_name >
- To load all changes, reboot the VIOS as user padmin .
$ shutdown -restart
Note: If the shutdown -restart command fails, run swrole -PAdmin for padmin to set the authorization and properly establish access to the shutdown command.
- If cluster services were stopped in step 3, restart cluster services.
$ clstartstop -start -n <cluster_name > -m <hostname >
- Verify that the update was successful by checking the results of the updateios command and by running the ioslevel command, which should indicate that the ioslevel is now 4.1.1.10.
$ ioslevel
Post-installation Information and Instructions
Instructions: Checking for an incomplete installation caused by a loaded media repository.
After installing an Update Release, you can use this method to determine whether you have encountered the problem of a loaded media repository.
Check the Media Repository by running this command:
$ lsrep
If the command reports: "Unable to retrieve repository data due to incomplete repository structure," then you have likely encountered this problem during the installation. The media images have not been lost and are still present in the file system of the virtual media library.
Running the lsvopt command should show the media images.
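Detecting this condition can be automated by matching the error text quoted above in the lsrep output. A minimal sketch (the captured message is reproduced from this document):

```shell
#!/bin/sh
# Check lsrep-style output for the incomplete-repository error
# described above. The sample text reproduces the documented message.
lsrep_output="Unable to retrieve repository data due to incomplete repository structure"

if printf '%s\n' "$lsrep_output" | grep -q "incomplete repository structure"; then
    echo "media repository is damaged; follow the recovery steps below"
else
    echo "media repository looks intact"
fi
```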
Instructions: Recovering from an incomplete installation caused by a loaded media repository.
To recover from this type of installation failure, unload any media repository images, and then reinstall the ios.cli.rte package. Follow these steps:
- Unload any media images
$ unloadopt -vtd <file-backed_virtual_optical_device>
- Reinstall the ios.cli.rte fileset by running the following commands.
To escape the restricted shell:
$ oem_setup_env
To install the failed fileset:
# installp -Or -agX ios.cli.rte -d <device/directory >
To return to the restricted shell:
# exit
- Restart the VIOS.
$ shutdown -restart
- Verify that the Media Repository is operational by running this command:
$ lsrep
Content modified in this release
Software Updated
- Two versions of the Postgres database (15.6 and 13.14) are included in the base image. Postgres version 13 is required to support mixed-mode clusters, where nodes at versions 3.1.4.x and 4.1.0.x are part of the same cluster.
Content removed from 4.1.0.x release and above
ITM Agents software
ITM (IBM Tivoli Monitoring) filesets are not part of VIOS 4.1.0.00 and above. Users need to download and install the ITM software from an external location. The ITM VIOS Premium Agent and ITM CEC Base Agent can be downloaded and installed separately as part of an updated IBM Tivoli Monitoring System P Agents 6.22 Fix Pack 4 or later bundle. Here is a link to the readme file containing information about how to obtain the image and instructions for installing. When these agents are installed in the default directory of /opt/IBM/ITM, they can continue to use the cfgsvc, startsvc, and stopsvc commands to configure, start, and stop the agent.
AMS
The Active Memory Sharing (AMS) feature is removed.
Software Removed
The following filesets, deemed unnecessary, are removed.
- bos.net.tcp.rcmd, bos.net.tcp.rcmd_server (A copy is saved under: /usr/sys/inst.images/installp/ppc)
- cas.agent tivoli.tivguid
- rsct.opt.fence.blade rsct.opt.fence.hmc
- sysmgt.cim.providers.metrics sysmgt.cim.providers.osbase
- sysmgt.cim.providers.scc sysmgt.cim.providers.smash
- sysmgt.cim.smisproviders.hba_hdr sysmgt.cim.smisproviders.hhr
- sysmgt.cim.smisproviders.vblksrv sysmgt.cimserver.pegasus.rte
- Java7.jre Java7.sdk Java7_64.jre Java7_64.sdk
- itm.cec.agent itm.premium.rte itm.vios_premium.agent
- bos.net.nfs.server devices.vdevice.IBM.vfc-client.rte
- X11.adt.ext X11.adt.motif X11.apps.clients X11.apps.config X11.apps.custom X11.apps.msmit X11.apps.xdm
- X11.apps.xterm X11.base.xpconfig X11.compat.adt.Motif12 X11.compat.lib.Motif10 X11.compat.lib.Motif114
- X11.compat.lib.X11R3 X11.compat.lib.X11R4 X11.Dt.bitmaps X11.Dt.helpinfo X11.Dt.helpmin X11.Dt.helprun
- X11.Dt.lib X11.Dt.rte X11.Dt.ToolTalk X11.fnt.coreX X11.fnt.deform_JP X11.fnt.fontServer X11.fnt.Gr_Cyr_T1
- X11.fnt.ibm1046 X11.fnt.ibm1046_T1 X11.fnt.iso1 X11.fnt.iso2 X11.fnt.iso3 X11.fnt.iso4 X11.fnt.iso5
- X11.fnt.iso7 X11.fnt.iso8 X11.fnt.iso8_T1 X11.fnt.iso9 X11.fnt.iso_T1 X11.fnt.ksc5601.ttf X11.fnt.ucs.cjk
- X11.fnt.ucs.com X11.fnt.ucs.ttf_CN X11.fnt.ucs.ttf_extb X11.fnt.util X11.loc.en_US.base.lib
- X11.loc.en_US.base.rte X11.loc.en_US.Dt.rte X11.vsm.lib
Fixes included in this release
|
APAR |
Description |
|
IJ41832 |
Install images for ICU4C.adt |
|
IJ53208 |
DEFRAGFS -F ON A LARGE JFS2 FILESYSTEM MAY CAUSE CORRUPTION |
|
IJ53253 |
SHUTDOWN REPORTS FAILURES DUE TO /DEV/CONSOLE NOT AVAILABLE |
|
IJ53254 |
Incomplete output by "malloc allocation size==X" command in dbx |
|
IJ53255 |
PASSIVE VG FAILS TO VARYON AFTER LKU WITH POWERHA |
|
IJ53256 |
INSTFIX FAILS TO LIST EFIXES WITH ABSTRACTS LONGER THAN 64 |
|
IJ53257 |
SGID NOT PRESERVED WHEN IFIX IS INSTALLED IN NIMADM |
|
IJ53258 |
EMGR_SEC_PATCH MAY FAIL TO INSTALL IFIX |
|
IJ53259 |
EXTENDLV WITH MIRROR POOLS MAY ALLOCATE PARTITIONS INCORRECTLY |
|
IJ53260 |
REORGVG TERMINATION MAY HANG THE PASSIVE NODE DURING FAILOVER |
|
IJ53261 |
find command might not work due to mount permissions |
|
IJ53262 |
rest7 testcase failure |
|
IJ53263 |
Config Script Failures Due to Broken Pipe Fix |
|
IJ53264 |
diff command fails with Standard error is empty |
|
IJ53265 |
DEFERD:UNIXv7: SigWait: child exited with code 130 |
|
IJ53266 |
sh: Broken pipe msg displayed during migration. |
|
IJ53268 |
POSSIBLE DSI IN ISCSISW_NEXT_WAITING_DEV() |
|
IJ53269 |
PORT SPEED IS UNKNOWN FOR FC ADAPTER |
|
IJ53270 |
AVOID ACS_SLOCK CONTENTION CAUSING NETWORK DELAYS |
|
IJ53271 |
aix vscsi client may hang in rare case. |
|
IJ53272 |
VIOS may crash at npiv_init_cmd |
|
IJ53273 |
LNC2ENTDD NEEDS LSO SECURITY CHECK IN VNIC PATH |
|
IJ53274 |
SYSTEM CRASH IN SELPOLL() |
|
IJ53275 |
Debug info to print the process tree in case of checkpoint fail |
|
IJ53276 |
FORCE UNMOUNT OF A J2 FILESYSTEM MAY TAKE LONGER UNDER NFS ENV |
|
IJ53277 |
kernel assert, if vsid is invalid in vm_attinfo |
|
IJ53278 |
mallocdebug log fails to log stacks in monothreaded |
|
IJ53279 |
SECLDAPCLNTD HANG WHEN USING NETCD |
|
IJ53283 |
EIO ERRORS FOR LOCK REQUESTS DUE TO BAD_SEQID |
|
IJ53284 |
POTENTIAL CRASH IN BERKLEY PACKET FILTER (BPF) DRIVER |
|
IJ53285 |
MISSING -EXCLUDE FLAG IN BACKUPIOS HELP |
|
IJ53286 |
BACKUPIOS CAN HANG WHEN THE STDERR IS BIGGER THAN 32K |
|
IJ53291 |
host -n should give error mesage for new bind.rte |
|
IJ53292 |
Getting usage error for mkldap with invalid values |
|
IJ53293 |
encr not disabled for hd7 |
|
IJ53294 |
ioslevel change for 72Z to 3.1.4.60 and for 73F to 4.1.1.10 |
|
IJ53297 |
DUPLICATE UDP SOCKETS CAN GET CREATED CAUSING CONNECTION ISSUES |
|
IJ53341 |
Dynamic tuning of the lldp_mode not reflected in Etherchannel |
|
IJ53346 |
THE BOS.LOC.UTF LANGUAGE FILESETS BROKEN DURING TL UPDATE |
|
IJ53353 |
ADAPTER RATE REPORTING INCORRECT INFO |
|
IJ53355 |
RoCE adapter's NIC VF's uses extra 128M though not required |
|
IJ53358 |
REPLACE 'IBM' WITH 'IPS' IN AIX COMMAND OUTPUTS ON IPS HARDWARE |
|
IJ53386 |
OSLEVEL COMMAND MIGHT DISPLAY INCORRECTLY IN 7300-TL3 |
|
IJ53407 |
TE: signature is volatile, hash verification is also skipped |
|
IJ53408 |
RARE CRASH WHEN STARTING SYSTEM TRACE IN PERFPMR |
|
IJ53412 |
SYSTEM CRASH IN XM_BAD_FREE CALLED FROM ASO_THCP_START_STOP_TRC |
|
IJ53415 |
ADD SANITY CHECKS TO NFS READDIR |
|
IJ53416 |
ADD VNODE-RELATED TRACING |
|
IJ53417 |
FC SCSI DRIVER IDENTIFIES ITSELF AS TARGET |
|
IJ53418 |
ISST:Observed crash -> Kernel abend_trap for check_free sta |
|
IJ53419 |
Tampering cert. file doesn't fail emgr_download_ifix execution |
|
IJ53459 |
1MT: Change the way we look for new RQ in setrq |
|
IJ53465 |
tsd,lib.dat on LDAP for updates of .a can lead to hash mismatch |
|
IJ53468 |
checkin ESA Aix for Call Home Connect Cloud(CHCC) files |
|
IJ53469 |
DEV: clutil tool needs to refer to the absolute python path. |
|
IJ53470 |
NIMADM TO 7300-03 MAY LOSE USER FS FROM /ETC/FILESYSTEMS |
|
IJ53478 |
VIO client I/O may hang or fail |
|
IJ53487 |
AIX not using drive reported ANATT values |
|
IJ53491 |
INCOMPLETE ARRAY TYPE FREE_SOCK_HASH_TABLE |
|
IJ53497 |
iso_addr() is not returning proper value. |
|
IJ53498 |
failed to download CRL |
|
IJ53500 |
Wrong/junk machine serial number going in json request packet |
|
IJ53501 |
handling return code value of emgr* script |
|
IJ53502 |
epkg not generating secfiles |
|
IJ53529 |
LPPCHK -F REPORTS FILE SIZE AS 0 INCORRECTLY IN AIX 7.3 TL3 |
|
IJ53539 |
rmvirprt core dump |
|
IJ53568 |
Signature verification failed for /usr/sbin/snmpd |
|
IJ53569 |
du command in baselib_sh.sh throws error with XPG_SUS_ENV=ON |
|
IJ53570 |
DEFER:FVLI-1FC: /etc/qconfig entries corrupted |
|
IJ53571 |
APP HANG DUE TO RACE CONDITION BETWEEN SOCKET READ AND CLOSE |
|
IJ53629 |
CRASH DUE TO INVALID PAGE FAULT IN NFS4_ANY_LOCK |
|
IJ53635 |
RMDEV COMMAND MAY HANG WHEN REMOVING FC ADAPTER AFTER EEH EVENT |
|
IJ53636 |
DURING HBA FW DOWNLOAD VIOS LOGS FCA_ERR4 WITH 0X3E |
|
IJ53637 |
IO MAY FAIL AFTER DYNAMIC TRACKING FAILURE |
|
IJ53640 |
2025a timezone database update |
|
IJ53641 |
Performance loss on AIX 7.3 TL3 with offload_iodone enabled |
|
IJ53642 |
POSSIBLE STACK CORRUPTION WHEN MLXCENT_HW_TX_DESC_WRITE() CALL |
|
IJ53661 |
lku of powervc managed lpar may remove disk from lpar |
|
IJ53669 |
Update kernel copyright notice for 2025 |
|
IJ53672 |
SNAP SVCOLLECT DOES NOT COLLECT VIOSVC.OUT & VIOSVC.ERR |
|
IJ53683 |
SEA stats show garbage values when it has a control channel. |
|
IJ53684 |
Probable system crash at xm_bad_free |
|
IJ53686 |
SNAP SVCOLLECT GENERATES PERMISSION DENIED IN SVCOLLECT.ERR |
|
IJ53687 |
SSPDB IS MISSING(SNAP SVCOLLECT IN PADMIN):PG_VERSION NOT EXIS |
|
IJ53690 |
Poor network performance because of shared global variable |
|
IJ53703 |
NIM ADAPTER_DEF OPERATION FAILS WITH NFSV4 |
|
IJ53734 |
ERRORS LOGGED APPLYING VIOS 4.1.1 IOS.ARTEX_PROFILE.RTE |
|
IJ53746 |
Lock contention issue during relogin to an iSCSI target device |
|
IJ53754 |
unexpected ASYNCHRONOUS EVENTS default behavior |
|
IJ53762 |
LPAR HUNG REQUESTING A SNAP OR DUMP ANALYSE |
|
IJ53765 |
tcp_init_window no option displays incorrect unit type |
|
IJ53775 |
PARTITIONS (P9/P10 MODE) CRASH AFTER SMTCTL -M LIMIT |
|
IJ53778 |
nimclient -l lists resources from wrong customer object |
|
IJ53779 |
DSI IN DISPATCH() DUE TO INCONSISTENCIES IN THE THREAD STRUCTURE |
|
IJ53780 |
IMPROPER INVOLUNTARY CONTEXT SWITCH COUNT |
|
IJ53790 |
NMON VOLUME-GROUP MISLEADING WHEN AIX HAS VARYOFF VG |
|
IJ53792 |
A potential security issue exists |
|
IJ53793 |
nimsh subsystem args should not preserve '-c' during reinit |
|
IJ53803 |
ipsec_logd not automatically stopped during LKU |
|
IJ53826 |
KERNEL EXTENSIONS CODE AND DATA NOT ACCESSIBLE IN DUMPS |
|
IJ53832 |
A potential security issue exists |
|
IJ53833 |
A potential security issue exists |
|
IJ53857 |
FUSER WITH -D OPTION MAY NOT OUTPUT ALL TEMPORARY FILES |
|
IJ53858 |
EMGR_CHECK_IFIXES FAILS WITH CRLFILE_NAME CANNOT CREATE |
|
IJ53904 |
COMMANDS LIKE LSNPORTS, LSMAP, FCSTAT, SNAP HANG ON VIOS |
|
IJ53906 |
AIX may crash after io_dma set to 2048 per fcs port. |
|
IJ53907 |
FREELIST_KPROC() CONSUMING HIGH AMOUNT OF CPU |
|
IJ53908 |
CONFIG_CONN_PATH DOES NOT SUPPORT JAVA8 |
|
IJ53918 |
A potential security issue exists |
|
IJ53919 |
A potential security issue exists |
|
IJ53944 |
RR_START failed with EBUSY during |
|
IJ53945 |
SYSTEM CRASH IN RTFREE() WITH CACHED_ROUTE NO OPTION ENABLED |
|
IJ53949 |
NIM TAKEOVER FAILS FOR CLIENT WITH SIT INTERFACE CONFIGURED |
|
IJ54016 |
CRASH IN BUILD_MAP_LIST AT BOOT SRC 888-102-300-0C5 |
|
IJ54018 |
FOC73F_SP1:FVTR-DEV:IO failed on RDX after EEH recovery |
|
IJ54020 |
Link Speed status shows "Unknown" for SRIOV VF port |
|
IJ54042 |
emgr_check_ifixes does not recognize VIOS OS correctly |
|
IJ54052 |
OpenxlC fails while compiling code with -D_LINUX_SOURCE_COMPAT |
|
IJ54056 |
ALT_DISK_COPY "NO SUCH DEVICE OR ADDRESS" WARNING MESSAGE |
|
IJ54059 |
A potential security issue exists |
|
IJ54082 |
LKU: INCREASED BLACKOUT DUE TO NFS MOUNT POINT RESTORE |
|
IJ54116 |
Application variable resetting to NULL post LLU in p_option() |
|
IJ54118 |
access to double type variable broken with 64-bit assembler |
|
IJ54127 |
dumpcheck command fails in any locale except C |
|
IJ54133 |
LISTING NIM FILESETS ON MASTER FROM NIM CLIENT MAY BE SLOW OR HANG |
|
IJ54152 |
NIMADM FAILS MOUNT FILE TYPE RESOURCES WITH NFSV4 |
|
IJ54181 |
SYNCVG -Q MAY CORE DUMP IF MULTIPLE SYNCVG IS RUN CONCURRENTLY |
|
IJ54211 |
SYSTEM CRASH IN FASTLO_SETUP |
|
IJ54212 |
NFS4 MOUNT FAILS IN NIMADM SETUP FROM AIX 73 TL 1 LEVEL |
|
IJ54230 |
POTENTIAL CRASH IN ALIGNMENT FAULT HANDLER |
|
IJ54231 |
Reduce the traces in tcp path related to DSS |
|
IJ54284 |
POST VIOSUPGRADE 4.1, 3 FILES HAVE WRONG GROUP/MODE (TRUSTCHK) |
|
IJ54288 |
Ports become unavailable for EC3M because of EEH |
|
IJ54289 |
Potentially higher IO service time with iodone offload |
|
IJ54427 |
NFSV4 GROUP IS SHOWING AS NOBODY IF IT HAS MANY USERS |
|
IJ54429 |
SYSTEM CRASH RAS_KRPC_SRV_REGISTER / KRPC_RTEC_ERROR_HANDLER |
|
IJ54434 |
lgamma setting ERANGE for NaN |
|
IJ54444 |
Signal context not stored properly during LLU |
|
IJ54448 |
NIM MKSYSB MAY FAIL WITH LANG=EN_US.UTF-8 |
|
IJ54450 |
Ctrl+X to dconsole read mode terminates HMC console |
|
IJ54497 |
LI-2E2:ISST:LLU is generating DB2 core :denfallcog03 |
|
IJ54498 |
'snap' command hangs during lvm data collection if VG is locked |
|
IJ54501 |
VIOS snap files have insufficient permissions for padmin user |
|
IJ54503 |
_LARGE_FILES macros cause conflicts with C++ open |
|
IJ54534 |
GZIP QOS CREDIT ADD FAILS DUE TO DRSLOT_CHRP_ACC TIMEOUT |
|
IJ54536 |
DR CPU REMOVAL FAILURE WITH EBUSY |
|
IJ54544 |
EFS ENABLEMENT FOR MKSYSB ON TAPE |
|
IJ54548 |
LDAP FAILED TO SEARCH ERROR WITH CHARACTER |
|
IJ54549 |
FOC73F_SP1:UNIXv7:Standard output not same as file test.714.eso |
|
IJ54550 |
chfs -a logshuffle={INLINE} fails to update LVC |
|
IJ54551 |
UNEXPECTED "USAGE:" MESSAGE FOR 'LSATTR' WHEN COLLECTING 'SNAP' |
|
IJ54569 |
CHROOT() MIGHT LEAK KERNEL HEAP FOR CREDALLOC |
|
IJ54603 |
LPAR crashed with stack vioent_hw_send with tracelevel >=4 |
|
IJ54608 |
suma -x fails due to newline char in oslevel output |
|
IJ54662 |
FILENAME NOT DISPLAYED IF FULLPATH=ON AND FILE_OPEN EVENT FAILS |
|
IJ54665 |
MPSTAT REPORTING FAR DISPATCHES WITH SINGLE SRAD |
|
IJ54668 |
LLU got hung in phase1 with DT7 |
|
IJ54669 |
system crashed with pvthread+00D800 |
|
IJ54672 |
Crash with diag update of invalid secured firmware image |
|
IJ54673 |
Probable system crash at iodone(). |
|
IJ54676 |
PERF:LI-2E2:ORA-03114: not connected to ORACLE POST LLU |
|
IJ54677 |
bulk_unmap:check TCE_MIRROR_REMOTE flag for finding remote TCE |
|
IJ54717 |
ASO core dump while trying to query for WLM data |
|
IJ54718 |
SNAP files are not getting removed after sending transaction |
|
IJ54719 |
ASO may core dump when LRU is active. |
|
IJ54720 |
bos.diag.rte.config_u failure during update from 73D to 73F |
|
IJ54722 |
LKU may fail after upgrading from 73D/73F to 73H |
|
IJ54724 |
LI-2E2:Debug tools need to be updated with new vapi glue code add |
|
IJ54741 |
unknown millicode error on P10 LKU. |
|
IJ54754 |
A potential security issue exists |
|
IJ54794 |
Handle port login failures for ESTALE |
|
IJ54812 |
LKU for AIX TL upgrade failed. |
|
IJ54870 |
llvupdate command shows success even for failure cases |
|
IJ54871 |
TEMPORARY AUTH / PASSWDEXPIREDX FAILURES FOR LDAP |
|
IJ54872 |
SECLDAPCLNTD MEMORY LEAK USING STARTTLS |
|
IJ54875 |
ALT_DISK_COPY NOT PRESERVING EFS MOUNT POINTS. |
|
IJ54884 |
error when one Crypt device is Defined and is Available |
|
IJ54893 |
emgr_check_ifixes fails |
|
IJ54945 |
Network performance degradation due to locking in sea_trace |
|
IJ54946 |
Cgo programs failing with Segmentation fault |
|
IJ54949 |
A potential security issue exists |
|
IJ54994 |
viosupgrade from 72X to 73F might fail due to viosbr restore |
|
IJ55032 |
System crash at llvupdate thread |
|
IJ55036 |
SRN FFC-601 is not observed after running the vanguard |
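The fixes above ship only as part of Update Release 4.1.1.10, so a partition that already reports that ioslevel has all of them. As a minimal sketch (the `installed` value below is hard-coded for illustration; on a live VIOS it would come from the `ioslevel` command instead), a POSIX shell comparison can tell whether the Update Release still needs to be applied:

```shell
# Compare an installed VIOS level against the target Update Release level.
# NOTE: "installed" is hard-coded here for illustration; on a real VIOS,
# replace it with: installed=$(ioslevel)
installed="4.1.0.20"
target="4.1.1.10"

# Sort the two levels numerically, dot-field by dot-field; whichever
# sorts first is the lower level.
lower=$(printf '%s\n%s\n' "$installed" "$target" \
        | sort -t. -k1,1n -k2,2n -k3,3n -k4,4n | head -n1)

if [ "$installed" = "$target" ]; then
    echo "already at $target"
elif [ "$lower" = "$installed" ]; then
    echo "update needed"
else
    echo "newer than $target"
fi
```

With the illustrative value above the comparison reports that the update is needed; once a VIOS is actually at 4.1.1.10, the `ioslevel` output matches the target and the check reports the level as current.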
Document Information
Modified date:
21 July 2025
UID
ibm17239827