Fix Readme
Abstract
Release notes for the 3.1.4.41 VIOS Fix Pack release
Content
VIOS 3.1.4.41 Release Notes
Package Information
PACKAGE: Update Release 3.1.4.41
IOSLEVEL: 3.1.4.41
VIOS level | The AIX level of the NIM Master must be equal to or higher than
Update Release 3.1.4.41 | AIX 7200-05-08
General package notes
Be sure to heed all minimum space requirements before installing.
Review the list of fixes included in Update Release 3.1.4.41
To take full advantage of all the functions available in the VIOS, it may be necessary to be at the latest system firmware level. If a system firmware update is necessary, it is recommended that the firmware be updated before you update the VIOS to Update Release 3.1.4.41.
Microcode or system firmware downloads for Power Systems
If the VIOS being updated has filesets installed from the VIOS Expansion Pack, be sure to update those filesets with the latest VIOS Expansion Pack if updates are available.
Update Release 3.1.4.41 updates your VIOS partition to ioslevel 3.1.4.41. To determine if Update Release 3.1.4.41 is already installed, run the following command from the VIOS command line.
$ ioslevel
If Update Release 3.1.4.41 is installed, the command output is 3.1.4.41.
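That check can be scripted. A minimal sketch (the current level is hard-coded here for illustration; on a VIOS, set it with current=$(ioslevel) instead):

```shell
# Compare the reported ioslevel against the target Update Release.
target="3.1.4.41"
current="3.1.4.30"   # on a VIOS: current=$(ioslevel)
if [ "$current" = "$target" ]; then
  echo "Update Release $target is already installed"
else
  echo "VIOS is at $current; update required"
fi
# -> VIOS is at 3.1.4.30; update required
```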
Note: The VIOS installation DVDs and the level of VIOS preinstalled on new systems might not contain the latest fixes available. It is highly recommended that customers who receive the physical GA level (for example, 3.1.4.30) update to the electronic GA level (for example, 3.1.4.41) as soon as possible, because missing fixes might be critical to the proper operation of the system. Update these systems to a current service pack level from Fix Central.
For Customers using NVMe Over Fabric (SAN) as their Boot Disk
Booting from an NVMeoF disk may fail if certain fabric errors are returned, so a boot disk configured with multiple paths is recommended. If the boot fails, exiting the SMS menu may allow the boot process to continue. Another potential workaround is to discover the boot LUNs from the SMS menu and then retry the boot.
For Customers Using Third Party Java-based Software
This only applies to customers who both use third-party Java-based software and have run updateios -remove_outdated_filesets to remove Java 7 from their system.
To prevent errant behavior from editing the customer's /etc/environment file, updateios does not make changes to that file when it runs. If you use software that depends on Java and on having its path in the PATH environment variable, make the following edit so that programs that use the PATH environment variable can locate Java 8.
In the /etc/environment file, customers should see:
PATH=[various directories]:/usr/java7_64/jre/bin:/usr/java7_64/bin
To address a potential issue with Java-dependent third party software, this should be converted to:
PATH=[various directories]:/usr/java8_64/jre/bin:/usr/java8_64/bin
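A minimal sketch of that edit using sed, shown against a sample PATH line rather than the live file (on the VIOS, back up /etc/environment first and run the substitution against it from the root shell):

```shell
# Replace the Java 7 directories in a PATH line with their Java 8 equivalents.
# Shown against a sample string; on a VIOS, apply the same sed expression
# to /etc/environment (after making a backup copy).
sample='PATH=/usr/bin:/etc:/usr/java7_64/jre/bin:/usr/java7_64/bin'
echo "$sample" | sed 's,/usr/java7_64,/usr/java8_64,g'
# -> PATH=/usr/bin:/etc:/usr/java8_64/jre/bin:/usr/java8_64/bin
```

Editing /etc/environment directly with an editor works equally well; the sed form is just a way to make the change consistently across all PATH entries.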
ITM Agents Software
ITM (IBM Tivoli Monitoring) filesets continue to be pre-installed as part of VIOS 3.x. The agents can be updated using one of the methods below:
- To update the agent and shared components (e.g., JRE, GSKit) to the latest levels, download the latest image included in the IBM Tivoli Monitoring System P Agents 6.22 Fix Pack 4 bundle. The readme file for that bundle contains information about how to obtain the image and instructions for installing it.
- To update just the agent's shared components (e.g., JRE, GSKit), install the latest ITM service pack.
LDAP fileset updates
For VIOS partitions originally installed at 3.1.4.30 or later, errors updating the idsldap 6.4 filesets can be safely ignored as version 10.0 of the idsldap filesets are already present on the system.
Please ignore the following errors:
installp: APPLYING software for:
idsldap.license64.rte 6.4.0.25
…
…
Error: IBM Security Directory Server License not detected. Install cannot continue.
installp: Failed while executing the idsldap.license64.rte.pre_i script.
0503-464 installp: The installation has FAILED for the "usr" part
of the following filesets:
idsldap.license64.rte 6.4.0.25
installp: Cleaning up software for:
idsldap.license64.rte 6.4.0.25
Please ignore the following entries in the "Installation Summary":
idsldap.license64.rte 6.4.0.25 USR APPLY FAILED
idsldap.license64.rte 6.4.0.25 USR CLEANUP SUCCESS
idsldap.cltbase64.rte 6.4.0.25 USR APPLY CANCELED
idsldap.cltbase64.adt 6.4.0.25 USR APPLY CANCELED
idsldap.clt64bit64.rte 6.4.0.25 USR APPLY CANCELED
idsldap.clt32bit64.rte 6.4.0.25 USR APPLY CANCELED
Note
- If the Virtual I/O Servers are installed on POWER10 systems and are configured with the 32Gb PCIe4 2-Port FC Adapter (Feature Codes EN1J and EN1K), the adapter microcode must be updated to level 7710812214105106.070115 before the Virtual I/O Server is updated to level 3.1.4.41.
Refer to the adapter microcode release notes for details.
3.1.4.41 New Features
VIOS 3.1.4.41 adds the following new features:
VIOS Shared Storage Pool Logging Enhancements
The two major enhancements for VIOS Shared Storage Pool in this release are as follows:
- The creation of a dbn.log file within a Shared Storage Pool (SSP).
This file tracks all elections and relinquishments of the Database Node (DBN) role, which makes DBN-related problems easier to debug.
- The compression and storage of vio_daemon logs.
The number of logs that can be retained is increased by 15 times with no impact to storage capacity. This is done by compressing old VIOS logs and tagging them with appropriate date and time information, which reduces the risk of logs that contain critical information being overwritten by newer logs.
N_Port ID Virtualization (NPIV) Enhancements: NVMeoF Protocol Support
NPIV is a standardized method for virtualizing a physical Fibre Channel (FC) port. An NPIV-capable FC host bus adapter (HBA) can have multiple N_Ports, each with a unique identity. NPIV, coupled with the adapter-sharing capabilities of the Virtual I/O Server (VIOS), allows a physical Fibre Channel HBA to be shared across multiple guest operating systems. The PowerVM implementation of NPIV enables POWER® logical partitions (LPARs) to have virtual Fibre Channel host bus adapters, each with a dedicated
worldwide port name. Each virtual Fibre Channel HBA has a unique storage area network (SAN) identity similar to that of a dedicated physical HBA.
The Non-Volatile Memory Express over Fabrics (NVMeoF) protocol in the NPIV stack is supported in Virtual I/O Server Version 3.1.4.0. A single virtual adapter provides access to both Small Computer Systems Interface (SCSI) and NVMeoF protocols if the physical adapter can support them. The application, which is running on the client partition and capable of handling the NPIV-NVMeoF protocol, can send I/Os in parallel to SCSI and NVMeoF disks that are coming from a single virtual adapter. The hardware and software requirements for NVMeoF protocol enablement in the NPIV stack are as follows:
- VIOS Version 3.1.4.0, or later
- NPIV-NVMeoF capable client (currently AIX® Version 7.3 Technology Level 01, or later)
- POWER 10 system with firmware version FW 1030, or later
- 32 or 64 GB FC adapters with physical NVMeoF support
VIOS Operating System Monitoring Enhancement
This release adds support for monitoring the VIOS operating system state by the POWER Hypervisor. If the VIOS partition becomes unresponsive (due to certain conditions), the hypervisor restarts the VIOS partition and takes a system dump for debugging purposes. This helps the VIOS partition recover from errors, for example, when system progress stops because a CPU is monopolized by the highest-priority interrupt. The ioscli viososmon command is added to display the hang detection interval and the action that is taken when a hang is detected. This support requires POWER firmware version FW 1030, or later, and VIOS Version 3.1.4.0, or later.
Support for NFSv4 Mounts on VIOS
The ioscli mount command, which previously supported only the AIX NFSv3 mount by default, is updated to support NFSv4 mounting. The changes allow the VIOS to invoke commands for the following actions:
- Setting Network File System (NFS) domain using chnfsdom from command line interface (CLI)
The setting of the NFS domain is accomplished by adding Role-based access control (RBAC) support for the chnfsdom command.
- Invoking the NFSv4 mounting
The ioscli mount command is updated to support invoking the NFSv4 mounting. The current ioscli mount command defaults to NFSv3. You can invoke the "-o vers=4" option through the new "-nfsvers <version> <Node>:<Directory> <Directory>" option that is added to the ioscli mount command. The supported values for the version are 3 and 4.
Note: The ioscli mount command supports NFS versions that are supported by the AIX mount command.
- Starting the nfsrgyd daemon
For version 4 mounts, if the mount is successful, a check determines whether the nfsrgyd daemon is already running; if it is not, the nfsrgyd daemon is started.
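For example, an NFSv4 mount using the -nfsvers option described above might look like this (the server name and directories here are hypothetical):

```
$ mount -nfsvers 4 nfs_server1:/exports/data /mnt
```

With -nfsvers 3, or with no -nfsvers option at all, the command behaves as the NFSv3 default described above.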
Hardware Requirements
VIOS 3.1.4.41 can run on any of the following Power Systems:
POWER 8 or later.
Known Capabilities and Limitations
The following requirements and limitations apply to Shared Storage Pool (SSP) features and any associated virtual storage enhancements.
Requirements for Shared Storage Pool
- Platforms: POWER 8 and later (includes Blades), IBM PureFlex Systems (Power Compute Nodes only)
- System requirements per SSP node:
- Minimum CPU: 1 CPU of guaranteed entitlement
- Minimum memory: 4GB
- Storage requirements per SSP cluster (minimum): one Fibre Channel-attached disk for the repository (1 GB)
- At least one Fibre Channel-attached disk for data (10 GB)
Limitations for Shared Storage Pool
Software Installation
- When installing updates for VIOS Update Release 3.1.4.41 on a VIOS participating in a Shared Storage Pool, the Shared Storage Pool services must be stopped on the node being updated.
SSP Configuration
Feature | Min | Max
Number of VIOS Nodes in Cluster | 1 | 16*
Number of Physical Disks in Pool | 1 | 1024
Number of Virtual Disks (LUs) Mappings in Pool | 1 | 8192
Number of Client LPARs per VIOS node | 1 | 250*
Capacity of Physical Disks in Pool | 10GB | 16TB
Storage Capacity of Storage Pool | 10GB | 512TB
Capacity of a Virtual Disk (LU) in Pool | 1GB | 4TB
Number of Repository Disks | 1 | 1
Capacity of Repository Disk | 512MB | 1016GB
Number of Client LPARs per Cluster | 1 | 2000
*Support for additional VIOS Nodes and LPAR Mappings:
Prerequisites for expanded support:
- Over 16 VIOS Nodes requires that the SYSTEM (metadata) tier contains only SSD storage.
- Over 250 Client LPARs per VIOS requires that each VIOS has at least 4 CPUs and 8 GB of memory.
Here are the new maximum values for each of these configuration options, if the associated hardware specification has been met:
Feature | Default Max | High Spec Max
Number of VIOS Nodes in Cluster | 16 | 24
Number of Client LPARs per VIOS node | 250 | 400
Other notes:
- Maximum number of physical volumes that can be added to or replaced from a pool at one time: 64
- The Shared Storage Pool cluster name must be less than 63 characters long.
- The Shared Storage Pool pool name must be less than 127 characters long.
- The maximum supported LU size is 4TB. However, for high-I/O workloads it is recommended to use multiple smaller LUs, because doing so improves performance. For example, 16 separate 16GB LUs would yield better performance than a single 256GB LU for applications that read from and write to a variety of storage locations concurrently.
- The size of the /var file system should be at least 3GB to ensure proper logging.
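The cluster and pool name-length limits above can be checked before creating a cluster. A minimal sketch (the names used here are hypothetical):

```shell
# Validate proposed SSP names against the documented limits:
# cluster name must be less than 63 characters,
# pool name must be less than 127 characters.
cluster_name="ssp_cluster01"
pool_name="ssp_pool01"
[ ${#cluster_name} -lt 63 ]  && echo "cluster name length ok"
[ ${#pool_name}    -lt 127 ] && echo "pool name length ok"
```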
Network Configuration
- Uninterrupted network connectivity is required for operation; that is, the network interface that is used for the Shared Storage Pool configuration must be on a highly reliable network that is not congested.
- A Shared Storage Pool configuration can use IPV4 or IPV6, but not a combination of both.
- A Shared Storage Pool configuration should configure the TCP/IP resolver routine for name resolution to resolve host names locally first, and then use the DNS. For step by step instructions, refer to the TCP/IP name resolution documentation in the IBM Knowledge Center.
- The forward and reverse lookup should resolve to the IP address/hostname that is used for Shared Storage Pool configuration.
- It is recommended that the VIOSs that are part of the Shared Storage Pool configuration keep their clocks synchronized.
Storage Configuration
- Virtual SCSI devices provisioned from the Shared Storage Pool may drive higher CPU utilization than the classic Virtual SCSI devices.
- Using SCSI reservations (SCSI Reserve/Release and SCSI-3 Reserve) for fencing physical disks in the Shared Storage pool is not supported.
- SANCOM is not supported in a Shared Storage Pool environment.
Shared Storage Pool capabilities and limitations
- On the client LPAR, a virtual SCSI disk is the only peripheral device type supported by SSP at this time.
- When creating Virtual SCSI Adapters for VIOS LPARs, the option "Any client partition can connect" is not supported.
- VIOSs configured for SSP require that Shared Ethernet Adapters (SEAs) be set up in Threaded mode (the default mode). SEA in Interrupt mode is not supported with SSP.
- VIOSs configured for SSP can be used as a Paging Space Partition (PSP), but the storage for the PSP paging spaces must come from logical devices not within a Shared Storage Pool. Using a VIOS SSP logical unit (LU) as an Active Memory Sharing (AMS) paging space or as the suspend/resume file is not supported.
- LPAR clients are not supported if they use JFS as their filesystem. If JFS is used, there is a risk of data corruption in the event of a network outage. JFS2 and other file systems are unaffected by this issue.
Installation Information
Pre-installation Information and Instructions
Ensure that rootvg contains at least 30 GB, with at least 4 GB of free space, before you attempt to update to Update Release 3.1.4.41. Run the lsvg rootvg command and verify that there is enough free space.
Example:
$ lsvg rootvg
VOLUME GROUP:       rootvg                   VG IDENTIFIER:   00f6004600004c000000014306a3db3d
VG STATE:           active                   PP SIZE:         64 megabyte(s)
VG PERMISSION:      read/write               TOTAL PPs:       511 (32704 megabytes)
MAX LVs:            256                      FREE PPs:        64 (4096 megabytes)
LVs:                14                       USED PPs:        447 (28608 megabytes)
OPEN LVs:           12                       QUORUM:          2 (Enabled)
TOTAL PVs:          1                        VG DESCRIPTORS:  2
STALE PVs:          0                        STALE PPs:       0
ACTIVE PVs:         1                        AUTO ON:         yes
MAX PPs per VG:     32512
MAX PPs per PV:     1016                     MAX PVs:         32
LTG size (Dynamic): 256 kilobyte(s)          AUTO SYNC:       no
HOT SPARE:          no                       BB POLICY:       relocatable
PV RESTRICTION:     none                     INFINITE RETRY:  no
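The FREE PPs line in the lsvg output can be checked programmatically. A minimal sketch, shown against a sample line (on a VIOS, pipe the output of `lsvg rootvg` in instead of using the sample string); the 4096 MB threshold matches the 4 GB free-space requirement stated above:

```shell
# Extract the free megabytes from the "FREE PPs: 64 (4096 megabytes)" line
# and compare against the 4 GB minimum required for the update.
line='FREE PPs:           64 (4096 megabytes)'   # on a VIOS: line=$(lsvg rootvg | grep 'FREE PPs')
free_mb=$(echo "$line" | sed -n 's/.*(\([0-9][0-9]*\) megabytes).*/\1/p')
if [ "$free_mb" -ge 4096 ]; then
  echo "rootvg has enough free space"
else
  echo "rootvg needs more free space"
fi
# -> rootvg has enough free space
```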
VIOS upgrades with SDDPCM
A single, merged lpp_source is not supported for VIOS that uses SDDPCM. However, if you use SDDPCM, you can still enable a single boot update by using the alternate method described at the following location:
SDD and SDDPCM migration procedures when migrating VIOS from version 1.x to version 2.x
Virtual I/O Server support for Power Systems
Updating from VIOS version 3.1.0.00
VIOS Update Release 3.1.4.41 may be applied directly to any VIOS at level 3.1.0.00.
Upgrading from VIOS version 2.2.4 and above
The VIOS must first be upgraded to 3.1.0.00 before the 3.1.4.41 update can be applied. See the VIOS upgrade documentation for instructions.
Before installing the VIOS Update Release 3.1.4.41
Warning: The update may fail if there is a loaded media repository.
Instructions: Checking for a loaded media repository
To check for a loaded media repository, and then unload it, follow these steps.
- To check for loaded images, run the following command:
$ lsvopt
The Media column lists any loaded media.
- To unload media images, run the following command for each Virtual Target Device that has loaded images.
$ unloadopt -vtd <file-backed_virtual_optical_device>
- To verify that all media are unloaded, run the following command again.
$ lsvopt
The command output should show No Media for all VTDs.
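The loaded devices can also be listed programmatically. A minimal sketch that extracts the names of VTDs that still have media loaded; the column layout of the sample output is an assumption based on typical lsvopt output, and on a VIOS you would pipe `lsvopt` in instead of the sample string:

```shell
# Print the name of each VTD whose Media column is not "No Media".
lsvopt_output='VTD             Media                 Size(mb)
vtopt0          update_image.iso      4000
vtopt1          No Media              n/a'   # on a VIOS: lsvopt_output=$(lsvopt)
echo "$lsvopt_output" | awk 'NR > 1 && $2 != "No" { print $1 }'
# -> vtopt0
```

Each printed device can then be unloaded with unloadopt -vtd <device>, as shown in the steps above.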
Instructions: Migrate Shared Storage Pool Configuration
The Virtual I/O Server (VIOS) Version 2.2.2.1 or later supports rolling updates for SSP clusters, so the VIOS can be updated to Update Release 3.1.4.41 by using rolling updates.
A non-disruptive rolling update to VIOS 3.1 requires all SSP nodes to be at VIOS 2.2.6.31 or later. See the detailed instructions in the VIOS 3.1 documentation.
The rolling updates enhancement allows the user to apply Update Release 3.1.4.41 to the VIOS logical partitions in the cluster individually without causing an outage in the entire cluster. The updated VIOS logical partitions cannot use the new SSP capabilities until all VIOS logical partitions in the cluster are updated.
To upgrade the VIOS logical partitions to use the new SSP capabilities, ensure that the following conditions are met:
- All VIOS logical partitions must have VIOS Update Release version 2.2.6.31 or later installed.
- All VIOS logical partitions must be running. If any VIOS logical partition in the cluster is not running, the cluster cannot be upgraded to use the new SSP capabilities.
Instructions: Verify the cluster is running at the same level as your node.
- Run the following command:
$ cluster -status -verbose
- Check the Node Upgrade Status field; you should see one of the following values:
UP_LEVEL: The software level of the logical partition is higher than the software level at which the cluster is running.
ON_LEVEL: The software level of the logical partition and the cluster are the same.
Installing the Update Release
There is now a method to verify the VIOS update files before installation. This process requires that the padmin user has access to openssl, which can be accomplished by creating a link.
Instructions: Verifying VIOS update files.
To verify the VIOS update files, follow these steps:
- Escape the restricted shell:
$ oem_setup_env
- Create a link to openssl:
# ln -s /usr/bin/openssl /usr/ios/utils/openssl
- Verify that the link to openssl was created and that both files display a similar owner and size:
# ls -alL /usr/bin/openssl /usr/ios/utils/openssl
- Return to the restricted shell:
# exit
Use one of the following methods to install the latest VIOS Service Release. As with all maintenance, you should create a VIOS backup before making changes.
If you are running a Shared Storage Pool configuration, you must follow the steps in Migrate Shared Storage Pool Configuration.
Note: While running 'updateios' in the following steps, you may see accessauth messages, but these messages can safely be ignored.
Version Specific Warning: Version 2.2.2.1, 2.2.2.2, 2.2.2.3, or 2.2.3.1
You must run the updateios command twice to fix the bos.alt_disk_install.boot_images fileset update problem.
Run the following command after the "$ updateios -accept -install -dev <directory_name>" step completes.
$ updateios -accept -dev <directory_name>
Depending on the VIOS level, one or more of the LPPs below may be reported as "Missing Requisites", and they may be ignored.
MISSING REQUISITES:
X11.loc.fr_FR.base.lib 4.3.0.0 # Base Level Fileset
bos.INed 6.1.6.0 # Base Level Fileset
bos.loc.pc.Ja_JP 6.1.0.0 # Base Level Fileset
bos.loc.utf.EN_US 6.1.0.0 # Base Level Fileset
bos.mls.rte 6.1.x.x # Base Level Fileset
Warning: If VIOS rules have been deployed
During updates, there have been occasional issues with VIOS rules files being overwritten or system settings being reset to their default values.
To ensure that this does not affect you, we recommend making a backup of the current rules file. This file is located here:
/home/padmin/rules/vios_current_rules.xml
First, to capture your current system settings, run this command:
$ rules -o capture
Then, either copy the file to a backup location, or save off a list of your current rules:
$ rules -o list > rules_list.txt
After this is complete, proceed to update as normal. When your update is complete, check your current rules and ensure that they still match what is desired. If not, either overwrite the original rules file with your backup, or proceed to use the ‘rules -o modify’ and/or ‘rules -o add’ commands to change the rules to match what is in your backup file.
Finally, if you did not back up your rules and are not sure what they should be, you can deploy the recommended VIOS rules by using the following command:
$ rules -o deploy -d
Then, if you wish to copy these new VIOS recommended rules to your current rules file, just run:
$ rules -o capture
Note: This will overwrite any customized rules in the current rules file.
Applying Updates
Warning:
If the target node to be updated is part of a redundant VIOS pair, the VIOS partner node must be fully operational before beginning to update the target node.
Note:
For VIOS nodes that are part of an SSP cluster, the partner node must be shown in 'cluster -status' output with a cluster status of OK and a pool status of OK. If the target node is updated before its VIOS partner is fully operational, client LPARs may crash.
Instructions: Applying updates to a VIOS.
- Log in to the VIOS as the user padmin.
- If you use one or more File-Backed Optical Media Repositories, you need to unload the media images before you apply the Update Release. See "Checking for a loaded media repository" above.
- If you use Shared Storage Pools, then Shared Storage Pool Services must be stopped.
$ clstartstop -stop -n <cluster_name> -m <hostname>
- To apply updates from a directory on your local hard disk, follow these steps:
- Create a directory on the Virtual I/O Server.
$ mkdir <directory_name>
- Using ftp, transfer the update file(s) to the directory you created.
- To apply updates from a remotely mounted file system that is mounted read-only, mount the remote directory onto the Virtual I/O Server:
$ mount remote_machine_name:directory /mnt
- The update release can be burned onto a CD by using the ISO image file(s). To apply updates from the CD/DVD drive, place the CD-ROM into the drive assigned to the VIOS.
- Commit previous updates by running the updateios command:
$ updateios -commit
- Verify the update files that were copied. This step can be performed only if the link to openssl was created.
$ cp <directory_path>/ck_sum.bff /home/padmin
$ chmod 755 /home/padmin/ck_sum.bff
$ ck_sum.bff <directory_path>
If there are missing updates or incomplete downloads, an error message is displayed.
To create the link to openssl, see "Verifying VIOS update files" above.
- Apply the update by running the updateios command:
$ updateios -accept -install -dev <directory_name>
- To load all changes, reboot the VIOS as user padmin.
$ shutdown -restart
Note: If the shutdown -restart command fails, run swrole PAdmin as padmin to set the authorization and establish access to the shutdown command properly.
- If cluster services were stopped in step 3, restart cluster services.
$ clstartstop -start -n <cluster_name> -m <hostname>
- Verify that the update was successful by checking the results of the updateios command and by running the ioslevel command, which should report an ioslevel of 3.1.4.41.
$ ioslevel
Post-installation Information and Instructions
Instructions: Checking for an incomplete installation caused by a loaded media repository.
After installing an Update Release, you can use this method to determine if you have encountered the problem of a loaded media library.
Check the Media Repository by running this command:
$ lsrep
If the command reports: "Unable to retrieve repository data due to incomplete repository structure," then you have likely encountered this problem during the installation. The media images have not been lost and are still present in the file system of the virtual media library.
Running the lsvopt command should show the media images.
Instructions: Recovering from an incomplete installation caused by a loaded media repository.
To recover from this type of installation failure, unload any media repository images, and then reinstall the ios.cli.rte package. Follow these steps:
- Unload any media images
$ unloadopt -vtd <file-backed_virtual_optical_device>
- Reinstall the ios.cli.rte fileset by running the following commands.
To escape the restricted shell:
$ oem_setup_env
To install the failed fileset:
# installp -Or -agX ios.cli.rte -d <device/directory>
To return to the restricted shell:
# exit
- Restart the VIOS.
$ shutdown -restart
- Verify that the Media Repository is operational by running this command:
$ lsrep
Fixes included in this release
APAR | Description
IJ41832 | Install images for bos.loc.utf.JA_JP
IJ43761 | SECLDAPCLNTD NESTED GROUP HANG, CORE, OR ERRORS WITH ITDS
IJ44713 | PASSWORD CHANGE ATTEMPT FAILS
IJ45602 | LDAP SERVER MARKED DOWN DOESN'T RECOVER QUICKLY ENOUGH
IJ46457 | INCORRECT ERROR ON LDAP USER PASSWORD CHANGE
IJ47166 | CHDEV TO CHANGE AN SEA FAILS WITH RBAC USER
IJ47358 | NPIV CLIENT REQUESTED IMPLICIT LOGO, DRIVER SENT EXPLICIT LOGO
IJ47389 | SECLDAPCLNTD GROUP CACHE NOT ALWAYS WORKING
IJ47750 | SECLDAPCLNTD SLOW MEMORY LEAK
IJ47782 | ASO NOT DISABLED AFTER UPDATE FROM VIOS 3.1.4.0 TO 3.1.4.10
IJ47943 | Trying to remove fcs adapter on vfc client crashes partition
IJ48303 | SECLDAPCLNTD GROUP CACHE IMPROVEMENT
IJ48532 | LOGINS HANG WITH TRUNCATED LASTLOG FILE
IJ48558 | MAKE SECLDAPCLNTD SERVER CONNECTION TIMEOUT ADJUSTABLE
IJ48719 | SNMPDV3 LOGS INCORRECT MESSAGE WHEN STOPSRC IS ISSUED
IJ48748 | EXPR: NOT FOUND ERRORS WHILE RESTORING MULTI VOLUME MKSYSB
IJ48791 | SYSTEM MAY CRASH WITH LVM SERIALIZED IO ENABLED ON LV
IJ48829 | ERRORS IN NMON "PEAK KB/S READ+WRITE" COLUMN
IJ48838 | NIM MKSYSB RESTORE MAY HANG FOR ENTERPRISE_CLOUD EDITION
IJ48881 | ISSUE FACED WHILE CHANGING THE IKE ENTRIES USING SMITTY TOOL
IJ48961 | LKU: SURROGATE CRASH IN "IN6_DELMULTI_SOURCE"
IJ49016 | FCSTAT OUTPUT FOR PEND_CMD'S MAY CONTINUE TO INCREMENT
IJ49035 | INVALID TZ WILL CAUSE DBN ELECTION INSTABILITY
IJ49041 | ALT_DISK_COPY UNFORMATTED ERRORS ON EFS MOUNT POINTS
IJ49046 | MKGROUP OF LOCAL GROUP DOESN'T ALLOW ADDING NIS USERS
IJ49093 | BIND 9.16.26-OCT-CVE-2023-3341
IJ49123 | VIOD_BKPS DIR CONTAINS TOO MANY LOGS
IJ49148 | AIXPERT COMINETDCONF SCRIPT
IJ49185 | MULTICAST BIND FAILS ON VLAN
IJ49205 | VIOS_VFC_HOST WITH ERROR NPIV_ERR_006E RC:EINVAL ALREADY STARTED
IJ49206 | DSI NPIV_REAL_FCSCSI_ASYNC
IJ49247 | MLXENTDD DRIVER IS NOT HAVING ENOUGH DMA_SIZE MEMORY
IJ49323 | COMMAND CHDEV SAVE INCORRECT IPV4 ALIAS NETMASK IN ODM
IJ49324 | BOS.ESAGENT UPGRADE MAY FAIL IF IBM.ESAGENT WAS ACTIVATED
IJ49325 | crashed at lock_free_com+000058
IJ49326 | hmcauth command failing with c_rsh -e
IJ49328 | wcsxfrm_l() returns incorrect value when argument 'n' is 0
IJ49330 | MLXCENTDD CRASHED AT MLXCENT_CMDQ_RESERVE_MBOX_TYPE
IJ49331 | VF INTERFACE ATTACHED TO VNIC NOT REMOVED DURING LPM
IJ49335 | Probable system crash in cvfsc_cmd_recv
IJ49337 | Rolling upgrade may fail and SSP DB may become unavailable.
IJ49338 | htxcmdline -createmdt command got hung
IJ49339 | DBN connection fails after upgrade to VIOS 4.1
IJ49342 | RESTORE MAY DISPLAY WRONG FILE TIMESTAMP
IJ49343 | Exitstatus wasn't found in standard output
IJ49344 | od -t fails with exit code zero
IJ49345 | SWITCH.PRT FAILS IF THERE ARE EXISTING JOBS
IJ49347 | Shell returns non-zero exit status
IJ49353 | Driver unrecoverable if rmdev,nddctl,entstat run in parallel
IJ49428 | Error msg handeling for Phase2/3 foralt_disk_copy.
IJ49431 | SSL handshake error when using proxy server
IJ49432 | Verify failed, no proper error message
IJ49433 | vNIC Adapter reset causes ping to fail
IJ49457 | MEMORY LEAK IN PERFSTAT_PROCESS LIBPERFSTAT API
IJ49492 | nim -o cust -a live_update=yes hangs
IJ49494 | nim_master_recover cmd removed alt_mstr
IJ49499 | SRIOV VF Diagnostic stalled or results in failure
IJ49500 | Validation failure of yc03w0p13 yc03w
IJ49577 | KSH CAN DUMP CORE WHEN DISCONNECTING REMOTE LOGIN CONNECTION
IJ49601 | Microcode installation message
IJ49666 | IO HANG IN CAVIUM FIBRE ADAPTER AFTER LINK DOWN EVENT.
IJ49704 | GDLC IN ETHERCHANNEL ENVIRONMENT, MAY NOT WORK PROPERLY.
IJ49729 | LI #OEP Bar0 changes for honouring odm configuration
IJ49756 | OFED crash in cm_alloc_response_msg() while interface detached
IJ49815 | fwupdate on multipath splitter device ret wrong msg
IJ49853 | nim LU hangs after completion
IJ49870 | NIM MKSYSB RESTORE HANG DUE TO DATADAEMON CORE DUMP
IJ49883 | INCORRECT PRINT QUEUE STATUS SHOW
IJ49892 | SCSI protocol driver name is hardcoded in the LPM log
IJ49893 | Inactive LPM is failing with LU validation syst
IJ49902 | Update kernel copyright notice for 2024
IJ49939 | LPM adapter fails after an adapter reset on VFC client.
IJ49965 | A potential security issue exists
IJ50039 | RXQ KPROCS CAN RUN CONTINUOUSLY DISABLED AT INTPRI3 ON SOME CPUS
IJ50040 | Crash while upgrading firmware on Mellanox network adapter
IJ50045 | BOOT OR DUMP FAILURE ON NVME DISKS DUE TO INCORRECT FW PATH NAME
IJ50097 | NPIV CLIENT REQ IMPLICIT LOGO, QL DRIVER SENT EXPLICIT LOGO
IJ50098 | /proc filesystem missing after OS migration
IJ50101 | NETISR.H INCLUDES FILE SYS/LIBSYSP.H WHICH IS NOT SHIPPED
IJ50228 | Linker asked to preserve internal global warning from libLTO
IJ50229 | PVID load up criteria missing in datadaemon
IJ50236 | Kbd/mouse not detected with KVM switch
IJ50269 | DURING LKU RESTORATION OF THE TCPTR TABLE MAY CAUSE A HANG
IJ50407 | NIMHTTP WITH SECURE NIMSH FAILS
IJ50409 | mksysb to "client" successful parallel migration gen
IJ50410 | nimadm cmd help changes for bootlist and reboot flags
IJ50411 | Wrong FFC Code For an SRN-802
IJ50422 | SAVEBASE CAN FAIL WITH 4K DISKS IN ROOTVG AND LARGE ODM
IJ50438 | errpt while promoting data from a 4K device when it is cached.
IJ50501 | vnic kproc names should not have space characters
IJ50517 | sysdumpdev -e hung looping in aixdiskpcmke:getPathDataSize
IJ50598 | FTPD IS INCONSISTENT WHEN "READWRITE:" STATEMENT IS INVALID
IJ50599 | POSSIBLE CONTINUOUS FCA_ERR6 0X2F AND FCA_ERR2 0X27 ERRPT ERRORS
IJ50602 | A potential security issue exists
IJ50627 | updateios failed for 3.1.4.40 with full FS
IJ50663 | restore memory leaks
IJ50664 | Improper error msg for wrong country code
IJ50665 | Commands such as lsmpio, lspath probably will hang
IJ50703 | NIM hangs even after LKU is completed successfully
IJ50716 | LLDPCTL ADD FAILING ON VIOS 3.1.4.21
IJ50722 | LSPV CORE DUMPS IN ODM_GET_FIRST()
IJ50770 | ike cmd=list db shows incorrect Remote IP Address
IJ50772 | AIX NFSv4 Client to Linux NFS Server- fail to copy
IJ50870 | DSI during varyoffvg when space reclamation is in progress.
IJ50939 | SNAP DO NOT COLLECT POSTGRES SSP DB IN VIOS 3.1.4.31 /4.1.0.X
IJ51182 | ioslevel change for 72Z to 3.1.4.41 and for 73D to 4.1.0.21
IJ51184 | IO fails before recovery_wait time with cablepul
IJ51214 | crashed in ct_hook1
Document Information
Modified date:
03 June 2024
UID
ibm17155271