IBM Support

ML1020_118 / FW1020.60 Release Notes

Fix Readme


Abstract

This document provides information about the installation of Licensed Machine or Licensed Internal Code, which is sometimes referred to generically as microcode or firmware.

This package provides firmware for IBM Power System S1022 (9105-22A), IBM Power System S1024 (9105-42A), IBM Power System S1022S (9105-22B), IBM Power System S1014 (9105-41B), IBM Power System L1022 (9786-22H), and IBM Power System L1024 (9786-42H) servers only.

Release notes for ML1020_118 / FW1020.60.

Read the following important information prior to installing this Service Pack.

Service Pack Summary: Concurrent Service Pack.

This service pack addresses a HIPER (High Impact/non-Pervasive) Data issue. Please see the Firmware Information section for details.

For systems with Power Linux partitions, support was added for a new Linux secure boot key. The new key may cause secure boot of a Linux partition to fail if the SUSE or RHEL distribution does not include a corresponding secure boot key update. The affected Linux distributions need a fix level that includes "Key for secure boot signing grub2 builds ppc64le":
1) SLES 15 SP4 - The GA for this Linux level includes the secure boot fix.
2) RHEL 8.5 - This Linux level has no fix. The user must update to RHEL 8.6 or RHEL 9.0.
3) RHEL 8.6
4) RHEL 9.0
Please note that at this firmware level, any Linux partition not updated to a secure boot fix level will fail to secure boot.

The HMC must be at a prerequisite level of HMC 1020.02 (September Monthly PTF) or 1021 (HMC 1020 SP1) before installing FW1020.10 or later service packs. This level will fix the HMC so that it will show any deferred defects in the service pack being installed.

Content

Minimum HMC Code Level

This section describes the "Minimum HMC Code Level" required by the System Firmware to complete the firmware installation process. When installing the System Firmware, the HMC level must be equal to or higher than the "Minimum HMC Code Level" before starting the system firmware update. If the HMC managing the server targeted for the System Firmware update is running a code level lower than the "Minimum HMC Code Level", the firmware update will not proceed.

The Minimum HMC Code levels for this firmware for HMC x86, ppc64, or ppc64le are listed below.

NOTE: The HMC must be at a prerequisite level of HMC 1020.02 (September Monthly PTF) or 1021 (HMC 1020 SP1) before installing FW1020.10 or later service packs.  This level will fix the HMC so that it will show any deferred defects in the service pack being installed.

x86 - This term refers to the legacy HMC that runs on x86/Intel/AMD hardware, or the virtual HMC that can run on Intel hypervisors (KVM, XEN, VMware ESXi).

  • The Minimum HMC Code level for this firmware is: HMC V10R1M1020.2 (PTF MF70256)

ppc64 or ppc64le - This term describes the Linux code that is compiled to run on Power-based servers or LPARs (Logical Partitions).

  • The Minimum HMC Code level for this firmware is: HMC V10R1M1020.2 (PTF MF70257)

The Minimum HMC level supports the following HMC models:
HMC models: 7063-CR1 and 7063-CR2
x86 - KVM, XEN, VMware ESXi (6.0/6.5)
ppc64le - vHMC on PowerVM (POWER8,POWER9, and POWER10 systems)


For information concerning HMC releases and the latest PTFs,  go to the following URL to access Fix Central.

For specific fix level information on key components of IBM Power Systems running the AIX, IBM i and Linux operating systems, we suggest using the Fix Level Recommendation Tool (FLRT).

NOTES:

   - You must be logged in as hscroot in order for the firmware installation to complete correctly.
   - Systems Director Management Console (SDMC) does not support this System Firmware level.

Important Information

Concurrent firmware update of certain SR-IOV adapters needs AIX/VIOS fix
If the adapter firmware level in this service pack is concurrently applied, AIX and VIOS VFs (virtual functions) may fail. Certain levels of AIX and VIOS do not properly handle concurrent SR-IOV updates and can leave the virtual resources in a DEAD state. Please review the following document for further details: https://www.ibm.com/support/pages/node/6997885.

For systems in a PEP 2.0 pool that are concurrently updated to this level, the Power Hypervisor inaccurately returns a value that the HMC interprets as 'Throttling is active' when queried. For details and resolution, please refer to https://www.ibm.com/support/pages/node/6985221

NovaLink levels earlier than the "NovaLink 1.0.0.16 Feb 2020 release" with partitions running certain SR-IOV capable adapters are NOT supported at this firmware release
NovaLink levels earlier than "NovaLink 1.0.0.16 Feb 2020 release" do not support IO adapter FCs EC2R/EC2S, EC2T/EC2U, EC66/EC67 with FW1010 and later.

Concurrent Firmware Updates

Concurrent system firmware update is supported on HMC Managed Systems only.

Ensure that there are no RMC connection issues for any system partitions prior to applying the firmware update. If there is an RMC connection failure to a partition during the firmware update, the RMC connection will need to be restored and additional recovery actions for that partition will be required to complete the partition firmware updates.

Memory Considerations for Firmware Upgrades

Firmware Release Level upgrades and Service Pack updates may consume additional system memory.
Server firmware requires memory to support the logical partitions on the server. The amount of memory required by the server firmware varies according to several factors.
Factors influencing server firmware memory requirements include the following:

  •     Number of logical partitions
  •     Partition environments of the logical partitions
  •     Number of physical and virtual I/O devices used by the logical partitions
  •     Maximum memory values given to the logical partitions

Generally, you can estimate the amount of memory required by server firmware to be approximately 8% of the system installed memory. The actual amount required will generally be less than 8%. However, there are some server models that require an absolute minimum amount of memory for server firmware, regardless of the previously mentioned considerations.

Additional information can be found at:
https://www.ibm.com/docs/en/power10/9105-42A?topic=resources-memory
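As a rough illustration only, the 8% rule of thumb above can be sketched as a quick estimate (the function name and the flat percentage are illustrative assumptions; model-specific absolute minimums are not modeled):

```python
# Hypothetical sizing sketch based on the ~8% rule of thumb above.
# Actual firmware memory use is generally lower, and some server models
# enforce an absolute minimum regardless of installed memory.

def firmware_memory_estimate_gb(installed_memory_gb: float,
                                fraction: float = 0.08) -> float:
    """Estimate the memory (GB) the server firmware may consume."""
    return installed_memory_gb * fraction

# Example: a server with 1 TB (1024 GB) of installed memory.
print(firmware_memory_estimate_gb(1024))  # -> 81.92
```

The real requirement also depends on partition count, partition environments, I/O device counts, and maximum memory values, so treat this only as an upper-bound planning figure.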

SBE Updates

Power10 servers contain SBEs (Self-Boot Engines), which are used to boot the system. The SBE is internal to each Power10 chip and is used to "self boot" the chip. The SBE image is persistent and is only reloaded if a system firmware update contains an SBE change. If there is an SBE change and the system firmware update is concurrent, the SBE update is delayed to the next IPL of the CEC, adding an additional 3-5 minutes per processor chip to that IPL. If there is an SBE change and the system firmware update is disruptive, the SBE update will add an additional 3-5 minutes per processor chip to the IPL. During the SBE update process, the HMC or op-panel will display service processor code C1C3C213 for each of the SBEs being updated. This is a normal progress code, and the system boot should not be terminated by the user. The additional time can be between 12-20 minutes per drawer, or up to 48-80 minutes for a maximum configuration.

The SBE image is not updated with this service pack. The prior SBE update was in FW1020.50.

Firmware Information

Use the following examples as a reference to determine whether your installation will be concurrent or disruptive.

For systems that are not managed by an HMC, the installation of system firmware is always disruptive.

Note: The concurrent levels of system firmware may, on occasion, contain fixes that are known as Deferred and/or Partition-Deferred. Deferred fixes can be installed concurrently, but will not be activated until the next IPL. Partition-Deferred fixes can be installed concurrently, but will not be activated until a partition reactivate is performed. Deferred and/or Partition-Deferred fixes, if any, will be identified in the "Firmware Update Descriptions" table of this document. For these types of fixes (Deferred and/or Partition-Deferred) within a service pack, only the fixes in the service pack which cannot be concurrently activated are deferred.

Note: The file names and service pack levels used in the following examples are for clarification only, and are not necessarily levels that have been, or will be released.

System firmware file naming convention:

01MLxxx_yyy_zzz

  • xxx is the release level
  • yyy is the service pack level
  • zzz is the last disruptive service pack level

NOTE: Values of service pack and last disruptive service pack level (yyy and zzz) are only unique within a release level (xxx). For example, 01ML1010_040_040 and 01ML1020_040_040 are different service packs.

An installation is disruptive if:

  • The release levels (xxx) are different.     

            Example: Currently installed release is 01ML1010_040_040, new release is 01ML1020_050_050.

  • The service pack level (yyy) and the last disruptive service pack level (zzz) are the same.     

            Example: ML1020_040_040 is disruptive, no matter what level of ML1020 is currently installed on the system.

  • The service pack level (yyy) currently installed on the system is lower than the last disruptive service pack level (zzz) of the service pack to be installed.

            Example: Currently installed service pack is ML1010_040_040 and new service pack is ML1020_050_045.

An installation is concurrent if:

The release level (xxx) is the same, and
The service pack level (yyy) currently installed on the system is the same or higher than the last disruptive service pack level (zzz) of the service pack to be installed.

Example: Currently installed service pack is ML1010_040_040, new service pack is ML1010_041_040.
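The rules above can be sketched as a small decision function (a sketch only; the helper names are illustrative, and level strings are assumed to be of the form MLxxx_yyy_zzz):

```python
# Sketch of the concurrent-vs-disruptive rules described above.
# Helper names are illustrative, not part of any IBM tool.

def parse_level(name: str):
    """Split a level such as 'ML1020_118_079' into its three fields."""
    release, service_pack, last_disruptive = name.split("_")
    return release, int(service_pack), int(last_disruptive)

def is_disruptive(installed: str, new: str) -> bool:
    """Return True if installing 'new' over 'installed' is disruptive."""
    rel_i, yyy_i, _ = parse_level(installed)
    rel_n, yyy_n, zzz_n = parse_level(new)
    if rel_i != rel_n:
        return True   # release levels differ
    if yyy_n == zzz_n:
        return True   # new pack is its own last disruptive level
    if yyy_i < zzz_n:
        return True   # installed pack predates the last disruptive level
    return False

# Examples from the text:
print(is_disruptive("ML1010_040_040", "ML1020_050_050"))  # -> True (upgrade)
print(is_disruptive("ML1010_040_040", "ML1010_041_040"))  # -> False (concurrent)
```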

Firmware Information and Description

Filename 01ML1020_118_079.img
    Size 278557856
    Checksum 51166
    md5sum dc5052091e37651e1c26dd899bd48e9e
Filename 01ML1020_118_079.tar
    Size 131768320
    Checksum 55118
    md5sum 21d67f72d7c2a89f3e42350d517a2ce6
Note: The Checksum can be found by running the AIX sum command against the firmware file (only the first 5 digits are listed).
e.g.: sum 01ML1020_118_079.img
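Downloads can also be verified against the md5sum values listed above; a minimal sketch (the helper name is illustrative):

```python
# Sketch: verify a downloaded firmware image against its published md5sum.
import hashlib

def md5_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Return the hex md5 digest of a file, read in chunks."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

expected = "dc5052091e37651e1c26dd899bd48e9e"  # value for the .img file above
# print(md5_of("01ML1020_118_079.img") == expected)  # True for a good download
```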
 
ML1020
For Impact, Severity, and other firmware definitions, please refer to the 'Glossary of firmware terms' at the following URL:
https://www.ibm.com/support/pages/node/6555136
 
ML1020_118_079
/ FW1020.60
Impact: Data  Severity: HIPER
System Firmware changes that affect all systems
  • HIPER/Non-Pervasive: For all Power10 Firmware levels:
    Power10 servers with an I/O adapter in SRIOV shared mode, and an SRIOV virtual function assigned to an active Linux partition assigned 8GB or less of platform memory, may have undetected data loss or data corruption when Dynamic Platform Optimizer (DPO), memory guard recovery or memory mirroring defragmentation is performed.
    Power10 servers installed with platform firmware level FW1040:
    Power10 servers with an I/O adapter in SRIOV shared mode, and an SRIOV virtual function assigned to an active Linux partition either installed with a version of Linux prior to SLES 15 SP4 or RHEL 9.2 and assigned 8GB or less of platform memory, or installed with SLES 15 SP4 or RHEL 9.2 and assigned 256GB or less of platform memory, may have undetected data loss or data corruption when Dynamic Platform Optimizer (DPO), memory guard recovery or memory mirroring defragmentation is performed.
  • A problem was fixed for transitioning an IO adapter from dedicated to SR-IOV shared mode. When this failure occurs, an SRC B4000202 is logged. This problem may occur if an IO adapter is transitioned between dedicated and SR-IOV shared mode multiple times on a single platform IPL.
  • A problem was fixed where SRC B7006A74 and SRC B7006A75 events for EMX0, NED24, and ENZ0 I/O expansion drawers are incorrectly called out as serviceable events. This fix logs SRC B7006A74 and SRC B7006A75 events as informational.
  • A change was made to ensure all SRC B7006A32 errors are reported as serviceable events. Any system with a drawer could be impacted.  These errors occur when the PCIe link from the expansion drawer to the cable adapter in the system unit is degraded to a lower speed. After applying this fix, the next system IPL may generate serviceable events for these degraded links which were previously not reported as serviceable events.
  • A firmware problem was fixed for Electronic Service Agent (ESA) reporting a system as HMC-managed when the system is not HMC-managed. This may impact ESA functionality for systems which are not HMC-managed.
  • A problem was fixed that would cause an LPM to fail due to an insufficient memory for firmware error while deleting a partition on the source system.
  • A problem was fixed for a scenario in which not all of system memory will be assigned to logical partitions following the IPL (Initial Program Load) of the system. The problem can occur following a system IPL when all system memory had previously been assigned to logical partitions. As a workaround, any available memory can be assigned to the logical partitions through DLPAR (Dynamic Logical Partitioning) or by activating partitions with profiles with the desired memory configuration.
  • A problem was fixed for a rare problem creating and offloading platform system dumps. An SRC B7000602 will be created at the time of the failure. The fix allows for platform system dumps to be created and offloaded normally.
  • A problem was fixed where virtual serial numbers may not all be populated on a system properly when an activation code to generate them is applied. This results in some virtual serial numbers being incorrect or missing.
  • A problem was fixed for an intermittent issue preventing all Power Enterprise Pool mobile resources from being restored after a server power on when both processor and memory mobile resources are in use. Additionally, a problem was fixed where Power Enterprise Pools mobile resources were being reclaimed and restored automatically during server power on such that resource assignments were impacted. The problem only impacts systems utilizing Power Enterprise Pools 1.0 resources.
  • A problem was fixed that prevents dumps (primarily SYSDUMP files) greater than or equal to 4GB (4294967296 bytes) in size from being offloaded successfully to AIX or Linux operating systems. This problem primarily affects larger dump files such as SYSDUMP files but could affect any dump that reaches or exceeds 4GB (RSCDUMP, BMCDUMP, etc.). The problem only occurs for systems which are not HMC-managed where dumps are offloaded directly to the OS. A side effect of an attempt to offload such a dump is the continuous writing of the dump file to the OS until the configured OS dump space is exhausted which will potentially affect the ability to offload any subsequent dumps. The resulting dump file will not be valid and can be deleted to free dump space.
  • A change was made to remove boot-time support for graphics adapters with feature code EC42 and EC51. If the graphics adapter is installed in the system, it will no longer be available for LPAR boot time support. No access to the SMS menu or Restricted OF Prompt (ROFP) will be possible. As a workaround, the SMS menu and ROFP can be accessed by connecting to a partition console via HMC or ASMI.
  • A problem was fixed where the target system would terminate with a B700F103 during LPM (Logical Partition Migration). The problem only occurs if there are low amounts of free space on the target system.
  • A problem was fixed for Logical Partition Migration (LPM) to better handle errors reading/writing data to the VIOS which can lead to a VIOS and/or Hypervisor hang. The error could be encountered if the VIOS crashes during LPM.
  • A problem was fixed for partitions configured to use shared processor mode and set to capped potentially not being able to fully utilize their assigned processing units. To mitigate the issue if it is encountered, the partition processor configuration can be changed to uncapped.
  • A problem was fixed for possible intermittent shared processor LPAR dispatching delays. The problem only occurs for capped shared processor LPARs or uncapped shared processor LPARS running within their allocated processing units. The problem is more likely to occur when there is a single shared processor in the system. An SRC B700F142 informational log may also be produced.
  • A problem was fixed for a possible system hang during a Dynamic Platform Optimization (DPO), memory guard recovery or memory mirroring defragmentation operation. The problem only occurs if the operation is performed while an LPAR is running in POWER9 processor compatibility mode.
  • A problem was fixed where the unsupported Portuguese language option was displayed on the BMC ASMI. If selected, it displayed the Brazilian Portuguese translations, which are supported. The fix removes the Portuguese language option from the BMC ASMI; customers using that language should select the Brazilian Portuguese option instead when logging into the GUI.
  • A problem was fixed where a power supply fault LED was not activated when a faulty or a missing power supply is detected on the system. An SRC 10015FF will be logged.
  • A problem was fixed with the type of dump generated when control transitions to the host and the host fails to load in the initial stages of the IPL. The fix adds functionality to precisely determine which booting subsystem failed and capture the correct dump.
  • A problem was fixed where the enclosure fault LED was not activated if a faulty or missing power supply is detected on the system. An SRC 110015FF/110015F6 will be logged.
  • A problem was fixed in an internal error handling path that resulted in an SRC of BD802002. This SRC means an invalid error log is logged / sent by host to BMC.
  • [PSIRT PUBLIC] A problem was fixed where the BMC's HTTPS server offers the deprecated MAC CBC algorithms. The fix removes the CBC MAC algorithms from those offered by the BMC's HTTPS server.
  • A problem was fixed on the hardware deconfiguration page of the BMC ASM GUI where the "Pel ID" column was renamed to "Event ID", since that column displays the event ID, not the PEL ID.
  • A problem was fixed where PCIe Topology table displayed via BMC ASMI was missing an entry for one of the devices.
  • A problem was fixed where, when the BMC was running slow or busy, performing disruptive code update operations in a continuous loop from the HMC caused the VMI certificate exchange request to time out and the HMC status for the system to change to the No connection state. The fix increases the operation timeout to 30 seconds in the BMC web server code to avoid VMI certificate operation failures from the HMC, and hence avoid the No connection state on the HMC.
  • A problem was fixed where, during a firmware update from the FW1030 release to the FW1050 release, the eth1 IPv6 link-local and SLAAC addresses become disabled (IPv6 is not supported on the FW1030 firmware release) and remain disabled after the code update to FW1050. As a workaround, enable the IPv6 SLAAC configuration on eth1 manually using the BMC GUI or HMC, or perform a factory reset of the BMC to restore the default IPv6 SLAAC setting of enabled.
  • A problem was fixed where, during a BMC reset/reload, the power supply fault LED deactivates for a faulty power supply.
  • A problem was fixed where the customer was able to perform search and filter operations when there were no entries.  The problem only occurs when there are no entries in the PCIe topology page.
  • A problem was fixed where the horizontal scroll bar was missing on the Notices page.
  • A problem was fixed where, when changing the hostname, the user is logged out of the BMC even if the hostname fails to update. The problem occurs when making changes to the hostname from the Network page.
  • A problem was fixed where a user is unable to generate a CSR when filling in the optional Challenge password field on the BMC GUI page (Login -> Security and Access -> Certificates -> Generate CSR).
  • A problem was fixed where the total DIMM capacity calculation is incorrect, hence it will be displayed as 0 GB on BMC ASM GUI (Inventory and LED menu -> System Component-> Total System memory). Once the fix is applied concurrently, the system must be powered off. Once at powered off state, use ASMI -> Operations -> Reboot BMC. After the BMC is rebooted, the display will be corrected.
  • A problem was fixed where the enclosure and FRU fault LEDs turned on due to error and did not turn off even after the fault has been fixed.
  • A problem was fixed where the customer is not getting a proper error message when resource dump is triggered at system power off state. The problem only occurs when the system is not at least in PHYP standby mode.
  • A problem was fixed where, in the case of an AC cycle, the power LED does not blink when the BMC reaches the standby state for the first time.
  • A problem was fixed for the PowerRestorePolicy of “AlwaysOff” to make it effective such that when the system loses power, it does not automatically power on when power is restored. This problem of automatic power on occurs every time the system loses power with “AlwaysOff” set as the power restore policy in the eBMC ASMI.
  • A problem was fixed where the user will see an error when trying to upload an ACF certificate.
  • A problem was fixed where HMC status goes to No-connection state when the number of connections between an HMC and BMC exceeded the maximum number of connections allowed between the HMC and BMC.
  • A problem was fixed where an unauthorized LDAP user will not get an error message while logging in.
  • A problem was fixed where admin user is not navigated to ASMI overview page when the user tries to login and the service login certificate has expired.
  • A problem was fixed where during a checkstop, an extra hardware or hostboot dump is created as the watchdog timer is triggered during dump collection. This is fixed by disabling the watchdog during checkstop dump collection.
  • A problem was fixed where BMC ASM GUI didn't display an error message when a user entered the frequency cap value beyond the allowed range.
  • A problem was fixed where the user login with service/admin was unable to replace Service login certificate. This issue occurs whenever the user tries to replace the certificate.
  • A problem was fixed with the feature to schedule host power on/off operations inband through the OS. If a time was scheduled in the future to power on the host and the BMC happened to be rebooted during that scheduled time, the power on would occur fine, but a BD554001 may be incorrectly logged.
  • A problem was fixed where the physical LED was not lit during the HMC guided FRU repair operation.
  • A problem was fixed where BMC-generated CSRs do not display the correct CSR version.
  • A problem was fixed where a read-only user will not always be shown an "unauthorized" message when performing restricted actions. An example is when a read-only user tries to trigger a restricted action from the GUI.
  • A problem was fixed in which an informational message was added for the user each time they performed a power operation.
  • A problem was fixed where BMC network connection stops working.  This fix detects and corrects BMC NCSI timeout conditions. If the condition is detected, the BMC ethernet link is reset and the network connection is restored.
  • A change adds support for additional key algorithms (ssh-ed25519 and ecdsa-sha2-nistp384).
  • A security problem was fixed for CVE-2023-45857. This problem can occur when the web browser has an active BMC session and the browser visits a malicious website. To avoid the problem, do one or both of these: log out of BMC sessions when access is not needed, and do not use the same browser to access both the BMC and other web sites.
  • A problem was fixed where BMC will not go to quiesce/error state during the reloading of network configurations.
  • A problem was fixed where some DMA data transfers between the host processor and the BMC do not complete successfully. This issue can be identified with a Platform Event Log having reference code BC8A1E07.
  • A problem was fixed where after changing mode between Manual and NTP using ASMI, the customer receives a success message but it continues to use the previous mode until ASMI GUI page is refreshed.
  • A problem was fixed where a code update can fail if some files do not get transferred correctly between hypervisor and BMC.
  • A problem was fixed where ASMI was displaying "Reload the browser page to get the updated content" message even when no power operation was confirmed by the user.
  • A problem was fixed where replacing the processor chip likely will not resolve the issue reported by logs with SRC B111E504 and Hex Word 8 in the range of 04D9002B to 04D90032. Instead the recommended service action is to contact next level support.
  • A problem was fixed where a bad core is not guarded and repeatedly causes system to crash. The SRC requiring service has the format BxxxE540. The problem can be avoided by replacing or manually guarding the bad hardware.
  • An enhancement was made to vNIC failover performance. The benefit is gained when a vNIC client unicast MAC address is unchanged during the failover, and it is minor compared to overall vNIC failover time.
  • A change was made for certain SR-IOV adapters to move up to the latest level of adapter firmware. No specific adapter problems were addressed at this new level. This change updates the adapter firmware to 16.35.2000 for Feature Codes EC66/EC67 and CCIN 2CF3.
    If these adapter firmware levels are concurrently applied, AIX and VIOS VFs may fail. Certain levels of AIX and VIOS do not properly handle concurrent SR-IOV updates and can leave the virtual resources in a DEAD state. Please review the following document for further details: SR-IOV backing device goes into dead state after SR-IOV adapter firmware update. A re-IPL of the system instead of concurrently updating the SR-IOV adapter firmware would also prevent a VF failure.
    Update instructions: Updating the SR-IOV adapter firmware.
  • A problem was fixed where service for a processor FRU was requested when no service is actually required. The SRC requiring service has the format BxxxE504 with a PRD Signature description matching (OCC_FIR[45]) PPC405 cache CE. The problem can be ignored unless the issue is persistently reported on subsequent IPLs. Then, hardware replacement may be required.
  • A problem was fixed where the publication description for some of the System Reference Codes (SRCs) starting with BC8A05xx (Ex: BC8A0513) contains incorrect description text.

How to Determine The Currently Installed Firmware Level

You can view the server's current firmware level on the Advanced System Management Interface (ASMI) Overview page under the System Information section in the Firmware Information panel. Example: (ML1020_079)

Downloading the Firmware Package

Follow the instructions on Fix Central. You must read and agree to the license agreement to obtain the firmware packages.

Note: If your HMC is not internet-connected, you will need to download the new firmware level to a USB flash memory device or an FTP server.

Installing the Firmware

The method used to install new firmware will depend on the release level of firmware which is currently installed on your server. The release level can be determined by the prefix of the new firmware's filename.

Example: MLxxx_yyy_zzz

Where xxx = release level

  • If the release level will stay the same (Example: Level ML1020_040_040 is currently installed and you are attempting to install level ML1020_041_040) this is considered an update.
  • If the release level will change (Example: Level ML1020_040_040 is currently installed and you are attempting to install level ML1030_050_050) this is considered an upgrade.

Instructions for installing firmware updates and upgrades can be found at https://www.ibm.com/docs/en/power10/9105-42A?topic=9105-42A/p10eh6/p10eh6_updates_sys.htm

IBM i Systems:

For information concerning IBM i Systems, go to the following URL to access Fix Central: 
https://www.ibm.com/support/fixcentral/

Choose "Select product", under Product Group specify "System i", under Product specify "IBM i", then Continue and specify the desired firmware PTF accordingly.

HMC and NovaLink Co-Managed Systems (Disruptive firmware updates only):
A co-managed system is managed by HMC and NovaLink, with one of the interfaces in the co-management master mode.
Instructions for installing firmware updates and upgrades on systems co-managed by an HMC and NovaLink are the same as above for HMC-managed systems, since the firmware update must be done by the HMC in the co-management master mode. Before the firmware update is attempted, ensure that the HMC is set to master mode using the steps at the following IBM Knowledge Center link for NovaLink co-managed systems:

https://www.ibm.com/docs/en/power10/9105-42A?topic=environment-powervm-novalink


Then the firmware updates can proceed with the same steps as for HMC-managed systems, except that the system must be powered off, because only a disruptive update is allowed. If a concurrent update is attempted, the following error will occur: "HSCF0180E Operation failed for <system name> (<system mtms>). The operation failed. E302F861 is the error code."
https://www.ibm.com/docs/en/power10/9105-42A?topic=9105-42A/p10eh6/p10eh6_updates_sys.htm

Firmware History

The complete Firmware Fix History (including HIPER descriptions)  for this Release level can be reviewed at the following url:
https://www.ibm.com/support/pages/node/6910163


Document Information

Modified date:
23 May 2024

UID

ibm17151130