IBM Support

ML1030_060 / FW1030.20 Release Notes

Fix Readme


Abstract

This document provides information about the installation of Licensed Machine or Licensed Internal Code, which is sometimes referred to generically as microcode or firmware.

This package provides firmware for IBM Power System S1022 (9105-22A), IBM Power System S1024 (9105-42A), IBM Power System S1022S (9105-22B), IBM Power System S1014 (9105-41B), IBM Power System L1022 (9786-22H), and IBM Power System L1024 (9786-42H) servers only.

Release notes for ML1030_060 / FW1030.20.

Read the following important information prior to installing this Service Pack.

Service Pack Summary: Deferred Service Pack.

This service pack addresses a HIPER (High Impact/Pervasive) Data issue. Please see the Description file for details.


When installing the FW1030 firmware on a system that was previously at a FW1020 level, you must complete the upgrade and then perform the upgrade a second time, consecutively. This ensures that both the T (temporary, also known as current) and P (permanent, also known as backup) sides are at the same level. This step was previously performed automatically as the "Accept" process; a future HMC release/PTF will restore the automatic accept. Improvements to the eBMC in FW1030.00 and higher cause any IPL from a backup side at a FW1020 firmware level to fail. An AC power cycle would be required to recover from this condition.

For systems with Power Linux partitions, support was added for a new Linux secure boot key. The new key may cause secure boot for Linux partitions to fail if the SUSE or RHEL distribution does not include a matching secure boot key update. The affected Linux distributions need the fix level that includes "Key for secure boot signing grub2 builds ppc64le":
1) SLES 15 SP4 - the GA level includes the secure boot fix.
2) RHEL 8.5 - this level has no fix; update to RHEL 8.6 or RHEL 9.0.
3) RHEL 8.6
4) RHEL 9.0
Please note that at this firmware level, any Linux OS partition not updated to a secure boot fix level will fail to secure boot.

Content

Minimum HMC Code Level

This section describes the "Minimum HMC Code Level" required by the System Firmware to complete the firmware installation process. When installing the System Firmware, the HMC level must be equal to or higher than the "Minimum HMC Code Level" before starting the system firmware update. If the HMC managing the server targeted for the System Firmware update is running a code level lower than the "Minimum HMC Code Level", the firmware update will not proceed.

The Minimum HMC Code levels for this firmware for HMC x86, ppc64, or ppc64le are listed below.

x86 - This term refers to the legacy HMC that runs on x86/Intel/AMD hardware, or to the virtual HMC (vHMC) that can run on the Intel hypervisors (KVM, Xen, VMware ESXi).

ppc64 or ppc64le - This term describes the Linux code that is compiled to run on Power-based servers or LPARs (logical partitions).

  • The Minimum HMC Code level for this firmware is: HMC V10R2M1030 (PTF MF70433).
  • The Minimum HMC Code level for vHMC is: HMC V10R2M1030. The Power Hardware Management Virtual Appliance (vHMC) install images for x86 hypervisors and PowerVM are available for download at the Entitled Systems Support (ESS) site.

The Minimum HMC level supports the following HMC models:
HMC models: 7063-CR1 and 7063-CR2
x86 - KVM, Xen, VMware ESXi (6.0/6.5)
ppc64le - vHMC on PowerVM (POWER8, POWER9, and POWER10 systems)


For information concerning HMC releases and the latest PTFs, go to Fix Central: https://www.ibm.com/support/fixcentral/

For specific fix level information on key components of IBM Power Systems running the AIX, IBM i and Linux operating systems, we suggest using the Fix Level Recommendation Tool (FLRT).

NOTES:

   - You must be logged in as hscroot in order for the firmware installation to complete correctly.
   - Systems Director Management Console (SDMC) does not support this System Firmware level.

Important Information

Feature exclusion between FW1040.00 and FW1030.20:
The following feature is supported only with firmware level FW1040.00 and HMC 10.2.1040 and is excluded from FW1030.20:
- NED24 NVMe Expansion Drawer (#ESR0)
The following features are supported with firmware level FW1030.20 and HMC 10.2.1040 but are excluded from FW1040.00:
- PCIe3 12 Gb x8 SAS Tape HBA adapter (#EJ2B/#EJ2C)
- PCIe4 32 Gb 4-port optical FC adapter (#EN2L/#EN2M)
- PCIe4 64 Gb 2-port optical FC adapter (#EN2N/#EN2P)
- Mixed DDIMM support for the Power E1050 server (#EMCM)
- 100 V power supplies support for the Power S1022s server (#EB3R)

For systems in a PEP 2.0 pool that are concurrently updated to this level, the Power Hypervisor inaccurately returns a value that the HMC interprets as 'Throttling is active' when queried. For details and resolution, please refer to https://www.ibm.com/support/pages/node/6985221

FW1030 needs to be installed twice when upgrading from FW1020 firmware levels
When installing the FW1030 firmware on a system that was previously at a FW1020 level, you must complete the upgrade and then perform the upgrade a second time, consecutively. This ensures that both the T (temporary, also known as current) and P (permanent, also known as backup) sides are at the same level. This step was previously performed automatically as the "Accept" process; a future HMC release/PTF will restore the automatic accept.
Improvements to the eBMC in FW1030.00 and higher cause any IPL from a backup side at a FW1020 firmware level to fail. An AC power cycle would be required to recover from this condition.

Concurrent Firmware Updates

Concurrent system firmware update is supported on HMC Managed Systems only.

Ensure that there are no RMC connection issues for any system partitions prior to applying the firmware update. If there is an RMC connection failure to a partition during the firmware update, the RMC connection will need to be restored, and additional recovery actions for that partition will be required to complete partition firmware updates.

Memory Considerations for Firmware Upgrades

Firmware Release Level upgrades and Service Pack updates may consume additional system memory.
Server firmware requires memory to support the logical partitions on the server. The amount of memory required by the server firmware varies according to several factors.
Factors influencing server firmware memory requirements include the following:

  •     Number of logical partitions
  •     Partition environments of the logical partitions
  •     Number of physical and virtual I/O devices used by the logical partitions
  •     Maximum memory values given to the logical partitions

Generally, you can estimate the amount of memory required by server firmware to be approximately 8% of the system installed memory. The actual amount required will generally be less than 8%. However, there are some server models that require an absolute minimum amount of memory for server firmware, regardless of the previously mentioned considerations.
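
As an illustration of this rule of thumb only, here is a minimal Python sketch (the function name and the 256 GB sample value are hypothetical; actual reserved memory is generally lower, and model-specific minimums may apply):

    # Rough upper-bound estimate of memory reserved by server firmware,
    # using the ~8% guideline described above.
    def estimate_firmware_memory_gb(installed_memory_gb: float) -> float:
        return installed_memory_gb * 0.08

    print(estimate_firmware_memory_gb(256))  # 20.48 -> roughly 20.5 GB upper bound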

Additional information can be found at:
https://www.ibm.com/docs/en/power10/9105-42A?topic=resources-memory

SBE Updates

Power10 servers contain SBEs (Self-Boot Engines), which are used to boot the system. An SBE is internal to each Power10 chip and is used to "self boot" the chip. The SBE image is persistent and is reloaded only if a system firmware update contains an SBE change. If there is an SBE change and the system firmware update is concurrent, the SBE update is delayed to the next IPL of the CEC, adding 3-5 minutes per processor chip in the system to that IPL. If there is an SBE change and the system firmware update is disruptive, the SBE update adds 3-5 minutes per processor chip in the system to the IPL. During the SBE update process, the HMC or op-panel will display service processor code C1C3C213 for each of the SBEs being updated. This is a normal progress code, and the system boot should not be terminated by the user. The additional time can be between 12 and 20 minutes per drawer, or up to 48 to 80 minutes for a maximum configuration.
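
For planning purposes only, a minimal Python sketch of the extra IPL time implied by the per-chip figures above (the chip count is a hypothetical input, not a value from this document):

    # Added IPL time for a deferred SBE update: 3-5 minutes per processor chip.
    def sbe_update_delay_minutes(processor_chips: int) -> tuple[int, int]:
        return (processor_chips * 3, processor_chips * 5)

    low, high = sbe_update_delay_minutes(4)  # e.g. a hypothetical four-chip drawer
    print(f"Expect roughly {low}-{high} extra minutes on the next IPL")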

The SBE image is updated with this service pack.

Firmware Information

Use the following examples as a reference to determine whether your installation will be concurrent or disruptive.

For systems that are not managed by an HMC, the installation of system firmware is always disruptive.

Note: The concurrent levels of system firmware may, on occasion, contain fixes that are known as Deferred and/or Partition-Deferred. Deferred fixes can be installed concurrently, but will not be activated until the next IPL. Partition-Deferred fixes can be installed concurrently, but will not be activated until a partition reactivate is performed. Deferred and/or Partition-Deferred fixes, if any, will be identified in the "Firmware Update Descriptions" table of this document. For these types of fixes (Deferred and/or Partition-Deferred) within a service pack, only the fixes in the service pack which cannot be concurrently activated are deferred.

Note: The file names and service pack levels used in the following examples are for clarification only, and are not necessarily levels that have been, or will be released.

System firmware file naming convention:

01MLxxxx_yyy_zzz

  • xxxx is the release level
  • yyy is the service pack level
  • zzz is the last disruptive service pack level

NOTE: Values of the service pack level and last disruptive service pack level (yyy and zzz) are only unique within a release level (xxxx). For example, 01ML1010_040_040 and 01ML1020_040_040 are different service packs.

An installation is disruptive if:

  • The release levels (xxxx) are different.

            Example: Currently installed release is 01ML1010_040_040, new release is 01ML1020_050_050.

  • The service pack level (yyy) and the last disruptive service pack level (zzz) are the same.     

            Example: ML1020_040_040 is disruptive, no matter what level of ML1020 is currently installed on the system.

  • The service pack level (yyy) currently installed on the system is lower than the last disruptive service pack level (zzz) of the service pack to be installed.

            Example: Currently installed service pack is ML1020_040_040 and new service pack is ML1020_050_045.

An installation is concurrent if:

The release level (xxxx) is the same, and
the service pack level (yyy) currently installed on the system is the same as or higher than the last disruptive service pack level (zzz) of the service pack to be installed.

Example: Currently installed service pack is ML1010_040_040, new service pack is ML1010_041_040.
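
The rules above can be summarized in a short Python sketch (illustrative only; the parsing and function names are assumptions, not an IBM-provided tool):

    # Decide whether installing 'new' over 'installed' is disruptive,
    # per the rules above. Level names follow 01MLxxxx_yyy_zzz.
    def parse_level(name: str) -> tuple[str, int, int]:
        # e.g. "01ML1030_060_026" -> ("1030", 60, 26)
        release, sp, last_disruptive = (
            name.removeprefix("01").removeprefix("ML").split("_")
        )
        return release, int(sp), int(last_disruptive)

    def is_disruptive(installed: str, new: str) -> bool:
        rel_i, sp_i, _ = parse_level(installed)
        rel_n, sp_n, ld_n = parse_level(new)
        if rel_i != rel_n:   # the release levels differ
            return True
        if sp_n == ld_n:     # new service pack equals its last disruptive level
            return True
        return sp_i < ld_n   # installed level is below the new disruptive level

    print(is_disruptive("01ML1010_040_040", "01ML1010_041_040"))  # False (concurrent)
    print(is_disruptive("01ML1020_040_040", "01ML1020_050_045"))  # True (disruptive)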

Firmware Information and Description

Filename 01ML1030_060_026.img
Size 293668944
Checksum 40242
md5sum 3206c444d2963cf759ada5860e402427
Filename 01ML1030_060_026.tar
Size 138547200
Checksum 06889
md5sum e4e2dd0d2d68f866ddf045c6e59da245
Note: The Checksum can be found by running the AIX sum command against the firmware file (only the first 5 digits are listed).
e.g.: sum 01ML1030_060_026.img
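
To check the published md5sum without an AIX system, a minimal Python sketch (illustrative only; the AIX sum checksum is platform-specific and is not reproduced here):

    # Verify a downloaded image against the md5sum published above.
    import hashlib

    def md5_of(path: str) -> str:
        h = hashlib.md5()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    assert md5_of("01ML1030_060_026.img") == "3206c444d2963cf759ada5860e402427"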
 
ML1030
For Impact, Severity, and other firmware definitions, please refer to the 'Glossary of firmware terms' URL below:
https://www.ibm.com/support/pages/node/6555136
 
ML1030_060_026 / FW1030.20
 
Impact: Data  Severity: HIPER
New features and functions
  • DEFERRED: A change was made to the processor/memory interface settings to improve their long-term resiliency and avoid system maintenance due to degradation of the interface. The settings are applied during IPL of the system. If the firmware is applied concurrently, the settings will take effect during the next system reboot. Aside from improving resiliency, the new settings have no effect on the operation of the system. This change updates the Self-Boot Engine (SBE).
  • DEFERRED: Support added for calculating system power wattage limits based on the power supply CCIN, input voltage, and the number of power supplies.
  • Support for a PCIe4 32Gb 4-port Optical Fibre Channel Adapter with Feature Codes #EN2L/#EN2M and CCIN 2CFC. This adapter supports boot on IBM Power.
  • Support for 2x low-line 100-127V/200-240V 1000-watt AC titanium power supplies with Feature Code #EB3R. The titanium power supplies can be configured in a one-plus-one server configuration to provide redundancy. This feature applies only to the IBM Power System S1022s (9105-22B) model.
  • Support for a PCIe4 64Gb 2-port Optical Fibre Channel Adapter with Feature Codes #EN2N/#EN2P and CCIN 2CFD. This adapter supports boot on IBM Power.
  • Support for a PCIe3 SAS Tape HBA Adapter with Feature codes #EJ2B/#EJ2C and CCIN 57F2.  The adapter supports external SAS tape drives such as the LTO-7, LTO-8, and LTO-9, available in the IBM 7226-1U3 Multimedia drawers or standalone tape units such as the TS2270, TS2280 single External Tape Drive, TS2900, TS3100, TS3200, or TS4300.
  • Support was added for a new Power10 8 BC processor with CCIN 5C8E. If a system utilizing this new 8C configuration were to end up with firmware code prior to FW1030.20 (such as in the case of a BMC FRU replacement), the following would happen:
    • The PowerVM hypervisor IPL could complete, but the system would have 1 proc and minimal memory available
    • An A7004733 SRC would be posted to the panel and a corresponding PEL logged
    • This feature is only applicable to the IBM Power System L1022 (9786-22H).

System firmware changes that affect all systems
  • HIPER/Pervasive: AIX logical partitions that own virtual I/O devices or SR-IOV virtual functions may have data incorrectly written to platform memory or an I/O device, resulting in undetected data loss when Dynamic Platform Optimizer (DPO) runs, predictive memory deconfiguration occurs, or memory mirroring defragmentation is performed.
    •      In addition, for model 9105-42A, 9105-41B, and 9786-42H servers with more than 6 NVMe drives plugged into a single NVMe backplane (feature code EJ1Y) and assigned to a single AIX, Linux, or IBM i partition, these may have data incorrectly written to platform memory or an I/O device, resulting in undetected data loss when Dynamic Platform Optimizer (DPO) runs, predictive memory deconfiguration occurs, or memory mirroring defragmentation is performed. To mitigate the risk of this issue, please install the latest FW1030 service pack (FW1030.20 or later).
  • HIPER/Non-Pervasive: If a partition with dedicated maximum processors set to 1 is shutting down or in a failed state while another partition is activating or DLPAR adding a processor, the system may terminate with SRC B700F103, B700F105, or B111E504 or undetected partition data corruption may occur if triggered by:
      - Partition DLPAR memory add
      - Partition activation
      - Dynamic Platform Optimization (DPO)
      - Memory guard
      - Memory mirroring defragmentation
      - Live Partition Mobility (LPM)
  • HIPER/Pervasive: A security problem was fixed for systems running vTPM 2.0 for vulnerabilities CVE-2023-1017 and CVE-2023-1018.  These vulnerabilities can allow a denial of service attack or arbitrary code execution on the vTPM 2.0 device.
  • Security problems were fixed for the eBMC ASMI GUI for security vulnerabilities CVE-2022-4304 (an attacker who can send a high volume of requests to the eBMC and has large amounts of processing power can retrieve a plaintext password) and CVE-2022-4450 (the administrator can crash the web server when uploading an HTTPS certificate). For CVE-2022-4304, the vulnerability is exposed whenever the eBMC is on the network. For CVE-2022-4450, the vulnerability is exposed if the eBMC administrator uploads a malicious certificate. The Common Vulnerabilities and Exposures issue numbers for these problems are CVE-2022-4304 and CVE-2022-4450.
  • A security problem was fixed for the Virtualization Management Interface (VMI) for vulnerability CVE-2022-4304 that could allow a remote attacker to recover a ciphertext across a network in a Bleichenbacher-style attack.
  • A problem was fixed to prevent Virtualization Management Interface (VMI) platform error logs from being truncated in the User Data section of the log.  The truncation is intermittent and only occurs if the length of the platform error log User Data is not 16-byte aligned.
  • A problem was fixed for the Virtualization Management Interface (VMI) for the HMC being unable to ping VMI and going to the "No Connection" state. This is a rare problem that can occur if the network router between the HMC and VMI reports that it supports an MTU lower than 1500. In this case, the VMI firewall will improperly filter out the ping (ICMP) response due to destination unreachable and fragmentation not allowed. A workaround to this problem is to have the router between the HMC and VMI send packets with an MTU of 1500.
  • A problem was fixed for a possible incomplete state for the HMC-managed system with SRCs B17BE434 and B182953C logged, with the PowerVM hypervisor hung.  This error can occur if a system has a dedicated processor partition configured to not allow processor sharing while active.
  • A problem was fixed to allow core recovery to handle recoverable processor core errors without thresholding in the hypervisor.  The thresholding can cause a system checkstop and an unnecessary guard of a core.  Core recovery was also changed to not threshold a processor core recoverable error with FIR bit (EQ_CORE_FIR[37]) set if LSU_HOLD_OUT_REG7[4:5] has a non-zero value.
  • A problem was fixed for a possible unexpected SRC BD70E510 with a core checkstop for an OCMB/DIMM failure with no DIMM callout.  This is a low-frequency failure that only occurs when memory mirroring is disabled and an OCMB gets a PMIC fail.  IBM support would be needed to determine if an OCMB was at fault for the checkstop.  If an 'EQ_CORE_FIR(8)[14] MCHK received while ME=0 - non-recoverable' checkstop is seen that does not analyze to a root cause, MC_DSTL_FIR bits 0, 1, 4, and 5 could be checked in the log to determine if an OCMB was at fault.
  • A problem was fixed for partitions using SLES 15 SP4 and SP5 not being able to boot if Secure Boot is Enabled and Enforced for the Linux Operating System, with SRC BA540010 reported. If the OS Secure Boot setting is Enabled and Log Only, the partition will boot, but the error log BA540020 will be generated at every boot.  With the fix, a new SLES Secure Boot key certificate has been added to the Partition Firmware code.
  • A problem was fixed for not being able to delete an eBMC ASMI ACF file, leaving the eBMC administrator unable to prevent service login. This can happen if the administrator previously installed an ACF file and now wants to delete it. As a workaround, a Redfish PATCH can be applied to set an empty string in the ACFFile property: PATCH /redfish/v1/AccountService/Accounts/service -- JSON data: { Oem.IBM.ACF.ACFFile: "" } (a sketch of this call appears after this change list).
  • A problem was fixed for a fan rotor fault SRC 110076F0 that can occur intermittently. This rare error message is triggered by a check for fan RPM speed levels whose error thresholds were too restrictive. This pertains only to the IBM Power System S1024 (9105-42A), S1014 (9105-41B), and L1024 (9786-42H) models.
  • A problem was fixed for the eBMC ASMI "Hardware status-> PCIe hardware topology" option to show the second remote port location that is expected.  This error is frequently found on the eBMC PCIe topology page to have a missing remote port for the second cable.
  • A problem was fixed for a missing error log in the case of a VPD mismatch.  This is a rare problem that can occur whenever there is a mismatch in certain keywords whose default value is other than blank.  This mismatch and missing error log could happen after a manual update to the VPD values.
  • A problem was fixed for the eBMC Critical health status to be updated with Critical health for both processors of a DCM when there is a callout for the DCM, instead of showing only one processor with the Critical health. This pertains only to the IBM Power System S1022 (9105-22A) and S1024 (9105-42A) models.
  • A problem was fixed for the eBMC ASMI "Hardware status -> PCIe hardware topology" page not showing the I/O slots for the PCIe3 expansion drawer.  This can occur if a different PCIe3 chassis was connected to the system earlier in the same location.  As a workaround, the HMC can be used to view the correct information in its PCIe topology view.
  • A problem was fixed for the eBMC not notifying the PowerVM hypervisor of LED state changes for the System Attention Indicator (SAI).  This can create an inconsistent SAI state between the eBMC and the hypervisor such that the hypervisor could return an incorrect physical SAI state to an OS in a non-partitioned system environment.  
  • A problem was fixed for the eBMC Redfish interface not throwing an error when given an out-of-range MAC address to assign for the network adapter. The eBMC truncates the bytes of the MAC address and applies it to the network interface.  This happens anytime an out-of-range MAC address is given by Redfish.
  • A problem was fixed for resource assignment for memory not being optimal when less than two processors are available.  As a workaround, the HMC command "optmem" can be run to optimally assign resources.  Although this fix applies concurrently, a re-IPL of the system would need to be done to correct the resource placement, or the HMC command "optmem" can be run.
  • A problem was fixed for unexpected vNIC failovers that can occur if all vNIC backing devices are in LinkDown status. This is a very rare problem that occurs only if both vNIC server backing devices are in LinkDown, causing vNIC failovers that bounce back and forth in a loop until one of the vNIC backing devices returns to Operational status.
  • A problem was fixed for an HMC lpar_netboot error for a partition with a vNIC configuration. The lpar_netboot logs show a timeout due to a missing value. As a workaround, doing the boot manually in SMS works. The lpar_netboot can also work as long as broadcast bootp is not used; instead, use lpar_netboot with a standard set of parameters that includes Client, Server, and Gateway IP addresses.
  • A problem was fixed for an SR-IOV adapter virtual function (VF) not being accessible by the OS after a reboot or immediate restart of the logical partition (LPAR) owning the VF.  This can happen for SR-IOV adapters located in PCIe3 expansion drawers as they are not being fully reset on the shutdown of a partition.  As a workaround, do not do an immediate restart of an LPAR - leave the LPAR shut down for more than a minute so that the VF can quiesce before restarting the LPAR.
  • A problem was fixed for a timeout occurring for an SR-IOV adapter firmware LID load during an IPL, with SRC B400FF04 logged.  This problem can occur if a system has a large number of SR-IOV adapters to initialize.  The system recovers automatically when the boot completes for the SR-IOV adapter. With the fix, the SR-IOV adapter firmware LID load timeout value has been increased from 30 to 120 seconds.
  • A problem was fixed for an SR-IOV virtual function (VF) failing to configure for a Linux partition. This problem can occur if an SR-IOV adapter that had been in use on a prior activation of the partition was removed and then replaced with an SR-IOV adapter VF with a different capacity. As a workaround, the partition with the failure can be rebooted.
  • A problem was fixed for a performance issue after PEP 2.0 throttling or usage of the optmem HMC command. This issue can be triggered by the following scenario for Power Enterprise Pools 2.0 (PEP 2.0), also known as Power Systems Private Cloud with Shared Utility Capacity:
    • Due to a PEP 2.0 budget being reached or an issue with licensing for the pool, the CPU resources may be restricted (throttled)
    • At the start of the next month, after a change in the budget limit or after correction of the licensing issue, the CPU resources will be returned to the server (un-throttled)
    • At this point in time, the performance of the PEP 2.0 pool may not return to the level of performance before throttling.
    • As a workaround, partitions and VIOS can be restarted to restore the performance to the expected levels.  Although this fix applies concurrently, a restart of partitions or VIOS would need to be done to correct the system performance if it has been affected.
  • A problem was fixed for missing countdown expiration messages after a renewal of PEP 2.0.
    • Power Enterprise Pools 2.0 (PEP 2.0), also known as Power Systems Private Cloud with Shared Utility Capacity, normally has automatic renewal, but if this does not occur for some reason, expiration of PEP 2.0 should be warned by countdown messages before expiration and by daily messages after expiration.  As a workaround, the CMC appliance can be examined to see the current status of the PEP 2.0 subscription.
  • A problem was fixed for Power Systems Private Cloud with Shared Utility Capacity (formerly known as Power Enterprise Pools 2.0 (PEP 2.0)) for a "Throttled" indicator that is missing on the HMC. PEP 2.0 throttling occurs if PEP 2.0 expiration has occurred.  This is a rare event as most customers have automatic PEP 2.0 renewal and those that do not are notified prior to expiration that their PEP 2.0 is about to expire.  Also, the throttling causes a performance degradation that should be noticeable.
  • A problem was fixed for an erroneous notification from the HMC that a PEP 2.0 workload is being throttled.
    •     Any system with Power Enterprise Pools 2.0 (PEP 2.0) enabled, also known as Power Systems Private Cloud with Shared Utility Capacity, may get a false throttle notification if the FW1030.10 firmware level had been activated concurrently.  As a workaround, customers can call IBM service to get a renewal key which will clear the throttle indicator.
  • A problem was fixed for a concurrent firmware update failure with the HMC message "HSCF0230E An error occurred applying the new level of firmware" issued.  This is an infrequent error that can occur if the last active partition is powered off during a code update.  As a workaround, avoid powering off partitions during a code update.
  • A problem was fixed for a NovaLink installation failure.  This problem could occur after deleting a partition with a vTPM or deleting a vTPM.  As a workaround, after deleting a partition with a vTPM or deleting a vTPM, re-IPL the system.  This will remove the stale PowerVM hypervisor AMC adapter causing the problem.
  • A problem was fixed for incorrect SRC callouts being logged for link train failures on the Cable Card to Drawer PCIe link. SRC B7006A32 is logged for the link train failure when SRC B7006AA9 should be logged instead, and SRC B7006A32 calls out the cable card/PHB/planar when B7006AA9 should call out the cable card/cables/drawer module. Every link train failure on the Cable Card to Drawer PCIe link can cause this issue.
  • An AP activation code was added as a method to resolve a failed IPL with SRC A7004713 for a mismatched system serial number (SN).  The new AP Activation code can be used to clear the System SN.  This problem should be rare to have a mismatched SN.  A workaround to this problem is to perform a genesis IPL.
  • A problem was fixed for a failed Chassis Management Card (CMC) not reporting an SRC B7006A95 and not powering off the I/O drawer.  This error will happen whenever there is a problem with the CMC card.
  • A problem was fixed for incomplete descriptions for the display of devices attached to the FC adapter in SMS menus.  The FC LUNs are displayed using this path in SMS menus:  "SMS->I/O Device Information -> SAN-> FCP-> <FC adapter>".  This problem occurs if there are LUNs in the SAN that are not OPEN-able, which prevents the detailed descriptions from being shown for that device.
  • A problem was fixed for the HMC Repair and Verify (R&V) procedure failing during concurrent maintenance of the #EMX0 Cable Card. This problem can occur if a partition is IPLed after a hardware failure before attempting the R&V operation.   As a workaround, the R&V can be performed with the affected partition powered off or the system powered off.
  • A problem was fixed for the eBMC ASMI for incorrectly showing the system fan information under I/O Expansion Chassis.  These should only be shown under the System Chassis.
  • A problem was fixed for the eBMC ASMI for showing many blank settings under VET capabilities.  The blank settings have been updated with names where possible.
  • A problem was fixed for a flood of 110015F0 power supply SRCs logged with no evidence of a power issue. These false errors are infrequent and random.
  • A problem was fixed for the eBMC ASMI network page showing duplicate static DNS values if these were set multiple times.  This always occurs if the same DNS server's IPs are set multiple times.
  • A problem was fixed for an errant BC101765 after replacing a primary boot processor with a field spare.  If a faulty primary boot processor is replaced by a field spare having FW1030.00 Self-Boot Engine firmware or later, the host firmware may report a BC101765 SRC during IPL with a hardware callout erroneously implicating the newly replaced processor. Generally, the problem is likely benign if it surfaces on only the first IPL after a primary boot processor replacement.  Additionally, remote attestation can be employed when the system is fully booted to verify the expected TPM measurements.  A boot after observing this failure should work correctly.  
  • A problem was fixed for an internal Redfish error that will occur on the eBMC if an attempt is made to add an existing static IP address. With the fix, Redfish will return successfully if a request is made to add a static IP that already exists.
  • A problem was fixed for an SRC not being logged if the system power supplies are connected incorrectly to two different AC levels. This should be a rare error that only happens when the system is wired incorrectly.
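
As referenced in the ACF file fix above, a minimal Python sketch of the Redfish PATCH workaround, assuming the requests library and a reachable eBMC (the host, credentials, and TLS handling are placeholders, not values from this document):

    # Clear the service account ACF file via the eBMC Redfish interface.
    import requests

    ebmc = "https://<ebmc-host>"            # placeholder eBMC address
    session = requests.Session()
    session.auth = ("admin", "<password>")  # placeholder credentials
    session.verify = False                  # only if the eBMC uses a self-signed certificate

    resp = session.patch(
        ebmc + "/redfish/v1/AccountService/Accounts/service",
        json={"Oem": {"IBM": {"ACF": {"ACFFile": ""}}}},
    )
    resp.raise_for_status()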

How to Determine The Currently Installed Firmware Level

You can view the server's current firmware level on the Advanced System Management Interface (ASMI) Overview page under the System Information section in the Firmware Information panel. Example: (ML1020_079)

Downloading the Firmware Package

Follow the instructions on Fix Central. You must read and agree to the license agreement to obtain the firmware packages.

Note: If your HMC is not internet-connected, you will need to download the new firmware level to a USB flash memory device or an FTP server.

Installing the Firmware

The method used to install new firmware will depend on the release level of firmware which is currently installed on your server. The release level can be determined by the prefix of the new firmware's filename.

Example: MLxxxx_yyy_zzz

Where xxxx = release level

  • If the release level will stay the same (Example: Level ML1020_040_040 is currently installed and you are attempting to install level ML1020_041_040) this is considered an update.
  • If the release level will change (Example: Level ML1020_040_040 is currently installed and you are attempting to install level ML1030_050_050) this is considered an upgrade.
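
Following the parse logic sketched in the Firmware Information section (illustrative only; the function names are hypothetical), the update-versus-upgrade distinction reduces to a release-level comparison:

    # An upgrade changes the release level (xxxx); an update keeps it the same.
    def release_of(name: str) -> str:
        return name.removeprefix("01").removeprefix("ML").split("_")[0]

    def is_upgrade(installed: str, new: str) -> bool:
        return release_of(installed) != release_of(new)

    print(is_upgrade("ML1020_040_040", "ML1020_041_040"))  # False: an update
    print(is_upgrade("ML1020_040_040", "ML1030_050_050"))  # True: an upgrade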

Instructions for installing firmware updates and upgrades can be found at https://www.ibm.com/docs/en/power10/9105-42A?topic=9105-42A/p10eh6/p10eh6_updates_sys.htm

IBM i Systems:

For information concerning IBM i Systems, go to the following URL to access Fix Central: 
https://www.ibm.com/support/fixcentral/

Choose "Select product", under Product Group specify "System i", under Product specify "IBM i", then Continue and specify the desired firmware PTF accordingly.

HMC and NovaLink Co-Managed Systems (Disruptive firmware updates only):
A co-managed system is managed by the HMC and NovaLink, with one of the interfaces in the co-management master mode.
Instructions for installing firmware updates and upgrades on systems co-managed by an HMC and NovaLink are the same as above for HMC-managed systems, since the firmware update must be done by the HMC in the co-management master mode. Before the firmware update is attempted, make sure that the HMC is set in the master mode using the steps at the following IBM documentation link for NovaLink co-managed systems:

https://www.ibm.com/docs/en/power10/9105-42A?topic=environment-powervm-novalink


Then the firmware updates can proceed with the same steps as for HMC-managed systems, except that the system must be powered off because only a disruptive update is allowed. If a concurrent update is attempted, the following error will occur: "HSCF0180E Operation failed for <system name> (<system mtms>). The operation failed. E302F861 is the error code:"
https://www.ibm.com/docs/en/power10/9105-42A?topic=9105-42A/p10eh6/p10eh6_updates_sys.htm

Firmware History

The complete Firmware Fix History (including HIPER descriptions) for this Release level can be reviewed at the following URL:
https://www.ibm.com/support/pages/node/6910163

[{"Type":"MASTER","Line of Business":{"code":"LOB57","label":"Power"},"Business Unit":{"code":"BU058","label":"IBM Infrastructure w\/TPS"},"Product":{"code":"SSZ0S2","label":"IBM Power S1014 (9105-41B)"},"ARM Category":[{"code":"a8m0z000000bpKLAAY","label":"Firmware"}],"Platform":[{"code":"PF025","label":"Platform Independent"}],"Version":"All Versions"},{"Type":"MASTER","Line of Business":{"code":"LOB57","label":"Power"},"Business Unit":{"code":"BU058","label":"IBM Infrastructure w\/TPS"},"Product":{"code":"SSE1FSG","label":"IBM Power S1022 (9105-22A)"},"ARM Category":[{"code":"a8m0z000000bpKLAAY","label":"Firmware"}],"Platform":[{"code":"PF025","label":"Platform Independent"}],"Version":"All Versions"},{"Type":"MASTER","Line of Business":{"code":"LOB57","label":"Power"},"Business Unit":{"code":"BU058","label":"IBM Infrastructure w\/TPS"},"Product":{"code":"SST50ER","label":"IBM Power S1022s (9105-22B)"},"ARM Category":[{"code":"a8m0z000000bpKLAAY","label":"Firmware"}],"Platform":[{"code":"PF025","label":"Platform Independent"}],"Version":"All Versions"},{"Type":"MASTER","Line of Business":{"code":"LOB57","label":"Power"},"Business Unit":{"code":"BU058","label":"IBM Infrastructure w\/TPS"},"Product":{"code":"SSBPSUB","label":"IBM Power S1024 (9105-42A)"},"ARM Category":[{"code":"a8m0z000000bpKLAAY","label":"Firmware"}],"Platform":[{"code":"PF025","label":"Platform Independent"}],"Version":"All Versions"},{"Type":"MASTER","Line of Business":{"code":"LOB57","label":"Power"},"Business Unit":{"code":"BU058","label":"IBM Infrastructure w\/TPS"},"Product":{"code":"SSM8OVD","label":"IBM Power L1022 (9786-22H)"},"ARM Category":[{"code":"a8m0z000000bpKLAAY","label":"Firmware"}],"Platform":[{"code":"PF025","label":"Platform Independent"}],"Version":"All Versions"},{"Type":"MASTER","Line of Business":{"code":"LOB57","label":"Power"},"Business Unit":{"code":"BU058","label":"IBM Infrastructure w\/TPS"},"Product":{"code":"SSZY7N","label":"IBM Power L1024 (9786-42H)"},"ARM Category":[{"code":"a8m0z000000bpKLAAY","label":"Firmware"}],"Platform":[{"code":"PF025","label":"Platform Independent"}],"Version":"All Versions"}]

Document Information

Modified date:
25 May 2023

UID

ibm16992631