IBM Support

ML1040_027 / FW1040.10 Release Notes

Fix Readme


Abstract

This document provides information about the installation of Licensed Machine or Licensed Internal Code, which is sometimes referred to generically as microcode or firmware.

This package provides firmware for IBM Power System S1022 (9105-22A), IBM Power System S1024 (9105-42A), IBM Power System S1022s (9105-22B), IBM Power System S1014 (9105-41B), IBM Power System L1022 (9786-22H), and IBM Power System L1024 (9786-42H) servers only.

Release notes for ML1040_027 / FW1040.10.

At the time of general availability, FW1040 is required to support the NED24 NVMe Expansion Drawer. FW1040 is a limited, interim release, and is different from IBM's normal releases in the following manner:
- Normal releases have service pack support for 2 years after the release. However, FW1040 is only supported from GA until the next major FW version becomes available.
- Only a single service pack for FW1040 is planned near the release of the next major FW version.
- Clients who install FW1040 must perform a disruptive upgrade once the next major FW version becomes available.
- IBM manufacturing installs FW1040 for customer orders with NED24 NVMe Expansion Drawers included. However, FW1030 is installed for customer orders without NED24 NVMe Expansion Drawers.
- IBM is not planning to release IBM i PTFs for FW1040. Support will continue for the FW1030 stream ONLY until the next firmware release is published.

Read the following important information prior to installing this Release level.

Service Pack Summary: Concurrent Service Pack




For systems with Power Linux partitions, support was added for a new Linux secure boot key. The new secure boot key may cause secure boot for Linux partitions to fail if the SUSE or RHEL distribution does not include the corresponding secure boot key update. The affected Linux distributions need the fix level that includes "Key for secure boot signing grub2 builds ppc64le":
1) SLES 15 SP4 - the GA level of this distribution includes the secure boot fix.
2) RHEL 8.5 - no fix is available at this level; update to RHEL 8.6 or RHEL 9.0.
3) RHEL 8.6
4) RHEL 9.0
Please note that at this firmware level, any Linux OS partition not updated to a secure boot fix level will fail to secure boot.

Content

Minimum HMC Code Level

This section describes the "Minimum HMC Code Level" required by the System Firmware to complete the firmware installation process. When installing the System Firmware, the HMC level must be equal to or higher than the "Minimum HMC Code Level" before starting the system firmware update. If the HMC managing the server targeted for the System Firmware update is running a code level lower than the "Minimum HMC Code Level", the firmware update will not proceed.

The Minimum HMC Code levels for this firmware for HMC x86,  ppc64 or ppc64le are listed below.

x86 - This term refers to the legacy HMC that runs on x86/Intel/AMD hardware, and to the virtual HMC (vHMC) that runs on Intel-based hypervisors (KVM, Xen, VMware ESXi).

ppc64 or ppc64le - describes the Linux code that is compiled to run on Power-based servers or LPARs (Logical Partitions).

  • The Minimum HMC Code level for this firmware is: HMC V10 R2 M1040.0 (PTFs MF70893 / MF70894)
  • PTF MF70893 HMC V10 R2M1040.0 – for vHMC for x86_64 hypervisors (5765-VHX) 
  • PTF MF70894 HMC V10 R2M1040.0 – for 7063 Hardware or vHMC for PowerVM (5765-HMB) 
  • Download of the Power Hardware Management Virtual Appliance (vHMC) install images for x86 hypervisors and PowerVM are available at the Entitled Systems Support site (ESS).

The Minimum HMC level supports the following HMC models:
HMC models: 7063-CR1 and 7063-CR2
x86 - KVM, Xen, VMware ESXi (6.0/6.5)
ppc64le - vHMC on PowerVM (POWER8, POWER9, and POWER10 systems)


For information concerning HMC releases and the latest PTFs,  go to the following URL to access Fix Central.

For specific fix level information on key components of IBM Power Systems running the AIX, IBM i and Linux operating systems, we suggest using the Fix Level Recommendation Tool (FLRT).

NOTES:

   - You must be logged in as hscroot in order for the firmware installation to complete correctly.
   - Systems Director Management Console (SDMC) does not support this System Firmware level.

Important Information

FW1030 needs to be installed twice when upgrading from FW1020 firmware levels
When installing the FW1030 firmware on a system that was previously at a FW1020 level, you need to complete the upgrade and then perform the upgrade a second time. This is to ensure that both the T (temporary, also known as current) and P (permanent, also known as backup) sides are equal. This was previously known as the "Accept" process, which was performed automatically. A future HMC release/PTF will perform the accept automatically.
Improvements to the eBMC in FW1030.00 and higher will cause any IPL from a backup side that is at a FW1020 firmware level to fail. An AC power cycle is required to recover from this condition.

Concurrent Firmware Updates

Concurrent system firmware update is supported on HMC Managed Systems only.

Ensure that there are no RMC connection issues for any system partitions prior to applying the firmware update. If there is an RMC connection failure to a partition during the firmware update, the RMC connection will need to be restored, and additional recovery actions for that partition will be required to complete the partition firmware updates.

Memory Considerations for Firmware Upgrades

Firmware Release Level upgrades and Service Pack updates may consume additional system memory.
Server firmware requires memory to support the logical partitions on the server. The amount of memory required by the server firmware varies according to several factors.
Factors influencing server firmware memory requirements include the following:

  •     Number of logical partitions
  •     Partition environments of the logical partitions
  •     Number of physical and virtual I/O devices used by the logical partitions
  •     Maximum memory values given to the logical partitions

Generally, you can estimate the amount of memory required by server firmware to be approximately 8% of the system installed memory. The actual amount required will generally be less than 8%. However, there are some server models that require an absolute minimum amount of memory for server firmware, regardless of the previously mentioned considerations.
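As a rough illustration of the sizing guidance above, the 8% figure can be applied as a simple fraction. This is a sketch only: the helper name is illustrative, and the actual reservation is generally lower and depends on the configuration factors listed above.

```python
def estimated_firmware_memory_gb(installed_memory_gb, fraction=0.08):
    """Rough upper-bound estimate: server firmware typically requires
    up to about 8% of installed system memory; the actual amount is
    generally less and depends on partition count, partition
    environments, I/O devices, and maximum-memory settings."""
    return installed_memory_gb * fraction

# A system with 1 TB (1024 GB) installed would budget roughly:
print(estimated_firmware_memory_gb(1024))  # 81.92 GB as an upper bound
```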

Additional information can be found at:
https://www.ibm.com/docs/en/power10/9105-42A?topic=resources-memory

SBE Updates

Power10 servers contain Self-Boot Engines (SBEs), which are used to boot the system. An SBE is internal to each Power10 chip and is used to "self boot" the chip. The SBE image is persistent and is only reloaded if a system firmware update contains an SBE change. If there is an SBE change and the system firmware update is concurrent, the SBE update is delayed until the next IPL of the CEC, which adds an additional 3-5 minutes per processor chip to that IPL. If there is an SBE change and the system firmware update is disruptive, the SBE update adds an additional 3-5 minutes per processor chip to the IPL. During the SBE update process, the HMC or op-panel will display service processor code C1C3C213 for each SBE being updated. This is a normal progress code and the system boot should not be terminated by the user. The total additional time can be between 12-20 minutes per drawer, or 48-80 minutes for a maximum configuration.

The SBE image is only updated with this service pack if the starting firmware level is less than 1040.00.
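The per-chip timing above can be sketched as a small estimator. The function name is illustrative, and the 4-chips-per-drawer figure in the usage comment is an assumption made only to reconcile the per-chip and per-drawer numbers quoted above.

```python
def sbe_update_delay_minutes(processor_chips):
    """Estimate the extra IPL time when a pending SBE update is applied:
    the release notes cite 3-5 minutes per processor chip.
    Returns a (low, high) range in minutes."""
    return (3 * processor_chips, 5 * processor_chips)

# Assuming 4 chips per drawer (consistent with the cited 12-20 minutes
# per drawer), a 16-chip maximum configuration yields the quoted
# 48-80 minute range:
print(sbe_update_delay_minutes(4))   # (12, 20)
print(sbe_update_delay_minutes(16))  # (48, 80)
```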

Firmware Information

Use the following examples as a reference to determine whether your installation will be concurrent or disruptive.

For systems that are not managed by an HMC, the installation of system firmware is always disruptive.

Note: The concurrent levels of system firmware may, on occasion, contain fixes that are known as Deferred and/or Partition-Deferred. Deferred fixes can be installed concurrently, but will not be activated until the next IPL. Partition-Deferred fixes can be installed concurrently, but will not be activated until a partition reactivate is performed. Deferred and/or Partition-Deferred fixes, if any, will be identified in the "Firmware Update Descriptions" table of this document. For these types of fixes (Deferred and/or Partition-Deferred) within a service pack, only the fixes in the service pack which cannot be concurrently activated are deferred.

Note: The file names and service pack levels used in the following examples are for clarification only, and are not necessarily levels that have been, or will be released.

System firmware file naming convention:

01MLxxx_yyy_zzz

  • xxx is the release level
  • yyy is the service pack level
  • zzz is the last disruptive service pack level

NOTE: Values of service pack and last disruptive service pack level (yyy and zzz) are only unique within a release level (xxx). For example, 01ML1010_040_040 and 01ML1020_040_040 are different service packs.

An installation is disruptive if:

  • The release levels (xxx) are different.     

            Example: Currently installed release is 01ML1010_040_040, new release is 01ML1020_050_050.

  • The service pack level (yyy) and the last disruptive service pack level (zzz) are the same.     

            Example: ML1020_040_040 is disruptive, no matter what level of ML1020 is currently installed on the system.

  • The service pack level (yyy) currently installed on the system is lower than the last disruptive service pack level (zzz) of the service pack to be installed.

            Example: Currently installed service pack is ML1010_040_040 and new service pack is ML1020_050_045.

An installation is concurrent if:

The release level (xxx) is the same, and
The service pack level (yyy) currently installed on the system is the same or higher than the last disruptive service pack level (zzz) of the service pack to be installed.

Example: Currently installed service pack is ML1010_040_040, new service pack is ML1010_041_040.
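The rules above can be sketched as a small classifier. This is a minimal sketch, assuming level strings follow the 01MLxxx_yyy_zzz convention described earlier; the function names are illustrative, not part of any IBM tooling.

```python
import re

def parse_level(name):
    """Split a firmware level such as '01ML1010_040_040' into
    (release, service_pack, last_disruptive) integer fields."""
    m = re.search(r"ML(\d+)_(\d+)_(\d+)", name)
    if not m:
        raise ValueError(f"unrecognized firmware level: {name}")
    return tuple(int(g) for g in m.groups())

def is_disruptive(installed, candidate):
    """Apply the rules above: disruptive if the release levels differ,
    if the candidate's yyy equals its zzz, or if the installed yyy is
    lower than the candidate's zzz; otherwise concurrent."""
    rel_i, sp_i, _ = parse_level(installed)
    rel_c, sp_c, ld_c = parse_level(candidate)
    if rel_i != rel_c:
        return True   # different release level: always disruptive
    if sp_c == ld_c:
        return True   # service pack is itself the last disruptive level
    if sp_i < ld_c:
        return True   # installed level predates the last disruptive pack
    return False      # same release, at or past last disruptive: concurrent

# The examples from the text:
print(is_disruptive("01ML1010_040_040", "01ML1020_050_050"))  # True
print(is_disruptive("ML1020_041_040", "ML1020_040_040"))      # True
print(is_disruptive("01ML1010_040_040", "01ML1010_041_040"))  # False
```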

Firmware Information and Description

Filename 01ML1040_027b_021.img
Size 237709504
Checksum 26272
md5sum 545add77567d98e7dd76fef78aef1aa0
Filename 01ML1040_027_021.tar
Size 137134080
Checksum 32846
md5sum 3ff086ef7aa581e502038b4d6d752d40
Note: The Checksum can be found by running the AIX sum command against the firmware file (only the first 5 digits are listed).
e.g.: sum 01ML1040_027b_021.img
 
ML1040
For Impact, Severity, and other firmware definitions, please refer to the 'Glossary of firmware terms' URL below:
https://www.ibm.com/support/pages/node/6555136
 
ML1040_027_021 / FW1040.10
Impact: Availability  Severity: SPE
System firmware changes that affect all systems
  • A problem was fixed that causes slot power on processing to occur a second time when the slot is already powered on.  The second slot power-on can occur in certain cases and is not needed.  There is a potential for this behavior to cause a failure in older adapter microcode.
  • A problem was fixed for SRC B7006A99 being logged as a Predictive error calling out cable hardware when no cable replacement is needed.  This SRC does not have an impact on PCIe function and will be logged as Informational to prevent unnecessary service actions for the non-functional error.
  • A problem was fixed for possible performance degradation in a partition when doing Nest Accelerator (NX) GZIP hardware compression.  The degradation could occur if the partition falls back to software-based GZIP compression if a new Virtual Accelerator Switchboard (VAS) window allocation becomes blocked.  Only partitions running in Power9 processor compatibility mode are affected.
  • A problem was fixed for inconsistencies in the link status LED to help with the service of faulty cables using the link activity lights.  With the fix, LEDs are now “all or none”.  If one lane or more is active in the entire link where the link spans both cables, then both link activity LEDs are activated.  If zero lanes are active (link train fail), then the link activity LEDs are off.
  • A problem was fixed for an FRU Exchange of an ESM, with one ESM removed from the enclosure, that fails when attempting to power off an NVMe drive slot controlled by the remaining enclosure. While the power light did go out on the drive (indicating power was removed), the operation timed out because the OS status page never reflected a powered-off status.
  • A problem was fixed for an I/O drawer that is powered off during concurrent maintenance not showing the correct state of LED indicators on the HMC or eBMC ASMI displays.  These indicators are not accessible, but they will show as present.  As a workaround, the I/O drawer can be powered back on and the LEDs will again show the correct state.
  • A problem was fixed for an extra IFL (Integrated Facility for Linux) proc resource being available during PEP 2.0 throttling. This issue can be triggered by the following scenario for Power Enterprise Pools 2.0 (PEP 2.0), also known as Power Systems Private Cloud with Shared Utility Capacity: PEP 2.0 throttling has been engaged and there are IFL processors being used in the environment.
  • A problem was fixed for an AC power loss on the NED24 NVMe Expansion Drawer (feature code #ESR0) not being recovered when AC is restored.  The error log for the links going down to the expansion drawer did not contain sufficient data to determine that the cause of the links down was an AC/Loss on the expansion drawer.
  • A problem was fixed for being unable to make configuration changes for partitions, except to reduce memory to the partitions, when upgrading to a new firmware release.  This can occur on systems with SR-IOV adapters in shared mode that are using most or all the available memory on the system, not leaving enough memory for the PowerVM hypervisor to fit.  As a workaround, configuration changes to the system to reduce memory usage could be made before upgrading to a new firmware release.
  • A problem was fixed for an incorrect SRC B7005308 "SRIOV Shared Mode Disabled" error log being reported on an IPL after relocating an SRIOV adapter. This error log calls out the old slot where the SRIOV adapter was before being relocated.  This error log occurs only if the old slot is not empty.  However, the error log can be ignored as the relocation works correctly.
  • A problem was fixed for an SR-IOV virtual function (VF) failing to configure for a Linux partition.  This problem can occur if an SR-IOV adapter that had been in use on prior activation of the partition was removed and then replaced with an SR-IOV adapter VF with a different capacity.  As a workaround, the partition with the failure can be rebooted.
  • A problem was fixed for missing countdown expiration messages after a renewal of PEP 2.0. Power Enterprise Pools 2.0 (PEP 2.0), also known as Power Systems Private Cloud with Shared Utility Capacity, normally has automatic renewal, but if this does not occur for some reason, expiration of PEP 2.0 should be warned by countdown messages before expiration and by daily messages after expiration.  As a workaround, the CMC appliance can be examined to see the current status of the PEP 2.0 subscription.
  • A problem was fixed to detect a missing CXP cable during an IPL or concurrent maintenance operation on an I/O drawer and fail the cable card IPL.  Without the fix, the I/O drawer is allowed to IPL with a missing hardware cable.
  • A problem was fixed for long-running operations to the TPM causing an SRC B7009009.  For single TPM systems, if the error occurs during a concurrent firmware update, the update will fail, and all future firmware update or Live Partition Mobility (LPM) operations will fail.  If the error occurs during an LPM, it will be aborted and the LPM must be retried.  If the TPM is set to a failed state, the system must be rebooted to retry concurrent firmware updates.
  • A problem was fixed for a Live Partition Mobility (LPM) migration hang that can occur during the suspended phase.  The migration can hang if an error occurs during the suspend process that is ignored by the OS.  This problem rarely happens as it requires an error to occur during the LPM suspend.  To recover from the hang condition, IBM service can be called to issue a special abort command, or, if an outage is acceptable, the system or VIOS partitions involved in the migration can be rebooted.
  • A problem was fixed for a possible shared processor partition becoming unresponsive or having reduced performance. This problem only affects partitions using shared processors.  As a workaround, partitions can be changed to use dedicated processors.  If a partition is hung with this issue, the partition can be rebooted to recover.
  • A problem was fixed for a bad format of a PEL reported by SRC BD802002.  In this case, the malformed log will be a Partition Firmware created SRC of BA28xxxx (RTAS hardware error), BA2Bxxxx (RTAS non-hardware error), or BA188001 (EEH Temp error) log.  No other log types are affected by this error condition.  This problem occurs anytime one of the affected SRCs is created by Partition Firmware.  These are hidden informational logs used to provide supplemental FFDC information so there should not be a large impact on system users by this problem.
  • A problem was fixed for DLPAR removes of embedded I/O (such as integrated USB) that fail.  An SRC BA2B000B hidden log will also be produced because of the failure.  This error does not impact DLPAR remove of slot based (hot-pluggable) I/O.  Any attempt to DLPAR remove of embedded I/O will trigger the issue and result in a DLPAR failure.
  • A problem was fixed for a boot failing from the SMS menu if a network adapter has been configured with VLAN tags.  This issue can be seen when a VLAN ID is used during a boot from the SMS menu and if the external network environment, such as a switch, triggers incoming ARP requests to the server.  This problem can be circumvented by not using the VLAN ID from the SMS menu.  After the install and boot, VLAN can be configured from the OS.
  • A problem was fixed for an errant BC101765 after replacing a primary boot processor with a field spare.  If a faulty primary boot processor is replaced by a field spare having FW1030.00 Self-Boot Engine firmware or later, the host firmware may report a BC101765 SRC during IPL with a hardware callout erroneously implicating the newly replaced processor. Generally, the problem is likely benign if it surfaces on only the first IPL after a primary boot processor replacement.  Additionally, remote attestation can be employed when the system is fully booted to verify the expected TPM measurements.  A boot after observing this failure should work correctly.  
  • A problem was fixed to correct the output of the Linux “lscpu” command to list actual physical sockets, chips, and cores.
  • A problem was fixed for a system checkstop that can occur after a concurrent firmware update.  The failing SRC identifies failure as “EQ_L3_FIR[25] Cache inhibited op in L3 directory”.  This problem occurs only rarely.
  • A problem was fixed for VPD Keyword (KW) values having hexadecimal values of 0 not being displayed by the vpd-tool.
  • A problem was fixed for a system checkstop SRC during an IPL not appearing on the physical OP panel.  The OP panel shows the last progress code for an IPL, not the checkstop exception SRC.  As a workaround, the checkstop SRC does display correctly as a PEL in the eBMC ASMI error log.
  • A problem was fixed for a PCIe card getting hot when the system fans were not running at a high enough speed.  This problem can occur when the system has a PCIe4 32Gb 4-port Optical Fibre Channel Adapter with Feature Codes #EN2L/#EN2M and CCIN 2CFC installed.
  • A problem was fixed for an SRC not being logged if the system power supplies are connected incorrectly to two different AC levels. This should be a rare error that only happens when the system is wired incorrectly.
  • A problem was fixed for an incorrect message for a “lamp test still running” written to the journal on every eBMC boot.  This message can be ignored: “[Date and Time] … phosphor-ledmanager[326]: Lamp test is still running. Cannot force stop the lamp test. Asserted is set back to true.”
  • A problem was fixed for the eBMC not allowing a request to create a resource dump, even though the dump manager allows the resource dump.  This problem occurs whenever the PowerVM hypervisor is not in its standby or running state.
  • A problem was fixed for the eBMC ASMI PCIe hardware topology page not listing the NED24 NVMe Expansion Drawer (Feature Code #ESR0) I/O slots under the cable card.
  • A problem was fixed for the eBMC and OP panel showing a different operating mode after the system was placed in “Manual” mode using the eBMC ASMI.  This occurs after an OS is installed in manual mode that is set by the eBMC GUI.  When the system is shut down, the eBMC GUI shows “Manual” mode but the OP panel shows the system has gone back to “Normal” mode.
  • A problem was fixed for the eBMC ASMI and Redfish providing an incorrect total memory capacity of the system.  As a workaround, the HMC shows the correct value for the installed memory.
  • A problem was fixed for the PowerRestorePolicy of “AlwaysOff” to make it effective such that when the system loses power, it does not automatically power on when power is restored.  This problem of automatic power on occurs every time the system loses power with “AlwaysOff” set as the power restore policy in the eBMC ASMI.
  • A problem was fixed for an eBMC firmware update failure using bmcweb with the HMC message "HSCF0230E An error occurred applying the new level of firmware" issued.  This is an infrequent error that can occur if the eBMC runs out of memory from doing detailed audit logging.
  • A problem was fixed for a hardware FRU that has been deconfigured with a guard record showing up as operational again on the eBMC ASMI GUI after a reboot of the eBMC or a disruptive code update.  The FRU operational status is corrected after the system IPL is complete and the guarded FRU is deconfigured again by the host.
  • A problem was fixed for the System Attention Indicator (SAI) on the HMC GUI possibly having incorrect information about an eBMC FRU.  This can happen if a fault occurs in an eBMC FRU and the eBMC fails to send the signal to the HMC to turn the SAI on.  Or if a faulty FRU has been replaced and the eBMC fails to send the signal to HMC, the SAI indication on the HMC GUI will not get turned off.   As a workaround, the state of the SAI LED is correctly shown in the eBMC ASMI “Hardware status -> Inventory and LEDs-> System Indicators” page section.
  • A problem was fixed for an attempted change in hostname in the eBMC ASMI GUI on the Network page not logging out and the hostname is not changing.  The GUI should log out and the hostname should be changed on the next login.
  • A problem was fixed for a flood 110015F0 power supply SRCs logged with no evidence of a power issue.  These false errors are infrequent and random.
  • A problem was fixed for an unsuccessful login having an entry in the audit log for both the failure and then an incorrect additional log for a success.  This occurs each time there is a failed login attempt.  As a workaround when reviewing the audit log, ignore a successful login entry that occurs immediately after a failed login entry to avoid confusion.
  • A problem was fixed for the eBMC ASMI for showing many blank settings under VET capabilities.  The blank settings have been updated with names where possible.
  • A problem was fixed for a factory reset changing the IBM i IPL to “D mode” as the default.  The fix changes the IBM i IPL default after a factory reset to “A mode” to match the behavior of the Power9 systems.
  • A problem was fixed for an internal Redfish error that will occur on the eBMC if an attempt is made to add an existing static IP address.  With the fix, the Redfish will return successfully if a request is made to add a static IP that already exists.
  • A problem was fixed for power supply output voltages being reported incorrectly in the eBMC ASMI GUI and from Redfish commands.  The output voltages always display incorrectly.
System firmware changes that affect certain systems
  • A problem was fixed for some NVMe slot visual indicators failing to turn on from the OS.  This affects NVMe slots for the IBM Power System S1014 (9105-41B) system only.

How to Determine The Currently Installed Firmware Level

You can view the server's current firmware level on the Advanced System Management Interface (ASMI) Overview page under the System Information section in the Firmware Information panel. Example: (ML1020_079)

Downloading the Firmware Package

Follow the instructions on Fix Central. You must read and agree to the license agreement to obtain the firmware packages.

Note: If your HMC is not internet-connected, you will need to download the new firmware level to a USB flash memory device or an FTP server.

Installing the Firmware

The method used to install new firmware will depend on the release level of firmware which is currently installed on your server. The release level can be determined by the prefix of the new firmware's filename.

Example: MLxxx_yyy_zzz

Where xxx = release level

  • If the release level will stay the same (Example: Level ML1020_040_040 is currently installed and you are attempting to install level ML1020_041_040) this is considered an update.
  • If the release level will change (Example: Level ML1020_040_040 is currently installed and you are attempting to install level ML1030_050_050) this is considered an upgrade.

Instructions for installing firmware updates and upgrades can be found at https://www.ibm.com/docs/en/power10/9105-42A?topic=9105-42A/p10eh6/p10eh6_updates_sys.htm

IBM i Systems:

For information concerning IBM i Systems, go to the following URL to access Fix Central: 
https://www.ibm.com/support/fixcentral/

Choose "Select product", under Product Group specify "System i", under Product specify "IBM i", then Continue and specify the desired firmware PTF accordingly.

Please note: There is no PTF support for FW1040 service packs. The firmware can be updated using other methods including HMC, USB, or DVD.

HMC and NovaLink Co-Managed Systems (Disruptive firmware updates only):
A co-managed system is managed by HMC and NovaLink, with one of the interfaces in the co-management master mode.
Instructions for installing firmware updates and upgrades on systems co-managed by an HMC and NovaLink are the same as above for HMC-managed systems, since the firmware update must be done by the HMC in co-management master mode. Before the firmware update is attempted, ensure that the HMC is set to master mode using the steps at the following IBM documentation link for NovaLink co-managed systems:

https://www.ibm.com/docs/en/power10/9105-42A?topic=environment-powervm-novalink


Then the firmware updates can proceed with the same steps as for HMC-managed systems, except that the system must be powered off because only a disruptive update is allowed. If a concurrent update is attempted, the following error will occur: "HSCF0180E Operation failed for <system name> (<system mtms>). The operation failed. E302F861 is the error code."
https://www.ibm.com/docs/en/power10/9105-42A?topic=9105-42A/p10eh6/p10eh6_updates_sys.htm

Firmware History

The complete Firmware Fix History (including HIPER descriptions) for this Release level can be reviewed at the following URL:
https://www.ibm.com/support/pages/node/6910163


Document Information

Modified date:
15 September 2023

UID

ibm17031014