IBM Power S1014 server provides optimized and cost-effective performance and scale for businesses in pursuit of IT excellence
IBM Japan Hardware Announcement JG22-0031
July 12, 2022
Modifications made to the Description, Limitations, and Terms and conditions sections.
At a glance
IBM® Power® servers are already the most reliable and secure in their class. Now, the new IBM Power S1014 (9105-41B), a Power10 technology-based server, extends that leadership and introduces the essential scale-out hybrid cloud platform, uniquely architected to help clients securely and efficiently scale core operational and AI applications anywhere in a hybrid cloud. Clients can encrypt all data simply without management overhead or performance impact and drive insights faster with AI at the point of data. Clients can also gain workload deployment flexibility and agility with a single hybrid cloud currency while doing more work.
Power S1014 features include:
- IBM Power10 processor with four or eight total cores per server
- In-core AI inferencing and machine learning with Matrix Math Accelerator (MMA) feature
- Up to 1.0 TB of system memory distributed across 8 DDR4 differential dual inline memory module (DDIMM) slots
- Transparent Memory Encryption with no additional management setup and no performance impact
- Five PCIe slots, four of which are PCIe Gen5 capable, all with concurrent maintenance
- Up to 16 NVMe U.2 flash bays providing up to 102.4 TB of high-speed storage
- Optional internal RDX drive
- 1+1 redundant hot-plug 200--240 volt AC titanium power supplies in each enclosure supporting a rack configuration, or
- 2+2 redundant hot-plug 100--127/200--240 volt AC titanium power supplies in each enclosure supporting a rack or tower/desk configuration
- IBM PowerVM®-integrated virtualization with minimum processing overhead
The Power S1014 supports:
- IBM AIX®, IBM i, Linux®, and VIOS environments
- IBM Power Expert Care service tiers
- IBM i Solution Edition for Power S1014
- IBM i Express Edition for Power S1014
Overview
Security, operational efficiency, and real-time intelligence to respond quickly to market changes are now nonnegotiable for IT. In an always-on environment of constant change, you need to automate and accelerate critical operations, while ensuring 24x7 availability and staying ahead of cyberthreats. You need applications and data to be enterprise-grade everywhere, but without adding complexity and cost.
The Power S1014 (9105-41B) server can modernize your applications and infrastructure with a frictionless hybrid cloud experience to provide the agility you need for the unpredictability of today's business. The Power S1014 can help you:
- Run workloads where you need them with efficient scaling and consistent pay-for-use consumption across public and private clouds
- Use memory encryption at the processor level, designed to support a Zero Trust security approach to hybrid cloud
- Accelerate insights from data through AI inferencing directly in the core
- Consolidate workloads with scalability and performance that can reduce energy consumption
The Power S1014 server can help deliver business agility by extending mission-critical workloads across a hybrid cloud with increased flexibility.
- Respond faster to business demands: The Power10 processor delivers new levels of performance as compared to IBM Power9 for the same workloads without increasing energy or carbon footprint, enabling more efficient scaling.
- Protect data from core to cloud: Power10 provides end-to-end security with transparent memory encryption at the processor level--without management overhead or performance impact. Power10 can also help you stay ahead of future threats with support for post-quantum cryptography and fully homomorphic encryption.
- Streamline insights and automation: Power10 leverages the enhanced in-core AI inferencing capability built into every server with no additional specialized hardware required. You can extract insights from your most sensitive data where it resides, eliminating the time and risk of data movement.
- Maximize availability and reliability: The Power10 processor ensures your business stays up and running with inherent advanced recovery and self-healing features for infrastructure redundancy and disaster recovery in IBM Cloud®.
Power servers are delivering results for clients all over the globe, from new digital services for banks and real-time decision-making in manufacturing to operational efficiency for engineering and electronics. See how Power servers are contributing to IBM client success in IBM Case Studies.
Key requirements
An IBM i, AIX, Linux, or VIOS operating system is required. See the Software requirements section for details.
Planned availability date
- July 22, 2022, except for feature EM6Y
- November 18, 2022, for feature EM6Y
Availability within a country is subject to local legal requirements.
Description
The Power S1014 (9105-41B) server is a high-performance, one-socket, 4U system that provides massive scalability and flexibility. It delivers extreme density in an energy-efficient design with superior reliability and resiliency. The Power S1014 server brings a secure environment that balances mission-critical traditional workloads and modernization applications to deliver a frictionless hybrid cloud experience.
Power S1014 feature summary
- One entry single-chip processor module per server:
- 3.0--3.90 GHz, four-core Power10 processor (#EPG0).
- 3.0--3.90 GHz, eight-core Power10 processor (#EPG2).
- MMA feature helps to perform in-core AI inferencing and machine learning where data resides.
- Up to 1 TB of system memory distributed across 8 DDIMM slots per server. DDIMMs are extremely high-performance, high-reliability, intelligent dynamic random access memory (DRAM) devices.
- DDR4 DDIMM memory cards:
- 32 GB (2 x 16 GB), (#EM6N).
- 64 GB (2 x 32 GB), (#EM6W).
- 128 GB (2 x 64 GB), (#EM6X).
- 256 GB (2 x 128 GB), (#EM6Y).
- PCIe slots:
- One x16 Gen4 or x8 Gen5 full-height, half-length slot.
- Three x8 Gen5 full-height, half-length slots (with x16 connectors).
- One x8 Gen4 full-height, half-length slot (with x16 connector).
- All PCIe slots are concurrently maintainable.
- Integrated:
- System management using an Enterprise Baseboard Management Controller (eBMC).
- EnergyScale technology.
- Redundant hot-swap cooling.
- Redundant hot-swap AC power supplies.
- Up to two HMC 1 GbE RJ45 ports.
- One rear USB 3.0 port.
- One front USB 3.0 port.
- One internal USB 3.0 Port for RDX.
- Nineteen-inch rack-mounting hardware (4U).
- Optional PCIe I/O expansion drawer with PCIe slots -- rack configuration only:
- One PCIe Gen3 I/O Expansion Drawer (#EMX0).
- I/O drawer holds one six-slot PCIe fanout module (#EMXH).
- Fanout module attaches to the system node through a PCIe optical or copper cable adapter (#EJ2A).
PowerVM
PowerVM, which delivers industrial-strength virtualization for AIX and Linux environments on Power processor-based systems, provides a virtualization-oriented performance monitor, and performance statistics are available through the HMC. These performance statistics can be used to understand the workload characteristics and to prepare for capacity planning.
Processor modules
The Power10 processor is the compute engine for the next generation of Power systems and the successor to the current IBM Power9 processor. It includes the MMA facility to accelerate computation-intensive kernels such as matrix multiplication, convolution, and discrete Fourier transform. To efficiently accelerate MMA operations, the Power10 processor core implements a dense math engine (DME) microarchitecture that effectively provides an accelerator for cognitive computing, machine learning, and AI inferencing workloads.
A maximum of one Power10 processor is allowed. The following defines the allowed quantities of processor activation entitlements:
- One four-core, typical 3.0 to 3.90 GHz (max) processor (#EPG0) requires that four processor activation codes be ordered. A maximum of four processor activations (#EPFT) is allowed.
- One eight-core, typical 3.0 to 3.90 GHz (max) processor (#EPG2) requires that eight processor activation codes be ordered. A maximum of eight processor activation code features (#EPF6) is allowed.
Enhanced Workload Optimized Frequency for optimum performance: This mode can dynamically optimize the processor frequency at any given time based on CPU utilization and operating environmental conditions. For a description of this feature and other power management options available for this server, see the IBM EnergyScale for Power10 Processor-Based Systems website.
MMA
The Power10 processor core inherits the modular architecture of the Power9 processor core, but the redesigned and enhanced microarchitecture significantly increases the processor core performance and processing efficiency. The peak computational throughput is markedly improved by new execution capabilities and optimized cache bandwidth characteristics. Extra matrix math acceleration engines can deliver significant performance gains for machine learning, particularly for AI inferencing workloads.
Memory
The Power S1014 server uses the next-generation DDIMMs, which are high-performance, high-reliability, high-function memory cards that contain a buffer chip, intelligence, and 2666 MHz or 3200 MHz DRAM memory. DDIMMs are placed in DDIMM slots in the server system.
- A minimum of 32 GB of memory is required with one processor module. All memory DDIMMs must be ordered in pairs.
- Each memory feature code delivers two physical DDIMMs.
Plans for future memory upgrades should be taken into account when deciding which memory feature size to use at the time of initial system order.
To assist with the plugging rules, two DDIMMs are ordered using one memory feature number. Select from the following features (an illustrative configuration check follows the list):
- 32 GB (2 x 16 GB) DDIMMs, 3200 MHz, 8 Gb DDR4 Memory (#EM6N)
- 64 GB (2 x 32 GB) DDIMMs, 3200 MHz, 8 Gb DDR4 Memory (#EM6W)
- 128 GB (2 x 64 GB) DDIMMs, 3200 MHz, 16 Gb DDR4 Memory (#EM6X)
- 256 GB (2 x 128 GB) DDIMMs, 2666 MHz, 16 Gb DDR4 Memory (#EM6Y)
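The following is an illustrative sketch (in Python, not an IBM configuration tool) of the ordering rules just described: each memory feature delivers a pair of DDIMMs, the server has 8 DDIMM slots, a minimum of 32 GB is required, and the system maximum is 1 TB. Feature codes and capacities are taken from the list above.

```python
# Illustrative check of a Power S1014 memory order against the rules above.
# Not an IBM tool; feature capacities come from the feature list in this section.

DDIMM_FEATURES = {       # feature code -> (GB per feature, DDIMMs per feature)
    "EM6N": (32, 2),     # 2 x 16 GB, 3200 MHz
    "EM6W": (64, 2),     # 2 x 32 GB, 3200 MHz
    "EM6X": (128, 2),    # 2 x 64 GB, 3200 MHz
    "EM6Y": (256, 2),    # 2 x 128 GB, 2666 MHz
}

def check_memory_order(order):
    """Return (total GB, rule violations) for an order of {feature: quantity}."""
    issues = []
    total_gb = sum(DDIMM_FEATURES[f][0] * qty for f, qty in order.items())
    slots_used = sum(DDIMM_FEATURES[f][1] * qty for f, qty in order.items())
    if total_gb < 32:
        issues.append("a minimum of 32 GB is required")
    if total_gb > 1024:
        issues.append("exceeds the 1 TB system maximum")
    if slots_used > 8:
        issues.append("exceeds the 8 DDIMM slots")
    return total_gb, issues

# Example: two #EM6W features = four DDIMMs, 128 GB, no violations.
print(check_memory_order({"EM6W": 2}))
```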
Power S1014 Capacity Backup (CBU) for IBM i
The Power S1014 CBU designation enables you to temporarily transfer IBM i processor license entitlements and IBM i user license entitlements purchased for a primary machine to a secondary CBU-designated system for high availability (HA) and disaster recovery (DR) operations. Temporarily transferring these resources instead of purchasing them for your secondary system may result in significant savings. Processor activations cannot be transferred.
The CBU specify feature 0444 is available only as part of a new server purchase. Certain system prerequisites must be met, and system registration and approval are required before the CBU specify feature can be applied on a new server. Standard IBM i terms and conditions do not allow either IBM i processor license entitlements or IBM i user license entitlements to be transferred permanently or temporarily. These entitlements remain with the machine they were ordered for. When you register the association between your primary and on-order CBU system, you must agree to certain terms and conditions regarding the temporary transfer.
After a new CBU system is registered as a pair with the proposed primary system and the configuration is approved, you can temporarily move your optional IBM i processor license entitlement and IBM i user license entitlements from the primary system to the CBU system when the primary system is down or while the primary system processors are inactive. The CBU system can then support failover and role swapping for a full range of test, DR, and HA scenarios. Temporary entitlement transfer means that the entitlement is temporarily transferred from the primary system to the CBU system and may remain in use on the CBU system as long as the registered primary and CBU systems are deployed for the high availability or disaster recovery operation. The intent of the CBU offering is to enable regular role-swap operations.
Before you can temporarily transfer IBM i processor license entitlements from the registered primary system, you must have more than one IBM i processor license on the primary machine and at least one IBM i processor license on the CBU server. An activated processor must be available on the CBU server to use the transferred entitlement. You can then transfer any IBM i processor entitlements above the minimum one, assuming the total IBM i workload on the primary system does not require the IBM i entitlement you would like to transfer during the time of the transfer. During this temporary transfer, the CBU system's internal records of its total number of IBM i processor license entitlements are not updated, and you may see IBM i license noncompliance warning messages from the CBU system. These warning messages in this situation do not mean you are not in compliance. Prior to a temporary transfer, the CBU will be configured in such a manner that there are no out-of-compliance warning messages.
Before you can temporarily transfer IBM i user entitlements, you must have more than the minimum number of IBM i user entitlements on a primary server. You can then transfer any IBM i user entitlements above the minimum, assuming the total IBM i users on the primary system do not require the IBM i entitlement you want to transfer during the time of the transfer.
The servers with P20 or higher software tiers do not have user entitlements that can be transferred, and only processor license entitlements can be transferred.
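As a rough illustration of the transfer rules above (this is not an IBM licensing tool, and your actual minimums depend on your configuration and software tier), the following sketch computes how many IBM i processor and user license entitlements could be moved temporarily:

```python
# Illustrative sketch of the CBU temporary-transfer rules described above.
# Not an IBM licensing tool; the minimum values come from your actual
# primary system configuration and software tier.

def transferable_entitlements(primary_proc_licenses, cbu_proc_licenses,
                              primary_user_ents, minimum_user_ents):
    """Return the entitlements that could be moved temporarily."""
    result = {"processor": 0, "user": 0}
    # Processor licenses: the primary needs more than one and the CBU at
    # least one; anything above the minimum one on the primary can move.
    if primary_proc_licenses > 1 and cbu_proc_licenses >= 1:
        result["processor"] = primary_proc_licenses - 1
    # User entitlements: anything above the primary's minimum can move
    # (not applicable to P20 or higher software tiers).
    if primary_user_ents > minimum_user_ents:
        result["user"] = primary_user_ents - minimum_user_ents
    return result

print(transferable_entitlements(4, 1, 30, 10))   # {'processor': 3, 'user': 20}
```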
For a Power S1014 (with 8 cores) CBU, which is in the P10 software tier, the following are eligible primary systems:
- Power S1024 (9105-42A) with 48, 32, 24, or 12 cores
- Power S1022 (9105-22A) with 40, 32, 24, or 12 cores
- Power S1022s (9105-22B) with 16 or 8 cores
- Power S1014 (9105-41B) with 8 cores
- Power S924 (9009-42G)
- Power S924 (9009-42A)
- Power S922 (9009-22A)
- Power S922 (9009-22G) with minimum of 8 cores
- Power S914 (9009-41A) with minimum of 6 cores
- Power S914 (9009-41G) with minimum of 6 cores
The primary machine must be in the same enterprise as the CBU system. The IBM i Solution Editions are not eligible for CBU status.
For a Power S1014 (with 4 cores) CBU, which is in the P05 software tier, the following are eligible primary systems:
- Power S1014 (9105-41B) with maximum of 4 cores
- Power S914 (9009-41A) with maximum of 4 cores
- Power S914 (9009-41G) with maximum of 4 cores
Power S1014 software (SW) tiers for IBM i on 9105-41B
- The four-core processor server (#EPG0, QPRCFEAT EPG0) is IBM i SW tier P05.
- The eight-core processor server (#EPG2, QPRCFEAT EPG2) is IBM i SW tier P10.
During the temporary transfer, the CBU system's internal records of its total number of IBM i processor entitlements are not updated, and you may see IBM i license noncompliance warning messages from the CBU system. Prior to a temporary transfer, the CBU will be configured in such a manner that there are no out of compliance warning messages.
If your primary or CBU machine is sold or discontinued from use, any temporary entitlement transfers must be returned to the machine on which they were originally acquired. For CBU registration, terms and conditions, and further information, see the IBM Power Systems: Capacity BackUp website.
Four-core Power S1014 processor
The four-core Power S1014 server offers clients running AIX, IBM i, or Linux an entry-level server based on Power10 technology. It uses a typical 3.0 to 3.90 GHz (max) Power10 Processor Card (#EPG0). All four processor cores must be activated using the chargeable processor core activation feature (#EPFT), although factory deconfiguration feature 2319 is supported. The four-core Power S1014 server with the IBM i operating system supports a maximum system memory of 64 GB. The four-core Power S1014 server has five PCIe slots, four of which are Gen5 capable.
There is no upgrade to increase the cores on this feature. This server supports AIX, IBM i, and Linux, but it is especially attractive to IBM i clients with its P05 software tier.
If IBM i is selected as an operating system
The Capacity Backup option for IBM i (#0444) is supported. The four-core S1014 server supports a maximum of 6.4 TB of NVMe storage using two to eight mirrored NVMe PCIe devices with the storage backplane option (#EJ1Y); no SAS drives are allowed. SAS drives located in feature code I/O drawers such as the EXP24SX (#ESLS) are not supported. Attachment to SANs is supported.
Maximum NVMe (U.2 and add-in card) capacities are restricted as shown in the table below; an illustrative check of these rules follows the table:
PCIe NVMe device capacity | System maximum | Notes |
---|---|---|
800 GB | 8 | Mixing with other NVMe devices is allowed in pairs but cannot exceed the maximum capacity of 3.2 TB mirrored (total capacity 6.4 TB). |
1.6 TB | 4 | Mixing with other NVMe devices is allowed in pairs but cannot exceed the maximum capacity of 3.2 TB mirrored (total capacity 6.4 TB). |
3.2 TB | 2 | Mixing with other NVMe devices is not allowed. |
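The sketch below (Python, not an IBM configurator) illustrates how the rules in the table combine for the four-core IBM i configuration; device capacities are given in GB, and the example mix is hypothetical.

```python
# Illustrative check of a four-core S1014 IBM i NVMe layout against the
# capacity table above. Not an IBM configurator; devices are listed by
# capacity in GB and must be installed as mirrored pairs.

PER_CAPACITY_MAX = {800: 8, 1600: 4, 3200: 2}   # system maximums from the table
TOTAL_RAW_MAX_GB = 6400                         # 3.2 TB mirrored = 6.4 TB raw

def check_nvme_config(devices_gb):
    issues = []
    if not 2 <= len(devices_gb) <= 8 or len(devices_gb) % 2:
        issues.append("two to eight devices, in mirrored pairs, are required")
    for cap in set(devices_gb):
        limit = PER_CAPACITY_MAX.get(cap, 0)
        if devices_gb.count(cap) > limit:
            issues.append(f"too many {cap} GB devices (system maximum {limit})")
    if 3200 in devices_gb and len(set(devices_gb)) > 1:
        issues.append("3.2 TB devices cannot be mixed with other capacities")
    if sum(devices_gb) > TOTAL_RAW_MAX_GB:
        issues.append("exceeds 6.4 TB total (3.2 TB mirrored)")
    return issues

# Example: two 1.6 TB and two 800 GB devices -- a valid mix under the rules.
print(check_nvme_config([1600, 1600, 800, 800]))
```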
The following NVMe devices for IBM i are supported in the NVMe bays of the four-core Power S1014 system unit:
NVMe PCIe devices
- 800 GB (#ES3A) - PCIe4 NVMe U.2 Enterprise module for IBM i
- 800 GB (#ES1K) - PCIe4 NVMe U.2 Enterprise module for IBM i
- 1.6 TB (#ES3C) - PCIe4 NVMe U.2 Enterprise module for IBM i
- 1.6 TB (#EC6V) - PCIe3 x8 NVMe Flash Adapter for IBM i
- 1.6 TB (#EC7K) - PCIe4 x8 NVMe Flash Adapter for IBM i
- 1.6 TB (#ES1F) - PCIe4 NVMe U.2 Enterprise module for IBM i
- 3.2 TB (#ES3E) - PCIe4 NVMe U.2 Enterprise module for IBM i
- 3.2 TB (#ES1H) - PCIe4 NVMe U.2 Enterprise module for IBM i
- 3.2 TB (#EC6X) - PCIe3 x8 NVMe Flash Adapter for IBM i
- 3.2 TB (#EC7M) - PCIe4 x8 NVMe Flash Adapter for IBM i
Other NVMe PCIe devices or SAS drives not in the list above are not supported.
The PCIe Expansion Drawer (#EMX0) and EXP24SX SAS Storage Enclosures (#ESLS) are not allowed with the four-core processor card (#EPG0) configuration of the Power S1014 server.
The CBU specify feature (#0444) is supported with the four-core processor card (#EPG0) in IBM i environments. With its P05 software tier, it can be paired with a Power9 server with a P05 or P10 software tier.
When AIX, Linux, or VIOS are selected and no IBM i is selected
If AIX, Linux, or VIOS is the only operating system on the system, then all orderable and supported SAS drives and NVMe devices for AIX or Linux are allowed.
IBM i Solution Edition for Power S1014
The IBM i Solution Edition is designed to help you take advantage of the combined experience and expertise of IBM and Independent Software Vendors (ISVs) in building business value with your IT investments. A qualifying purchase of software, maintenance, services, or training for a participating ISV solution is required when purchasing an IBM i Solution Edition.
The Power S1014 Solution Edition feature 4928 supports four-core configurations. For a list of participating ISVs, registration form, and additional details, see the IBM i Solution Editions website.
IBM i Express Edition for Power S1014
IBM i clients acquiring a new four-core Power S1014 server can choose to use an IBM i Express Edition. The new edition is similar to the edition provided with Power9 servers and offers specific licensing advantages that further improve the price-performance of the Power S1014 server when running IBM i. Feature EU2C can be included with a new four-core Power S1014 server.
Titanium power supply
Titanium power supplies are designed to meet the latest efficiency regulations.
- Four titanium power supplies supporting a rack or tower/desk: 2+2 1200 watt 100--127/200--240 volt, or
- Two titanium power supplies supporting a rack: 1+1 1600 watt 200--240 volt
Redundant fans
Redundant fans are standard.
Power cords
Two or four power cords are required, depending on the power supply configuration. The Power S1014 server supports the 4.3-meter (14-foot), drawer-to-wall/IBM PDU (250V/10A) power cord in the base shipment group. See the feature listing for other options.
PCIe slots
The Power S1014 server has up to sixteen U.2 NVMe devices and up to five PCIe hot-plug slots with concurrent maintenance, providing excellent configuration flexibility and expandability. For more information about PCIe slots, see the rack-integrated system with I/O expansion drawer section below.
With one Power10 processor, five PCIe slots are available:
- One PCIe x16 Gen4 or x8 Gen5, full height, half length slot
- Three PCIe x8 (x16 connector) Gen5, full height, half length slots
- One PCIe x8 (x16 connector) Gen4, full height, half length slot
The x16 slots can provide up to twice the bandwidth of x8 slots because they offer twice as many PCIe lanes. PCIe Gen5 slots can support up to twice the bandwidth of a PCIe Gen4 slot, and PCIe Gen4 slots can support up to twice the bandwidth of a PCIe Gen3 slot, assuming an equivalent number of PCIe lanes.
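To put those doubling statements into rough numbers, the sketch below estimates per-direction bandwidth from per-lane transfer rates (8 GT/s for Gen3, 16 GT/s for Gen4, 32 GT/s for Gen5, all with 128b/130b encoding). This is an approximation rather than an IBM specification, and it also shows why an x16 Gen4 slot and an x8 Gen5 slot offer similar bandwidth.

```python
# Rough arithmetic behind the "twice the bandwidth" statements above.
# Approximation only: real adapter throughput is lower due to protocol overhead.

GT_PER_S = {"Gen3": 8, "Gen4": 16, "Gen5": 32}   # per-lane transfer rates

def approx_gbytes_per_s(gen, lanes):
    """Approximate one-direction bandwidth in GB/s for a PCIe slot."""
    encoding = 128 / 130                          # 128b/130b line-encoding efficiency
    return GT_PER_S[gen] * lanes * encoding / 8   # bits -> bytes

for gen, lanes in [("Gen4", 16), ("Gen5", 8), ("Gen5", 16)]:
    print(f"{gen} x{lanes}: ~{approx_gbytes_per_s(gen, lanes):.1f} GB/s per direction")
```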
At least one PCIe Ethernet adapter is required on the server by IBM to ensure proper manufacture, test, and support of the server. One of the x8 PCIe slots is used for this required adapter.
These servers are smarter about energy efficiency when cooling the PCIe adapter environment. They sense which IBM PCIe adapters are installed in their PCIe slots and, if an adapter requires higher levels of cooling, they automatically speed up fans to increase airflow across the PCIe adapters. Note that faster fans increase the sound level of the server. Higher wattage PCIe adapters include the PCIe3 SAS adapters and SSD/flash PCIe adapters (#EJ10, #EJ14, and #EJ0J).
NVMe drive slots, RDX bay, and storage backplane options
NVMe SSDs, in the 15-millimeter carrier U.2 2.5-inch form factor, are used for internal storage in the Power S1014 system. The Power S1014 supports up to 16 NVMe U.2 devices when two storage backplanes, each with eight NVMe U.2 drive slots (#EJ1Y), are ordered. Both 7-millimeter and 15-millimeter NVMe devices are supported in the 15-millimeter carrier. The Power S1014 also supports an internal RDX drive attached through the USB controller.
Cable management arm
A folding arm is attached to the server's rails at the rear of the server. The server's power cords and the cables from the PCIe adapters or integrated ports run through the arm and into the rack. The arm enables the server to be pulled forward on its rails for service access to PCIe slots, memory, processors, and so on without disconnecting the cables from the server. Approximately 1 meter (3 feet) of cord or cable length is needed for the arm.
Integrated I/O ports
There are two HMC ports, one internal USB 3.0 port for RDX attachment, and two external USB 3.0 ports (one front and one rear). The two HMC ports are RJ45, supporting 1 Gb Ethernet connections. The eBMC USB 2.0 port can be used for communication to an Uninterruptible Power Supply (UPS) or for code updates.
Rack-integrated system with I/O expansion drawer
Regardless of the rack-integrated system to which the PCIe Gen3 I/O expansion drawer is attached, if the expansion drawer is ordered as factory integrated, the PDUs in the rack will be placed horizontally by default to enhance cable management.
Expansion drawers complicate access to vertical PDUs located at the same height. IBM recommends mounting PDUs horizontally in racks containing one or more PCIe Gen3 I/O expansion drawers.
After the rack with expansion drawers is delivered, you can rearrange the PDUs from horizontal to vertical. However, the configurator will continue to consider the PDUs as placed horizontally when calculating the free space still available in the rack.
Vertical PDUs can be used only if CSRP (#0469) is on the order. When specifying CSRP, you must provide the locations where the PCIe Gen3 I/O expansion drawers should be placed. Note that you must avoid locating the drawers adjacent to vertical PDU locations EIA 6 through 16 and 21 through 31.
The I/O expansion drawer can be migrated from a Power9 to a Power10 processor-based system. Only I/O cards supported on Power10 in the I/O expansion drawer are allowed. Clients migrating the I/O expansion drawer configuration might have one or two PCIe3 6-slot fanout modules (#EMXH) installed in the rear of the I/O expansion drawer.
For a 4U server configuration with one processor module, up to one I/O expansion drawer (#EMX0) and one fanout module (#EMXH) connected to one PCIe x16 to CXP Converter Card Adapter (#EJ2A) are supported. The other PCIe module bay must be populated by a filler module.
Limitations:
- Mixing of prior PCIe3 fanout modules (#EMXF or #EMXG) with PCIe3 fanout modules (#EMXH) in the same I/O expansion drawer is not allowed.
- PCIe x16 to CXP Converter Card Adapter (#EJ2A) requires one PCIe3 x16 slot in system unit plus a pair of optical cables (such as feature ECCX or feature ECCY) or copper cables (such as feature ECCS).
RDX docking station
The RDX docking station accommodates RDX removable disk cartridges of any capacity. The disk is in a protective, rugged cartridge enclosure that plugs into the docking station. The docking station holds one removable rugged disk drive or cartridge at a time. The rugged removable disk cartridge and docking station perform saves, restores, and backups similarly to a tape drive. This docking station can be an excellent entry-level option for capacity and performance.
EXP24SX SAS storage enclosure
The EXP24SX is a storage expansion enclosure with 24 2.5-inch SFF SAS bays. It supports up to 24 hot-plug HDDs or SSDs in only 2 EIA of space in a 19-inch rack. The EXP24SX SFF bays use SFF Gen2 (SFF-2) carriers or trays.
The EXP24SX drawer feature ESLS is supported on the Power10 scale-out servers by AIX, IBM i, Linux, and VIOS.
With AIX, Linux, or VIOS, the EXP24SX can be ordered with four sets of 6 bays (mode 4), two sets of 12 bays (mode 2), or one set of 24 bays (mode 1). With IBM i, only one set of 24 bays (mode 1) is supported. It is possible to change the mode setting in the field using software commands along with a specifically documented procedure.
Important: When changing modes, a skilled, technically qualified person should follow the special documented procedures. Improperly changing modes can potentially destroy existing RAID sets, prevent access to existing data, or allow other partitions to access another partition's existing data. Hire an expert to assist if you are not familiar with this type of reconfiguration work.
Four mini-SAS HD ports on the EXP24SX are attached to PCIe Gen3 SAS adapters or attached to an integrated SAS controller in a Power10 scale-out server. The following PCIe3 SAS adapters support the EXP24SX:
- PCIe3 RAID SAS Adapter Quad-Port 6 Gb x8 (#EJ0J)
- PCIe3 12 GB Cache RAID Plus SAS Adapter Quad-Port 6 Gb x8 (#EJ14)
- PCIe3 LP RAID SAS Adapter Quad-Port 6 Gb x8 (#EJ0M)
Earlier-generation PCIe2 or PCIe1 SAS adapters are not supported with the EXP24SX.
The attachment between the EXP24SX and the PCIe3 SAS adapters or integrated SAS controllers is through SAS YO12 or X12 cables. X12 and YO12 cables are designed to support up to 12 Gb SAS. The PCIe Gen3 SAS adapters support up to 6 Gb throughput. The EXP24SX has been designed to support up to 12 Gb throughput if future SAS adapters support that capability. All ends of the YO12 and X12 cables have mini-SAS HD narrow connectors. Cable options are:
- X12 cable: 3-meter copper (#ECDJ), 4.5-meter optical (#ECDK), 10-meter optical (#ECDL)
- YO12 cables: 1.5-meter copper (#ECDT), 3-meter copper (#ECDU)
- 1M 100 GbE Optical Cable QSFP28 (AOC) (#EB5K)
- 1.5M 100 GbE Optical Cable QSFP28 (AOC) (#EB5L)
- 2M 100 GbE Optical Cable QSFP28 (AOC) (#EB5M)
- 3M 100 GbE Optical Cable QSFP28 (AOC) (#EB5R)
- 5M 100 GbE Optical Cable QSFP28 (AOC) (#EB5S)
- 10M 100 GbE Optical Cable QSFP28 (AOC) (#EB5T)
- 15M 100 GbE Optical Cable QSFP28 (AOC) (#EB5U)
- 20M 100 GbE Optical Cable QSFP28 (AOC) (#EB5V)
- 30M 100 GbE Optical Cable QSFP28 (AOC) (#EB5W)
- 50M 100 GbE Optical Cable QSFP28 (AOC) (#EB5X)
An AA12 cable interconnecting a pair of PCIe3 12 GB cache adapters (two #EJ14) is not attached to the EXP24SX. These higher-bandwidth cables could support 12 Gb throughput if future adapters support that capability. Copper feature ECE0 is 0.6 meters long, feature ECE3 is 3 meters long, and optical AA12 feature ECE4 is 4.5 meters long.
One no-charge specify code is used with each EXP24SX I/O Drawer (#ESLS) to communicate to IBM configurator tools and IBM Manufacturing which mode setting, adapter, and SAS cable are needed. With this specify code, no hardware is shipped. The physical adapters, controllers, and cables must be ordered with their own chargeable feature numbers. There are more technically supported configurations than are represented by these specify codes. IBM Manufacturing and IBM configurator tools such as e-config only understand and support EXP24SX configurations represented by these specify codes.
Specify code | Mode | Adapter/Controller | Cable to drawer | Environment |
---|---|---|---|---|
EJW0 | Mode 1 | CEC SAS Ports | 2 YO12 cables | AIX/IBM i/Linux/VIOS |
EJW1 | Mode 1 | One (unpaired) #EJ0J/#EJ0M | 1 YO12 cable | AIX/IBM i/Linux/VIOS |
EJW2 | Mode 1 | Two (one pair) #EJ0J/#EJ0M | 2 YO12 cables | AIX/IBM i/Linux/VIOS |
EJW3 | Mode 2 | Two (unpaired) #EJ0J/#EJ0M | 2 X12 cables | AIX/Linux/VIOS |
EJW4 | Mode 2 | Four (two pairs) #EJ0J/#EJ0M | 2 X12 cables | AIX/Linux/VIOS |
EJW5 | Mode 4 | Four (unpaired) #EJ0J/#EJ0M | 2 X12 cables | AIX/Linux/VIOS |
EJW6 | Mode 2 | One (unpaired) #EJ0J/#EJ0M | 2 YO12 cables | AIX/Linux/VIOS |
EJW7 | Mode 2 | Two (unpaired) #EJ0J/#EJ0M | 2 YO12 cables | AIX/Linux/VIOS |
EJWF | Mode 1 | Two (one pair) #EJ14 | 2 YO12 cables | AIX/IBM i/Linux/VIOS |
EJWG | Mode 2 | Two (one pair) #EJ14 | 2 X12 cables | AIX/Linux/VIOS |
EJWJ | Mode 2 | Four (two pairs) #EJ14 | 2 X12 cables | AIX/Linux/VIOS |
All of the above EXP24SX specify codes assume a full set of adapters and cables able to run all the SAS bays configured. The following specify codes communicate to IBM Manufacturing that a lower-cost partial configuration is to be built, where the ordered adapters and cables can run only a portion of the SAS bays. The future MES addition of adapters and cables can enable the remaining SAS bays for growth. The following specify codes are used:
Specify code | Mode | Adapter/Controller | Cable to drawer | Environment |
---|---|---|---|---|
EJWA (1/2 of EJW7) | Mode 2 | One (unpaired) #EJ0J/#EJ0M | 1 YO12 cable | AIX/Linux/VIOS |
EJWB (1/2 of EJW4) | Mode 2 | Two (one pair) #EJ0J/#EJ0M | 1 X12 cable | AIX/Linux/VIOS |
EJWC (1/4 of EJW5) | Mode 4 | One (unpaired) #EJ0J/#EJ0M | 1 X12 cable | AIX/Linux/VIOS |
EJWD (1/2 of EJW5) | Mode 4 | Two (unpaired) #EJ0J/#EJ0M | 1 X12 cable | AIX/Linux/VIOS |
EJWE (3/4 of EJW5) | Mode 4 | Three (unpaired) #EJ0J/#EJ0M | 2 X12 cables | AIX/Linux/VIOS |
EJWH (1/2 of EJWJ) | Mode 2 | Two (one pair) #EJ14 | 1 X12 cable | AIX/Linux/VIOS |
An EXP24SX drawer in mode 4 can be attached to two or four SAS controllers and provide a great deal of configuration flexibility. For example, if using unpaired feature EJ0J adapters, these EJ0J adapters could be in the same server in the same partition, same server in different partitions, or even different servers.
An EXP24SX drawer in mode 2 has similar flexibility. If the I/O drawer is in mode 2, then half of its SAS bays can be controlled by one pair of PCIe3 SAS adapters, such as a 12 GB write-cache adapter pair (#EJ14), and the other half can be controlled by a different PCIe3 SAS 12 GB write cache adapter pair or by zero-write-cache PCIe3 SAS adapters.
Note that for simplicity, IBM configurator tools such as e-config assume that the SAS bays of an individual I/O drawer are controlled by one type of SAS adapter. As a client, you have more flexibility than e-config understands.
A maximum of 24 2.5-inch SSDs or 2.5-inch HDDs is supported in the EXP24SX 24 SAS bays. There can be no mixing of HDDs and SSDs in the same mode 1 drawer. HDDs and SSDs can be mixed in a mode 2 or mode 4 drawer, but they cannot be mixed within a logical split of the drawer. For example, in a mode 2 drawer with two sets of 12 bays, one set could hold SSDs and one set could hold HDDs, but you cannot mix SSDs and HDDs in the same set of 12 bays.
The indicator feature EHS2 helps IBM Manufacturing understand where SSDs are placed in a mode 2 or a mode 4 EXP24SX drawer. On one mode 2 drawer, use a quantity of one feature EHS2 to have SSDs placed in just half the bays, and use two EHS2 features to have SSDs placed in any of the bays. Similarly, on one mode 4 drawer, use a quantity of one, two, three, or four EHS2 features to indicate how many bays can have SSDs. With multiple EXP24SX orders, IBM Manufacturing will have to guess which quantity of feature EHS2 is associated with each EXP24SX. Consider using CSP (feature 0456) to reduce guessing.
Two-and-a-half-inch small form factor (SFF) SAS HDDs and SSDs are supported in the EXP24SX. All drives are mounted on Gen2 carriers or trays and thus named SFF-2 drives.
The EXP24SX drawer has many high-reliability design points:
- SAS drive bays that support hot swap
- Redundant and hot-plug-capable power and fan assemblies
- Dual line cords
- Redundant and hot-plug enclosure service modules (ESMs)
- Redundant data paths to all drives
- LED indicators on drives, bays, ESMs, and power supplies that support problem identification
- Through the SAS adapters or controllers, drives that can be protected with RAID and mirroring and hot-spare capability
Order two ESLA features for AC power supplies. The enclosure is shipped with adjustable-depth rails and can accommodate 19-inch rack depths of 59.5--75 centimeters (23.4--29.5 inches). Slot filler panels are provided for empty bays when initially shipped from IBM.
PCIe Gen3 I/O drawer cabling option
A copper cabling option (#ECCS) is available for the scale-out servers. The cable option offers a much lower-cost connection between the server and the PCIe Gen3 I/O drawer fanout modules. The currently available Active Optical Cables (AOC) offer much longer cable lengths, providing rack placement flexibility. In addition, AOC cables are much thinner and have a tighter bend radius, and thus are much easier to cable in the rack.
The 3M Copper CXP Cable Pair (#ECCS) has the same performance and same reliability, availability, and serviceability (RAS) characteristics as the AOC cables. One copper cable length of 3 meters is offered. Note that the cable management arm of the scale-out servers requires about 1 meter of cable.
The copper cable pair is cabled in the same manner as the AOC cable pair. One cable attaches to the top CXP port in the PCIe adapter in the x16 PCIe slot in the server system unit and then attaches to the top CXP port in the fanout module in the I/O drawer. Its cable pair attaches to the bottom CXP port of the same PCIe adapter and to the bottom CXP port of the same fanout module. Note that the PCIe adapter providing the CXP ports on the server was named a PCIe3 "Optical" Cable Adapter. In hindsight, this naming was unfortunate because the adapter's CXP ports are not unique to optical cabling, but at the time, optical cables were the only connection option planned.
Copper and AOC cabling can be mixed on the same server. However, they cannot be mixed on the same PCIe Gen3 I/O drawer or mixed on the same fanout module.
Copper cables have the same operating system software prerequisites as AOC cables.
Racks
The Power S1014 server is designed to fit a standard 19-inch rack. IBM Development has tested and certified the system in the IBM Enterprise Rack (7965-S42). The 7965-S42 rack is a two-meter enterprise rack that provides 42U or 42 EIA of space. You can choose to place the server in other racks if you are confident those racks have the strength, rigidity, depth, and hole pattern characteristics required. You should work with IBM Service to determine the appropriateness of other racks.
It is highly recommended that the Power S1014 server be ordered with an IBM 42U Enterprise Rack (7965-S42). An initial system order is placed in a 7965-S42 rack. This is done to ease and speed client installation, provide a more complete and higher quality environment for IBM Manufacturing system assembly and testing, and provide a more complete shipping package.
Recommendation: The 7965-S42 rack has optimized cable routing, so all 42U may be populated with equipment.
The 7965-S42 rack does not need 2U on either the top or bottom for cable egress.
With the two-meter 7965-S42 rack, a rear rack extension of 12.7 centimeters (5 inches) feature ECRK provides space to hold cables on the side of the rack and keep the center area clear for cooling and service access.
Recommendation: Include the above extension when approximately more than 16 I/O cables per side are present or may be added in the future; when using the short-length, thinner SAS cables; or when using thinner I/O cables, such as Ethernet. If you use longer-length, thicker SAS cables, fewer cables will fit within the rack.
SAS cables are most commonly found with multiple EXP24SX SAS Drawers (#ESLS) driven by multiple PCIe SAS adapters. For this reason, it is good practice to keep multiple EXP24SX drawers in the same rack as the PCIe I/O drawer or in a separate rack close to the PCIe I/O drawer, using shorter, thinner SAS cables. The feature ECRK extension can be good to use even with smaller numbers of cables because it enhances the ease of cable management with the extra space it provides.
Multiple service personnel are required to manually remove or insert a system node drawer into a rack, given its dimensions, weight, and content.
Recommendation: To avoid any delay in service, obtain an optional lift tool (#EB3Z). A lighter, lower-cost lift tool is FC EB3Z (lift tool) and FC EB4Z (angled shelf kit for lift tool). The EB3Z lift tool provides a hand crank to lift and position a server up to 400 pounds. Note that a single system node can weigh up to 86.2 kilograms (190 pounds).
Note: Features EB3Z and EB4Z are not available to order in Albania, Bahrain, Bulgaria, Croatia, Egypt, Greece, Jordan, Kuwait, Kosovo, Montenegro, Morocco, Oman, UAE, Qatar, Saudi Arabia, Serbia, Slovakia, Slovenia, Taiwan, and Ukraine.
High-function (switched and monitored) PDUs plus
Hardware:
- IEC 62368-1 and IEC 60950 safety standards
- A new product safety approval
- No China 5000-meter altitude or tropical restrictions
- Detachable inlet for 3-phase delta-wired PDU with 30A, 50A, and 60A wall plugs
- IBM Technology and Qualification approved components, such as anti-sulfur resistors (ASRs)
- Ethernet 10/100/1000 Mb/s
Software:
- Internet Protocol (IP) version 4 and IPv6 support
- Secure Shell (SSH) protocol command line
- Ability to change passwords over a network
PDU description | 208 V 3-phase delta | 200 V--240 V 1-phase or 3-phase wye |
---|---|---|
High-Function 12xC13 | #ECJQ/#ECJP | #ECJN/#ECJM |
High-Function 9xC19 | #ECJL/#ECJK | #ECJJ/#ECJG |
These PDUs can be mounted vertically in rack-side pockets or they can be mounted horizontally. If mounted horizontally, they each use one EIA (1U) of rack space. See feature EPTH for horizontal mounting hardware, which is used when IBM Manufacturing doesn't automatically factory-install the PDU. Two RJ45 ports on the front of the PDU enable you to monitor each receptacle's electrical power usage and to remotely switch any receptacle on or off.
Recommendation: The PDU is shipped with a generic PDU password. IBM strongly urges you to change it upon installation.
Existing and new high-function (switched and monitored) PDUs have the same physical dimensions. New high-function (switched and monitored) PDUs can be supported in the same racks as existing PDUs. Mixing of PDUs in a rack on new orders is not allowed.
Also, all factory-integrated orders must have the same PDU line cord.
The PDU features ECJQ/ECJP and ECJL/ECJK with the Amphenol inlet connector require new PDU line cords:
- #ECJ5 - 4.3-meter (14-foot) PDU to Wall 3PH/24A 200--240V Delta-wired Power Cord
- #ECJ7 - 4.3-meter (14-foot) PDU to Wall 3PH/48A 200--240V Delta-wired Power Cord
No pigtail (like #ELC0) is available because an Amphenol male inline connector is unavailable.
The PDU features ECJJ/ECJG and ECJN/ECJM with the UTG624-7SKIT4/5 inlet connector use the existing PDU line cord features 6653, 6667, 6489, 6654, 6655, 6656, 6657, 6658, 6491, or 6492.
Reliability, Availability, and Serviceability
Reliability, fault tolerance, and data correction
The reliability of systems starts with components, devices, and subsystems that are designed to be highly reliable. During the design and development process, subsystems go through rigorous verification and integration testing processes. During system manufacturing, systems go through a thorough testing process to help ensure the highest level of product quality.
The Power10 processor-based scale-out systems come with the following RAS characteristics:
- Power10 processor RAS
- Open Memory Interface, DDIMMs RAS
- Enterprise BMC service processor for system management and service
- Active Memory Mirroring (AMM) for the hypervisor
- NVMe drives concurrent maintenance
- PCIe adapters concurrent maintenance
- Redundant and hot-plug cooling
- Redundant and hot-plug power
- Light path enclosure and FRU LEDs
- Service and FRU labels
- Client or IBM install
- Proactive support and service -- call home
- Client or IBM service
Service processor
Power10 scale-out systems come with a redesigned service processor based on a Baseboard Management Controller (BMC) design, with firmware that is accessible through open-source, industry-standard APIs such as Redfish. An upgraded ASMI web browser user interface preserves the required RAS functions while allowing the user to perform tasks in a more intuitive way.
Diagnostic monitoring of recoverable errors from the processor chipset is performed on the system processor itself, while fatal diagnostic monitoring of the processor chipset is performed by the service processor. The service processor runs on its own power boundary and does not require resources from a system processor to be operational to perform its tasks.
The service processor supports surveillance of the connection to the HMC and to the system firmware (hypervisor). It also provides several remote power control options, environmental monitoring, reset, restart, remote maintenance, and diagnostic functions, including console mirroring. The BMC service processor's menus (ASMI) can be accessed concurrently during system operation, allowing nondisruptive changes to system default parameters and the ability to view and download error logs and check system health.
Redfish, an industry standard for server management, enables Power servers to be managed individually or in a large data center. Standard functions such as inventory, event logs, sensors, dumps, and certificate management are all supported with Redfish. In addition, new user management features support multiple users and privileges on the BMC via Redfish or ASMI. User management via LDAP is also supported. The Redfish events service provides a means of notification for specific critical events so that actions can be taken to correct issues. The Redfish telemetry service provides access to a wide variety of data (for example, power consumption and ambient, core, DIMM, and I/O temperatures) that can be streamed at periodic intervals.
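As a minimal sketch of what such access can look like, the following Python example reads thermal sensor data from the eBMC over the standard Redfish REST interface. The host name, credentials, and exact resource layout are placeholders and assumptions; the resources actually exposed depend on the firmware level.

```python
# Minimal sketch of reading eBMC sensor data over the industry-standard
# Redfish REST API. Host, credentials, and resource names are placeholders;
# the exact resource tree depends on the firmware level.
import requests

BMC = "https://ebmc.example.com"     # placeholder eBMC address
AUTH = ("admin", "password")         # placeholder credentials

# Walk the Chassis collection and print temperature readings, if exposed.
chassis = requests.get(f"{BMC}/redfish/v1/Chassis", auth=AUTH, verify=False).json()
for member in chassis.get("Members", []):
    thermal_url = f"{BMC}{member['@odata.id']}/Thermal"
    thermal = requests.get(thermal_url, auth=AUTH, verify=False).json()
    for reading in thermal.get("Temperatures", []):
        print(reading.get("Name"), reading.get("ReadingCelsius"))
```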
Mutual surveillance
The service processor monitors the operation of the firmware during the boot process and also monitors the hypervisor for termination. The hypervisor monitors the service processor and reports a service reference code when it detects surveillance loss. In the PowerVM environment, it will perform a reset/reload if it detects the loss of the service processor.
Environmental monitoring functions
The Power family performs ambient and over-temperature monitoring and reporting. It also adjusts fan speeds automatically based on those temperatures.
Memory subsystem RAS
The Power10 scale-out system introduces a new 2U-tall DDIMM, which uses the new OpenCAPI-based Open Memory Interface (OMI) for resilient and fast communication to the processor. This new memory subsystem design delivers solid RAS, as described below.
Power10 processor functions
As in Power9, the Power10 processor has the ability to do processor instruction retry for some transient errors and core-contained checkstop for certain solid faults. The fabric bus design with CRC and retry persists in Power10 where a CRC code is used for checking data on the bus and has an ability to retry a faulty operation.
Cache availability
The L2/L3 caches in the Power10 processor and in the memory buffer chip are protected with double-bit detect, single-bit correct error correction code (ECC). In addition, a threshold of correctable errors detected on cache lines can result in the data in the cache lines being purged and the cache lines removed from further operation without requiring a reboot in the PowerVM environment.
Modified data would be handled through Special Uncorrectable Error handling. L1 data and instruction caches also have a retry capability for intermittent errors and a cache set delete mechanism for handling solid failures.
Special Uncorrectable Error handling
Special Uncorrectable Error (SUE) handling prevents an uncorrectable error in memory or cache from immediately causing the system to terminate. Rather, the system tags the data and determines whether it will ever be used again. If the error is irrelevant, it will not force a checkstop. When and if the data is used, the effect is contained: if the data would be transferred to an I/O device, the I/O adapters controlled by an I/O hub controller are frozen; otherwise, if the data is not owned by the hypervisor, termination may be limited to the program or kernel that owns the data.
PCI extended error handling
PCI extended error handling (EEH)-enabled adapters respond to a special data packet generated from the affected PCI slot hardware by calling system firmware, which will examine the affected bus, allow the device driver to reset it, and continue without a system reboot. For Linux, EEH support extends to the majority of frequently used devices, although some third-party PCI devices may not provide native EEH support.
Uncorrectable error recovery
When the auto-restart option is enabled, the system can automatically restart following an unrecoverable software error, hardware failure, or environmentally induced (AC power) failure.
Serviceability
The purpose of serviceability is to efficiently repair the system while attempting to minimize or eliminate impact to system operation. Serviceability includes system installation, MES (system upgrades/downgrades), and system maintenance/repair. Depending upon the system and warranty contract, service may be performed by the client, an IBM representative, or an authorized warranty service provider.
The serviceability features delivered in this system help provide a highly efficient service environment by incorporating the following attributes:
- Design for SSR setup, install, and service
- Error Detection and Fault Isolation (ED/FI)
- First Failure Data Capture (FFDC)
- Light path service indicators
- Service and FRU labels available on the system
- Service procedures documented in IBM Documentation or available through the HMC
- Automatic reporting of serviceable events to IBM through the Electronic Service Agent Call Home application
Service environment
In the PowerVM environment, the HMC is a dedicated server that provides functions for configuring and managing servers for either a partitioned or a full-system partition using a GUI, a command-line interface (CLI), or a REST API. An HMC attached to the system enables support personnel (with client authorization) to log in remotely, or locally at the physical HMC in proximity to the server being serviced, to review error logs and perform remote maintenance if required.
The Power10 processor-based systems support several service environments:
- Attachment to one or more HMCs or vHMCs is a supported option for the system with PowerVM. This is the default configuration for servers supporting logical partitions with dedicated or virtual I/O. In this case, all servers have at least one logical partition.
- No HMC. There are two service strategies for non-HMC systems:
- Full-system partition with PowerVM: A single partition owns all the server resources and only one operating system may be installed. The primary service interface is through the operating system and the service processor.
- Partitioned system with NovaLink: In this configuration, the system can have more than one partition and can be running more than one operating system. The primary service interface is through the service processor.
Service interface
Support personnel can use the service interface to communicate with the service support applications in a server using an operator console, a graphical user interface on the management console or service processor, or an operating system terminal. The service interface helps to deliver a clear, concise view of available service applications, helping the support team to manage system resources and service information in an efficient and effective way. Applications available through the service interface are carefully configured and placed to give service providers access to important service functions.
Different service interfaces are used, depending on the state of the system, hypervisor, and operating environment. The primary service interfaces are:
- LEDs
- Operator panel
- BMC Service Processor menu
- Operating system service menu
- Service Focal Point on the HMC or vHMC with PowerVM
In the light path LED implementation, the system can clearly identify components for replacement by using specific component-level LEDs and can also guide the servicer directly to the component by signaling (turning on solid) the enclosure fault LED and the component FRU fault LED. The servicer can also use the identify function to blink the FRU-level LED. When this function is activated, a roll-up to the blue enclosure identify LED occurs to identify an enclosure in a rack. These enclosure LEDs turn on solid and can be used to follow the light path from the enclosure down to the specific FRU in the PowerVM environment.
First Failure Data Capture and error data analysis
First Failure Data Capture (FFDC) is a technique that helps ensure that when a fault is detected in a system, the root cause of the fault is captured without the need to re-create the problem or run any sort of extended tracing or diagnostics program. For the vast majority of faults, a good FFDC design means that the root cause can also be detected automatically without servicer intervention.
FFDC information, error data analysis, and fault isolation are necessary to implement the advanced serviceability techniques that enable efficient service of the systems and to help determine the failing items.
In the rare absence of FFDC and Error Data Analysis, diagnostics are required to re-create the failure and determine the failing items.
Diagnostics
General diagnostic objectives are to detect and identify problems so they can be resolved quickly. Elements of IBM's diagnostics strategy include:
- Provide a common error code format equivalent to a system reference code with PowerVM, system reference number, checkpoint, or firmware error code.
- Provide fault detection and problem isolation procedures. Support remote connection ability to be used by the IBM Remote Support Center or IBM Designated Service.
- Provide interactive intelligence within the diagnostics with detailed online failure information while connected to IBM's back-end system.
Automatic diagnostics
The processor and memory FFDC technology is designed to perform without the need for re-create diagnostics or user intervention. Solid and intermittent errors are designed to be correctly detected and isolated at the time the failure occurs. Runtime and boot-time diagnostics fall into this category.
Standalone diagnostics
As the name implies, standalone or user-initiated diagnostics requires user intervention. The user must perform manual steps, including:
- Booting from the diagnostics CD, DVD, USB, or network
- Interactively selecting steps from a list of choices
Concurrent maintenance
The determination of whether a firmware release can be updated concurrently is identified in the readme information file that is released with the firmware. An HMC is required for concurrent firmware updates with PowerVM. In addition, concurrent maintenance of PCIe adapters and NVMe drives is supported with PowerVM. Power supplies, fans, and the operator panel LCD are hot-pluggable.
Service labels
Service providers use these labels to assist them in performing maintenance actions. Service labels are found in various formats and positions and are intended to transmit readily available information to the servicer during the repair process. Following are some of these service labels and their purpose:
- Location diagrams: Location diagrams are located on the system hardware, relating information regarding the placement of hardware components. Location diagrams may include location codes, drawings of physical locations, concurrent maintenance status, or other data pertinent to a repair. Location diagrams are especially useful when multiple components such as DIMMs, processors, fans, adapter cards, and power supplies are installed.
- Remove/replace procedures: Service labels that contain remove/replace procedures are often found on a cover of the system or in other spots accessible to the servicer. These labels provide systematic procedures, including diagrams detailing how to remove or replace certain serviceable hardware components.
- Arrows: Numbered arrows are used to indicate the order of operation and the serviceability direction of components. Some serviceable parts such as latches, levers, and touch points need to be pulled or pushed in a certain direction and in a certain order for the mechanical mechanisms to engage or disengage. Arrows generally improve the ease of serviceability.
QR labels
QR labels are placed on the system to provide access to key service functions through a mobile device. When the QR label is scanned, it directs you to a landing page for Power10 processor-based systems, which contains the service functions of interest for each machine type and model (MTM) while you are physically located at the server. These include installation and repair instructions, reference code lookup, and so on.
Packaging for service
The following service features are included in the physical packaging of the systems to facilitate service:
- Color coding (touch points): Blue-colored touch points delineate touchpoints on service components where the component can be safely handled for service actions such as removal or installation.
- Tool-less design: Selected IBM systems support tool-less or simple-tool designs. These designs require no tools, or simple tools such as flat-head screwdrivers, to service the hardware components.
- Positive retention: Positive retention mechanisms help to assure proper connections between hardware components such as cables to connectors, and between two cards that attach to each other. Without positive retention, hardware components run the risk of becoming loose during shipping or installation, preventing a good electrical connection. Positive retention mechanisms like latches, levers, thumbscrews, pop Nylatches (U-clips), and cables are included to help prevent loose connections and aid in installing (seating) parts correctly. These positive retention items do not require tools.
Error handling and reporting
In the event of system hardware or environmentally induced failure, the system runtime error capture capability systematically analyzes the hardware error signature to determine the cause of failure. The analysis result will be stored in system NVRAM. When the system can be successfully restarted either manually or automatically, or if the system continues to operate, the error will be reported to the operating system. Hardware and software failures are recorded in the system log filesystem. When an HMC is attached in the PowerVM environment, an ELA routine analyzes the error, forwards the event to the Service Focal Point (SFP) application running on the HMC, and notifies the system administrator that it has isolated a likely cause of the system problem. The service processor event log also records unrecoverable checkstop conditions, forwards them to the SFP application, and notifies the system administrator.
The system has the ability to call home through the operating system to report platform-recoverable errors and errors associated with PCI adapters/devices.
In the HMC-managed environment, a call home service request will be initiated from the HMC and the pertinent failure data with service parts information and part locations will be sent to an IBM service organization. Client contact information and specific system-related data such as the machine type, model, and serial number, along with error log data related to the failure, are sent to IBM Service.
Live Partition Mobility
With PowerVM Live Partition Mobility (LPM), users can migrate an AIX, IBM i, or Linux VM partition running on one Power system to another Power system without disrupting services. The migration transfers the entire system environment, including processor state, memory, attached virtual devices, and connected users. It provides continuous operating system and application availability during planned partition outages for repair of hardware and firmware faults. Power10 technology-based systems support secure LPM, whereby the VM image is encrypted and compressed prior to transfer. Secure LPM uses the on-chip encryption and compression capabilities of the Power10 processor for optimal performance.
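For administrators who script planned maintenance windows, the sketch below shows one way to drive a validate-then-migrate LPM flow from the HMC command line over SSH. The host name, user ID, managed system names, and partition name are placeholders, and the migrlpar options shown are the commonly documented ones; verify them against the CLI reference for your HMC level.

```python
# Minimal sketch: validate-then-migrate LPM flow driven from an HMC over SSH.
# Host, user, system, and partition names are placeholders; confirm migrlpar
# syntax against your HMC level before use.
import subprocess

HMC_HOST = "hscroot@hmc.example.com"   # assumption: SSH access to the HMC
SOURCE_SYS = "Power-S1014-src"          # managed system names as known to the HMC
TARGET_SYS = "Power-S1014-tgt"
LPAR_NAME = "prod_lpar01"

def hmc(cmd: str) -> str:
    """Run a single HMC CLI command over SSH and return its output."""
    result = subprocess.run(
        ["ssh", HMC_HOST, cmd],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

# 1. Validate the migration first; a clean validation is a prerequisite.
hmc(f"migrlpar -o v -m {SOURCE_SYS} -t {TARGET_SYS} -p {LPAR_NAME}")

# 2. Perform the active migration; the partition keeps running throughout.
hmc(f"migrlpar -o m -m {SOURCE_SYS} -t {TARGET_SYS} -p {LPAR_NAME}")
```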
Call home
Call home refers to an automatic or manual call from a client location to the IBM support structure with error log data, server status, or other service-related information. Call home invokes the service organization so that the appropriate service action can begin. Call home can be done through the Electronic Service Agent (ESA) embedded in the HMC, through a version of ESA embedded in the operating system for non-HMC-managed systems, or through a version of ESA that runs as a standalone call home application. While configuring call home is optional, clients are encouraged to implement this feature to obtain service enhancements such as reduced problem determination and faster and potentially more accurate transmittal of error information. In general, using the call home feature can result in increased system availability. See the next section for specific details on this application.
IBM Electronic Services
Electronic Service Agent and Client Support Portal (CSP) comprise the IBM Electronic Services solution, which is dedicated to providing fast, exceptional support to IBM clients. IBM Electronic Service Agent is a no-charge tool that proactively monitors and reports hardware events such as system errors and collects hardware and software inventory. Electronic Service Agent helps clients focus on their business initiatives, save time, and spend less effort managing day-to-day IT maintenance issues. In addition, the Call Home Cloud Connect web and mobile capability extends the common solution and offers IBM Systems-related support information applicable to servers and storage.
Details are available here: https://clientvantage.ibm.com/channel/ibm-call-home-connect.
System configuration and inventory information collected by Electronic Service Agent can also be used to improve problem determination and resolution between the client and the IBM support team. As part of an increased focus to provide even better service to IBM clients, Electronic Service Agent tool configuration and activation comes standard with the system. In support of this effort, an HMC External Connectivity security whitepaper has been published, which describes data exchanges between the HMC and the IBM Service Delivery Center (SDC) and the methods and protocols for this exchange. To read the whitepaper and prepare for Electronic Service Agent installation, see the "Security" section at the IBM Electronic Service Agent website.
Benefits: increased uptime
Electronic Service Agent is designed to enhance warranty and maintenance service by potentially providing faster hardware error reporting and uploading system information to IBM Support. This can reduce the time spent monitoring symptoms, diagnosing errors, and manually calling IBM Support to open a problem record. In addition, 24x7 monitoring and reporting means no dependency on human intervention or off-hours client personnel when errors are encountered in the middle of the night.
Security: The Electronic Service Agent tool is designed to help secure the monitoring, reporting, and storing of the data at IBM. The tool is designed to transmit data securely through the internet (HTTPS), providing clients a single point of exit from their site. Initiation of communication is one way: activating Electronic Service Agent does not enable IBM to call into a client's system.
For additional information, see the IBM Electronic Service Agent website.
More accurate reporting
Because system information and error logs are automatically uploaded to the IBM Support Center in conjunction with the service request, clients are not required to find and send system information, decreasing the risk of misreported or misdiagnosed errors. Once inside IBM, problem error data is run through a data knowledge management system, and knowledge articles are appended to the problem record.
Client Support Portal
Client Support Portal is a single internet entry point that replaces the multiple entry points traditionally used to access IBM Internet services and support. This web portal enables you to gain easier access to IBM resources for assistance in resolving technical problems.
This web portal provides valuable reports of installed hardware and software using information collected from the systems by IBM Electronic Service Agent. Reports are available for any system associated with the client's IBM ID.
For more information on how to use the Client Support Portal, visit the Client Support Portal website or contact an IBM Systems Services Representative.
Reference information
For additional information about IBM Power Expert Care service and support options, see Hardware Announcement JS22-0008, dated July 12, 2022.
For more information on Power10 scale-out servers, see Hardware Announcements:
JG22-0028, dated July 12, 2022; JG22-0029, dated July 12, 2022; JG22-0030, dated July 12, 2022; JG22-0032, dated July 12, 2022; JG22-0033, dated July 12, 2022.
Product number
The following are newly announced features on the specific models of the IBM Power 9105 machine type:
In the list that follows, each entry gives the feature description, followed by the machine type (9105), model number (41B), and feature number.
IBM Power S1014 9105 41B One CSC Billing Unit 9105 41B 0010 Ten CSC Billing Units 9105 41B 0011 Mirrored System Disk Level, Specify Code 9105 41B 0040 Device Parity Protection-All, Specify Code 9105 41B 0041 Device Parity RAID-6 All, Specify Code 9105 41B 0047 RISC-to-RISC Data Migration 9105 41B 0205 AIX Partition Specify 9105 41B 0265 Linux Partition Specify 9105 41B 0266 IBM i Operating System Partition Specify 9105 41B 0267 Specify Custom Data Protection 9105 41B 0296 Mirrored Level System Specify Code 9105 41B 0308 RAID Hot Spare Specify 9105 41B 0347 CBU Specify 9105 41B 0444 Customer Specified Placement 9105 41B 0456 Load Source Not in CEC 9105 41B 0719 Fiber Channel SAN Load Source Specify 9105 41B 0837 USB 500 GB Removable Disk Drive 9105 41B 1107 Custom Service Specify, Rochester Minn, USA 9105 41B 1140 300GB 15k RPM SAS SFF-2 Disk Drive (AIX/Linux) 9105 41B 1953 600GB 10k RPM SAS SFF-2 HDD for AIX/Linux 9105 41B 1964 Primary OS - IBM i 9105 41B 2145 Primary OS - AIX 9105 41B 2146 Primary OS - Linux 9105 41B 2147 Factory Deconfiguration of 1-core 9105 41B 2319 1.8 M (6-ft) Extender Cable for Displays (15-pin D-shell to 15-pin D-shell) 9105 41B 4242 Rack Integration Services 9105 41B 4649 Rack Indicator- Not Factory Integrated 9105 41B 4650 Rack Indicator, Rack #1 9105 41B 4651 Rack Indicator, Rack #2 9105 41B 4652 Rack Indicator, Rack #3 9105 41B 4653 Rack Indicator, Rack #4 9105 41B 4654 Rack Indicator, Rack #5 9105 41B 4655 Rack Indicator, Rack #6 9105 41B 4656 Rack Indicator, Rack #7 9105 41B 4657 Rack Indicator, Rack #8 9105 41B 4658 Rack Indicator, Rack #9 9105 41B 4659 Rack Indicator, Rack #10 9105 41B 4660 Rack Indicator, Rack #11 9105 41B 4661 Rack Indicator, Rack #12 9105 41B 4662 Rack Indicator, Rack #13 9105 41B 4663 Rack Indicator, Rack #14 9105 41B 4664 Rack Indicator, Rack #15 9105 41B 4665 Rack Indicator, Rack #16 9105 41B 4666 Solution Edition for IBM i (4-core) 9105 41B 4928 Software Preload Required 9105 41B 5000 PowerVM Enterprise Edition 9105 41B 5228 Sys Console On HMC 9105 41B 5550 System Console-Ethernet LAN adapter 9105 41B 5557 PCIe2 4-port 1GbE Adapter 9105 41B 5899 Power Cord 4.3m (14-ft), Drawer to IBM PDU (250V/ 10A) 9105 41B 6458 Power Cord 4.3m (14-ft), Drawer To OEM PDU (125V, 15A) 9105 41B 6460 Power Cord 4.3m (14-ft), Drawer to Wall/OEM PDU (250V/15A) U. S. 
9105 41B 6469 Power Cord 1.8m (6-ft), Drawer to Wall (125V/15A) 9105 41B 6470 Power Cord 2.7m (9-ft), Drawer to Wall/OEM PDU (250V/10A) 9105 41B 6471 Power Cord 2.7m (9-ft), Drawer to Wall/OEM PDU (250V/16A) 9105 41B 6472 Power Cord 2.7m (9-ft), Drawer to Wall/OEM PDU (250V/10A) 9105 41B 6473 Power Cord 2.7m (9-ft), Drawer to Wall/OEM PDU, (250V/13A) 9105 41B 6474 Power Cord 2.7m (9-ft), Drawer to Wall/OEM PDU, (250V/16A) 9105 41B 6475 Power Cord 2.7m (9-ft), Drawer to Wall/OEM PDU, (250V/10A) 9105 41B 6476 Power Cord 2.7m (9-ft), Drawer to Wall/OEM PDU, (250V/16A) 9105 41B 6477 Power Cord 2.7 M(9-foot), To Wall/OEM PDU, (250V, 16A) 9105 41B 6478 Power Cord 2.7m (9-ft), Drawer to Wall/OEM PDU, (125V/15A or 250V/10A ) 9105 41B 6488 4.3m (14-Ft) 3PH/32A 380-415V Power Cord 9105 41B 6489 4.3m (14-Ft) 1PH/63A 200-240V Power Cord 9105 41B 6491 4.3m (14-Ft) 1PH/60A (48A derated) 200-240V Power Cord 9105 41B 6492 Power Cord 2.7m (9-ft), Drawer to Wall/OEM PDU, (250V/10A) 9105 41B 6493 Power Cord 2.7m (9-ft), Drawer to Wall/OEM PDU, (250V/10A) 9105 41B 6494 Power Cord 2.7M (9-foot), To Wall/OEM PDU, (250V, 10A) 9105 41B 6496 Power Cable - Drawer to IBM PDU, 200-240V/10A 9105 41B 6577 Power Cord 2.7M (9-foot), To Wall/OEM PDU, (125V, 15A) 9105 41B 6651 4.3m (14-Ft) 3PH/16A 380-415V Power Cord 9105 41B 6653 4.3m (14-Ft) 1PH/30A (24A derated) Power Cord 9105 41B 6654 4.3m (14-Ft) 1PH/30A (24A derated) WR Power Cord 9105 41B 6655 4.3m (14-Ft) 1PH/32A Power Cord 9105 41B 6656 4.3m (14-Ft) 1PH/32A Power Cord-Australia 9105 41B 6657 4.3m (14-Ft) 1PH/30A (24A derated) Power Cord-Korea 9105 41B 6658 Power Cord 2.7M (9-foot), To Wall/OEM PDU, (250V, 15A) 9105 41B 6659 Power Cord 4.3m (14-ft), Drawer to Wall/OEM PDU (125V/15A) 9105 41B 6660 Power Cord 2.8m (9.2-ft), Drawer to IBM PDU, (250V/10A) 9105 41B 6665 4.3m (14-Ft) 3PH/32A 380-415V Power Cord-Australia 9105 41B 6667 Power Cord 4.3M (14-foot), Drawer to OEM PDU, (250V, 15A) 9105 41B 6669 Power Cord 2.7M (9-foot), Drawer to IBM PDU, 250V/10A 9105 41B 6671 Power Cord 2M (6.5-foot), Drawer to IBM PDU, 250V/10A 9105 41B 6672 Power Cord 2.7m (9-ft), Drawer to Wall/OEM PDU, (250V/10A) 9105 41B 6680 Intelligent PDU+, 1 EIA Unit, Universal UTG0247 Connector 9105 41B 7109 Power Distribution Unit 9105 41B 7188 Power Distribution Unit (US) - 1 EIA Unit, Universal, Fixed Power Cord 9105 41B 7196 Order Routing Indicator- System Plant 9105 41B 9169 Language Group Specify - US English 9105 41B 9300 New AIX License Core Counter 9105 41B 9440 New IBM i License Core Counter 9105 41B 9441 New Red Hat® License Core Counter 9105 41B 9442 New SUSE License Core Counter 9105 41B 9443 Other AIX License Core Counter 9105 41B 9444 Other Linux License Core Counter 9105 41B 9445 3rd Party Linux License Core Counter 9105 41B 9446 VIOS Core Counter 9105 41B 9447 Other License Core Counter 9105 41B 9449 Month Indicator 9105 41B 9461 Day Indicator 9105 41B 9462 Hour Indicator 9105 41B 9463 Minute Indicator 9105 41B 9464 Qty Indicator 9105 41B 9465 Countable Member Indicator 9105 41B 9466 Language Group Specify - Dutch 9105 41B 9700 Language Group Specify - French 9105 41B 9703 Language Group Specify - German 9105 41B 9704 Language Group Specify - Polish 9105 41B 9705 Language Group Specify - Norwegian 9105 41B 9706 Language Group Specify - Portuguese 9105 41B 9707 Language Group Specify - Spanish 9105 41B 9708 Language Group Specify - Italian 9105 41B 9711 Language Group Specify - Canadian French 9105 41B 9712 Language Group Specify - Japanese 9105 41B 9714 Language Group Specify - 
Traditional Chinese (Taiwan) 9105 41B 9715 Language Group Specify - Korean 9105 41B 9716 Language Group Specify - Turkish 9105 41B 9718 Language Group Specify - Hungarian 9105 41B 9719 Language Group Specify - Slovakian 9105 41B 9720 Language Group Specify - Russian 9105 41B 9721 Language Group Specify - Simplified Chinese (PRC) 9105 41B 9722 Language Group Specify - Czech 9105 41B 9724 Language Group Specify - Romanian 9105 41B 9725 Language Group Specify - Croatian 9105 41B 9726 Language Group Specify - Slovenian 9105 41B 9727 Language Group Specify - Brazilian Portuguese 9105 41B 9728 Language Group Specify - Thai 9105 41B 9729 10m (30.3-ft) - IBM MTP 12 strand cable for 40/ 100G transceivers 9105 41B EB2J 30m (90.3-ft) - IBM MTP 12 strand cable for 40/ 100G transceivers 9105 41B EB2K AC Titanium Power Supply - 1600W for Server (200-240 VAC) 9105 41B EB3S AC Titanium Power Supply - 1200W for Server (100-127V/200-240V) 9105 41B EB3W Lift tool based on GenieLift GL-8 (standard) 9105 41B EB3Z 10GbE Optical Transceiver SFP+ SR 9105 41B EB46 25GbE Optical Transceiver SFP28 9105 41B EB47 1GbE Base-T Transceiver RJ45 9105 41B EB48 QSFP28 to SFP28 Connector 9105 41B EB49 0.5m SFP28/25GbE copper Cable 9105 41B EB4J 1.0m SFP28/25GbE copper Cable 9105 41B EB4K 2.0m SFP28/25GbE copper Cable 9105 41B EB4M 2.0m QSFP28/100GbE copper split Cable to SFP28 4x25GbE 9105 41B EB4P Service wedge shelf tool kit for EB3Z 9105 41B EB4Z QSFP+ 40GbE Base-SR4 Transceiver 9105 41B EB57 100GbE Optical Transceiver QSFP28 9105 41B EB59 1.0M 100GbE Copper Cable QSFP28 9105 41B EB5K 1.5M 100GbE Copper Cable QSFP28 9105 41B EB5L 2.0M 100GbE Copper Cable QSFP28 9105 41B EB5M 3M 100GbE Optical Cable QSFP28 (AOC) 9105 41B EB5R 5M 100GbE Optical Cable QSFP28 (AOC) 9105 41B EB5S 10M 100GbE Optical Cable QSFP28 (AOC) 9105 41B EB5T 15M 100GbE Optical Cable QSFP28 (AOC) 9105 41B EB5U 20M 100GbE Optical Cable QSFP28 (AOC) 9105 41B EB5V 30M 100GbE Optical Cable QSFP28 (AOC) 9105 41B EB5W 50M 100GbE Optical Cable QSFP28 (AOC) 9105 41B EB5X IBM i 7.3 Indicator 9105 41B EB73 IBM i 7.4 Indicator 9105 41B EB74 IBM i 7.5 Indicator 9105 41B EB75 PCIe3 2-Port 10Gb NIC&ROCE SR/Cu Adapter 9105 41B EC2S PCIe3 2-Port 25/10Gb NIC&ROCE SR/Cu Adapter 9105 41B EC2U PCIe3 x8 1.6 TB NVMe Flash Adapter for AIX/Linux 9105 41B EC5B PCIe3 x8 3.2 TB NVMe Flash Adapter for AIX/Linux 9105 41B EC5D PCIe3 x8 6.4 TB NVMe Flash Adapter for AIX/Linux 9105 41B EC5F Enterprise 6.4 TB SSD PCIe4 NVMe U.2 module for AIX/Linux 9105 41B EC5V Enterprise 6.4 TB SSD PCIe4 NVMe U.2 module for IBM i 9105 41B EC5W Mainstream 800 GB SSD PCIe3 NVMe U.2 module for AIX/Linux 9105 41B EC5X PCIe4 2-port 100Gb ROCE EN adapter 9105 41B EC66 PCIe2 2-Port USB 3.0 Adapter 9105 41B EC6K PCIe3 x8 1.6 TB NVMe Flash Adapter for IBM i 9105 41B EC6V PCIe3 x8 3.2 TB NVMe Flash Adapter for IBM i 9105 41B EC6X PCIe3 x8 6.4 TB NVMe Flash Adapter for IBM i 9105 41B EC6Z PCIe4 2-port 100Gb Crypto Connectx-6 DX QFSP56 9105 41B EC78 PCIe4 1.6TB NVMe Flash Adapter x8 for AIX/Linux 9105 41B EC7B PCIe4 3.2TB NVMe Flash Adapter x8 for AIX/Linux 9105 41B EC7D PCIe4 6.4TB NVMe Flash Adapter x8 for AIX/Linux 9105 41B EC7F PCIe4 1.6TB NVMe Flash Adapter x8 for IBM i 9105 41B EC7K PCIe4 3.2TB NVMe Flash Adapter x8 for IBM i 9105 41B EC7M PCIe4 6.4TB NVMe Flash Adapter x8 for IBM i 9105 41B EC7P 800GB Mainstream NVMe U.2 SSD 4k for AIX/Linux 9105 41B EC7T SAS X Cable 3m - HD Narrow 6Gb 2-Adapters to Enclosure 9105 41B ECBJ SAS X Cable 6m - HD Narrow 6Gb 2-Adapters to Enclosure 9105 41B ECBK SAS YO Cable 
1.5m - HD Narrow 6Gb Adapter to Enclosure 9105 41B ECBT SAS YO Cable 3m - HD Narrow 6Gb Adapter to Enclosure 9105 41B ECBU SAS YO Cable 6m - HD Narrow 6Gb Adapter to Enclosure 9105 41B ECBV SAS YO Cable 10m - HD Narrow 6Gb Adapter to Enclosure 9105 41B ECBW SAS AE1 Cable 4m - HD Narrow 6Gb Adapter to Enclosure 9105 41B ECBY SAS YE1 Cable 3m - HD Narrow 6Gb Adapter to Enclosure 9105 41B ECBZ 3M Optical Cable Pair for PCIe3 Expansion Drawer 9105 41B ECC7 10M Optical Cable Pair for PCIe3 Expansion Drawer 9105 41B ECC8 System Port Converter Cable for UPS 9105 41B ECCF 3M Copper CXP Cable Pair for PCIe3 Expansion Drawer 9105 41B ECCS 3M Active Optical Cable Pair for PCIe3 Expansion Drawer 9105 41B ECCX 10M Active Optical Cable Pair for PCIe3 Expansion Drawer 9105 41B ECCY 3.0M SAS X12 Cable (Two Adapter to Enclosure) 9105 41B ECDJ 4.5M SAS X12 Active Optical Cable (Two Adapter to Enclosure) 9105 41B ECDK 10M SAS X12 Active Optical Cable (Two Adapter to Enclosure) 9105 41B ECDL 1.5M SAS YO12 Cable (Adapter to Enclosure) 9105 41B ECDT 3.0M SAS YO12 Cable (Adapter to Enclosure) 9105 41B ECDU 4.5M SAS YO12 Active Optical Cable (Adapter to Enclosure) 9105 41B ECDV 10M SAS YO12 Active Optical Cable (Adapter to Enclosure) 9105 41B ECDW 0.6M SAS AA12 Cable (Adapter to Adapter) 9105 41B ECE0 3.0M SAS AA12 Cable 9105 41B ECE3 4.5M SAS AA12 Active Optical Cable (Adapter to Adapter) 9105 41B ECE4 4.3m (14-Ft) PDU to Wall 3PH/24A 200-240V Delta-wired Power Cord 9105 41B ECJ5 4.3m (14-Ft) PDU to Wall 3PH/40A 200-240V Power Cord 9105 41B ECJ6 4.3m (14-Ft) PDU to Wall 3PH/48A 200-240V Delta-wired Power Cord 9105 41B ECJ7 High Function 9xC19 Single-Phase or Three-Phase Wye PDU plus 9105 41B ECJJ High Function 9xC19 PDU plus 3-Phase Delta 9105 41B ECJL High Function 12xC13 Single-Phase or Three-Phase Wye PDU plus 9105 41B ECJN High Function 12xC13 PDU plus 3-Phase Delta 9105 41B ECJQ Custom Service Specify, Mexico 9105 41B ECSM Custom Service Specify, Poughkeepsie, USA 9105 41B ECSP Optical Wrap Plug 9105 41B ECW0 SAP HANA TRACKING FEATURE 9105 41B EHKV Boot Drive / Load Source in EXP24SX Specify (in #ESLS or #ELLS) 9105 41B EHR2 SSD Placement Indicator - #ESLS/#ELLS 9105 41B EHS2 PCIe3 RAID SAS Adapter Quad-port 6Gb x8 9105 41B EJ0J PCIe3 12GB Cache RAID SAS Adapter Quad-port 6Gb x8 9105 41B EJ0L PCIe3 SAS Tape/DVD Adapter Quad-port 6Gb x8 9105 41B EJ10 PCIe3 12GB Cache RAID PLUS SAS Adapter Quad-port 6Gb x8 9105 41B EJ14 Storage Backplane with eight NVMe U.2 drive slots 9105 41B EJ1Y PCIe x16 to CXP Optical or CU converter Adapter for PCIe3 Expansion Drawer 9105 41B EJ20 PCIe4 x16 to CXP Converter Adapter (support AOC) 9105 41B EJ2A PCIe3 Crypto Coprocessor no BSC 4767 9105 41B EJ32 PCIe3 Crypto Coprocessor BSC-Gen3 4767 9105 41B EJ33 PCIe3 Crypto Coprocessor no BSC 4769 9105 41B EJ35 PCIe3 Crypto Coprocessor BSC-Gen3 4769 9105 41B EJ37 Non-paired Indicator EJ14 PCIe SAS RAID+ Adapter 9105 41B EJRL Non-paired Indicator EJ0L PCIe SAS RAID Adapter 9105 41B EJRU Front OEM Bezel for 16 NVMe-bays Backplane Rack-Mount 9105 41B EJUV Front OEM Bezel for 16 NVMe-bays and RDX Backplane Rack-Mount 9105 41B EJUX IBM Cover and Doors for 16 NVMe-bays Backplane Desk-side 9105 41B EJUY OEM Cover and Doors for 16 NVMe-bays Backplane Desk-side 9105 41B EJUZ IBM Cover and Doors for 16 NVMe-bays and RDX Backplane Desk-side 9105 41B EJVY OEM Cover and Doors for 16 NVMe-bays and RDX Backplane Desk-side 9105 41B EJVZ Specify Mode-1 & CEC SAS Ports & (2)YO12 for EXP24SX #ESLS/ELS 9105 41B EJW0 Specify Mode-1 & (1)EJ0J/EJ0M/EL3B/EL59 
& (1)YO12 for EXP24SX #ESLS/ELLS 9105 41B EJW1 Specify Mode-1 & (2)EJ0J/EJ0M/EL3B/EL59 & (2)YO12 for EXP24SX #ESLS/ELLS 9105 41B EJW2 Specify Mode-2 & (2)EJ0J/EJ0M/EL3B/EL59 & (2)X12 for EXP24SX #ESLS/ELLS 9105 41B EJW3 Specify Mode-2 & (4)EJ0J/EJ0M/EL3B/EL59 & (2)X12 for EXP24SX #ESLS/ELLS 9105 41B EJW4 Specify Mode-4 & (4)EJ0J/EJ0M/EL3B/EL59 & (2)X12 for EXP24SX #ESLS/ELLS 9105 41B EJW5 Specify Mode-2 & (1)EJ0J/EJ0M/EL3B/EL59 & (2)YO12 for EXP24SX #ESLS/ELLS 9105 41B EJW6 Specify Mode-2 & (2)EJ0J/EJ0M/EL3B/EL59 & (2)YO12 for EXP24SX #ESLS/ELLS 9105 41B EJW7 Specify Mode-2 & (1)EJ0J/EJ0M/EL3B/EL59 & (1)YO12 for EXP24SX #ESLS/ELLS 9105 41B EJWA Specify Mode-2 & (2)EJ0J/EJ0M/EL3B/EL59 & (1)X12 for EXP24SX #ESLS/ELLS 9105 41B EJWB Specify Mode-4 & (1)EJ0J/EJ0M/EL3B/EL59 & (1)X12 for EXP24SX #ESLS/ELLS 9105 41B EJWC Specify Mode-4 & (2)EJ0J/EJ0M/EL3B/EL59 & (1)X12 for EXP24SX #ESLS/ELLS 9105 41B EJWD Specify Mode-4 & (3)EJ0J/EJ0M/EL3B/EL59 & (2)X12 for EXP24SX #ESLS/ELLS 9105 41B EJWE Specify Mode-1 & (2)EJ14 & (2)YO12 for EXP24SX #ESLS/ELLS 9105 41B EJWF Specify Mode-2 & (2)EJ14 & (2)X12 for EXP24SX #ESLS/ELLS 9105 41B EJWG Specify Mode-2 & (2)EJ14 & (1)X12 for EXP24SX #ESLS/ELLS 9105 41B EJWH Specify Mode-2 & (4)EJ14 & (2)X12 for EXP24SX #ESLS/ELLS 9105 41B EJWJ Front IBM Bezel for 16 NVMe-bays Backplane Rack-Mount 9105 41B EJXU Front IBM Bezel for 16 NVMe-bays and RDX Backplane Rack-Mount 9105 41B EJXW 300GB 15k RPM SAS SFF-2 Disk Drive (Linux) 9105 41B EL1P 600GB 10k RPM SAS SFF-2 Disk Drive (Linux) 9105 41B EL1Q ESMD Load Source Specify (931GB SSD SFF-2) 9105 41B EL9D ESMH Load Source Specify (1.86TB SSD SFF-2) 9105 41B EL9H ESMS Load Source Specify (3.72TB SSD SFF-2) 9105 41B EL9S ESMX Load Source Specify (7.44TB SSD SFF-2) 9105 41B EL9X PDU Access Cord 0.38m 9105 41B ELC0 4.3m (14-Ft) PDU to Wall 24A 200-240V Power Cord North America 9105 41B ELC1 4.3m (14-Ft) PDU to Wall 3PH/24A 415V Power Cord North America 9105 41B ELC2 Power Cable - Drawer to IBM PDU (250V/10A) 9105 41B ELC5 600GB 10K RPM SAS SFF-2 Disk Drive 4K Block - 4096 9105 41B ELEV 1.2TB 10K RPM SAS SFF-2 Disk Drive 4K Block - 4096 9105 41B ELF3 1.8TB 10K RPM SAS SFF-2 Disk Drive 4K Block - 4096 9105 41B ELFT ESKM Load Source Specify (931GB SSD SFF-2) 9105 41B ELKM ESKR Load Source Specify (1.86TB SSD SFF-2) 9105 41B ELKR ESKV Load Source Specify (3.72TB SSD SFF-2) 9105 41B ELKV ESKZ Load Source Specify (7.44TB SSD SFF-2) 9105 41B ELKZ ES1F Load Source Specify (1.6 TB 4K NVMe U.2 SSD PCIe4 for IBM i) 9105 41B ELS3 ES1K Load Source Specify (800 GB 4K NVMe U.2 SSD PCIe4 for IBM i) 9105 41B ELSG ES1H Load Source Specify (3.2 TB 4K NVMe U.2 SSD for IBM i) 9105 41B ELSQ #ESF2 Load Source Specify (1.1TB HDD SFF-2) 9105 41B ELT2 #ESFS Load Source Specify (1.7TB HDD SFF-2) 9105 41B ELTS #ESEU Load Source Specify (571GB HDD SFF-2) 9105 41B ELTU ESK9 Load Source Specify (387GB SSD SFF-2) 9105 41B ELU9 ESKD Load Source Specify (775GB SSD SFF-2) 9105 41B ELUD ESKH Load Source Specify (1.55TB SSD SFF-2) 9105 41B ELUH ESJK Load Source Specify (931GB SSD SFF-2) 9105 41B ELUK #ESNL Load Source Specify (283GB HDD SFF-2) 9105 41B ELUL ESJM Load Source Specify (1.86TB SSD SFF-2) 9105 41B ELUM ESJP Load Source Specify (3.72TB SSD SFF-2) 9105 41B ELUP #ESNQ Load Source Specify (571GB HDD SFF-2) 9105 41B ELUQ ESJR Load Source Specify (7.44TB SSD SFF-2) 9105 41B ELUR EC5W Load Source Specify (6.4 TB 4K NVMe U.2 SSD for IBM i) 9105 41B ELUW ETK9 Load Source Specify (387 GB SSD SFF-2) 9105 41B ELV9 ETKD Load Source Specify (775 GB SSD SFF-2) 9105 
41B ELVD ETKH Load Source Specify (1.55 TB SSD SFF-2) 9105 41B ELVH EC7K Load Source Specify (1.6TB SSD NVMe adapter for IBM i) 9105 41B ELVK EC7M Load Source Specify (3.2TB SSD NVMe adapter for IBM i) 9105 41B ELVM EC7P Load Source Specify (6.4TB SSD NVMe adapter for IBM i) 9105 41B ELVP ES3A Load Source Specify (800 GB 4K NVMe U.2 SSD PCIe4 for IBM i) 9105 41B ELYA ES3C Load Source Specify (1.6 TB 4K NVMe U.2 SSD PCIe4 for IBM i) 9105 41B ELYC ES3E Load Source Specify (3.2 TB 4K NVMe U.2 SSD PCIe4 for IBM i) 9105 41B ELYE ES3G Load Source Specify (6.4 TB 4K NVMe U.2 SSD PCIe4 for IBM i) 9105 41B ELYG ES95 Load Source Specify (387GB SSD SFF-2) 9105 41B ELZ5 ESNB Load Source Specify (775GB SSD SFF-2) 9105 41B ELZB ESNF Load Source Specify (1.55TB SSD SFF-2) 9105 41B ELZF 32GB (2x16GB) DDIMMs, 3200 MHz, 8GBIT DDR4 Memory 9105 41B EM6N 64GB (2x32GB) DDIMMs, 3200 MHz, 8GBIT DDR4 Memory 9105 41B EM6W 128GB (2x64GB) DDIMMs, 3200 MHz, 16GBIT DDR4 Memory 9105 41B EM6X 256GB (2x128GB) DDIMMs, 2666 MHz, 16GBIT DDR4 Memory 9105 41B EM6Y PCIe Gen3 I/O Expansion Drawer 9105 41B EMX0 AC Power Supply Conduit for PCIe3 Expansion Drawer 9105 41B EMXA PCIe3 6-Slot Fanout Module for PCIe3 Expansion Drawer 9105 41B EMXF PCIe3 6-Slot Fanout Module for PCIe3 Expansion Drawer 9105 41B EMXG PCIe3 6-Slot Fanout Module for PCIe3 Expansion Drawer 9105 41B EMXH 1m (3.3-ft), 10Gb E'Net Cable SFP+ Act Twinax Copper 9105 41B EN01 3m (9.8-ft), 10Gb E'Net Cable SFP+ Act Twinax Copper 9105 41B EN02 5m (16.4-ft), 10Gb E'Net Cable SFP+ Act Twinax Copper 9105 41B EN03 PCIe2 4-Port (10Gb+1GbE) SR+RJ45 Adapter 9105 41B EN0S PCIe2 4-port (10Gb+1GbE) Copper SFP+RJ45 Adapter 9105 41B EN0U PCIe2 2-port 10/1GbE BaseT RJ45 Adapter 9105 41B EN0W PCIe3 32Gb 2-port Fibre Channel Adapter 9105 41B EN1A PCIe3 16Gb 4-port Fibre Channel Adapter 9105 41B EN1C PCIe3 16Gb 4-port Fibre Channel Adapter 9105 41B EN1E PCIe3 2-Port 16Gb Fibre Channel Adapter 9105 41B EN1G PCIe4 32Gb 2-port Optical Fibre Channel Adapter 9105 41B EN1J PCIe3 16Gb 2-port Fibre Channel Adapter 9105 41B EN2A 188 GB IBM i NVMe Load Source Namespace size 9105 41B ENS1 393 GB IBM i NVMe Load Source Namespace size 9105 41B ENS2 200 GB IBM i NVMe Load Source Namespace size 9105 41B ENSA 400 GB IBM i NVMe Load Source Namespace size 9105 41B ENSB Specify Code Configure all IBM i Namespaces 9105 41B ENSM Deactivation of LPM (Live Partition Mobility) 9105 41B EPA0 One Processor Core Activation for EPG2 9105 41B EPF6 One Processor Core Activation for EPG0 9105 41B EPFT 4-core Typical 3.0 to 3.90 Ghz (max) Power10 Processor 9105 41B EPG0 8-core Typical 3.00 to 3.90 Ghz (max) Power10 Processor 9105 41B EPG2 Horizontal PDU Mounting Hardware 9105 41B EPTH High Function 9xC19 PDU: Switched, Monitoring 9105 41B EPTJ High Function 9xC19 PDU 3-Phase: Switched, Monitoring 9105 41B EPTL High Function 12xC13 PDU: Switched, Monitoring 9105 41B EPTN High Function 12xC13 PDU 3-Phase: Switched, Monitoring 9105 41B EPTQ Rack-Mount Rail Tower to Rack Conversion Kit 9105 41B ERKZ Enterprise 1.6 TB SSD PCIe4 NVMe U.2 module for AIX/Linux 9105 41B ES1E Enterprise 1.6 TB SSD PCIe4 NVMe U.2 module for IBM i 9105 41B ES1F Enterprise 3.2 TB SSD PCIe4 NVMe U.2 module for AIX/Linux 9105 41B ES1G Enterprise 3.2 TB SSD PCIe4 NVMe U.2 module for IBM i 9105 41B ES1H Enterprise 800GB SSD PCIe4 NVMe U.2 module for IBM i 9105 41B ES1K Enterprise 800GB SSD PCIe4 NVMe U.2 module for IBM i 9105 41B ES3A Enterprise 1.6 TB SSD PCIe4 NVMe U.2 module for AIX/Linux 9105 41B ES3B Enterprise 1.6 TB SSD PCIe4 NVMe U.2 
module for IBM i 9105 41B ES3C Enterprise 3.2 TB SSD PCIe4 NVMe U.2 module for AIX/Linux 9105 41B ES3D Enterprise 3.2 TB SSD PCIe4 NVMe U.2 module for IBM i 9105 41B ES3E Enterprise 6.4 TB SSD PCIe4 NVMe U.2 module for AIX/Linux 9105 41B ES3F Enterprise 6.4 TB SSD PCIe4 NVMe U.2 module for IBM i 9105 41B ES3G 387GB Enterprise SAS 4k SFF-2 SSD for AIX/Linux 9105 41B ES94 387GB Enterprise SAS 4k SFF-2 SSD for IBM i 9105 41B ES95 387GB Enterprise SAS 5xx SFF-2 SSD for AIX/Linux 9105 41B ESB2 775GB Enterprise SAS 5xx SFF-2 SSD for AIX/Linux 9105 41B ESB6 387GB Enterprise SAS 4k SFF-2 SSD for AIX/Linux 9105 41B ESBA 387GB Enterprise SAS 4k SFF-2 SSD for IBM i 9105 41B ESBB 775GB Enterprise SAS 4k SFF-2 SSD for AIX/Linux 9105 41B ESBG 775GB Enterprise SAS 4k SFF-2 SSD for IBM i 9105 41B ESBH 1.55TB Enterprise SAS 4k SFF-2 SSD for AIX/Linux 9105 41B ESBL 1.55TB Enterprise SAS 4k SFF-2 SSD for IBM i 9105 41B ESBM S&H - No Charge 9105 41B ESC0 S&H-b 9105 41B ESC6 Virtual Capacity Expedited Shipment 9105 41B ESCT iSCSI SAN Load Source Specify for AIX 9105 41B ESCZ 571GB 10K RPM SAS SFF- HDD 4K for IBM i 9105 41B ESEU 600GB 10K RPM SAS SFF-2 HDD 4K for AIX/Linux 9105 41B ESEV 1.1TB 10K RPM SAS SFF-2 HDD 4K for IBM i 9105 41B ESF2 1.2TB 10K RPM SAS SFF-2 HDD 4K for AIX/Linux 9105 41B ESF3 1.7TB 10K RPM SAS SFF-2 HDD 4K for IBM i 9105 41B ESFS 1.8TB 10K RPM SAS SFF-2 HDD 4K for AIX/Linux 9105 41B ESFT 387GB Enterprise SAS 5xx SFF-2 SSD for AIX/Linux 9105 41B ESGV 775GB Enterprise SAS 5xx SFF-2 SSD for AIX/Linux 9105 41B ESGZ 931GB Mainstream SAS 4k SFF-2 SSD for AIX/Linux 9105 41B ESJ0 931GB Mainstream SAS 4k SFF-2 SSD for IBM i 9105 41B ESJ1 1.86TB Mainstream SAS 4k SFF-2 SSD for AIX/Linux 9105 41B ESJ2 1.86TB Mainstream SAS 4k SFF-2 SSD for IBM i 9105 41B ESJ3 3.72TB Mainstream SAS 4k SFF-2 SSD for AIX/Linux 9105 41B ESJ4 3.72TB Mainstream SAS 4k SFF-2 SSD for IBM i 9105 41B ESJ5 7.45TB Mainstream SAS 4k SFF-2 SSD for AIX/Linux 9105 41B ESJ6 7.45TB Mainstream SAS 4k SFF-2 SSD for IBM i 9105 41B ESJ7 931GB Mainstream SAS 4k SFF-2 SSD for AIX/Linux 9105 41B ESJJ 931GB Mainstream SAS 4k SFF-2 SSD for IBM i 9105 41B ESJK 1.86TB Mainstream SAS 4k SFF-2 SSD for AIX/Linux 9105 41B ESJL 1.86TB Mainstream SAS 4k SFF-2 SSD for IBM i 9105 41B ESJM 3.72TB Mainstream SAS 4k SFF-2 SSD for AIX/Linux 9105 41B ESJN 3.72TB Mainstream SAS 4k SFF-2 SSD for IBM i 9105 41B ESJP 7.44TB Mainstream SAS 4k SFF-2 SSD for AIX/Linux 9105 41B ESJQ 7.44TB Mainstream SAS 4k SFF-2 SSD for IBM i 9105 41B ESJR 387GB Enterprise SAS 5xx SFF-2 SSD for AIX/Linux 9105 41B ESK1 775GB Enterprise SAS 5xx SFF-2 SSD for AIX/Linux 9105 41B ESK3 387GB Enterprise SAS 4k SFF-2 SSD for AIX/Linux 9105 41B ESK8 387GB Enterprise SAS 4k SFF-2 SSD for IBM i 9105 41B ESK9 775GB Enterprise SAS 4k SFF-2 SSD for AIX/Linux 9105 41B ESKC 775GB Enterprise SAS 4k SFF-2 SSD for IBM i 9105 41B ESKD 1.55TB Enterprise SAS 4k SFF-2 SSD for AIX/Linux 9105 41B ESKG 1.55TB Enterprise SAS 4k SFF-2 SSD for IBM i 9105 41B ESKH 931GB Mainstream SAS 4k SFF-2 SSD for AIX/Linux 9105 41B ESKK 931GB Mainstream SAS 4k SFF-2 SSD for IBM i 9105 41B ESKM 1.86TB Mainstream SAS 4k SFF-2 SSD for AIX/Linux 9105 41B ESKP 1.86TB Mainstream SAS 4k SFF-2 SSD for IBM i 9105 41B ESKR 3.72TB Mainstream SAS 4k SFF-2 SSD for AIX/Linux 9105 41B ESKT 3.72TB Mainstream SAS 4k SFF-2 SSD for IBM i 9105 41B ESKV 7.44TB Mainstream SAS 4k SFF-2 SSD for AIX/Linux 9105 41B ESKX 7.44TB Mainstream SAS 4k SFF-2 SSD for IBM i 9105 41B ESKZ Specify AC Power Supply for EXP12SX/EXP24SX Storage Enclosure 9105 41B 
ESLA ESBB Load Source Specify (387GB SSD SFF-2) 9105 41B ESLB ESBH Load Source Specify (775GB SSD SFF-2) 9105 41B ESLH ESBM Load Source Specify (1.55TB SSD SFF-2) 9105 41B ESLM EXP24SX SAS Storage Enclosure 9105 41B ESLS Load Source Specify for EC6V (NVMe 1.6 TB SSD for IBM i) 9105 41B ESLV Load Source Specify for EC6X (NVMe 3.2 TB SSD for IBM i) 9105 41B ESLX Load Source Specify for EC6Z (NVMe 6.4 TB SSD for IBM i) 9105 41B ESLZ 931GB Mainstream SAS 4k SFF-2 SSD for AIX/Linux 9105 41B ESMB 931GB Mainstream SAS 4k SFF-2 SSD for IBM i 9105 41B ESMD 1.86TB Mainstream SAS 4k SFF-2 SSD for AIX/Linux 9105 41B ESMF 1.86TB Mainstream SAS 4k SFF-2 SSD for IBM i 9105 41B ESMH 3.72TB Mainstream SAS 4k SFF-2 SSD for AIX/Linux 9105 41B ESMK 3.72TB Mainstream SAS 4k SFF-2 SSD for IBM i 9105 41B ESMS 7.44TB Mainstream SAS 4k SFF-2 SSD for AIX/Linux 9105 41B ESMV 7.44TB Mainstream SAS 4k SFF-2 SSD for IBM i 9105 41B ESMX 775GB Enterprise SAS 4k SFF-2 SSD for AIX/Linux 9105 41B ESNA 775GB Enterprise SAS 4k SFF-2 SSD for IBM i 9105 41B ESNB 1.55TB Enterprise SAS 4k SFF-2 SSD for AIX/Linux 9105 41B ESNE 1.55TB Enterprise SAS 4k SFF-2 SSD for IBM i 9105 41B ESNF 283GB 15K RPM SAS SFF-2 4k Block Cached Disk Drive (IBM i) 9105 41B ESNL 300GB 15K RPM SAS SFF-2 4k Block Cached Disk Drive (AIX/Linux) 9105 41B ESNM 571GB 15K RPM SAS SFF-2 4k Block Cached Disk Drive (IBM i) 9105 41B ESNQ 600GB 15K RPM SAS SFF-2 4k Block Cached Disk Drive (AIX/Linux) 9105 41B ESNR 300GB 15K RPM SAS SFF-2 4k Block Cached Disk Drive (Linux) 9105 41B ESRM 600GB 15K RPM SAS SFF-2 4k Block Cached Disk Drive (Linux) 9105 41B ESRR AIX Update Access Key (UAK) 9105 41B ESWK 387GB Enterprise SAS 5xx SFF-2 SSD for AIX/Linux 9105 41B ETK1 775GB Enterprise SAS 5xx SFF-2 SSD for AIX/Linux 9105 41B ETK3 387GB Enterprise SAS 4k SFF-2 SSD for AIX/Linux 9105 41B ETK8 387GB Enterprise SAS 4k SFF-2 SSD for IBM i 9105 41B ETK9 775GB Enterprise SAS 4k SFF-2 SSD for AIX/Linux 9105 41B ETKC 775GB Enterprise SAS 4k SFF-2 SSD for IBM i 9105 41B ETKD 1.55TB Enterprise SAS 4k SFF-2 SSD for AIX/Linux 9105 41B ETKG 1.55TB Enterprise SAS 4k SFF-2 SSD for IBM i 9105 41B ETKH 1TB Removable Disk Drive Cartridge 9105 41B EU01 RDX 320 GB Removable Disk Drive 9105 41B EU08 Operator Panel LCD Display 9105 41B EU0K 1.5TB Removable Disk Drive Cartridge 9105 41B EU15 Cable Ties & Labels 9105 41B EU19 Order Placed Indicator 9105 41B EU29 Express Edition 4 core (IBM i) 9105 41B EU2C 2TB Removable Disk Drive Cartridge (RDX) 9105 41B EU2T ESJ1 Load Source Specify (931GB SSD SFF-2) 9105 41B EU41 ESJ3 Load Source Specify (1.86TB SSD SFF-2) 9105 41B EU43 ESJ5 Load Source Specify (3.72TB SSD SFF-2) 9105 41B EU45 ESJ7 Load Source Specify (7.45TB SSD SFF-2) 9105 41B EU47 RDX USB Internal Docking Station 9105 41B EUA0 RDX USB External Docking Station 9105 41B EUA4 Note: Feature EUA4 is not supported in Armenia, Azerbaijan, China, India, Japan, Kazakhstan, Kyrgyzstan, Mexico, Saudi Arabia, Taiwan, Turkmenistan, and Uzbekistan. Standalone USB DVD drive w/cable 9105 41B EUA5 Enable Virtual Serial Number 9105 41B EVSN BP Post-Sale Services: 1 Day 9105 41B SVBP IBM Systems Lab Services Post-Sale Services: 1 Day 9105 41B SVCS Other IBM Post-Sale Services: 1 Day 9105 41B SVNN
The following are newly announced features on the specific models of the IBM Power 7965 machine type:
Planned availability date: July 22, 2022
New Feature
Description | Machine type | Model number | Feature number |
---|---|---|---|
Rack Content Specify 9105-41B 4EIA unit | 7965 | S42 | ER3D |
Feature conversions
The existing components being replaced during a model or feature conversion become the property of IBM and must be returned.
Feature conversions are always implemented on a "quantity of one for quantity of one" basis. Multiple existing features may not be converted to a single new feature. Single existing features may not be converted to multiple new features.
The following conversions are available to clients:
Feature conversions for 9105-41B adapter features:
From FC: | To FC: | Return parts |
---|---|---|
EJ20 - PCIe x16 to CXP Optical or CU converter Adapter for PCIe3 Expansion Drawer | EJ2A - PCIe4 x16 to CXP Converter Adapter (support AOC) | No |
EJ35 - PCIe3 Crypto Coprocessor no BSC 4769 | EJ37 - PCIe3 Crypto Coprocessor BSC-Gen3 4769 | No |
Feature conversions for 9105-41B cable features:
From FC: | To FC: | Return parts |
---|---|---|
ECC7 - 3M Optical Cable Pair for PCIe3 Expansion Drawer | ECCX - 3M Active Optical Cable Pair for PCIe3 Expansion Drawer | No |
ECC8 - 10M Optical Cable Pair for PCIe3 Expansion Drawer | ECCY - 10M Active Optical Cable Pair for PCIe3 Expansion Drawer | No |
Feature conversions for 9105-41B miscellaneous features:
From FC: | To FC: | Return parts |
---|---|---|
EJUZ - OEM Cover and Doors for 16 NVMe-bays Backplane Desk-side | EJUV - Front OEM Bezel for 16 NVMe-bays Backplane Rack-Mount | No |
EJVZ - OEM Cover and Doors for 16 NVMe-bays and RDX Backplane Desk-side | EJUX - Front OEM Bezel for 16 NVMe-bays and RDX Backplane Rack-Mount | No |
EJUY - IBM Cover and Doors for 16 NVMe-bays Backplane Desk-side | EJXU - Front IBM Bezel for 16 NVMe-bays Backplane Rack-Mount | No |
EJVY - IBM Cover and Doors for 16 NVMe-bays and RDX Backplane Desk-side | EJXW - Front IBM Bezel for 16 NVMe-bays and RDX Backplane Rack-Mount | No |
Feature conversions for 9105-41B rack-related features:
From FC: | To FC: | Return parts |
---|---|---|
EJXU - Front IBM Bezel for 16 NVMe-bays Backplane Rack-Mount | EJUY - IBM Cover and Doors for 16 NVMe-bays Backplane Desk-side | No |
EJUV - Front OEM Bezel for 16 NVMe-bays Backplane Rack-Mount | EJUZ - OEM Cover and Doors for 16 NVMe-bays Backplane Desk-side | No |
EJXW - Front IBM Bezel for 16 NVMe-bays and RDX Backplane Rack-Mount | EJVY - IBM Cover and Doors for 16 NVMe-bays and RDX Backplane Desk-side | No |
EJUX - Front OEM Bezel for 16 NVMe-bays and RDX Backplane Rack-Mount | EJVZ - OEM Cover and Doors for 16 NVMe-bays and RDX Backplane Desk-side | No |
EMXF - PCIe3 6-Slot Fanout Module for PCIe3 Expansion Drawer | EMXH - PCIe3 6-Slot Fanout Module for PCIe3 Expansion Drawer | No |
EMXG - PCIe3 6-Slot Fanout Module for PCIe3 Expansion Drawer | EMXH - PCIe3 6-Slot Fanout Module for PCIe3 Expansion Drawer | No |
Publications
No publications are shipped with the announced product.
IBM Documentation provides you with a single information center where you can access product documentation for IBM systems hardware, operating systems, and server software. Through a consistent framework, you can efficiently find information and personalize your access. See IBM Documentation.
To access the IBM Publications Center Portal, go to the IBM Publications Center website. The IBM Publications Center is a worldwide central repository for IBM product publications and marketing material with a catalog of 70,000 items. Extensive search facilities are provided. A large number of publications are available online in various file formats, which can currently be downloaded.
Not applicable
Services
IBM Systems Lab Services
Systems Lab Services offers infrastructure services to help build hybrid cloud and enterprise IT solutions. From servers to storage systems and software, Systems Lab Services can help deploy the building blocks of a next-generation IT infrastructure to empower a client's business. Systems Lab Services consultants can perform infrastructure services for clients online or onsite, offering deep technical expertise, valuable tools, and successful methodologies. Systems Lab Services is designed to help clients solve business challenges, gain new skills, and apply best practices.
Systems Lab Services offers a wide range of infrastructure services for IBM Power servers, IBM Storage systems, IBM Z®, and IBM LinuxONE. Systems Lab Services has a global presence and can deploy experienced consultants online or onsite around the world.
For assistance, contact Systems Lab Services at ibmsls@us.ibm.com.
To learn more, see the IBM Systems Lab Services website.
IBM Consulting
As transformation continues across every industry, businesses need a single partner to map their enterprise-wide business strategy and technology infrastructure. IBM Consulting is the business partner to help accelerate change across an organization. IBM specialists can help businesses succeed through finding collaborative ways of working that forge connections across people, technologies, and partner ecosystems. IBM Consulting brings together the business expertise and an ecosystem of technologies that help solve some of the biggest problems faced by organizations. With methods that get results faster, an integrated approach that is grounded in an open and flexible hybrid cloud architecture, and incorporating technology from IBM Research® and IBM Watson® AI, IBM Consulting enables businesses to lead change with confidence and deliver continuous improvement across a business and its bottom line.
For additional information, see the IBM Consulting website.
IBM Technology Support Services (TSS)
Get preventive maintenance, onsite and remote support, and gain actionable insights into critical business applications and IT systems. Speed developer innovation with support for over 240 open-source packages. Leverage powerful IBM analytics and AI-enabled tools to enable client teams to manage IT problems before they become emergencies.
TSS offers extensive IT maintenance and support services that cover more than one niche of a client's environment. TSS covers products from IBM and OEMs, including servers, storage, network, appliances, and software, to help clients ensure high availability across their data center and hybrid cloud environment.
For details on available services, see the Technology support for hybrid cloud environments website.
IBM Expert Labs
Expert Labs can help clients accelerate their projects and optimize value by leveraging their deep technical skills and knowledge. With more than 20 years of industry experience, these specialists know how to overcome the biggest challenges to deliver business results that can have an immediate impact.
Expert Labs' deep alignment with IBM product development allows for a strategic advantage as they are often the first in line to get access to new products, features, and early visibility into roadmaps. This connection with the development enables them to deliver First of a Kind implementations to address unique needs or expand a client's business with a flexible approach that works best for their organization.
For additional information, see the IBM Expert Labs website.
IBM Security® Expert Labs
With extensive consultative expertise on IBM Security software solutions, Security Expert Labs helps clients and partners modernize the security of their applications, data, and workforce. With an extensive portfolio of consulting and learning services, Expert Labs provides project-based and premier support service subscriptions.
These services can help clients deploy and integrate IBM Security software, extend their team resources, and help guide and accelerate successful hybrid cloud solutions, including critical strategies such as zero trust. Remote and on-premises software deployment assistance is available for IBM Cloud Pak® for Security, IBM Security QRadar®/QRoC, IBM Security SOAR/Resilient®, IBM i2®, IBM Security Verify, IBM Security Guardium®, and IBM Security MaaS360®.
For more information, contact Security Expert Labs at sel@us.ibm.com.
For additional information, see the IBM Security Expert Labs website.
IBM support
For installation and technical support information, see the IBM Support Portal.
Additional support
IBM Client Engineering for Systems
Client Engineering for Systems is a framework for accelerating digital transformation. It helps you generate innovative ideas and equips you with the practices, technologies, and expertise to turn those ideas into business value in weeks. When you work with Client Engineering for Systems, you bring pain points into focus. You empower your team to take manageable risks, adopt leading technologies, speed up solution development, and measure the value of everything you do. Client Engineering for Systems has experts and services to address a broad array of use cases, including capabilities for business transformation, hybrid cloud, analytics and AI, infrastructure systems, security, and more. Contact Client Engineering at sysgarage@ibm.com.
Technical information
Specified operating environment
Physical specifications
- 19-inch rack-mount hardware
- Width (note 1): 482 mm (18.97 in.)
- Depth (note 2): 712 mm (28 in.)
- Height: 173 mm (6.8 in.)
- Weight: 36.28 kg (80 lb)
- Tower hardware
- Width with stand: 329 mm (13 in.)
- Depth with front-rotatable door: 815 mm (32 in.)
- Height with handle: 522 mm (20.6 in.)
- Weight: 47.62 kg (105 lb)
1. The width is measured to the outside edges of the rack-mount bezels. The width of the main chassis is 446 mm (17.6 in.), which fits between the 482.6 mm (19 in.) rack mounting flanges.
2. The cable management arm with the maximum cable bundle adds 248 mm (9.8 in.) to the depth.
Operating environment
Electrical characteristics
- AC rated voltage and frequency (note 2): 100--127 V AC or 200--240 V AC at 50 or 60 Hz plus or minus 3 Hz
- Thermal output (maximum) (note 3): 3668 BTU/hr
- Maximum power consumption (note 3): 1075 W
- Maximum kVA (note 4): 1.105 kVA
- Phase: Single
1. Redundancy is supported. The Power S1014 with 1600 W power supplies has a maximum of two power supplies. The Power S1014 with 1200 W power supplies has a maximum of four power supplies, but can operate on two power supplies. There are no specific plugging rules or plugging sequence when you connect the power supplies to the rack PDUs. All the power supplies feed a common DC bus.
2. The power supplies automatically accept any voltage within the published rated-voltage range. If multiple power supplies are installed and operating, the power supplies draw approximately equal current from the utility (electrical supply) and provide approximately equal current to the load.
3. Power draw and heat load vary greatly by configuration. When you plan for an electrical system, it is important to use the maximum values. However, when you plan for heat load, you can use the IBM Systems Energy Estimator to obtain a heat output estimate based on a specific configuration. For more information, see the IBM Systems Energy Estimator website.
4. To calculate the amperage, multiply the kVA by 1,000 and divide that number by the operating voltage. A worked example follows these notes.
5. The Power S1014 with 1200 W power supplies supports 100--127 V AC.
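As a worked example of notes 3 and 4, the short Python sketch below recomputes the maximum current draw and the heat load from the maximum values listed above. The 3.412 BTU/hr-per-watt conversion factor and the example voltages are assumptions for illustration.

```python
# Worked example for notes 3 and 4, using the published S1014 maximums.
MAX_KVA = 1.105        # maximum kVA (note 4)
MAX_WATTS = 1075       # maximum power consumption in watts (note 3)

def amps(kva: float, voltage: float) -> float:
    """Note 4: amperage = kVA x 1,000 / operating voltage."""
    return kva * 1000 / voltage

def btu_per_hour(watts: float) -> float:
    """Thermal output: 1 W is approximately 3.412 BTU/hr."""
    return watts * 3.412

print(f"Current at 200 V: {amps(MAX_KVA, 200):.1f} A")     # ~5.5 A
print(f"Current at 240 V: {amps(MAX_KVA, 240):.1f} A")     # ~4.6 A
print(f"Heat load: {btu_per_hour(MAX_WATTS):.0f} BTU/hr")  # ~3668 BTU/hr
```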
Environment (operating) (note 1)
- ASHRAE class: allowable A3 (fourth edition)
- Airflow direction: recommended front-to-back
- Temperature: Recommended 18.0°C--27.0°C (64.4°F--80.6°F); allowable 5.0°C--40.0°C (41.0°F--104.0°F)
- Low-end moisture: Recommended 9.0°C (15.8°F) dew point; allowable -12.0°C (10.4°F) dew point and 8% relative humidity
- High-end moisture: Recommended 60% relative humidity and 15°C (59°F) dew point; allowable 85% relative humidity and 24.0°C (75.2°F) dew point
- Maximum altitude: 3,050 m (10,000 ft)
Allowable environment (nonoperating) (note 5)
- Temperature: Recommended 5°C--45°C (41°F--113°F)
- Relative humidity: Recommended 8% to 85%
- Maximum dew point: Recommended 27.0°C (80.6°F)
1. IBM provides the recommended operating environment as the long-term operating environment that can result in the greatest reliability and energy efficiency. The allowable operating environment represents where the equipment is tested to verify functionality. Due to the stresses that operating in the allowable envelope can place on the equipment, these envelopes must be used for short-term operation, not continuous operation. A very limited number of configurations must not operate at the upper bound of the A3 allowable range. For more information, consult your IBM technical specialist.
2. Derate the maximum allowable temperature by 1°C (1.8°F) per 175 m (574 ft) above 900 m (2,953 ft), up to a maximum allowable elevation of 3,050 m (10,000 ft). A short sketch of this derating rule follows these notes.
3. The minimum humidity level is the larger absolute humidity of the -12°C (10.4°F) dew point and the 8% relative humidity. These levels intersect at approximately 25°C (77°F). Below this intersection, the dew point (-12°C) represents the minimum moisture level, while above it, the relative humidity (8%) is the minimum. For the upper moisture limit, the limit is the minimum absolute humidity of the dew point and relative humidity that is stated.
4. The following minimum requirements apply to data centers that are operated at low relative humidity:
- Data centers that do not have ESD floors and where people are allowed to wear non-ESD shoes might want to consider increasing humidity given that the risk of generating 8 kV increases slightly at 8% relative humidity, when compared to 25% relative humidity.
- All mobile furnishings and equipment must be made of conductive or static dissipative materials and be bonded to ground.
- During maintenance on any hardware, a properly functioning and grounded wrist strap must be used by any personnel who comes in contact with information technology (IT) equipment.
5. Equipment that is removed from the original shipping container and is installed, but is powered down. The allowable non-operating environment is provided to define the environmental range that an unpowered system can experience short term without being damaged.
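The following Python sketch illustrates the altitude derating rule in note 2, assuming the 40.0°C upper bound of the allowable A3 operating range listed above; the altitudes in the example calls are illustrative only.

```python
# Sketch of the altitude derating in note 2: the 40.0°C allowable maximum is
# reduced by 1°C per 175 m above 900 m, up to the 3,050 m altitude limit.
A3_MAX_TEMP_C = 40.0
DERATE_START_M = 900.0
DERATE_STEP_M = 175.0
MAX_ALTITUDE_M = 3050.0

def max_allowable_temp_c(altitude_m: float) -> float:
    """Return the derated maximum allowable inlet temperature in °C."""
    if altitude_m > MAX_ALTITUDE_M:
        raise ValueError("above the maximum allowable elevation of 3,050 m")
    excess = max(0.0, altitude_m - DERATE_START_M)
    return A3_MAX_TEMP_C - excess / DERATE_STEP_M

print(max_allowable_temp_c(900))    # 40.0°C, no derating
print(max_allowable_temp_c(1600))   # 36.0°C
print(max_allowable_temp_c(3050))   # about 27.7°C at the altitude limit
```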
Electromagnetic compatibility compliance: CISPR 22; CISPR 32; CISPR 24; CISPR 35; FCC, CFR 47, Part 15 (US); VCCI (Japan); EMC Directive (EEA); ICES-003 (Canada); ACMA (Australia, New Zealand); CNS 13438 (Taiwan); Radio Waves Act (Korea); Commodity Inspection Law (China); QCVN 118 (Vietnam); MoCI (Saudi Arabia); SI 961 (Israel); EAC (EAEU).
Safety compliance: This product was designed, tested, manufactured, and certified for safe operation. It complies with IEC 60950-1 and/or IEC 62368-1 and where required, to relevant national differences/deviations (ND) to these IEC base standards. This includes, but is not limited to: EN (European Norms including all Amendments under the Low Voltage Directive), UL/CSA (North America bi-national harmonized and marked per accredited NRTL agency listings), and other such derivative certifications according to corporate determinations and latest regional publication compliance standardized requirements.
See the Installation Planning Guide in IBM Documentation for additional detail.
Hardware requirements
Power S1014 system configuration
The minimum Power S1014 initial order must include a processor module, two 16 GB DIMMs (one feature EM6N 32 GB (2 x 16 GB) DDIMM), two or four power supplies and line cords, an operating system indicator, a cover set indicator, and a Language Group Specify. It must also include one of the following storage options and one of the following network options:
Storage options:
- For boot from NVMe for AIX/Linux: One NVMe drive slot and one NVMe drive or one PCIe NVMe add-in adapter.
- For boot from NVMe for IBM i: Two NVMe drive slots and two NVMe drives or two PCIe NVMe add-in adapters.
- For boot from SAN: Internal NVMe drive and RAID card are not required if feature 0837 (boot from SAN) is selected. An FC adapter must be ordered if feature 0837 is selected.
Network options:
- One PCIe2 4-port 1 Gb Ethernet adapter
- One of the supported 10 Gb Ethernet adapters
When AIX or Linux is the primary operating system, the minimum defined initial order configuration is as follows:
System Feature Codes | Feature Code | Description | Default | Minimum Quantity | Notes |
---|---|---|---|---|---|
Op-Panel | EU0K | Operator Panel LCD Display | 1 | Mandatory for Tower configuration. Optional for Rack configuration with AIX/Linux. Always default Qty. 1, but can be deselected for AIX/Linux. | |
Virtualization Engine | 5228 | PowerVM Enterprise Edition | 1 | 1 | Must select one option. |
or | |||||
EPA0 | Deactivation of LPM (Live Partition Mobility) | 1 | |||
Processor Modules | EPG0 | 4-core Typical 3.0 to 3.90 Ghz (max) Power10 Processor | 1 | Must select one Processor Module option. | |
or | |||||
EPG2 | 8-core Typical 3.0 to 3.90 Ghz (max) Power10 Processor | 1 | |||
Processor Module Activations | EPFT | One Processor Core Activation for EPG0 | 4 | All processor cores must be activated on the Processor Module selected. | |
or | |||||
EPF6 | One Processor Core Activation for EPG2 | 8 | |||
Memory | EM6N | 32GB (2x16GB) DDIMMs, 3200 MHz, 8GBIT DDR4 Memory | 1 | Minimum 2 DIMMs = 1 DIMM feature. Features EM6X and EM6Y are not available with 4-core processor module configuration. | |
or | |||||
EM6W | 64GB (2x32GB) DDIMMs, 3200 MHz, 8GBIT DDR4 Memory | 1 | |||
or | |||||
EM6X | 128GB (2x64GB) DDIMMs, 3200 MHz, 16GBIT DDR4 Memory | 1 | |||
or | |||||
EM6Y | 256GB (2x128GB) DDIMMs, 2666 MHz, 16GBIT DDR4 Memory | 1 | |||
Storage Backplane | EJ1Y | Storage Backplane with eight NVMe U.2 drive slots | 1 | Must order Qty. 1 NVMe backplane feature except when #0837 or #ESCZ (iSCSI boot) is on the order or when NVMe PCIe add-in adapter card is used as the Load Source. Mixing NVMe devices is allowed on each backplane. | |
Bezels / Covers and Doors | EJXU | Front IBM Bezel for 16 NVMe-bays Backplane Rack-Mount | 1 | When no NVMe backplane is ordered and no RDX is ordered, default #EJXU. When no NVMe backplane is ordered and there is an RDX on the order, default #EJXW. Tower models: When no NVMe backplane is ordered and no RDX is ordered, default #EJUY. When no NVMe backplane is ordered and there is an RDX on the order, default #EJVY. | |
or | |||||
EJUV | Front OEM Bezel for 16 NVMe-bays Backplane Rack-Mount | 1 | |||
or | |||||
EJXW | Front IBM Bezel for 16 NVMe-bays and RDX Backplane Rack-Mount | 1 | |||
or | |||||
EJUX | Front OEM Bezel for 16 NVMe-bays and RDX Backplane Rack-Mount | 1 | |||
or | |||||
EJUY | IBM Cover and Doors for 16 NVMe-bays Backplane Desk-side | 1 | |||
or | |||||
EJUZ | OEM Cover and Doors for 16 NVMe-bays Backplane Desk-side | 1 | |||
or | |||||
EJVY | IBM Cover and Doors for 16 NVMe-bays and RDX Backplane Desk-side | 1 | |||
or | |||||
EJVZ | OEM Cover and Doors for 16 NVMe-bays and RDX Backplane Desk-side | 1 | |||
NVMe Devices | EC7T | 800GB Mainstream NVMe U.2 SSD 4k for AIX/Linux | 2 | 0 | For AIX/Linux, default is Qty. 2. For the 8-core processor configuration, any quantity from Qty. 0 to Qty. 16 is allowed. Note: See the 4-Core Power S1014 processor section for specific limitations. |
Required LAN adapters | EC2U | PCIe3 2-Port 25/10Gb NIC&ROCE SR/Cu Adapter | 1 | Qty. 1 of these LAN features required on all Initial orders. Default Adapter: feature 5899. | |
or | |||||
5899 | PCIe2 4-port 1GbE Adapter | 1 | 1 | ||
or | |||||
EN0W | PCIe2 2-port 10/1GbE BaseT RJ45 Adapter | 1 | |||
Power Supply | EB3S | AC Power Supply - 1600W for Server (200-240 VAC) | 2 | 2 | Each initial order must have all power supplies present, power supplies cannot be added later on. Only 200--240V power cords can be used. For 41B Tower/Desk configuration #EB3W - Qty. 4 (only option). For 41B Rack configuration: #EB3S - Qty. 2 (default). |
or | |||||
EB3W | AC Power Supply - 1200W for Server (100-127V/200-240V) | 4 | |||
Power Cables | 6458 | Power Cord 4.3m (14-ft), Drawer to IBM PDU (250V/10A) | 4 | 4 | Qty. 4 or Qty. 2 required. |
Language Group | 9300 | Language Group Specify - US English | 1 | 1 | Language Specify code is required. |
Primary Operating | 2146 | Primary OS - AIX | 1 | Must select one option. | |
or | |||||
2147 | Primary OS - Linux | 1 |
- The racking approach for the initial order can be an MTM 7965-S42 rack.
When IBM i is the primary operating system, the minimum defined initial order configuration (if no choice is made) is as follows:
System Feature Codes | Feature Code | Description | Default | Minimum Quantity | Notes |
---|---|---|---|---|---|
Op-Panel | EU0K | Operator Panel LCD Display | 1 | Mandatory Qty. 1 with IBM i. | |
Virtualization Engine | 5228 | PowerVM Enterprise Edition | 1 | 1 | Must select one option. |
or | |||||
EPA0 | Deactivation of LPM (Live Partition Mobility) | 1 | |||
Processor Modules | EPG0 | 4-core Typical 3.0 to 3.90 Ghz (max) Power10 Processor | 1 | Must select one Processor Module option. | |
or | |||||
EPG2 | 8-core Typical 3.0 to 3.90 Ghz (max) Power10 Processor | 1 | |||
Processor Module Activations | EPFT | One Processor Core Activation for EPG0 | 4 | All processor cores must be activated on the Processor Module selected. | |
or | |||||
EPF6 | One Processor Core Activation for EPG2 | 8 | |||
Memory | EM6N | 32GB (2x16GB) DDIMMs, 3200 MHz, 8GBIT DDR4 Memory | 1 | Minimum 2 DIMMs = 1 DIMM feature. Features EM6X and EM6Y are not available with 4-core processor module configuration. | |
or | |||||
EM6W | 64GB (2x32GB) DDIMMs, 3200 MHz, 8GBIT DDR4 Memory | 1 | |||
or | |||||
EM6X | 128GB (2x64GB) DDIMMs, 3200 MHz, 16GBIT DDR4 Memory | 1 | |||
or | |||||
EM6Y | 256GB (2x128GB) DDIMMs, 2666 MHz, 16GBIT DDR4 Memory | 1 | |||
Storage Backplane | EJ1Y | Storage Backplane with eight NVMe U.2 drive slots | 1 | Must order 1 NVMe backplane feature except when #0837 is on the order or when NVMe PCIe add-in adapter card is used as the Load Source. Mixing NVMe devices is allowed on each backplane. | |
Bezels / Covers and Doors | EJXU | Front IBM Bezel for 16 NVMe-bays Backplane Rack-Mount | 1 | When no NVMe backplane is ordered and no RDX is ordered, default #EJXU. When no NVMe backplane is ordered and there is an RDX on the order, default #EJXW. Tower models: When no NVMe backplane is ordered and no RDX is ordered, default #EJUY. When no NVMe backplane is ordered and there is an RDX on the order, default #EJVY. | |
or | |||||
EJUV | Front OEM Bezel for 16 NVMe-bays Backplane Rack-Mount | 1 | |||
or | |||||
EJXW | Front IBM Bezel for 16 NVMe-bays and RDX Backplane Rack-Mount | 1 | |||
or | |||||
EJUX | Front OEM Bezel for 16 NVMe-bays and RDX Backplane Rack-Mount | 1 | |||
or | |||||
EJUY | IBM Cover and Doors for 16 NVMe-bays Backplane Desk-side | 1 | |||
or | |||||
EJUZ | OEM Cover and Doors for 16 NVMe-bays Backplane Desk-side | 1 | |||
or | |||||
EJVY | IBM Cover and Doors for 16 NVMe-bays and RDX Backplane Desk-side | 1 | |||
or | |||||
EJVZ | OEM Cover and Doors for 16 NVMe-bays and RDX Backplane Desk-side | 1 | |||
NVMe Devices | ES1K | Enterprise 800GB SSD PCIe4 NVMe U.2 module for IBM i | 2 | 0 | For IBM i, default is Qty. 2. For the 8-core processor configuration, any quantity from Qty. 0 to Qty. 16 is allowed, except Qty. 1. Note: See the 4-Core Power S1014 processor section for specific limitations. |
Required LAN adapters | EC2U | PCIe3 2-Port 25/10Gb NIC&ROCE SR/Cu Adapter | 1 | Qty. 1 of these LAN features required on all Initial orders. Default Adapter: feature 5899. | |
or | |||||
5899 | PCIe2 4-port 1GbE Adapter | 1 | 1 | ||
Power Supply | EB3S | AC Power Supply - 1600W for Server (200-240 VAC) | 2 | 2 | Each initial order must have all power supplies present; power supplies cannot be added later. Only 200--240V power cords can be used. For the 41B Tower/Desk configuration: #EB3W - Qty. 4 (only option). For the 41B Rack configuration: #EB3S - Qty. 2 (default). |
or | |||||
EB3W | AC Power Supply - 1200W for Server (100-127V/200-240V) | 4 | |||
Power Cables | 6458 | Power Cord 4.3m (14-ft), Drawer to IBM PDU (250V/10A) | 4 | 4 | Qty. 4 or Qty. 2 required. |
Language Group | 9300 | Language Group Specify - US English | 1 | 1 | Language Specify code is required. |
Primary Operating | 2145 | Primary OS - IBM i | 1 | Mandatory feature. | |
System Consoles | 5550 | Sys Console On HMC | 1 | Must select one System Console feature. | |
or | |||||
5557 | System Console-Ethernet LAN adapter | 1 | |||
Data Protection | 0040 | Mirrored System Disk Level, Specify Code | 1 | 1 | For IBM i OS only - Qty. 1 system data protection code required. |
- The racking approach for the initial order can be a MTM 7965-S42.
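For planning purposes, the defaulted IBM i minimum configuration above can be restated as a simple feature-code checklist. The following Python sketch is illustrative only: the feature codes and quantities are copied from the table (rack configuration defaults), but the dictionary and helper names are hypothetical and are not part of any IBM ordering tool, which remains authoritative.

```python
# Illustrative checklist for the defaulted IBM i minimum configuration shown above.
# Not an IBM tool; feature codes and quantities are copied from the table.

DEFAULT_IBM_I_CONFIG = {
    "EU0K": 1,   # Operator Panel LCD Display (mandatory with IBM i)
    "5228": 1,   # PowerVM Enterprise Edition
    "EPG0": 1,   # 4-core Power10 processor module (default)
    "EPFT": 4,   # One core activation each; all cores must be activated
    "EM6N": 1,   # 32GB (2x16GB) DDIMM memory feature
    "EJ1Y": 1,   # Storage backplane with eight NVMe U.2 slots
    "EJXU": 1,   # Front IBM bezel, rack-mount (default)
    "ES1K": 2,   # Enterprise 800GB NVMe U.2 modules for IBM i (default Qty. 2)
    "5899": 1,   # PCIe2 4-port 1GbE adapter (default LAN adapter)
    "EB3S": 2,   # 1600W AC power supplies (rack configuration)
    "6458": 4,   # Power cords, drawer to IBM PDU
    "9300": 1,   # Language Group Specify - US English
    "2145": 1,   # Primary OS - IBM i
    "5550": 1,   # System console on HMC
    "0040": 1,   # Mirrored system disk level specify
}

CORES_PER_MODULE = {"EPG0": 4, "EPG2": 8}
ACTIVATION_FOR_MODULE = {"EPG0": "EPFT", "EPG2": "EPF6"}

def check_core_activations(config: dict) -> bool:
    """All cores on the selected processor module must be activated."""
    for module, cores in CORES_PER_MODULE.items():
        if config.get(module):
            return config.get(ACTIVATION_FOR_MODULE[module], 0) == cores
    return False

if __name__ == "__main__":
    assert check_core_activations(DEFAULT_IBM_I_CONFIG)
    print("Default IBM i configuration: core activations OK")
```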
Power10 Tower-to-Rack conversion
The IBM Power S1014 Tower-to-Rack conversion is available through the following MES features, which are required to convert a 4U server (MTM 9105-41B) from a tower model to a rack model. The server can then be installed in a 19-inch rack enclosure.
One of the following MES parts is required for the tower-to-rack conversion:
Description | Feature | Comments |
---|---|---|
Front IBM Bezel for 16 NVMe-bays BackPlane Rack-Mount | #EJXU | Optional, mutually exclusive with #EJUV, #EJXW, and #EJUX |
Front IBM Bezel for 16 NVMe-bays and RDX BackPlane Rack-Mount | #EJXW | Optional, mutually exclusive with #EJUV, #EJXU, and #EJUX |
Front OEM Bezel for 16 NVMe-bays BackPlane Rack-Mount | #EJUV | Optional, mutually exclusive with #EJXU, #EJXW, and #EJUX |
Front OEM Bezel for 16 NVMe-bays and RDX BackPlane Rack-Mount | #EJUX | Optional, mutually exclusive with #EJXU, #EJXW, and #EJUV |
Notes:
- Each of these conversions includes the shipping of the Rack-mount Rail Kit feature ERKZ.
- Choose the correct set of power cords to PDU for your rack configuration, depending on the rack type, the PDU type, and the number of power supplies.
- An IBM Service Support Representative (SSR) needs to be dispatched to your site to assist with installation instructions.
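The bezel choice for a tower-to-rack conversion depends only on whether the server carries an RDX docking station and whether an IBM or OEM bezel is wanted. The short sketch below encodes that mapping from the table above; the function name is hypothetical and is shown only to make the mutually exclusive choices explicit.

```python
# Pick the single rack-mount bezel feature for a tower-to-rack conversion.
# Mapping taken from the table above; exactly one of these features is ordered.

def rack_bezel_feature(has_rdx: bool, oem: bool = False) -> str:
    if oem:
        return "EJUX" if has_rdx else "EJUV"   # OEM bezels
    return "EJXW" if has_rdx else "EJXU"       # IBM bezels

print(rack_bezel_feature(has_rdx=True))   # EJXW: IBM bezel, 16 NVMe bays plus RDX
```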
Power10 Rack-to-Tower conversion
The IBM Power S1014 Rack-to-Tower conversion is also available through the following MES features, which are required to convert a 4U server (MTM 9105-41B) from a rack model to a tower model.
One of the following MES parts is required for the rack-to-tower conversion:
Description | Feature | Comments |
---|---|---|
IBM Cover and Doors for 16 NVMe-bays BackPlane Desk-side | #EJUY | Optional, mutually exclusive with #EJUZ, #EJVY, and #EJVZ |
IBM Cover and Doors for 16 NVMe-bays and RDX BackPlane Desk-side | #EJVY | Optional, mutually exclusive with #EJUZ, #EJUY, and #EJVZ |
OEM Cover and Doors for 16 NVMe-bays BackPlane Desk-side | #EJUZ | Optional, mutually exclusive with #EJUY, #EJVY, and #EJVZ |
OEM Cover and Doors for 16 NVMe-bays and RDX BackPlane Desk-side | #EJVZ | Optional, mutually exclusive with #EJUY, #EJVY, and #EJUZ |
Notes:
- Four 1200 W power supplies are required. If two 1600 W power supplies are present on the rack model, they are removed as part of the conversion order.
- Choose the correct set of power cords to wall cables, depending on AC, length of cord required, and number of power cords required per power supply.
- An SSR needs to be dispatched to your site to assist with installation instructions.
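The rack-to-tower direction follows the same pattern: one cover-and-doors feature is selected, and the power supplies must end up as four 1200 W units (#EB3W). The sketch below is a hypothetical planning helper that restates those two rules from the table and notes above; it is not an IBM tool.

```python
# Hypothetical planning helper for a rack-to-tower conversion of a 9105-41B.
# Cover features and the power-supply rule are restated from the tables and notes above.

def tower_cover_feature(has_rdx: bool, oem: bool = False) -> str:
    """Pick the single cover-and-doors feature for the tower (desk-side) model."""
    if oem:
        return "EJVZ" if has_rdx else "EJUZ"   # OEM covers
    return "EJVY" if has_rdx else "EJUY"       # IBM covers

def tower_power_supplies(current: dict) -> dict:
    """Tower models require four 1200 W supplies (#EB3W); 1600 W units (#EB3S) are removed."""
    updated = {code: qty for code, qty in current.items() if code != "EB3S"}
    updated["EB3W"] = 4
    return updated

print(tower_cover_feature(has_rdx=False))   # EJUY: IBM covers, no RDX
print(tower_power_supplies({"EB3S": 2}))    # {'EB3W': 4}
```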
Hardware Management Console (HMC) machine code
If the system is ordered with firmware level 1020, or higher, and is capable of being HMC managed, the managing HMC must be installed with HMC machine code 10.1.1020.0, or higher.
This level supports only the 7063 hardware appliance or virtual appliances (vHMC) on x86 or PowerVM. The 7042 hardware appliance is not supported.
An HMC is required to manage a Power S1014 server that implements partitioning. A single HMC running version 10 can support multiple Power8, Power9, and Power10 processor-based servers.
Planned HMC hardware and software support:
- Hardware Appliance: 7063-CR1, 7063-CR2
- vHMC on x86
- vHMC on PowerVM based LPAR
If you are attaching an HMC to a new server or adding function to an existing server that requires a firmware update, the HMC machine code may need to be updated because HMC code must always be equal to or higher than the managed server's firmware. Access to firmware and machine code updates is conditioned on entitlement and license validation in accordance with IBM policy and practice. IBM may verify entitlement through customer number, serial number, electronic restrictions, or any other means or methods employed by IBM at its discretion.
To determine the HMC machine code level required for the firmware level on any server, access the Fix Level Recommendation Tool (FLRT) on or after the planned availability date for this product. FLRT identifies the correct HMC machine code for the selected system firmware level; see the Fix Level Recommendation Tool website.
If a single HMC is attached to multiple servers, the HMC machine code level must be updated to be at or higher than the server with the most recent firmware level. All prior levels of server firmware are supported with the latest HMC machine code level.
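Because a single HMC must be at a machine code level equal to or higher than what the newest-firmware server it manages requires, the check reduces to an ordered version comparison. The sketch below assumes dotted HMC levels such as "10.1.1020.0" and takes as input the minimum required HMC level for each managed server (as reported by FLRT); the version values shown are examples, and this is an illustration rather than an IBM-provided check.

```python
# Illustrative check that one HMC's machine code level satisfies every managed server.
# Inputs are dotted version strings such as "10.1.1020.0"; FLRT is the authoritative
# source for the minimum HMC level that each server firmware level requires.

def parse_level(level: str) -> tuple:
    """Turn '10.1.1020.0' into (10, 1, 1020, 0) for ordered comparison."""
    return tuple(int(part) for part in level.split("."))

def hmc_level_ok(hmc_level: str, required_levels: list) -> bool:
    """True if the HMC is at or above the highest level any managed server needs."""
    return all(parse_level(hmc_level) >= parse_level(req) for req in required_levels)

# One HMC managing several servers: it only has to track the newest requirement.
print(hmc_level_ok("10.1.1020.0", ["9.1.941.0", "10.1.1020.0"]))  # True
print(hmc_level_ok("9.1.941.0", ["10.1.1020.0"]))                 # False
```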
Clients installing systems higher than the EIA 29 position (the location of the rail that supports the rack-mounted server) in any IBM or non-IBM rack must acquire the approved tools outlined in the server specifications section at IBM Documentation.
In situations where IBM service is required and the recommended tools are not available, there could be delays in repair actions.
Software requirements
- Red Hat® Enterprise Linux 9.0, for Power LE, or later
- Red Hat Enterprise Linux 8.4, for Power LE, or later
- SUSE Linux Enterprise Server 15 Service Pack 3, or later
- SUSE Linux Enterprise Server for SAP with SUSE Linux Enterprise Server 15 Service Pack 3, or later
- Red Hat Enterprise Linux for SAP with Red Hat Enterprise Linux 8.4 for Power LE, or later
- Red Hat OpenShift® Container Platform 4.9, or later
Review the Linux alert page for any known Linux issues or limitations; see the Linux on IBM - Readme first issues website.
If installing IBM i:
- IBM i 7.5, or later
- IBM i 7.4 TR6, or later
- IBM i 7.3 TR12, or later
If installing the AIX operating system LPAR with any I/O configuration (one of these):
- AIX Version 7.3 with the 7300-00 Technology Level and Service Pack 7300-00-02-2220, or later
- AIX Version 7.2 with the 7200-05 Technology Level and Service Pack 7200-05-04-2220, or later
- AIX Version 7.2 with the 7200-04 Technology Level and Service Pack 7200-04-06-2220, or later (planned availability September 16, 2022)
If installing the AIX operating system Virtual I/O only LPAR (one of these):
- AIX Version 7.3 with the 7300-00 Technology Level and service pack 7300-00-01-2148, or later
- AIX Version 7.2 with the 7200-05 Technology Level and service pack 7200-05-01-2038, or later
- AIX Version 7.2 with the 7200-04 Technology Level and Service Pack 7200-04-02-2016 or later
- AIX Version 7.1 with the 7100-05 Technology Level and Service Pack 7100-05-06-2016, or later
If installing VIOS:
- VIOS 3.1.3.21
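The minimum supported operating system levels listed above can be collected into a small lookup table for order-planning notes. The sketch below simply restates those lists; it is not an IBM compatibility API, and the announcement text above remains authoritative.

```python
# Minimum supported OS levels for the Power S1014, restated from the lists above.
# Informal planning aid only; each entry means "this level or later".

MINIMUM_OS_LEVELS = {
    "Red Hat Enterprise Linux": ["8.4 (Power LE)", "9.0 (Power LE)"],
    "SUSE Linux Enterprise Server": ["15 SP3"],
    "Red Hat OpenShift Container Platform": ["4.9"],
    "IBM i": ["7.3 TR12", "7.4 TR6", "7.5"],
    "AIX (LPAR with I/O)": ["7.2 TL04 SP 7200-04-06-2220",
                            "7.2 TL05 SP 7200-05-04-2220",
                            "7.3 TL00 SP 7300-00-02-2220"],
    "AIX (Virtual I/O only LPAR)": ["7.1 TL05 SP 7100-05-06-2016",
                                    "7.2 TL04 SP 7200-04-02-2016",
                                    "7.2 TL05 SP 7200-05-01-2038",
                                    "7.3 TL00 SP 7300-00-01-2148"],
    "VIOS": ["3.1.3.21"],
}

for os_name, levels in MINIMUM_OS_LEVELS.items():
    print(f"{os_name}: {' or '.join(levels)}, or later")
```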
Limitations
- If IBM i (#2145) is selected as the primary operating system, then feature (#0047) - Device Parity RAID-6 All, Specify Code with NVMe devices is not allowed.
- There is no physical system port on the scale-out Power10 servers.
Boot requirements
- If IBM i (#2145) is selected as the primary operating system and SAN boot (#0837) is not selected, one of the load source specify codes for SAS drives or NVMe devices in the Special Features - Initial Orders - Specify codes section must be specified.
- If IBM i (#2145) is selected and the load source disk unit is not in the system unit (CEC), one of the following specify codes must also be selected:
- Feature (#0719) Load Source Not in CEC, when the load source devices are to be placed in I/O drawers or in external SAN-attached disk
- Feature (#EHR2) Load Source Specifies DASD are placed in an EXP24SX SFF Gen2 bay Drawer (#ESLS)
- Feature (#0837) SAN Operating System Load Source Specify
- If IBM i (#2145) is selected, one of the following system console specify codes must be selected:
- Feature (#5550) -- System Console on HMC
- Feature (#5557) -- System Console - Internal LAN
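The second and third IBM i boot rules above reduce to set membership checks: if the load source is outside the CEC, one of #0719, #EHR2, or #0837 must be present, and exactly one console specify code (#5550 or #5557) is required. The sketch below encodes those two rules; the function name is hypothetical, and the per-device load source specify codes from the Specify codes section (first rule) are deliberately left out.

```python
# Illustrative check of the IBM i boot-requirement rules described above.
# Feature codes come from the text; the helper itself is not an IBM tool.

LOAD_SOURCE_OUTSIDE_CEC = {"0719", "EHR2", "0837"}
CONSOLE_SPECIFIES = {"5550", "5557"}

def ibm_i_boot_problems(features: set, load_source_in_cec: bool = True) -> list:
    """Return a list of problems for an IBM i (#2145) order; empty means OK."""
    problems = []
    if "2145" not in features:
        return problems  # rules apply only when IBM i is the primary operating system
    if not load_source_in_cec and not (features & LOAD_SOURCE_OUTSIDE_CEC):
        problems.append("load source outside the CEC: add #0719, #EHR2, or #0837")
    if len(features & CONSOLE_SPECIFIES) != 1:
        problems.append("select exactly one console specify code (#5550 or #5557)")
    return problems

print(ibm_i_boot_problems({"2145", "0837", "5550"}, load_source_in_cec=False))  # []
print(ibm_i_boot_problems({"2145"}, load_source_in_cec=False))  # two problems reported
```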
Planning information
Cable orders
No cables required.
Security, auditability, and control
This product uses the security and auditability features of host hardware and application software.
The client is responsible for evaluation, selection, and implementation of security features, administrative procedures, and appropriate controls in application systems and communications facilities.
Terms and conditions
Volume orders
Contact your IBM representative.
Products - terms and conditions
Warranty period
Warranty and additional coverage options: | Coverage summary(1): |
---|---|
Warranty Period: | 3 years |
Service Level: | IBM CRU & On-Site, 9x5 Next Business Day |
Service Upgrade Options: | |
Warranty Service Upgrade | IBM On-Site Repair, 9x5 Same Day(2) and 24x7 Same Day options |
Maintenance Services (Post-Warranty): | IBM On-Site Repair, Next Business Day and Same Day options |
IBM Hardware Maintenance Services - committed maintenance(3): | Y |
- (1) See complete coverage details below.
- (2) Offered in US and EMEA only.
- (3) Not offered in the US.
To obtain copies of the IBM Statement of Limited Warranty, contact your reseller or IBM.
An IBM part or feature installed during the initial installation of an IBM machine is subject to the full warranty period specified by IBM. An IBM part or feature that replaces a previously installed part or feature assumes the remainder of the warranty period for the replaced part or feature. An IBM part or feature added to a machine without replacing a previously installed part or feature is subject to a full warranty. Unless specified otherwise, the warranty period, type of warranty service, and service level of a part or feature are the same as those for the machine in which it is installed.
IBM Solid State Drive (SSD) and Non-Volatile Memory Express (NVMe) devices identified in this document may have a maximum number of write cycles. IBM SSD and NVMe device failures will be replaced during the standard warranty and maintenance period for devices that have not reached the maximum number of write cycles. Devices that reach this limit may fail to operate according to specifications and must be replaced at the client's expense. Individual service life may vary and can be monitored using an operating system command.
The IBM warranty covers feature number EB4Z. For warranty terms associated with feature number EB3Z and the Lift tool based on GenieLift GL-8, see the separate warranty terms provided by Genie found in the Genie Operator's Manual at the Genie website.
Clients installing systems higher than the EIA 29 position (the location of the rail that supports the rack-mounted server) in any IBM or non-IBM rack must acquire the approved tools outlined in the server specifications section at IBM Documentation. In situations where IBM service is required and the recommended tools are not available, there could be delays in repair actions.
Extended Warranty Service
Extended Warranty Service is not applicable.
Warranty service
If required, IBM provides repair or exchange service depending on the types of warranty service specified for the machine. IBM will attempt to resolve your problem over the telephone, or electronically through an IBM website. Certain machines contain remote support capabilities for direct problem reporting, remote problem determination, and resolution with IBM. You must follow the problem determination and resolution procedures that IBM specifies. Following problem determination, if IBM determines on-site service is required, scheduling of service will depend upon the time of your call, machine technology and redundancy, and availability of parts. If applicable to your product, parts considered Customer Replaceable Units (CRUs) will be provided as part of the machine's standard warranty service.
Service levels are response-time objectives and are not guaranteed. The specified level of warranty service may not be available in all worldwide locations. Additional charges may apply outside IBM's normal service area. Contact your local IBM representative or your reseller for country-specific and location-specific information.
CRU Service
IBM provides replacement CRUs to you for you to install. CRU information and replacement instructions are shipped with your machine and are available from IBM upon your request. CRUs are designated as being either a Tier 1 (mandatory) or a Tier 2 (optional) CRU.
Tier 1 (mandatory) CRU
Installation of Tier 1 CRUs, as specified in this announcement, is your responsibility. If IBM installs a Tier 1 CRU at your request, you will be charged for the installation.
The following parts have been designated as Tier 1 CRUs:
- Bezel
- Service Cover
- Op Panel
- Op Panel -- LCD
- Blower
- RDX Docking Station
- RDX Cartridge
- RDX Power Cable
- Front USB Cable
- NVMe drive
- NVMe Filler
- DDIMM Cover for Retention
- DDIMM Filler
- Air Baffle
- Time of Day Battery
- TPM Card
- Processor VRM
- Processor Heatsink
- PCIe Adapter
- Power Supply
- Power Distribution Signal Cable
Tier 2 (optional) CRU
You may install a Tier 2 CRU yourself or request IBM to install it, at no additional charge.
Based upon availability, CRUs will be shipped for next-business-day (NBD) delivery. IBM specifies, in the materials shipped with a replacement CRU, whether a defective CRU must be returned to IBM. When return is required, return instructions and a container are shipped with the replacement CRU. You may be charged for the replacement CRU if IBM does not receive the defective CRU within 15 days of your receipt of the replacement.
The following parts have been designated as Tier 2 CRUs:
- Op Panel -- LCD Cable
- Blower Power Cable
CRU and On-site Service
At IBM's discretion, you will receive specified CRU service, or IBM will repair the failing machine at your location and verify its operation. You must provide a suitable working area to allow disassembly and reassembly of the IBM machine. The area must be clean, well-lit, and suitable for the purpose.
Service level is:
- 9 hours per day, Monday through Friday, excluding holidays, next-business-day response. Calls must be received by 3:00 PM local time in order to qualify for next-business-day response.
Warranty service
IBM is now shipping machines with selected non-IBM parts that contain an IBM field replaceable unit (FRU) part number label. These parts are to be serviced during the IBM machine warranty period. IBM covers the service on these selected non-IBM parts as an accommodation to its clients, and normal warranty service procedures for the IBM machine apply.
International Warranty Service
International Warranty Service allows you to relocate any machine that is eligible for International Warranty Service and receive continued warranty service in any country where the IBM machine is serviced. If you move your machine to a different country, you are required to report the machine information to your Business Partner or IBM representative.
The warranty service type and the service level provided in the servicing country may be different from that provided in the country in which the machine was purchased. Warranty service will be provided with the prevailing warranty service type and service level available for the eligible machine type in the servicing country, and the warranty period observed will be that of the country in which the machine was purchased.
The following types of information can be found on the International Warranty Service website:
- Machine warranty entitlement and eligibility
- Directory of contacts by country with technical support contact information
- Announcement Letters
Warranty service upgrades
During the warranty period, warranty service upgrades provide an enhanced level of On-site Service for an additional charge. Service levels are response-time objectives and are not guaranteed. See the Warranty services section for additional details.
IBM will attempt to resolve your problem over the telephone or electronically by access to an IBM website. Certain machines contain remote support capabilities for direct problem reporting, remote problem determination, and resolution with IBM. You must follow the problem determination and resolution procedures that IBM specifies. Following problem determination, if IBM determines on-site service is required, scheduling of service will depend upon the time of your call, machine technology and redundancy, and availability of parts.
Maintenance service options
For additional information about IBM Power Expert Care services and support options, see announcement JS22-0008, dated July 12, 2022.
Non-IBM parts service
Under certain conditions, IBM provides services for selected non-IBM parts at no additional charge for machines that are covered under warranty service upgrades or maintenance services.
This service includes hardware problem determination (PD) on the non-IBM parts (for example, adapter cards, PCMCIA cards, disk drives, memory) installed within IBM machines and provides the labor to replace the failing parts at no additional charge.
If IBM has a Technical Service Agreement with the manufacturer of the failing part, or if the failing part is an accommodations part (a part with an IBM FRU label), IBM may also source and replace the failing part at no additional charge. For all other non-IBM parts, customers are responsible for sourcing the parts. Installation labor is provided at no additional charge, if the machine is covered under a warranty service upgrade or a maintenance service.
Usage plan machine
No
IBM hourly service rate classification
Two
When a type of service involves the exchange of a machine part, the replacement may not be new, but will be in good working order.
General terms and conditions
Field-installable features
Yes
Model conversions
No
Machine installation
Client setup. Clients are responsible for installation according to the instructions IBM provides with the machine.
Graduated program license charges apply
No
Licensed Machine Code
IBM Machine Code is licensed for use by a client on the IBM machine for which it was provided by IBM under the terms and conditions of the IBM License Agreement for Machine Code, to enable the machine to function in accordance with its specifications, and only for the capacity authorized by IBM and acquired by the client. You can obtain the agreement by contacting your IBM representative. It can also be found on the License Agreement for Machine Code and Licensed Internal Code website.
Machine using LMC: Type Model 9105-41B
Access to Machine Code updates is conditioned on entitlement and license validation in accordance with IBM policy and practice. IBM may verify entitlement through client number, serial number, electronic restrictions, or any other means or methods employed by IBM in its discretion.
If the machine does not function as warranted and your problem can be resolved through your application of downloadable Machine Code, you are responsible for downloading and installing these designated Machine Code changes as IBM specifies. If you would prefer, you may request IBM to install downloadable Machine Code changes; however, you may be charged for that service.
Educational allowance
Educational allowance: A reduced charge is available to qualified education clients. The educational allowance may not be added to any other discount or allowance.
The educational allowance is 5 percent for the products in this announcement.
Prices
For all local charges, contact your IBM representative.
Annual minimum maintenance charges
Not applicable
IBM Global Financing
IBM Global Financing offers competitive financing to credit-qualified clients to assist them in acquiring IT solutions. Offerings include financing for IT acquisition, including hardware, software, and services, from both IBM and other manufacturers or vendors. Offerings (for all client segments: small, medium, and large enterprise), rates, terms, and availability can vary by country. Contact your local IBM Global Financing organization or go to the IBM Global Financing website for more information.
IBM Global Financing offerings are provided through IBM Credit LLC in the United States and other IBM subsidiaries and divisions worldwide to qualified commercial and government clients. Rates are based on a client's credit rating, financing terms, offering type, equipment type and options, and may vary by country. Other restrictions may apply. Rates and offerings are subject to change, extension, or withdrawal without notice.
Financing solutions from IBM Global Financing can help you stretch your budget and affordably acquire the new product. But beyond the initial acquisition, our end-to-end approach to IT management can also help keep your technologies current, reduce costs, minimize risk, and preserve your ability to make flexible equipment decisions throughout the entire technology lifecycle.
Trademarks
IBM Consulting is a trademark of IBM Corporation in the United States, other countries, or both.
IBM, Power, PowerVM, AIX, IBM Cloud, IBM Z, IBM Research, IBM Watson, IBM Security, IBM Cloud Pak, QRadar, Resilient, i2, Guardium and MaaS360 are registered trademarks of IBM Corporation in the United States, other countries, or both.
The registered trademark Linux® is used pursuant to a sublicense from the Linux Foundation, the exclusive licensee of Linus Torvalds, owner of the mark on a worldwide basis.
Red Hat and OpenShift are registered trademarks of Red Hat Inc. in the U.S. and other countries.
Other company, product, and service names may be trademarks or service marks of others.
Terms of use
IBM products and services which are announced and available in your country can be ordered under the applicable standard agreements, terms, conditions, and prices in effect at the time. IBM reserves the right to modify or withdraw this announcement at any time without notice. This announcement is provided for your information only. Additional terms of use are located at
For the most current information regarding IBM products, consult your IBM representative or reseller, or go to the IBM worldwide contacts page