The IBM Power S1024 technology-based server delivers cost-performance and scalability optimized for enterprises pursuing IT excellence

IBM Japan Hardware Announcement JG22-0028, July 12, 2022

Table of contents
  • Overview
  • Key requirements
  • Planned availability date
  • Description
  • Product number
  • Publications
  • Technical information
  • Terms and conditions
  • Prices

(Corrected on August 8, 2022)

The Extended warranty service section was revised.

(Corrected on August 8, 2022)

The Feature details, Limitations, and Terms and conditions sections were revised.



Highlights


IBM® Power® servers are positioned as the most reliable and security-capable servers in their class. The new IBM Power S1024 (9105-42A) technology-based server now extends that leadership, introducing an essential scale-out hybrid cloud platform uniquely designed to securely and efficiently scale core business and AI applications anywhere in a hybrid cloud. Clients can simply encrypt all of their data, with no management overhead or performance impact, and derive insights faster with AI. Clients can also do more work while gaining the flexibility and agility to deploy workloads across a single hybrid cloud.

Power S1024 features include:

  • IBM Power10 processors (with a total of up to 12, 24, 32, or 48 cores per server)
  • Capacity Upgrade on Demand (CUoD) activation core features
  • IBM Private Cloud with Dynamic Capacity for Enterprise Pools 2.0 activation core features
  • In-core AI inferencing and machine learning with the Matrix Math Accelerator (MMA) facility
  • Up to 8.0 TB of system memory in 32 DDR4 Differential Dual Inline Memory Module (DDIMM) slots
  • Transparent memory encryption with no additional management setup and no performance impact
  • AMM for Hypervisor, available as an option to enhance resilience by mirroring critical memory used by the IBM PowerVM® hypervisor
  • Ten PCIe slots, including eight PCIe Gen5-capable slots, all supporting concurrent maintenance
  • Up to 16 NVMe U.2 flash bays providing up to 102.4 TB of high-speed storage
  • Optional internal RDX drive
  • 2+2 redundant hot-plug AC Titanium power supplies in each enclosure
  • Virtualization integrated into PowerVM with minimal processing overhead

The Power S1024 supports:

  • IBM AIX®, IBM i, Linux®, and VIOS environments
  • Capacity Upgrade on Demand (CUoD) processor activation entitlements
  • IBM Private Cloud with Dynamic Capacity for Enterprise Pools 2.0 processor activation entitlements
  • IBM Power Expert Care services



Overview


Security, operational efficiency, and the real-time intelligence to respond quickly to market changes are now non-negotiable for IT. In a constantly changing, always-on environment, you need to ensure 24x7 availability, stay ahead of cyberthreats, and automate and accelerate critical operational functions. Applications and data must be enterprise grade everywhere, without adding complexity and cost.

The Power S1024 (9105-42A) server modernizes applications and infrastructure through a frictionless hybrid cloud experience and delivers the agility that today's unpredictable business demands. The Power S1024 helps you:

  • Run workloads where you need them, with efficient scaling and consistent pay-as-you-go pricing across public and private clouds
  • Use memory encryption at the processor level, designed to support a zero trust security approach to hybrid cloud
  • Accelerate insights from data with AI inferencing engines built directly into the core
  • Consolidate workloads with scalability and performance that can reduce energy consumption

The Power S1024 server is designed to improve scalability, performance, and security while delivering best-in-class reliability. This family of systems, with enhanced performance and scaling, helps deliver business agility by extending mission-critical workloads to the hybrid cloud with greater flexibility.

  • Respond faster to business demands: The Power10 processor delivers a new level of performance for the same workloads compared to IBM Power9, without increasing energy use or carbon emissions, enabling more efficient scaling. Power Private Cloud with Dynamic Capacity includes built-in metering for IBM i, Linux, Red Hat OpenShift® Container Platform, and AIX® environments and, combined with the Power S1024, enables consistent, flexible usage across public, private, and hybrid clouds.
  • Protect data from core to cloud: Power10 provides end-to-end security with transparent memory encryption at the processor level, with no management overhead or performance impact. Power10 also helps you stay ahead of future threats with support for post-quantum cryptography and fully homomorphic encryption.
  • Streamline insights and automation: Power10 uses enhanced in-core AI inferencing built into every server, with no additional specialized hardware required. You can derive insights where your most sensitive data resides, eliminating the time and risk of data movement.
  • Maximize availability and reliability: Power10 processors use unique advanced recovery and self-healing capabilities, along with infrastructure redundancy and disaster recovery options for IBM Cloud®, to help enterprises stay up and running.

Power servers are delivering results for clients around the world, including new digital services for banking, real-time decision-making in manufacturing, and operational efficiency in engineering and electronics. To learn how Power servers contribute to IBM clients' success, see the IBM case studies.




Key requirements


The IBM AIX, IBM i, Linux, or VIOS operating system is required. For details, see the Software requirements section.




Planned availability date


  • July 22, 2022, except for features EM6U and EM78
  • November 18, 2022, for features EM6U and EM78

Availability within a country is subject to local legal requirements.




Description


The Power S1024 (9105-42A) server is a high-performance, flexible, two-socket, 4U system that provides massive scalability and flexibility. It delivers extreme density in an energy-efficient design with superior reliability and resiliency. The Power S1024 server brings a secure environment that balances mission-critical traditional workloads and modernization applications to deliver a frictionless hybrid cloud experience.

Power S1024 feature summary

  • Up to two dual-chip processor modules per system server:
    • 3.40--4.0 GHz, 12-core Power10 processor (#EPGM).
  • Two dual-chip processor modules per system server:
    • 3.10--4.0 GHz, 16-core Power10 processor (#EPGC).
    • 2.75--3.90 GHz, 24-core Power10 processor (#EPGD).
  • MMA feature helps to perform in-core AI inferencing and machine learning where data resides.
  • Processor core activation features for Pools 2.0 available on a per-core basis:
    • 1 core Base Processor Activation Pools 2.0 for #EPGM - any OS (#EUBX).
    • 1 core Base Processor Activation Pools 2.0 for #EPGC - any OS (#EUCK).
    • 1 core Base Processor Activation Pools 2.0 for #EPGD - any OS (#EUCS).
  • CUoD Static core activation features available on a per-core basis:
    • One CUoD Static Processor Core Activation for #EPGM (#EPFM).
    • One CUoD Static Processor Core Activation for #EPGC (#EPFC).
    • One CUoD Static Processor Core Activation for #EPGD (#EPFD).
  • Up to 8 TB of system memory distributed across 32 DDIMM slots per system server. DDIMMs are extremely high-performance, high-reliability, intelligent, and dynamic random access memory (DRAM) devices.
  • DDR4 DDIMM memory cards:
    • 32 GB (2 x 16 GB), (#EM6N).
    • 64 GB (2 x 32 GB), (#EM6W).
    • 128 GB (2 x 64 GB), (#EM6X).
    • 256 GB (2 x 128 GB), (#EM6U).
    • 512 GB (2 x 256 GB), (#EM78).
  • AMM for Hypervisor is available as an option to enhance resilience by mirroring critical memory used by the PowerVM hypervisor.
  • PCIe slots with two processors:
    • Four x16 Gen4 or x8 Gen5 full-height, half-length slots.
    • Four x8 Gen5 full-height, half-length slots (with x16 connectors).
    • Two x8 Gen4 full-height, half-length slots (with x16 connectors).
    • All PCIe slots are concurrently maintainable.
  • Integrated:
    • System management using an Enterprise Baseboard Management Controller (eBMC).
    • EnergyScale technology.
    • Redundant hot-swap cooling.
    • Redundant hot-swap AC power supplies.
    • Two HMC 1 GbE RJ45 ports.
    • One rear USB 3.0 port.
    • One front USB 3.0 port.
    • One internal USB 3.0 Port for RDX.
    • Nineteen-inch rack-mounting hardware (4U).
  • Optional PCIe I/O expansion drawer with PCIe slots:
    • Up to two drawers (#EMX0).
    • Each I/O drawer holds one or two six-slot PCIe fanout modules (#EMXH).
    • Each fanout module attaches to the system node through a PCIe optical or copper cable adapter (#EJ2A).

PowerVM

PowerVM, which delivers industrial-strength virtualization for AIX and Linux environments on Power processor-based systems, provides a virtualization-oriented performance monitor, and performance statistics are available through the HMC. These performance statistics can be used to understand the workload characteristics and to prepare for capacity planning.

Processor modules

The Power10 processor is the compute engine for the next generation of Power systems and successor to the current IBM Power9 processor. It offers superior performance and features such as the MMA facility to accelerate computation-intensive kernels such as matrix multiplication, convolution, and discrete Fourier transform. To efficiently accelerate MMA operations, the Power10 processor core implements a dense math engine (DME) microarchitecture that effectively provides an accelerator for cognitive computing, machine learning, and AI inferencing workloads.

A maximum of two Power10 processors of the same type are allowed.

  • One or two 12-core, typical 3.40 to 4.0 GHz (max) processors (#EPGM) are allowed.
  • Two 16-core, typical 3.10 to 4.0 GHz (max) processors (#EPGC) are allowed.
  • Two 24-core, typical 2.75 to 3.90 GHz (max) processors (#EPGD) are allowed.

The Power S1024 offers enhanced Workload Optimized Frequency for optimum performance. This mode can dynamically optimize the processor frequency at any given time based on CPU utilization and operating environmental conditions. For a description of this feature and other power management options available for this server, see the IBM EnergyScale for Power10 Processor-Based Systems website.

The following defines the allowed quantities of base or static processor activation entitlements:

Base Processor Core Activations for Pools 2.0 (#EP20)

  • From one to a maximum of twelve Base Processor Activations (Pools 2.0) for #EPGM - any OS (#EUBX) with one processor module are allowed.
  • From one to a maximum of twenty-four Base Processor Activations (Pools 2.0) for #EPGM - any OS (#EUBX) with two processor modules are allowed.
  • From one to a maximum of thirty-two Base Processor Activations (Pools 2.0) for #EPGC - any OS (#EUCK) with two processor modules are allowed.
  • From one to a maximum of forty-eight Base Processor Activations (Pools 2.0) for #EPGD - any OS (#EUCS) with two processor modules are allowed.

Note: Base Processor for Pools 2.0 features EUBX, EUCK, and EUCS are not available to order in China.

Shared Utility Capacity on Power S1024 systems provides enhanced multisystem resource sharing and by-the-minute tracking and consumption of computing resources across a collection of systems within a Power Enterprise Pools (2.0). It delivers a complete range of flexibility to tailor initial system configurations with the right mix of purchased and pay-for-use consumption of processors and software.

A Power Private Cloud Solution infrastructure consolidated onto Power S1024 systems has the potential to greatly simplify system management so IT teams can focus on optimizing their business results instead of moving resources around within their data center.

Shared Utility Capacity resources are easily tracked by virtual machine (VM) and monitored by a CMC, which integrates with local HMCs to manage the pool and track resource use by system and VM, by the minute, across a pool.

You no longer need to worry about overprovisioning capacity on each system to support growth, as all available processors on all systems in a pool are activated and available for use.

Base Capacity for processor resources is purchased on each Power S924, Power S922, or Power S1024 system and is then aggregated across a defined pool of systems for consumption monitoring.
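
The pooled consumption model described above can be illustrated with a short, hedged sketch. The following Python example is illustrative only and is not IBM's metering implementation; the pool composition, base-capacity values, and per-minute usage samples are hypothetical. It aggregates purchased Base Capacity across the pool and counts metered core-minutes only for pool-wide usage above that aggregate base.

    # Illustrative sketch of pooled by-the-minute metering (not IBM's implementation).
    # Base processor capacity purchased on each system is aggregated across the pool;
    # only pool-wide usage above the aggregate base is counted as metered core-minutes.

    pool = {
        "S1024-A": {"base_cores": 12},   # hypothetical systems and base activations
        "S1024-B": {"base_cores": 8},
        "S922-C":  {"base_cores": 10},
    }
    aggregate_base = sum(s["base_cores"] for s in pool.values())

    # Hypothetical per-minute samples of total cores in use across the whole pool.
    pool_usage_per_minute = [24, 28, 31, 35, 29, 27]

    metered_core_minutes = sum(
        max(0, used - aggregate_base) for used in pool_usage_per_minute
    )

    print(f"Aggregate base capacity: {aggregate_base} cores")
    print(f"Metered core-minutes above base: {metered_core_minutes}")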

Power Enterprise Pools created after July 12th, 2022, require a consistent tier of IBM i software license entitlement on all systems within the pool.

Clients with an existing Power Enterprise Pool of Power S924 and/or Power S922 systems supporting only AIX and/or Linux applications may simply add Power S1024 systems to their pool and migrate their applications at a rate and pace of their choosing.

Clients with an existing Power Enterprise Pool of S924 systems (P20) supporting IBM i applications should deploy new Power S1024 systems with up to 24 cores (P20) into the pool. A Power S1024 with 32 cores or 48 cores (P30) may not be deployed into a Pool with Power S924, Power S922, or Power S1022 systems with P10/P20 tier IBM i license entitlements.

Clients may create new Power Enterprise Pools with a mix of Power S1022 and S1024 systems within a single pool. By doing so, all IBM i license entitlements must be acquired at the higher tier required by the processor feature of the largest S1024 server in the Pool, either P20 or P30.

Capacity Upgrade on Demand Static Processor Core Activations

  • From six to a maximum of twelve CUoD Static Processor Core Activations for #EPGM - any OS (#EPFM) with one processor module are allowed.
  • From twelve to a maximum of twenty-four CUoD Static Processor Core Activations for #EPGM - any OS (#EPFM) with two processor modules are allowed.
  • From sixteen to a maximum of thirty-two CUoD Static Processor Core Activations for #EPGC - any OS (#EPFC) with two processor modules are allowed.
  • From twenty-four to a maximum of forty-eight CUoD Static Processor Core Activations for #EPGD - any OS (#EPFD) with two processor modules are allowed.

Note: At least 50 percent of the total processor cores in the Power S1024 system must be activated with static (CUoD) processor core activations.
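
As a quick illustration of the ordering ranges above, the following Python sketch checks a proposed combination of static (CUoD) and base (Pools 2.0) activations against the listed minimums and maximums. The rule table is transcribed from this section; the helper function and its names are hypothetical and this is not an IBM configurator.

    # Illustrative check of Power S1024 activation quantities (ranges transcribed
    # from this section; helper names are hypothetical, not an IBM tool).

    # (processor feature, modules) -> (installed cores, minimum static activations)
    RULES = {
        ("EPGM", 1): (12, 6),
        ("EPGM", 2): (24, 12),
        ("EPGC", 2): (32, 16),
        ("EPGD", 2): (48, 24),
    }

    def check_activations(proc_feature, modules, static_qty, base_qty):
        cores, min_static = RULES[(proc_feature, modules)]
        errors = []
        if static_qty < min_static:
            # the minimums equal 50 percent of the installed cores, per the note above
            errors.append(f"at least {min_static} static (CUoD) activations are required")
        if static_qty + base_qty > cores:
            errors.append(f"total activations exceed the {cores} installed cores")
        return errors or ["configuration looks consistent with the listed ranges"]

    print(check_activations("EPGC", 2, static_qty=16, base_qty=16))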

Conversions CUoD Static to Base Processor core for Pools 2.0

A variety of activations fit different usage and pricing options. Static activations are permanent and support any type of application environment on this server. Base processor activations are ordered against a specific server, but they can be moved to any server within the Power Pool and can support any type of application. The following defines the allowed conversions from static to base processor and activation entitlements:

  • From EPFM (One CUoD Static Processor Core Activation for #EPGM) to EUBZ (1 core Base Processor Activation (Pools 2.0) for EPGM - Any OS, Conv from EPFM)
  • From EPFC (One CUoD Static Processor Core Activation for #EPGC) to EUCR (1 core Base Processor Activation (Pools 2.0) for EPGC - Any OS, Conv from EPFC)
  • From EPFD (One CUoD Static Processor Core Activation for #EPGD) to EUCT (1 core Base Processor Activation (Pools 2.0) for EPGD - Any OS, Conv from EPFD)

Note: Pools 2.0 feature EP20 is required.

MMA

The Power10 processor core inherits the modular architecture of the Power9 processor core, but the redesigned and enhanced microarchitecture significantly increases the processor core performance and processing efficiency. The peak computational throughput is markedly improved by new execution capabilities and optimized cache bandwidth characteristics. Extra matrix math acceleration engines can deliver significant performance gains for machine learning, particularly for AI inferencing workloads.

Memory

The Power S1024 server uses the next-generation DDIMMs, which are high-performance, high-reliability, high-function memory cards that contain a buffer chip, intelligence, and 2933 MHz, or 3200 MHz DRAM memory. DDIMMs are placed in DDIMM slots in the server system.

  • A minimum 32 GB of memory is required with one processor module. All Memory DIMMs must be ordered in pairs.
  • A minimum 64 GB of memory is required with two processor modules. All Memory DIMMs must be ordered in quads.
  • Each DIMM feature code delivers two physical Memory DIMMs.

Plans for future memory upgrades should be taken into account when deciding which memory feature size to use at the time of initial system order.

For the best possible performance, it is generally recommended that memory be installed in all memory slots. IBM recommends populating all the DIMM slots, or as many as possible, especially for OLAP and similar high-bandwidth workloads.

To assist with the plugging rules, two DDIMMs are ordered using one memory feature number. Select from:

  • 32 GB (2 x 16 GB) DDIMMs, 3200 MHz, 8 Gb DDR4 Memory (#EM6N)
  • 64 GB (2 x 32 GB) DDIMMs, 3200 MHz, 8 Gb DDR4 Memory (#EM6W)
  • 128 GB (2 x 64 GB) DDIMMs, 3200 MHz, 16 Gb DDR4 Memory (#EM6X)
  • 256 GB (2 x 128 GB) DDIMMs, 2933 MHz, 16 Gb DDR4 Memory (#EM6U)
  • 512 GB (2 x 256 GB) DDIMMs, 2933 MHz, 16 Gb DDR4 Memory (#EM78)
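
The plugging rules above can be sanity-checked with a short sketch. This Python example is a hedged illustration only: it encodes the feature sizes listed above (each feature delivering two physical DDIMMs), the pair/quad ordering rule, and the 32 GB / 64 GB minimums; it does not replace IBM configurator tools such as e-config.

    # Illustrative memory-order check for the Power S1024 (not an IBM configurator).
    # Each memory feature delivers two physical DDIMMs of the stated total capacity.

    FEATURE_GB = {"EM6N": 32, "EM6W": 64, "EM6X": 128, "EM6U": 256, "EM78": 512}
    MAX_DDIMM_SLOTS = 32

    def check_memory_order(modules, feature_quantities):
        """modules: 1 or 2 processor modules; feature_quantities: {feature: qty}."""
        total_gb = sum(FEATURE_GB[f] * q for f, q in feature_quantities.items())
        ddimms = 2 * sum(feature_quantities.values())
        # DDIMMs are installed in pairs with one module and in quads with two modules;
        # one feature = one DDIMM pair, so two modules require an even feature count.
        group = 1 if modules == 1 else 2
        errors = []
        if sum(feature_quantities.values()) % group:
            errors.append("two processor modules require DDIMMs in quads (even feature count)")
        if total_gb < (32 if modules == 1 else 64):
            errors.append("below the minimum memory for this number of modules")
        if ddimms > MAX_DDIMM_SLOTS:
            errors.append("more DDIMMs than the 32 available slots")
        return errors or [f"{total_gb} GB across {ddimms} DDIMMs looks orderable"]

    print(check_memory_order(2, {"EM6W": 4}))   # 4 features = 8 DDIMMs, 256 GB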

AMM

AMM for Hypervisor is available as an option (#EM8G) to enhance resilience by mirroring critical memory used by the PowerVM hypervisor so that it can continue operating in the event of a memory failure. A portion of available memory can be proactively partitioned such that a duplicate set may be utilized upon non-correctable memory errors. This can be implemented at the granularity of DIMMs or logical memory blocks.

Power S1024 Capacity Backup (CBU) for IBM i

The Power S1024 CBU designation enables you to temporarily transfer IBM i processor license entitlements and IBM i user license entitlements purchased for a primary machine to a secondary CBU-designated system for high availability (HA) and disaster recovery (DR) operations. Temporarily transferring these resources instead of purchasing them for your secondary system may result in significant savings. Processor activations cannot be transferred.

The CBU specify feature 0444 or CBU specify feature 4891 are available only as part of a new server purchase. Certain system prerequisites must be met, and system registration and approval are required before the CBU specify feature can be applied on a new server. Standard IBM i terms and conditions do not allow either IBM i processor license entitlements or IBM i user license entitlements to be transferred permanently or temporarily. These entitlements remain with the machine they were ordered for. When you register the association between your primary and on-order CBU system, you must agree to certain terms and conditions regarding the temporary transfer.

After a new CBU system is registered as a pair with the proposed primary system and the configuration is approved, you can temporarily move your optional IBM i processor license entitlement and IBM i user license entitlements from the primary system to the CBU system when the primary system is down or while the primary system processors are inactive. The CBU system can then support failover and role swapping for a full range of test, DR, and HA scenarios. Temporary entitlement transfer means that the entitlement is a property transferred from the primary system to the CBU system and may remain in use on the CBU system as long as the registered primary and CBU system are in deployment for the high availability or disaster recovery operation. The intent of the CBU offering is to enable regular role-swap operations.

Before you can temporarily transfer IBM i processor license entitlements from the registered primary system, you must have more than one IBM i processor license on the primary machine and at least one IBM i processor license on the CBU server. To be in compliance, the CBU will be configured in such a manner that there will be no out-of-compliance messages prior to a failover. An activated processor must be available on the CBU server to use the transferred entitlement. You can then transfer any IBM i processor entitlements above the minimum one, assuming the total IBM i workload on the primary system does not require the IBM i entitlement you would like to transfer during the time of the transfer. During this temporary transfer, the CBU system's internal records of its total number of IBM i processor license entitlements are not updated, and you may see IBM i license noncompliance warning messages from the CBU system. In this situation, these warning messages do not mean you are out of compliance.

Before you can temporarily transfer 5250 Enterprise Enablement entitlements, you must have more than one 5250 Enterprise Enablement entitlement on the primary server and at least one 5250 Enterprise Enablement entitlement on the CBU system. You can then transfer the entitlements that are not required on the primary server during the time of transfer and that are above the minimum of one entitlement. The minimum number of permanent entitlements on the CBU is one; however, you are required to license all permanent workload, such as replication workload. If, for example, the replication workload consumes four processor cores at peak workload, then you are required to permanently license four cores on the CBU.
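
The transfer preconditions described above lend themselves to a simple illustration. The following Python sketch is hypothetical and only mirrors the stated rules (more than one entitlement on the primary, at least one permanent entitlement on the CBU, and permanent licensing of all permanent workload such as replication); it is not an IBM licensing or compliance tool.

    # Illustrative check of temporary CBU entitlement transfers (rules as stated
    # above; not an IBM licensing or compliance tool).

    def transferable_entitlements(primary_licensed, primary_in_use, cbu_permanent,
                                  replication_peak_cores):
        errors = []
        if primary_licensed <= 1:
            errors.append("primary must have more than one IBM i processor license")
        if cbu_permanent < 1:
            errors.append("CBU must keep at least one permanent IBM i processor license")
        if cbu_permanent < replication_peak_cores:
            errors.append("permanent CBU licenses must cover permanent workload "
                          "(for example, replication) at peak")
        if errors:
            return 0, errors
        # Entitlements above the minimum of one, and not needed by the primary
        # workload at the time of transfer, may be moved temporarily.
        movable = min(primary_licensed - 1, primary_licensed - primary_in_use)
        return max(0, movable), ["ok"]

    print(transferable_entitlements(primary_licensed=6, primary_in_use=0,
                                    cbu_permanent=4, replication_peak_cores=4))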

Servers with P20, P30, or higher software tiers do not have user entitlements that can be transferred; only processor license entitlements can be transferred.

For a Power S1024 CBU which is in the P20 software tier, the following are eligible primary systems:

  • Power E1080 (9080-HEX)
  • Power E980 (9080-M9S)
  • Power S1024 (9105-42A) with 48, 32, 24, or 12 cores
  • Power S924 (9009-42G)
  • Power S924 (9009-42A)

For a Power S1024 CBU which is in the P30 software tier, the following are eligible primary systems:

  • Power E1080 (9080-HEX)
  • Power E980 (9080-M9S)
  • Power S1024 (9105-42A) with 32 or 48 cores

Power S1024 SW tiers for IBM i

  • The 12- and 24-core processor servers (#EPGM, QPRCFEAT EPGM) are IBM i SW tier P20.
  • The 32-core processor server (#EPGC, QPRCFEAT EPGC) is IBM i SW tier P30.
  • The 48-core processor server (#EPGD, QPRCFEAT EPGD) is IBM i SW tier P30.

During the temporary transfer, the CBU system's internal records of its total number of IBM i processor entitlements are not updated, and you may see IBM i license noncompliance warning messages from the CBU system. Prior to a temporary transfer, the CBU will be configured in such a manner that there are no out of compliance warning messages.

If your primary or CBU machine is sold or discontinued from use, any temporary entitlement transfers must be returned to the machine on which they were originally acquired. For CBU registration, terms and conditions, and further information, see the IBM Power Systems: Capacity BackUp website.

Titanium power supply

Titanium power supplies are designed to meet the latest efficiency regulations. The S1024 has four titanium power supplies in a 2+2 redundant configuration: 1600 watt, 200--240 volt (#EB3S).

The power supplies dock directly to the power distribution board, which is bolted down to the system planar. Each power supply unit (PSU) has an interlock mechanism that prevents a PSU from being removed from the chassis while its line cord is connected. This ensures that input power is removed from the PSU prior to the PSU's removal from the chassis. The 1600-watt power supply uses CFF card-edge connectors. The power distribution board includes one auxiliary power connector for a high-power PCIe or OpenCAPI card on the left side of the power supplies, viewing from the back of the system. This connector supports up to 400 watts.

Redundant fans

Redundant fans are standard.

Power cords

Four power cords are required. The Power S1024 server supports a 4.3-meter (14-foot) drawer-to-wall/IBM PDU (250V/10A) power cord in the base shipment group. See the feature listing for other options.

PCIe slots

The Power S1024 server has up to sixteen U.2 NVMe devices and up to ten PCIe hot-plug slots with concurrent maintenance, providing excellent configuration flexibility and expandability. For more information about PCIe slots, see the rack-integrated system with I/O expansion drawer section below.

With two Power10 processor DCMs, ten PCIe slots are available:

  • Four x16 Gen4 or x8 Gen5 full-height, half-length slots
  • Four x8 Gen5 full-height, half-length slots (with x16 connectors)
  • Two x8 Gen4 full-height, half-length slots (with x16 connectors)

With one Power10 processor DCM, five PCIe slots are available:

  • One PCIe x16 Gen4 or x8 Gen5, full-height, half-length slot
  • Three PCIe x8 Gen5, full-height, half-length slots (with x16 connector)
  • One PCIe x8 Gen4, full-height, half-length slot (with x16 connector)

The x16 slots can provide up to twice the bandwidth of x8 slots because they offer twice as many PCIe lanes. PCIe Gen5 slots can support up to twice the bandwidth of a PCIe Gen4 slot, and PCIe Gen4 slots can support up to twice the bandwidth of a PCIe Gen3 slot, assuming an equivalent number of PCIe lanes.
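
To make the doubling relationships concrete, the following Python sketch estimates nominal per-slot bandwidth from the per-lane data rates of each PCIe generation (approximately 1, 2, and 4 GB/s per lane, per direction, for Gen3, Gen4, and Gen5). These are rounded, commonly cited approximations rather than figures taken from this announcement.

    # Rough, rounded per-direction PCIe bandwidth estimates (GB/s per lane):
    # Gen3 ~1, Gen4 ~2, Gen5 ~4. Common approximations, not IBM-published figures.
    PER_LANE_GBPS = {3: 1.0, 4: 2.0, 5: 4.0}

    def slot_bandwidth(gen, lanes):
        return PER_LANE_GBPS[gen] * lanes

    for gen, lanes in [(4, 16), (5, 8), (4, 8), (3, 8)]:
        print(f"PCIe Gen{gen} x{lanes}: ~{slot_bandwidth(gen, lanes):.0f} GB/s per direction")

    # The output illustrates the text above: a Gen4 x16 slot and a Gen5 x8 slot land
    # at roughly the same nominal bandwidth, each about twice a Gen4 x8 slot.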

At least one PCIe Ethernet adapter is required on the server by IBM to ensure proper manufacture, test, and support of the server. One of the x8 PCIe slots is used for this required adapter.

These servers are smarter about energy efficiency when cooling the PCIe adapter environment. They sense which PCIe adapters are installed in their PCIe slots and, if an adapter requires higher levels of cooling, they automatically speed up fans to increase airflow across the PCIe adapters. Note that faster fans increase the sound level of the server. Higher wattage PCIe adapters include the PCIe3 SAS adapters and SSD/flash PCIe adapters (#EJ10, #EJ14, and #EJ0J).

NVMe drive slots, RDX bay, and storage backplane options

Non-volatile memory express (NVMe) SSDs, in the 15-millimeter carrier U.2 2.5-inch form factor, are used for internal storage in the Power S1024 system. The Power S1024 supports up to 16 NVMe U.2 devices when two storage backplanes with eight NVMe U.2 drive slots (#EJ1Y) are ordered. Both 7-millimeter and 15-millimeter NVMe are supported in the 15-millimeter carrier. The Power S1024 also supports an internal RDX drive attached through the USB controller.

Note: There is no SAS storage backplane supported on the Power S1024.

Cable management arm

A folding arm is attached to the server's rails at the rear of the server. The server's power cords and the cables from the PCIe adapters or integrated ports run through the arm and into the rack. The arm enables the server to be pulled forward on its rails for service access to PCIe slots, memory, processors, and so on without disconnecting the cables from the server. Approximately 1 meter (3 feet) of cord or cable length is needed for the arm.

Integrated I/O ports

There are two HMC ports, one USB 3.0 port internal only for RDX attach, and two USB 3.0 ports. The two HMC ports are RJ45, supporting 1 Gb Ethernet connections. The eBMC USB 2.0 port can be used for communication to an Uninterrupted Power Supply (UPS) or code update.

Rack-integrated system with I/O expansion drawer

Regardless of the rack-integrated system to which the PCIe Gen3 I/O expansion drawer is attached, if the expansion drawer is ordered as factory integrated, the PDUs in the rack will be placed horizontally by default to enhance cable management.

Expansion drawers complicate the access to vertical PDUs if located at the same height. IBM recommends accommodating PDUs horizontally on racks containing one or more PCIe Gen3 I/O expansion drawers.

After the rack with expansion drawers is delivered, you may rearrange the PDUs from horizontal to vertical. However, the configurator will continue to consider the PDUs as being placed horizontally for the matter of calculating the free space still available in the rack.

Vertical PDUs can be used only if CSRP (#0469) is on the order. When specifying CSRP, you must provide the locations where the PCIe Gen3 I/O expansion drawers should be placed. Note that you must avoid placing drawers adjacent to vertical PDU locations EIA 6 through 16 and 21 through 31.

The I/O expansion drawer can be migrated from a Power9 to a Power10 processor-based system. Only I/O cards supported on Power10 in the I/O expansion drawer are allowed. Clients migrating the I/O expansion drawer configuration might have one or two PCIe3 six-slot fanout modules (#EMXH) installed in the rear of the I/O expansion drawer.

For a 4U server configuration with one processor module, up to one I/O expansion drawer (#EMX0) and one fanout module (#EMXH) connected to one PCIe x16 to CXP Converter Card Adapter (#EJ2A) are supported. The right PCIe module bay must be populated by a filler module.

For a 4U server configuration with two processor modules, up to two I/O expansion drawers (#EMX0) and four fanout modules (#EMXH) connected to four PCIe x16 to CXP Converter Card Adapters (#EJ2A) are supported.

Limitations:

  • Mixing of prior PCIe3 fanout modules (#EMXF or #EMXG) with PCIe3 fanout module (#EMXH) in the same I/O expansion drawer is not allowed.
  • The PCIe x16 to CXP Converter Card Adapter (#EJ2A) requires one PCIe3 x16 slot in the system unit plus a pair of optical cables (such as feature ECCX or feature ECCY) or a pair of copper cables (such as feature ECCS).

RDX docking station

The RDX docking station accommodates RDX removable disk cartridges of any capacity. The disk is in a protective rugged cartridge enclosure that plugs into the docking station. The docking station holds one removable rugged disk drive or cartridge at a time. The rugged removable disk cartridge and docking station perform saves, restores, and backups similar to a tape drive. This docking station can be an excellent entry capacity and performance option.

EXP24SX SAS storage enclosure

The EXP24SX is a storage expansion enclosure with 24 2.5-inch SFF SAS bays. It supports up to 24 hot-plug HDDs or SSDs in only 2 EIA of space in a 19-inch rack. The EXP24SX SFF bays use SFF Gen2 (SFF-2) carriers or trays.

The EXP24SX drawer feature ESLS is supported on Power10 scale-out servers by AIX, IBM i, Linux, and VIOS.

With AIX, Linux, and VIOS, the EXP24SX can be ordered with four sets of 6 bays (mode 4), two sets of 12 bays (mode 2), or one set of 24 bays (mode 1). With IBM i, only one set of 24 bays (mode 1) is supported. It is possible to change the mode setting in the field using software commands along with a specifically documented procedure.

Important: When changing modes, a skilled, technically qualified person should follow the special documented procedures. Improperly changing modes can potentially destroy existing RAID sets, prevent access to existing data, or allow other partitions to access another partition's existing data. Hire an expert to assist if you are not familiar with this type of reconfiguration work.

Four mini-SAS HD ports on the EXP24SX are attached to PCIe Gen3 SAS adapters or attached to an integrated SAS controller in a Power10 scale-out server. The following PCIe3 SAS adapters support the EXP24SX:

  • PCIe3 RAID SAS Adapter Quad-port 6 Gb x8 (#EJ0J)
  • PCIe3 12 GB Cache RAID Plus SAS Adapter Quad-port 6 Gb x8 (#EJ14)
  • PCIe3 LP RAID SAS Adapter Quad-port 6 Gb x8 (#EJ0M)

Earlier-generation PCIe1 or PCIe2 SAS adapters are not supported with the EXP24SX.

The attachment between the EXP24SX and the PCIe3 SAS adapters or integrated SAS controllers is through SAS YO12 or X12 cables. X12 and YO12 cables are designed to support up to 12 Gb SAS. The PCIe Gen3 SAS adapters support up to 6 Gb throughput. The EXP24SX has been designed to support up to 12 Gb throughput if future SAS adapters support that capability. All ends of the YO12 and X12 cables have mini-SAS HD narrow connectors. Cable options are:

  • X12 cable: 3-meter copper (#ECDJ), 4.5-meter optical (#ECDK), 10-meter optical (#ECDL)
  • YO12 cables: 1.5-meter copper (#ECDT), 3-meter copper (#ECDU)
  • 1M 100 GbE Optical Cable QSFP28 (AOC) (#EB5K)
  • 1.5M 100 GbE Optical Cable QSFP28 (AOC) (#EB5L)
  • 2M 100 GbE Optical Cable QSFP28 (AOC) (#EB5M)
  • 3M 100 GbE Optical Cable QSFP28 (AOC) (#EB5R)
  • 5M 100 GbE Optical Cable QSFP28 (AOC) (#EB5S)
  • 10M 100 GbE Optical Cable QSFP28 (AOC) (#EB5T)
  • 15M 100 GbE Optical Cable QSFP28 (AOC) (#EB5U)
  • 20M 100 GbE Optical Cable QSFP28 (AOC) (#EB5V)
  • 30M 100 GbE Optical Cable QSFP28 (AOC) (#EB5W)
  • 50M 100 GbE Optical Cable QSFP28 (AOC) (#EB5X)

An AA12 cable interconnecting a pair of PCIe3 12 GB cache adapters (two #EJ14) is not attached to the EXP24SX. These higher-bandwidth cables could support 12 Gb throughput if future adapters support that capability. Copper feature ECE0 is 0.6 meters long, feature ECE3 is 3 meters long, and optical AA12 feature ECE4 is 4.5 meters long.

One no-charge specify code is used with each EXP24SX I/O drawer (#ESLS) to communicate to IBM configurator tools and IBM Manufacturing which mode setting, adapter, and SAS cable are needed. With this specify code, no hardware is shipped. The physical adapters, controllers, and cables must be ordered with their own chargeable feature numbers. There are more technically supported configurations than are represented by these specify codes. IBM Manufacturing and IBM configurator tools such as e-config only understand and support EXP24SX configurations represented by these specify codes.

Specify code Mode Adapter/Controller Cable to drawer Environment
EJW0 Mode 1 CEC SAS Ports 2 YO12 cables AIX/IBM i/Linux/VIOS
EJW1 Mode 1 One (unpaired) #EJ0J/#EJ0M 1 YO12 cable AIX/IBM i/Linux/VIOS
EJW2 Mode 1 Two (one pair) #EJ0J/#EJ0M 2 YO12 cables AIX/IBM i/Linux/VIOS
EJW3 Mode 2 Two (unpaired) #EJ0J/#EJ0M 2 X12 cables AIX/Linux/VIOS
EJW4 Mode 2 Four (two pair) #EJ0J/#EJ0M 2 X12 cables AIX/Linux/VIOS
EJW5 Mode 4 Four (unpaired) #EJ0J/#EJ0M 2 X12 cables AIX/Linux/VIOS
EJW6 Mode 2 One (unpaired) #EJ0J/#EJ0M 2 YO12 cables AIX/Linux/VIOS
EJW7 Mode 2 Two (unpaired) #EJ0J/#EJ0M 2 YO12 cables AIX/Linux/VIOS
EJWF Mode 1 Two (one pair) #EJ14 2 YO12 cables AIX/IBM i/Linux/VIOS
EJWG Mode 2 Two (one pair) #EJ14 2 X12 cables AIX/Linux/VIOS
EJWJ Mode 2 Four (two pair) #EJ14 2 X12 cables AIX/Linux/VIOS

All of the above EXP24SX specify codes assume a full set of adapters and cables able to run all the SAS bays configured. The following specify codes communicate to IBM Manufacturing that a lower-cost partial configuration is to be configured, where the ordered adapters and cables can run only a portion of the SAS bays. The future MES addition of adapters and cables can enable the remaining SAS bays for growth. The following specify codes are used:

Specify code Mode Adapter/Controller Cable to drawer Environment
EJWA (1/2 of EJW7) Mode 2 One (unpaired) #EJ0J/#EJ0M 1 YO12 cable AIX/Linux/VIOS
EJWB (1/2 of EJW4) Mode 2 Two (one pair) #EJ0J/#EJ0M 1 X12 cable AIX/Linux/VIOS
EJWC (1/4 of EJW5) Mode 4 One (unpaired) #EJ0J/#EJ0M 1 X12 cable AIX/Linux/VIOS
EJWD (1/2 of EJW5) Mode 4 Two (unpaired) #EJ0J/#EJ0M 1 X12 cable AIX/Linux/VIOS
EJWE (3/4 of EJW5) Mode 4 Three (unpaired) #EJ0J/#EJ0M 2 X12 cables AIX/Linux/VIOS
EJWH (1/2 of EJWJ) Mode 2 Two (one pair) #EJ14 1 X12 cable AIX/Linux/VIOS

An EXP24SX drawer in mode 4 can be attached to two or four SAS controllers and provide a great deal of configuration flexibility. For example, if using unpaired feature EJ0J adapters, these EJ0J adapters could be in the same server in the same partition, same server in different partitions, or even different servers.

An EXP24SX drawer in mode 2 has similar flexibility. If the I/O drawer is in mode 2, then half of its SAS bays can be controlled by one pair of PCIe3 SAS adapters, such as a 12 GB write cache adapter pair (#EJ14), and the other half can be controlled by a different PCIe3 SAS 12 GB write cache adapter pair or by zero-write-cache PCIe3 SAS adapters.

Note that for simplicity, IBM configurator tools such as e-config assume that the SAS bays of an individual I/O drawer are controlled by one type of SAS adapter. As a client, you have more flexibility than e-config understands.

A maximum of 24 2.5-inch SSDs or 2.5-inch HDDs is supported in the EXP24SX 24 SAS bays. There can be no mixing of HDDs and SSDs in the same mode 1 drawer. HDDs and SSDs can be mixed in a mode 2 or mode 4 drawer, but they cannot be mixed within a logical split of the drawer. For example, in a mode 2 drawer with two sets of 12 bays, one set could hold SSDs and one set could hold HDDs, but you cannot mix SSDs and HDDs in the same set of 12 bays.
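
The mode-dependent bay groupings and the HDD/SSD mixing rule described above can be expressed in a few lines. This Python sketch is illustrative only; the drawer contents are hypothetical and the check simply mirrors the rules stated in this section (mode 1 = one set of 24 bays, mode 2 = two sets of 12, mode 4 = four sets of 6, with no HDD/SSD mixing within a set).

    # Illustrative EXP24SX bay-grouping check (rules transcribed from this section).
    SETS_PER_MODE = {1: 1, 2: 2, 4: 4}   # mode -> number of bay sets (24 bays total)

    def check_drawer(mode, drives):
        """drives: list of 24 entries, each 'HDD', 'SSD', or None for an empty bay."""
        sets = SETS_PER_MODE[mode]
        bays_per_set = 24 // sets
        problems = []
        for i in range(sets):
            group = {d for d in drives[i * bays_per_set:(i + 1) * bays_per_set] if d}
            if len(group) > 1:
                problems.append(f"set {i + 1} mixes HDDs and SSDs, which is not allowed")
        return problems or ["drive placement is consistent with the mixing rules"]

    # Hypothetical mode 2 drawer: SSDs in the first 12 bays, HDDs in the second 12.
    layout = ["SSD"] * 12 + ["HDD"] * 12
    print(check_drawer(2, layout))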

The indicator feature EHS2 helps IBM Manufacturing understand where SSDs are placed in a mode 2 or a mode 4 EXP24SX drawer. On one mode 2 drawer, use a quantity of one feature EHS2 to have SSDs placed in just half the bays, and use two EHS2 features to have SSDs placed in any of the bays. Similarly, on one mode 4 drawer, use a quantity of one, two, three, or four EHS2 features to indicate how many bays can have SSDs. With multiple EXP24SX orders, IBM Manufacturing will have to guess which quantity of feature EHS2 is associated with each EXP24SX. Consider using CSP (#0456) to reduce guessing.

Two-and-a-half-inch SFF SAS HDDs and SSDs are supported in the EXP24SX. All drives are mounted on Gen2 carriers or trays and thus named SFF-2 drives.

The EXP24SX drawer has many high-reliability design points:

  • SAS drive bays that support hot swap
  • Redundant and hot-plug-capable power and fan assemblies
  • Dual line cords
  • Redundant and hot-plug enclosure service modules (ESMs)
  • Redundant data paths to all drives
  • LED indicators on drives, bays, ESMs, and power supplies that support problem identification
  • Through the SAS adapters or controllers, drives that can be protected with RAID and mirroring and hot-spare capability

Order two ESLA features for AC power supplies. The enclosure is shipped with adjustable depth rails and can accommodate 19-inch rack depths from 59.5--75 centimeters (23.4--29.5 inches). Slot filler panels are provided for empty bays when initially shipped from IBM.

PCIe Gen3 I/O drawer cabling option

A copper cabling option (#ECCS) is available for the scale-out servers. The cable option offers a much lower-cost connection between the server and the PCIe Gen3 I/O drawer fanout modules. The currently available Active Optical Cable (AOC) offers much longer cables, providing rack placement flexibility. Plus, AOC cables are much thinner and have a tighter bend radius and thus are much easier to cable in the rack.

The 3M Copper CXP Cable Pair (#ECCS) has the same performance and same reliability, availability, and serviceability (RAS) characteristics as the AOC cables. One copper cable length of 3 meters is offered. Note that the cable management arm of the scale-out servers requires about 1 meter of cable.

Like the AOC cable pair, the copper pair is cabled in the same manner. One cable attaches to the top CXP port in the PCIe adapter in the x16 PCIe slot in the server system unit and then attaches to the top CXP port in the fanout module in the I/O drawer. Its cable pair attaches to the bottom CXP port of the same PCIe adapter and to the bottom CXP port of the same fanout module. Note that the PCIe adapter providing the CXP ports on the server was named a PCIe3 "Optical" Cable Adapter. In hindsight, this naming was unfortunate as the adapter's CXP ports are not unique to optical. But at the time, optical cables were the only connection option planned.

Copper and AOC cabling can be mixed on the same server. However, they cannot be mixed on the same PCIe Gen3 I/O drawer or mixed on the same fanout module.

Copper cables have the same operating system software prerequisites as AOC cables.

Racks

The Power S1024 is designed to fit a standard 19-inch rack. IBM Development has tested and certified the system in the IBM Enterprise Rack (7965-S42). The 7965-S42 rack is a 2-meter enterprise rack that provides 42U or 42 EIA of space. You can choose to place the server in other racks if you are confident those racks have the strength, rigidity, depth, and hole pattern characteristics required. You should work with IBM Service to determine the appropriateness of other racks.

It is highly recommended that the Power S1024 be ordered with an IBM 42U Enterprise Rack (7965-S42). An initial system order is placed in a 7965-S42 rack. This is done to ease and speed client installation, provide a more complete and higher quality environment for IBM Manufacturing system assembly and testing, and provide a more complete shipping package.

Recommendation: The 7965-S42 rack has optimized cable routing, so all 42U may be populated with equipment.

The 7965-S42 rack does not need 2U on either the top or bottom for cable egress.

With the 2-meter 7965-S42 rack, a rear rack extension of 12.7 centimeters (5 inches) feature ECRK provides space to hold cables on the side of the rack and keep the center area clear for cooling and service access.

Recommendation: Include the above extension when approximately more than 16 I/O cables per side are present or may be added in the future; when using the short-length, thinner SAS cables; or when using thinner I/O cables, such as Ethernet. If you use longer-length, thicker SAS cables, fewer cables will fit within the rack.

SAS cables are most commonly found with multiple EXP24SX SAS drawers (#ESLS) driven by multiple PCIe SAS adapters. For this reason, it is good practice to keep multiple EXP24SX drawers in the same rack as the PCIe I/O drawer or in a separate rack close to the PCIe I/O drawer, using shorter, thinner SAS cables. The feature ECRK extension can be good to use even with smaller numbers of cables because it enhances the ease of cable management with the extra space it provides.

Multiple service personnel are required to manually remove or insert a system node drawer into a rack, given its dimensions and weight and content.

Recommendation: To avoid any delay in service, obtain an optional lift tool (#EB3Z). A lighter, lower-cost lift tool is FC EB3Z1 (lift tool) and EB4Z1 (angled shelf kit for lift tool). The EB3Z lift tool provides a hand crank to lift and position a server up to 400 pounds. Note that a single system node can weigh up to 86.2 kilograms (190 pounds).

Note: Feature EB3Z and feature EB4Z are not available to order in Albania, Bahrain, Bulgaria, Croatia, Egypt, Greece, Jordan, Kuwait, Kosovo, Montenegro, Morocco, Oman, UAE, Qatar, Saudi Arabia, Serbia, Slovakia, Slovenia, Taiwan, and Ukraine.

High-function (switched and monitored) PDUs plus

Hardware:

  • IEC 62368-1 and IEC 60950 safety standard
  • A new product safety approval
  • No China 5000-meter altitude or tropical restrictions
  • Detachable inlet for 3-phase delta-wired PDU with 30A, 50A, and 60A wall plugs
  • IBM Technology and Qualification approved components, such as anti-sulfur resistors (ASRs)
  • Ethernet 10/100/1000 Mb/s

Software:

  • Internet Protocol (IP) version 4 and IPv6 support
  • Secure Shell (SSH) protocol command line
  • Ability to change passwords over a network

PDU description       208 V 3-phase delta   200 V--240 V 1-phase or 3-phase wye
High-Function 12xC13  #ECJQ/#ECJP           #ECJN/#ECJM
High-Function 9xC19   #ECJL/#ECJK           #ECJJ/#ECJG

These PDUs can be mounted vertically in rack-side pockets or they can be mounted horizontally. If mounted horizontally, they each use one EIA (1U) of rack space. See feature EPTH for horizontal mounting hardware, which is used when IBM Manufacturing doesn't automatically factory-install the PDU. Two RJ45 ports on the front of the PDU enable you to monitor each receptacle's electrical power usage and to remotely switch any receptacle on or off.

Recommendation: The PDU is shipped with a generic PDU password. IBM strongly urges you to change it upon installation.

Existing and new high-function (switched and monitored) PDUs have the same physical dimensions. New high-function (switched and monitored) PDUs can be supported in the same racks as existing PDUs. Mixing of PDUs in a rack on new orders is not allowed.

Also, all factory-integrated orders must have the same PDU line cord.

The PDU features ECJQ/ECJP and ECJL/ECJK with the Amphenol inlet connector require new PDU line cords:

  • #ECJ5 - 4.3-meter (14-foot) PDU to Wall 3PH/24A 200--240V Delta-wired Power Cord
  • #ECJ7 - 4.3-meter (14-foot) PDU to Wall 3PH/48A 200--240V Delta-wired Power Cord

No pigtail (like #ELC0) is available because an Amphenol male inline connector is unavailable.

The PDU features ECJJ/ECJG and ECJN/ECJM with the UTG624-7SKIT4/5 inlet connector use the existing PDU line cord features 6653, 6667, 6489, 6654, 6655, 6656, 6657, 6658, 6491, or 6492.

Reliability, Availability, and Serviceability

Reliability, fault tolerance, and data correction

The reliability of systems starts with components, devices, and subsystems that are designed to be highly reliable. During the design and development process, subsystems go through rigorous verification and integration testing processes. During system manufacturing, systems go through a thorough testing process to help ensure the highest level of product quality.

The Power10 processor-based scale-out systems come with the following RAS characteristics:

  • Power10 processor RAS
  • Open Memory Interface, DDIMMs RAS
  • Enterprise BMC service processor for system management and service
  • AMM for Hypervisor
  • NVMe drives concurrent maintenance
  • PCIe adapters concurrent maintenance
  • Redundant and hot-plug cooling
  • Redundant and hot-plug power
  • Light path enclosure and FRU LEDs
  • Service and FRU labels
  • Client or IBM install
  • Proactive support and service -- call home
  • Client or IBM service

Service processor

Power10 scale-out 2S-4S systems come with a redesigned service processor based on a Baseboard Management Controller (BMC) design with firmware that is accessible through open-source industry standard APIs, such as Redfish. An upgraded ASMI web browser user interface preserves the required RAS functions while allowing the user to perform tasks in a more intuitive way.

Diagnostic monitoring of recoverable errors from the processor chipset is performed on the system processor itself, while fatal diagnostic monitoring of the processor chipset is performed by the service processor. The service processor runs on its own power boundary and does not require resources from a system processor to be operational to perform its tasks.

The service processor supports surveillance of the connection to the HMC and to the system firmware (hypervisor). It also provides several remote power control options, environmental monitoring, reset, restart, remote maintenance, and diagnostic functions, including console mirroring. The BMC service processor's menus (ASMI) can be accessed concurrently during system operation, allowing nondisruptive changes to system default parameters as well as the ability to view and download error logs and check system health.

Redfish, an industry standard for server management, enables the Power servers to be managed individually or in a large data center. Standard functions such as inventory, event logs, sensors, dumps, and certificate management are all supported with Redfish. In addition, new user management features support multiple users and privileges on the BMC via Redfish or ASMI. User management via LDAP is also supported. The Redfish events service provides a means for notification of specific critical events such that actions can be taken to correct issues. The Redfish telemetry service provides access to a wide variety of data (for example, power consumption and ambient, core, DIMM, and I/O temperatures) that can be streamed at periodic intervals.
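
As a small illustration of the standard Redfish interfaces mentioned above, the following Python sketch walks the Redfish service root on the eBMC and lists the chassis resources. It is a minimal sketch that assumes network reachability, valid credentials, and a self-signed certificate; the host name and credentials shown are placeholders, and production code should verify TLS certificates and manage Redfish sessions properly.

    # Minimal Redfish walk of an eBMC (hypothetical host/credentials; illustrative only).
    import requests

    BMC = "https://ebmc.example.com"          # placeholder eBMC address
    AUTH = ("admin", "password")              # placeholder credentials

    def get(path):
        # verify=False only because many BMCs ship self-signed certificates;
        # verify certificates properly in production.
        r = requests.get(BMC + path, auth=AUTH, verify=False, timeout=30)
        r.raise_for_status()
        return r.json()

    root = get("/redfish/v1/")                # Redfish service root (DMTF standard)
    chassis_collection = get(root["Chassis"]["@odata.id"])

    for member in chassis_collection.get("Members", []):
        chassis = get(member["@odata.id"])
        print(chassis.get("Id"), chassis.get("Model"), chassis.get("PowerState"))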

Mutual surveillance

The service processor monitors the operation of the firmware during the boot process and also monitors the hypervisor for termination. The hypervisor monitors the service processor and reports a service reference code when it detects surveillance loss. In the PowerVM environment, the hypervisor will initiate a reset/reload of the service processor if it detects the loss of the service processor.

Environmental monitoring functions

The Power family performs ambient and over-temperature monitoring and reporting, and automatically adjusts fan speeds based on those temperatures.

Memory subsystem RAS

The Power10 scale-out system introduces a new 2U-tall DDIMM, which has a new OpenCAPI memory interface, known as OMI, for resilient and fast communication to the processor. This new memory subsystem design delivers solid RAS, as described below.

Power10 processor functions

As in Power9, the Power10 processor has the ability to do processor instruction retry for some transient errors and core-contained checkstop for certain solid faults. The fabric bus design with CRC and retry persists in Power10 where a CRC code is used for checking data on the bus and has an ability to retry a faulty operation.

Cache availability

The L2/L3 caches in the Power10 processor in the memory buffer chip are protected with double-bit detect, single-bit correct error detection code (ECC). In addition, a threshold of correctable errors detected on cache lines can result in the data in the cache lines being purged and the cache lines removed from further operation without requiring a reboot in the PowerVM environment.

Modified data would be handled through Special Uncorrectable Error handling. L1 data and instruction caches also have a retry capability for intermittent errors and a cache set delete mechanism for handling solid failures.

Special Uncorrectable Error handling

Special Uncorrectable Error (SUE) handling prevents an uncorrectable error in memory or cache from immediately causing the system to terminate. Rather, the system tags the data and determines whether it will ever be used again. If the error is irrelevant, it will not force a checkstop. If and when the data is used, an I/O adapter controlled by an I/O hub controller freezes if the data is transferred to an I/O device; otherwise, termination may be limited to the program, kernel, or hypervisor that owns the data.

PCI extended error handling

PCI extended error handling (EEH)-enabled adapters respond to a special data packet generated from the affected PCI slot hardware by calling system firmware, which will examine the affected bus, allow the device driver to reset it, and continue without a system reboot. For Linux, EEH support extends to the majority of frequently used devices, although some third-party PCI devices may not provide native EEH support.

Uncorrectable error recovery

When the auto-restart option is enabled, the system can automatically restart following an unrecoverable software error, hardware failure, or environmentally induced (AC power) failure.

Serviceability

The purpose of serviceability is to efficiently repair the system while attempting to minimize or eliminate impact to system operation. Serviceability includes system installation, MES (system upgrades/downgrades), and system maintenance/repair. Depending upon the system and warranty contract, service may be performed by the client, an IBM representative, or an authorized warranty service provider.

The serviceability features delivered in this system help provide a highly efficient service environment by incorporating the following attributes:

  • Design for SSR setup, install, and service
  • Error Detection and Fault Isolation (ED/FI)
  • First Failure Data Capture (FFDC)
  • Light path service indicators
  • Service and FRU labels available on the system
  • Service procedures documented in IBM Documentation or available through the HMC
  • Automatic reporting of serviceable events to IBM through the Electronic Service Agent Call Home application

Service environment

In the PowerVM environment, the HMC is a dedicated server that provides functions for configuring and managing servers in either a partitioned or full-system partition configuration using a GUI, command-line interface (CLI), or REST API. An HMC attached to the system enables support personnel (with client authorization) to log in remotely, or locally at the physical HMC in proximity to the server being serviced, to review error logs and perform remote maintenance if required.

The Power10 processor-based systems support several service environments:

  • Attachment to one or more HMCs or vHMCs is a supported option by the system with PowerVM. This is the default configuration for servers supporting logical partitions with dedicated or virtual I/O. In this case, all servers have at least one logical partition.
  • No HMC. There are two service strategies for non-HMC systems.
    • Full-system partition with PowerVM: A single partition owns all the server resources and only one operating system may be installed. The primary service interface is through the operating system and the service processor.
    • Partitioned system with NovaLink: In this configuration, the system can have more than one partition and can be running more than one operating system. The primary service interface is through the service processor.

Service interface

Support personnel can use the service interface to communicate with the service support applications in a server using an operator console, a graphical user interface on the management console or service processor, or an operating system terminal. The service interface helps to deliver a clear, concise view of available service applications, helping the support team to manage system resources and service information in an efficient and effective way. Applications available through the service interface are carefully configured and placed to give service providers access to important service functions.

Different service interfaces are used, depending on the state of the system, hypervisor, and operating environment. The primary service interfaces are:

  • LEDs
  • Operator panel
  • BMC Service Processor menu
  • Operating system service menu
  • Service Focal Point on the HMC or vHMC with PowerVM

In the light path LED implementation, the system can clearly identify components for replacement by using specific component-level LEDs and can also guide the servicer directly to the component by signaling (turning on solid) the enclosure fault LED, and component FRU fault LED. The servicer can also use the identify function to blink the FRU-level LED. When this function is activated, a roll-up to the blue enclosure identify will occur to identify an enclosure in a rack. These enclosure LEDs will turn on solid and can be used to follow the light path from the enclosure and down to the specific FRU in the PowerVM environment.

First Failure Data Capture and error data analysis

First Failure Data Capture (FFDC) is a technique that helps ensure that when a fault is detected in a system, the root cause of the fault will be captured without the need to re-create the problem or run any sort of extended tracing or diagnostics program. For the vast majority of faults, a good FFDC design means that the root cause can also be detected automatically without servicer intervention.

FFDC information, error data analysis, and fault isolation are necessary to implement the advanced serviceability techniques that enable efficient service of the systems and to help determine the failing items.

In the rare absence of FFDC and Error Data Analysis, diagnostics are required to re-create the failure and determine the failing items.

Diagnostics

General diagnostic objectives are to detect and identify problems so they can be resolved quickly. Elements of IBM's diagnostics strategy include:

  • Provide a common error code format equivalent to a system reference code with PowerVM, system reference number, checkpoint, or firmware error code.
  • Provide fault detection and problem isolation procedures. Support remote connection ability to be used by the IBM Remote Support Center or IBM Designated Service.
  • Provide interactive intelligence within the diagnostics with detailed online failure information while connected to IBM's back-end system.

Automatic diagnostics

The processor and memory FFDC technology is designed to perform without the need for re-create diagnostics and without requiring user intervention. Solid and intermittent errors are designed to be correctly detected and isolated at the time the failure occurs. Runtime and boot-time diagnostics fall into this category.

Standalone diagnostics

As the name implies, standalone or user-initiated diagnostics requires user intervention. The user must perform manual steps, including:

  • Booting from the diagnostics CD, DVD, USB, or network
  • Interactively selecting steps from a list of choices

Concurrent maintenance

The determination of whether a firmware release can be updated concurrently is identified in the readme information file that is released with the firmware. An HMC is required for the concurrent firmware update with PowerVM. In addition, concurrent maintenance of PCIe adapters and NVMe drives is supported with PowerVM. Power supplies, fans, and the op panel LCD are hot pluggable.

Service labels

Service providers use these labels to assist them in performing maintenance actions. Service labels are found in various formats and positions and are intended to transmit readily available information to the servicer during the repair process. Following are some of these service labels and their purpose:

  • Location diagrams: Location diagrams are located on the system hardware, relating information regarding the placement of hardware components. Location diagrams may include location codes, drawings of physical locations, concurrent maintenance status, or other data pertinent to a repair. Location diagrams are especially useful when multiple components such as DIMMs, processors, fans, adapter cards, and power supplies are installed.
  • Remove/replace procedures: Service labels that contain remove/replace procedures are often found on a cover of the system or in other spots accessible to the servicer. These labels provide systematic procedures, including diagrams detailing how to remove or replace certain serviceable hardware components.
  • Arrows: Numbered arrows are used to indicate the order of operation and the serviceability direction of components. Some serviceable parts such as latches, levers, and touch points need to be pulled or pushed in a certain direction and in a certain order for the mechanical mechanisms to engage or disengage. Arrows generally improve the ease of serviceability.

QR labels

QR labels are placed on the system to provide access to key service functions through a mobile device. When the QR label is scanned, it opens a landing page for Power10 processor-based systems that contains the service functions of interest for each machine type and model (MTM) while you are physically located at the server. These include items such as installation and repair instructions, reference code lookup, and so on.

Packaging for service

The following service features are included in the physical packaging of the systems to facilitate service:

  • Color coding (touch points): Blue-colored touch points indicate where a service component can be safely handled during service actions such as removal or installation.
  • Tool-less design: Selected IBM systems support tool-less or simple-tool designs. These designs require no tools, or only simple tools such as flat-head screwdrivers, to service the hardware components.
  • Positive retention: Positive retention mechanisms help to assure proper connections between hardware components such as cables to connectors, and between two cards that attach to each other. Without positive retention, hardware components run the risk of becoming loose during shipping or installation, preventing a good electrical connection. Positive retention mechanisms like latches, levers, thumbscrews, pop Nylatches (U-clips), and cables are included to help prevent loose connections and aid in installing (seating) parts correctly. These positive retention items do not require tools.

Error handling and reporting

In the event of system hardware or environmentally induced failure, the system runtime error capture capability systematically analyzes the hardware error signature to determine the cause of failure. The analysis result will be stored in system NVRAM. When the system can be successfully restarted either manually or automatically, or if the system continues to operate, the error will be reported to the operating system. Hardware and software failures are recorded in the system log filesystem. When an HMC is attached in the PowerVM environment, an ELA routine analyzes the error, forwards the event to the Service Focal Point (SFP) application running on the HMC, and notifies the system administrator that it has isolated a likely cause of the system problem. The service processor event log also records unrecoverable checkstop conditions, forwards them to the SFP application, and notifies the system administrator.

The system has the ability to call home through the operating system to report platform-recoverable errors and errors associated with PCI adapters/devices.

In the HMC-managed environment, a call home service request will be initiated from the HMC and the pertinent failure data with service parts information and part locations will be sent to an IBM service organization. Client contact information and specific system-related data such as the machine type, model, and serial number, along with error log data related to the failure, are sent to IBM Service.

Live Partition Mobility

With PowerVM Live Partition Mobility (LPM), users can migrate an AIX, IBM i, or Linux VM partition running on one Power system to another Power system without disrupting services. The migration transfers the entire system environment, including processor state, memory, attached virtual devices, and connected users. It provides continuous operating system and application availability during planned partition outages for repair of hardware and firmware faults. Power10 processor-based systems support secure LPM, whereby the VM image is encrypted and compressed prior to transfer. Secure LPM uses the on-chip encryption and compression capabilities of the Power10 processor for optimal performance.
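
As a rough illustration of how such a migration is typically driven from the HMC command line, the following sketch validates and then initiates an LPM operation with the HMC's migrlpar command. This is an illustration only, not part of the announced offering; the HMC address, managed-system names, and partition name are placeholders.

    # Illustrative sketch only (not part of this announcement). It shows one
    # typical way to drive a Live Partition Mobility operation from the HMC
    # command line (migrlpar) over SSH. The HMC address, managed-system names,
    # and partition name below are placeholders.
    import subprocess

    HMC = "hscroot@hmc.example.com"          # placeholder HMC user and host
    SOURCE = "Server-9105-42A-SN1234567"     # placeholder source managed system
    TARGET = "Server-9105-42A-SN7654321"     # placeholder target managed system
    LPAR = "prod-aix-lpar"                   # placeholder partition name

    def run_on_hmc(command: str) -> None:
        """Run one HMC CLI command over SSH and fail loudly if it returns an error."""
        subprocess.run(["ssh", HMC, command], check=True)

    # Validate first: the HMC checks the configuration (for example, that VIOS
    # mover service partitions are available) without moving the partition.
    run_on_hmc(f"migrlpar -o v -m {SOURCE} -t {TARGET} -p {LPAR}")

    # Perform the migration. On Power10 systems with secure LPM, the partition
    # image is encrypted and compressed in transit, as described above.
    run_on_hmc(f"migrlpar -o m -m {SOURCE} -t {TARGET} -p {LPAR}")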

Call home

Call home refers to an automatic or manual call from a client location to the IBM support structure with error log data, server status, or other service-related information. Call home invokes the service organization in order for the appropriate service action to begin. Call home can be done through the Electronic Service Agent (ESA) embedded in the HMC, through a version of ESA embedded in the operating system for non-HMC-managed systems, or through a version of ESA that runs as a standalone call home application. While configuring call home is optional, clients are encouraged to implement this feature in order to obtain service enhancements such as reduced problem determination time and faster and potentially more accurate transmittal of error information. In general, using the call home feature can result in increased system availability. See the next section for specific details on this application.

IBM Electronic Services

Electronic Service Agent and Client Support Portal (CSP) comprise the IBM Electronic Services solution, which is dedicated to providing fast, exceptional support to IBM clients. IBM Electronic Service Agent is a no-charge tool that proactively monitors and reports hardware events such as system errors and collects hardware and software inventory. Electronic Service Agent can help clients focus on their company's business initiatives, save time, and spend less effort managing day-to-day IT maintenance issues. In addition, the Call Home Cloud Connect Web and Mobile capability extends the common solution and offers IBM Systems-related support information applicable to servers and storage.

Details are available here: https://clientvantage.ibm.com/channel/ibm-call-home-connect.

System configuration and inventory information collected by Electronic Service Agent also can be used to improve problem determination and resolution between the client and the IBM support team. As part of an increased focus to provide even better service to IBM clients, Electronic Service Agent tool configuration and activation comes standard with the system. In support of this effort, an HMC External Connectivity security whitepaper has been published, which describes data exchanges between the HMC and the IBM Service Delivery Center (SDC) and the methods and protocols for this exchange. To read the whitepaper and prepare for Electronic Service Agent installation, see the "Security" section on the IBM Electronic Service Agent website.

Benefits: increased uptime

Electronic Service Agent is designed to enhance warranty and maintenance service by potentially providing faster hardware error reporting and uploading system information to IBM Support. This can reduce the time spent monitoring symptoms, diagnosing the error, and manually calling IBM Support to open a problem record. In addition, 24x7 monitoring and reporting means there is no dependency on human intervention or off-hours client personnel when errors are encountered in the middle of the night.

Security: The Electronic Service Agent tool is designed to help secure the monitoring, reporting, and storing of the data at IBM. It is designed to transmit data securely over the internet (HTTPS) and to provide clients with a single point of exit from their site. Initiation of communication is one way. Activating Electronic Service Agent does not enable IBM to call into a client's system.

For additional information, see the IBM Electronic Service Agent website.

More accurate reporting

Because system information and error logs are automatically uploaded to the IBM Support Center in conjunction with the service request, clients are not required to find and send system information, decreasing the risk of misreported or misdiagnosed errors. Once inside IBM, problem error data is run through a data knowledge management system, and knowledge articles are appended to the problem record.

Client Support Portal

Client Support Portal is a single internet entry point that replaces the multiple entry points traditionally used to access IBM Internet services and support. This web portal enables you to gain easier access to IBM resources for assistance in resolving technical problems.

This web portal provides valuable reports of installed hardware and software using information collected from the systems by IBM Electronic Service Agent. Reports are available for any system associated with the client's IBM ID.

For more information on how to use the Client Support Portal, visit the following website or contact an IBM Systems Services Representative.



Back to top

Reference information

Top rule

For additional information about how IBM Power Expert Care extends service and support options, see announcement JS22-0008, dated July 12, 2022.

For more information on the Power10 scale-out servers, see Hardware Announcements: JG22-0029, dated July 12, 2022; JG22-0030, dated July 12, 2022; JG22-0031, dated July 12, 2022; JG22-0032, dated July 12, 2022; JG22-0033, dated July 12, 2022.



Back to top

Product number

Top rule

The following are newly announced features on the specific models of the IBM Power 9105 machine type:

                                                  Machine Model   Feature
Description                                       type    number  number 
IBM Power S1024                                   9105    42A          

EMEA Bulk MES Indicator                           9105    42A     0004 
One CSC Billing Unit                              9105    42A     0010 
Ten CSC Billing Units                             9105    42A     0011 
Mirrored System Disk Level, Specify Code          9105    42A     0040 
Device Parity Protection-All, Specify Code        9105    42A     0041 
Device Parity RAID-6 All, Specify Code            9105    42A     0047 


RISC-to-RISC Data Migration                       9105    42A     0205 
AIX Partition Specify                             9105    42A     0265 
Linux Partition Specify                           9105    42A     0266 
IBM i Operating System Partition Specify          9105    42A     0267 
Specify Custom Data Protection                    9105    42A     0296 
Mirrored Level System Specify Code                9105    42A     0308 
RAID Hot Spare Specify                            9105    42A     0347 
CBU Specify                                       9105    42A     0444 
Customer Specified Placement                      9105    42A     0456 
Load Source Not in CEC                            9105    42A     0719 
Fiber Channel SAN Load Source Specify             9105    42A     0837 


USB 500 GB Removable Disk Drive                   9105    42A     1107 
Custom Service Specify, Rochester Minn, USA       9105    42A     1140 
300GB 15k RPM SAS SFF-2 Disk Drive (AIX/Linux)    9105    42A     1953 
600GB 10k RPM SAS SFF-2 HDD for AIX/Linux         9105    42A     1964 
Primary OS - IBM i                                9105    42A     2145 
Primary OS - AIX                                  9105    42A     2146 
Primary OS - Linux                                9105    42A     2147 
Factory Deconfiguration of 1-core                 9105    42A     2319 
1.8 M (6-ft) Extender Cable for Displays (15-pin
D-shell to 15-pin D-shell)                        9105    42A     4242 
Rack Integration Services                         9105    42A     4649 
Rack Indicator- Not Factory Integrated            9105    42A     4650 
Rack Indicator, Rack #1                           9105    42A     4651 
Rack Indicator, Rack #2                           9105    42A     4652 
Rack Indicator, Rack #3                           9105    42A     4653 
Rack Indicator, Rack #4                           9105    42A     4654 
Rack Indicator, Rack #5                           9105    42A     4655 
Rack Indicator, Rack #6                           9105    42A     4656 
Rack Indicator, Rack #7                           9105    42A     4657 
Rack Indicator, Rack #8                           9105    42A     4658 
Rack Indicator, Rack #9                           9105    42A     4659 
Rack Indicator, Rack #10                          9105    42A     4660 
Rack Indicator, Rack #11                          9105    42A     4661 
Rack Indicator, Rack #12                          9105    42A     4662 
Rack Indicator, Rack #13                          9105    42A     4663 
Rack Indicator, Rack #14                          9105    42A     4664 
Rack Indicator, Rack #15                          9105    42A     4665 
Rack Indicator, Rack #16                          9105    42A     4666 
CBU SPECIFY                                       9105    42A     4891 
One Processor of 5250 Enterprise Enablement       9105    42A     4970 
Full 5250 Enterprise Enablement                   9105    42A     4974 
Software Preload Required                         9105    42A     5000 
PowerVM Enterprise Edition                        9105    42A     5228 
Sys Console On HMC                                9105    42A     5550 
System Console-Ethernet LAN adapter               9105    42A     5557 
PCIe2 4-port 1GbE Adapter                         9105    42A     5899 
Power Cord 4.3m (14-ft), Drawer to IBM PDU (250V/
10A)                                              9105    42A     6458 
Power Cord 4.3m (14-ft), Drawer To OEM PDU
(125V, 15A)                                       9105    42A     6460 
Power Cord 4.3m (14-ft), Drawer to Wall/OEM PDU
(250V/15A) U. S.                                  9105    42A     6469 
Power Cord 1.8m (6-ft), Drawer to Wall (125V/15A) 9105    42A     6470 
Power Cord 2.7m (9-ft), Drawer to Wall/OEM PDU 
(250V/10A)                                        9105    42A     6471 
Power Cord 2.7m (9-ft), Drawer to Wall/OEM PDU
(250V/16A)                                        9105    42A     6472 
Power Cord 2.7m (9-ft), Drawer to Wall/OEM PDU
(250V/10A)                                        9105    42A     6473 
Power Cord 2.7m (9-ft), Drawer to Wall/OEM PDU,
(250V/13A)                                        9105    42A     6474 
Power Cord 2.7m (9-ft), Drawer to Wall/OEM PDU,
(250V/16A)                                        9105    42A     6475 
Power Cord 2.7m (9-ft), Drawer to Wall/OEM PDU,
(250V/10A)                                        9105    42A     6476 
Power Cord 2.7m (9-ft), Drawer to Wall/OEM PDU,
(250V/16A)                                        9105    42A     6477 
Power Cord 2.7 M(9-foot), To Wall/OEM PDU,
(250V, 16A)                                       9105    42A     6478 
Power Cord 2.7m (9-ft), Drawer to Wall/OEM PDU,
(125V/15A or 250V/10A )                           9105    42A     6488 
4.3m (14-Ft) 3PH/32A 380-415V Power Cord          9105    42A     6489 
4.3m (14-Ft) 1PH/63A 200-240V Power Cord          9105    42A     6491 
4.3m (14-Ft) 1PH/60A (48A derated) 200-240V 
Power Cord                                        9105    42A     6492 
Power Cord 2.7m (9-ft), Drawer to Wall/OEM PDU,
(250V/10A)                                        9105    42A     6493 
Power Cord 2.7m (9-ft), Drawer to Wall/OEM PDU,
(250V/10A)                                        9105    42A     6494 
Power Cord 2.7M (9-foot), To Wall/OEM PDU,
(250V, 10A)                                       9105    42A     6496 
Power Cable - Drawer to IBM PDU, 200-240V/10A     9105    42A     6577 
Power Cord 2.7M (9-foot), To Wall/OEM PDU,
(125V, 15A)                                       9105    42A     6651 
4.3m (14-Ft) 3PH/16A 380-415V Power Cord          9105    42A     6653 
4.3m (14-Ft) 1PH/30A (24A derated) Power Cord     9105    42A     6654 
4.3m (14-Ft) 1PH/30A (24A derated) WR Power Cord  9105    42A     6655 
4.3m (14-Ft) 1PH/32A Power Cord                   9105    42A     6656 
4.3m (14-Ft) 1PH/32A Power Cord-Australia         9105    42A     6657 
4.3m (14-Ft) 1PH/30A (24A derated) Power
Cord-Korea                                        9105    42A     6658 
Power Cord 2.7M (9-foot), To Wall/OEM PDU,
(250V, 15A)                                       9105    42A     6659 
Power Cord 4.3m (14-ft), Drawer to Wall/OEM PDU
(125V/15A)                                        9105    42A     6660 
Power Cord 2.8m (9.2-ft), Drawer to IBM PDU,
(250V/10A)                                        9105    42A     6665 
4.3m (14-Ft) 3PH/32A 380-415V Power
Cord-Australia                                    9105    42A     6667 
Power Cord 4.3M (14-foot), Drawer to OEM PDU,
(250V, 15A)                                       9105    42A     6669 
Power Cord 2.7M (9-foot), Drawer to IBM PDU,
250V/10A                                          9105    42A     6671 
Power Cord 2M (6.5-foot), Drawer to IBM PDU,
250V/10A                                          9105    42A     6672 
Power Cord 2.7m (9-ft), Drawer to Wall/OEM PDU,
(250V/10A)                                        9105    42A     6680 
Intelligent PDU+, 1 EIA Unit, Universal UTG0247
Connector                                         9105    42A     7109 
Power Distribution Unit                           9105    42A     7188 
Power Distribution Unit (US) - 1 EIA Unit,
Universal, Fixed Power Cord                       9105    42A     7196 
Order Routing Indicator- System Plant             9105    42A     9169 
Language Group Specify - US English               9105    42A     9300 
New AIX License Core Counter                      9105    42A     9440 
New IBM i License Core Counter                    9105    42A     9441 
New Red Hat License Core Counter                  9105    42A     9442 
New SUSE License Core Counter                     9105    42A     9443 
Other AIX License Core Counter                    9105    42A     9444 
Other Linux License Core Counter                  9105    42A     9445 
3rd Party Linux License Core Counter              9105    42A     9446 
VIOS Core Counter                                 9105    42A     9447 
Other License Core Counter                        9105    42A     9449 
Month Indicator                                   9105    42A     9461 
Day Indicator                                     9105    42A     9462 
Hour Indicator                                    9105    42A     9463 
Minute Indicator                                  9105    42A     9464 
Qty Indicator                                     9105    42A     9465 
Countable Member Indicator                        9105    42A     9466 
Language Group Specify - Dutch                    9105    42A     9700 
Language Group Specify - French                   9105    42A     9703 
Language Group Specify - German                   9105    42A     9704 
Language Group Specify - Polish                   9105    42A     9705 
Language Group Specify - Norwegian                9105    42A     9706 
Language Group Specify - Portuguese               9105    42A     9707 
Language Group Specify - Spanish                  9105    42A     9708 
Language Group Specify - Italian                  9105    42A     9711 
Language Group Specify - Canadian French          9105    42A     9712 
Language Group Specify - Japanese                 9105    42A     9714 
Language Group Specify - Traditional Chinese 
(Taiwan)                                          9105    42A     9715 
Language Group Specify - Korean                   9105    42A     9716 
Language Group Specify - Turkish                  9105    42A     9718 
Language Group Specify - Hungarian                9105    42A     9719 
Language Group Specify - Slovakian                9105    42A     9720 
Language Group Specify - Russian                  9105    42A     9721 
Language Group Specify - Simplified Chinese (PRC) 9105    42A     9722 
Language Group Specify - Czech                    9105    42A     9724 
Language Group Specify - Romanian                 9105    42A     9725 
Language Group Specify - Croatian                 9105    42A     9726 
Language Group Specify - Slovenian                9105    42A     9727 
Language Group Specify - Brazilian Portuguese     9105    42A     9728 
Language Group Specify - Thai                     9105    42A     9729 
10m (30.3-ft) - IBM MTP 12 strand cable for 40/
100G transceivers                                 9105    42A     EB2J 
30m (90.3-ft) - IBM MTP 12 strand cable for 40/
100G transceivers                                 9105    42A     EB2K 
AC Titanium Power Supply - 1600W for Server
(200-240 VAC)                                     9105    42A     EB3S 
Lift tool based on GenieLift GL-8 (standard)      9105    42A     EB3Z 
10GbE Optical Transceiver SFP+ SR                 9105    42A     EB46 
25GbE Optical Transceiver SFP28                   9105    42A     EB47 
1GbE Base-T Transceiver RJ45                      9105    42A     EB48 
QSFP28 to SFP28 Connector                         9105    42A     EB49 
0.5m SFP28/25GbE copper Cable                     9105    42A     EB4J 
1.0m SFP28/25GbE copper Cable                     9105    42A     EB4K 
2.0m SFP28/25GbE copper Cable                     9105    42A     EB4M 
2.0m QSFP28/100GbE copper split Cable to SFP28
4x25GbE                                           9105    42A     EB4P 
Service wedge shelf tool kit for EB3Z             9105    42A     EB4Z 
QSFP+ 40GbE Base-SR4 Transceiver                  9105    42A     EB57 
100GbE Optical Transceiver QSFP28                 9105    42A     EB59 
1.0M 100GbE Copper Cable QSFP28                   9105    42A     EB5K 
1.5M 100GbE Copper Cable QSFP28                   9105    42A     EB5L 
2.0M 100GbE Copper Cable QSFP28                   9105    42A     EB5M 
3M 100GbE Optical Cable QSFP28 (AOC)              9105    42A     EB5R 
5M 100GbE Optical Cable QSFP28 (AOC)              9105    42A     EB5S 
10M 100GbE Optical Cable QSFP28 (AOC)             9105    42A     EB5T 
15M 100GbE Optical Cable QSFP28 (AOC)             9105    42A     EB5U 
20M 100GbE Optical Cable QSFP28 (AOC)             9105    42A     EB5V 
30M 100GbE Optical Cable QSFP28 (AOC)             9105    42A     EB5W 
50M 100GbE Optical Cable QSFP28 (AOC)             9105    42A     EB5X 
IBM i 7.3 Indicator                               9105    42A     EB73 
IBM i 7.4 Indicator                               9105    42A     EB74 
IBM i 7.5 Indicator                               9105    42A     EB75 
PCIe3 2-Port 10Gb NIC&ROCE SR/Cu Adapter          9105    42A     EC2S 
PCIe3 2-Port 25/10Gb NIC&ROCE SR/Cu Adapter       9105    42A     EC2U 
PCIe3 x8 1.6 TB NVMe Flash Adapter for AIX/Linux 9105    42A     EC5B 
PCIe3 x8 3.2 TB NVMe Flash Adapter for AIX/Linux 9105    42A     EC5D 
PCIe3 x8 6.4 TB NVMe Flash Adapter for AIX/Linux 9105    42A     EC5F 
Enterprise 6.4 TB SSD PCIe4 NVMe U.2 module for
AIX/Linux                                         9105    42A     EC5V 
Enterprise 6.4 TB SSD PCIe4 NVMe U.2 module for
IBM i                                             9105    42A     EC5W 
Mainstream 800 GB SSD PCIe3 NVMe U.2 module for
AIX/Linux                                         9105    42A     EC5X 
PCIe4 2-port 100Gb ROCE EN adapter                9105    42A     EC66 
PCIe2 2-Port USB 3.0 Adapter                      9105    42A     EC6K 
PCIe3 x8 1.6 TB NVMe Flash Adapter for IBM i      9105    42A     EC6V 
PCIe3 x8 3.2 TB NVMe Flash Adapter for IBM i      9105    42A     EC6X 
PCIe3 x8 6.4 TB NVMe Flash Adapter for IBM i      9105    42A     EC6Z 
PCIe4 2-port 100Gb Crypto Connectx-6 DX QFSP56    9105    42A     EC78 
PCIe4 1.6TB NVMe Flash Adapter x8 for AIX/Linux 9105    42A     EC7B 
PCIe4 3.2TB NVMe Flash Adapter x8 for AIX/Linux 9105    42A     EC7D 
PCIe4 6.4TB NVMe Flash Adapter x8 for AIX/Linux 9105    42A     EC7F 
PCIe4 1.6TB NVMe Flash Adapter x8 for IBM i       9105    42A     EC7K 
PCIe4 3.2TB NVMe Flash Adapter x8 for IBM i       9105    42A     EC7M 
PCIe4 6.4TB NVMe Flash Adapter x8 for IBM i       9105    42A     EC7P 
800GB Mainstream NVMe U.2 SSD 4k for AIX/Linux 9105    42A     EC7T 
SAS X Cable 3m - HD Narrow 6Gb 2-Adapters to
Enclosure                                         9105    42A     ECBJ 
SAS X Cable 6m - HD Narrow 6Gb 2-Adapters to
Enclosure                                         9105    42A     ECBK 
SAS YO Cable 1.5m - HD Narrow 6Gb Adapter to
Enclosure                                         9105    42A     ECBT 
SAS YO Cable 3m - HD Narrow 6Gb Adapter to
Enclosure                                         9105    42A     ECBU 
SAS YO Cable 6m - HD Narrow 6Gb Adapter to
Enclosure                                         9105    42A     ECBV 
SAS YO Cable 10m - HD Narrow 6Gb Adapter to
Enclosure                                         9105    42A     ECBW 
SAS AE1 Cable 4m - HD Narrow 6Gb Adapter to
Enclosure                                         9105    42A     ECBY 
SAS YE1 Cable 3m - HD Narrow 6Gb Adapter to
Enclosure                                         9105    42A     ECBZ 
3M Optical Cable Pair for PCIe3 Expansion Drawer  9105    42A     ECC7 
10M Optical Cable Pair for PCIe3 Expansion Drawer 9105    42A     ECC8 
System Port Converter Cable for UPS               9105    42A     ECCF 
3M Copper CXP Cable Pair for PCIe3 Expansion
Drawer                                            9105    42A     ECCS 
3M Active Optical Cable Pair for PCIe3 Expansion
Drawer                                            9105    42A     ECCX 
10M Active Optical Cable Pair for PCIe3
Expansion Drawer                                  9105    42A     ECCY 
3.0M SAS X12 Cable (Two Adapter to Enclosure)     9105    42A     ECDJ 
4.5M SAS X12 Active Optical Cable (Two Adapter
to Enclosure)                                     9105    42A     ECDK 
10M SAS X12 Active Optical Cable (Two Adapter to
Enclosure)                                        9105    42A     ECDL 
1.5M SAS YO12 Cable (Adapter to Enclosure)        9105    42A     ECDT 
3.0M SAS YO12 Cable (Adapter to Enclosure)        9105    42A     ECDU 
4.5M SAS YO12 Active Optical Cable (Adapter to
Enclosure)                                        9105    42A     ECDV 
10M SAS YO12 Active Optical Cable (Adapter to
Enclosure)                                        9105    42A     ECDW 
0.6M SAS AA12 Cable (Adapter to Adapter)          9105    42A     ECE0 
3.0M SAS AA12 Cable                               9105    42A     ECE3 
4.5M SAS AA12 Active Optical Cable (Adapter to
Adapter)                                          9105    42A     ECE4 
4.3m (14-Ft) PDU to Wall 3PH/24A 200-240V
Delta-wired Power Cord                            9105    42A     ECJ5 
4.3m (14-Ft) PDU to Wall 3PH/40A 200-240V Power
Cord                                              9105    42A     ECJ6 
4.3m (14-Ft) PDU to Wall 3PH/48A 200-240V
Delta-wired Power Cord                            9105    42A     ECJ7 
High Function 9xC19 Single-Phase or Three-Phase
Wye PDU plus                                      9105    42A     ECJJ 
High Function 9xC19 PDU plus 3-Phase Delta        9105    42A     ECJL 
High Function 12xC13 Single-Phase or Three-Phase
Wye PDU plus                                      9105    42A     ECJN 
High Function 12xC13 PDU plus 3-Phase Delta       9105    42A     ECJQ 
Custom Service Specify, Mexico                    9105    42A     ECSM 
Custom Service Specify, Poughkeepsie, USA         9105    42A     ECSP 
Optical Wrap Plug                                 9105    42A     ECW0 
SAP HANA TRACKING FEATURE                         9105    42A     EHKV 
Boot Drive / Load Source in EXP24SX Specify (in
#ESLS or #ELLS)                                   9105    42A     EHR2 
SSD Placement Indicator - #ESLS/#ELLS             9105    42A     EHS2 
PCIe3 RAID SAS Adapter Quad-port 6Gb x8           9105    42A     EJ0J 
PCIe3 12GB Cache RAID SAS Adapter Quad-port 6Gb 
x8                                                9105    42A     EJ0L 
PCIe3 SAS Tape/DVD Adapter Quad-port 6Gb x8       9105    42A     EJ10 
PCIe3 12GB Cache RAID PLUS SAS Adapter Quad-port 
6Gb x8                                            9105    42A     EJ14 
Storage Backplane with eight NVMe U.2 drive slots 9105    42A     EJ1Y 
PCIe x16 to CXP Optical or CU converter Adapter 
for PCIe3 Expansion Drawer                        9105    42A     EJ20 
PCIe4 x16 to CXP Converter Adapter (support AOC)  9105    42A     EJ2A 
PCIe3 Crypto Coprocessor no BSC 4767              9105    42A     EJ32 
PCIe3 Crypto Coprocessor BSC-Gen3 4767            9105    42A     EJ33 
PCIe3 Crypto Coprocessor no BSC 4769              9105    42A     EJ35 
PCIe3 Crypto Coprocessor BSC-Gen3 4769            9105    42A     EJ37 
Non-paired Indicator EJ14 PCIe SAS RAID+ Adapter  9105    42A     EJRL 
Non-paired Indicator EJ0L PCIe SAS RAID Adapter   9105    42A     EJRU 
Front IBM Bezel for 16 NVMe-bays Backplane 
Rack-Mount                                        9105    42A     EJUU 
Front OEM Bezel for 16 NVMe-bays Backplane
Rack-Mount                                        9105    42A     EJUV 
Front IBM Bezel for 16 NVMe-bays and RDX
Backplane Rack-Mount                              9105    42A     EJUW 
Front OEM Bezel for 16 NVMe-bays and RDX
Backplane Rack-Mount                              9105    42A     EJUX 
Specify Mode-1 & CEC SAS Ports & (2)YO12 for
EXP24SX #ESLS/ELS                                 9105    42A     EJW0 
Specify Mode-1 & (1)EJ0J/EJ0M/EL3B/EL59 &
(1)YO12 for EXP24SX #ESLS/ELLS                    9105    42A     EJW1 
Specify Mode-1 & (2)EJ0J/EJ0M/EL3B/EL59 &
(2)YO12 for EXP24SX #ESLS/ELLS                    9105    42A     EJW2 
Specify Mode-2 & (2)EJ0J/EJ0M/EL3B/EL59 & (2)X12
for EXP24SX #ESLS/ELLS                            9105    42A     EJW3 
Specify Mode-2 & (4)EJ0J/EJ0M/EL3B/EL59 & (2)X12
for EXP24SX #ESLS/ELLS                            9105    42A     EJW4 
Specify Mode-4 & (4)EJ0J/EJ0M/EL3B/EL59 & (2)X12
for EXP24SX #ESLS/ELLS                            9105    42A     EJW5 
Specify Mode-2 & (1)EJ0J/EJ0M/EL3B/EL59 &
(2)YO12 for EXP24SX #ESLS/ELLS                    9105    42A     EJW6 
Specify Mode-2 & (2)EJ0J/EJ0M/EL3B/EL59 &
(2)YO12 for EXP24SX #ESLS/ELLS                    9105    42A     EJW7 
Specify Mode-2 & (1)EJ0J/EJ0M/EL3B/EL59 &
(1)YO12 for EXP24SX #ESLS/ELLS                    9105    42A     EJWA 
Specify Mode-2 & (2)EJ0J/EJ0M/EL3B/EL59 & (1)X12
for EXP24SX #ESLS/ELLS                            9105    42A     EJWB 
Specify Mode-4 & (1)EJ0J/EJ0M/EL3B/EL59 & (1)X12
for EXP24SX #ESLS/ELLS                            9105    42A     EJWC 
Specify Mode-4 & (2)EJ0J/EJ0M/EL3B/EL59 & (1)X12
for EXP24SX #ESLS/ELLS                            9105    42A     EJWD 
Specify Mode-4 & (3)EJ0J/EJ0M/EL3B/EL59 & (2)X12
for EXP24SX #ESLS/ELLS                            9105    42A     EJWE 
Specify Mode-1 & (2)EJ14 & (2)YO12 for EXP24SX
#ESLS/ELLS                                        9105    42A     EJWF 
Specify Mode-2 & (2)EJ14 & (2)X12 for EXP24SX
#ESLS/ELLS                                        9105    42A     EJWG 
Specify Mode-2 & (2)EJ14 & (1)X12 for EXP24SX
#ESLS/ELLS                                        9105    42A     EJWH 
Specify Mode-2 & (4)EJ14 & (2)X12 for EXP24SX
#ESLS/ELLS                                        9105    42A     EJWJ 
300GB 15k RPM SAS SFF-2 Disk Drive (Linux)        9105    42A     EL1P 
600GB 10k RPM SAS SFF-2 Disk Drive (Linux)        9105    42A     EL1Q 
ESMD Load Source Specify (931GB SSD SFF-2)        9105    42A     EL9D 
ESMH Load Source Specify (1.86TB SSD SFF-2)       9105    42A     EL9H 
ESMS Load Source Specify (3.72TB SSD SFF-2)       9105    42A     EL9S 
ESMX Load Source Specify (7.44TB SSD SFF-2)       9105    42A     EL9X 
PDU Access Cord 0.38m                             9105    42A     ELC0 
4.3m (14-Ft) PDU to Wall 24A 200-240V Power Cord
North America                                     9105    42A     ELC1 
4.3m (14-Ft) PDU to Wall 3PH/24A 415V Power Cord
North America                                     9105    42A     ELC2 
Power Cable - Drawer to IBM PDU (250V/10A)        9105    42A     ELC5 
600GB 10K RPM SAS SFF-2 Disk Drive 4K Block -
4096                                              9105    42A     ELEV 
1.2TB 10K RPM SAS SFF-2 Disk Drive 4K Block -
4096                                              9105    42A     ELF3 
1.8TB 10K RPM SAS SFF-2 Disk Drive 4K Block -
4096                                              9105    42A     ELFT 
ESKM Load Source Specify (931GB SSD SFF-2)        9105    42A     ELKM 
ESKR Load Source Specify (1.86TB SSD SFF-2)       9105    42A     ELKR 
ESKV Load Source Specify (3.72TB SSD SFF-2)       9105    42A     ELKV 
ESKZ Load Source Specify (7.44TB SSD SFF-2)       9105    42A     ELKZ 
ES1F Load Source Specify (1.6 TB 4K NVMe U.2 SSD
PCIe4 for IBM i)                                  9105    42A     ELS3 
ES1K Load Source Specify (800 GB 4K NVMe U.2 SSD
PCIe4 for IBM i)                                  9105    42A     ELSG 
ES1H Load Source Specify (3.2 TB 4K NVMe U.2 SSD
for IBM i)                                        9105    42A     ELSQ 
#ESF2 Load Source Specify (1.1TB HDD SFF-2)       9105    42A     ELT2 
#ESFS Load Source Specify (1.7TB HDD SFF-2)       9105    42A     ELTS 
#ESEU Load Source Specify (571GB HDD SFF-2)       9105    42A     ELTU 
ESK9 Load Source Specify (387GB SSD SFF-2)        9105    42A     ELU9 
ESKD Load Source Specify (775GB SSD SFF-2)        9105    42A     ELUD 
ESKH Load Source Specify (1.55TB SSD SFF-2)       9105    42A     ELUH 
ESJK Load Source Specify (931GB SSD SFF-2)        9105    42A     ELUK 
#ESNL Load Source Specify (283GB HDD SFF-2)       9105    42A     ELUL 
ESJM Load Source Specify (1.86TB SSD SFF-2)       9105    42A     ELUM 
ESJP Load Source Specify (3.72TB SSD SFF-2)       9105    42A     ELUP 
#ESNQ Load Source Specify (571GB HDD SFF-2)       9105    42A     ELUQ 
ESJR Load Source Specify (7.44TB SSD SFF-2)       9105    42A     ELUR 
EC5W Load Source Specify (6.4 TB 4K NVMe U.2 SSD 
for IBM i)                                        9105    42A     ELUW 
ETK9 Load Source Specify (387 GB SSD SFF-2)       9105    42A     ELV9 
ETKD Load Source Specify (775 GB SSD SFF-2)       9105    42A     ELVD 
ETKH Load Source Specify (1.55 TB SSD SFF-2)      9105    42A     ELVH 
EC7K Load Source Specify (1.6TB SSD NVMe adapter
for IBM i)                                        9105    42A     ELVK 
EC7M Load Source Specify (3.2TB SSD NVMe adapter
for IBM i)                                        9105    42A     ELVM 
EC7P Load Source Specify (6.4TB SSD NVMe adapter
for IBM i)                                        9105    42A     ELVP 
ES3A Load Source Specify (800 GB 4K NVMe U.2 SSD
PCIe4 for IBM i)                                  9105    42A     ELYA 
ES3C Load Source Specify (1.6 TB 4K NVMe U.2 SSD
PCIe4 for IBM i)                                  9105    42A     ELYC 
ES3E Load Source Specify (3.2 TB 4K NVMe U.2 SSD
PCIe4 for IBM i)                                  9105    42A     ELYE 
ES3G Load Source Specify (6.4 TB 4K NVMe U.2 SSD
PCIe4 for IBM i)                                  9105    42A     ELYG 
ES95 Load Source Specify (387GB SSD SFF-2)        9105    42A     ELZ5 
ESNB Load Source Specify (775GB SSD SFF-2)        9105    42A     ELZB 
ESNF Load Source Specify (1.55TB SSD SFF-2)       9105    42A     ELZF 
32GB (2x16GB) DDIMMs, 3200 MHz, 8GBIT DDR4 Memory 9105    42A     EM6N 
256GB (2x128GB) DDIMMs, 2933 MHz, 16GBIT DDR4
Memory                                            9105    42A     EM6U 
64GB (2x32GB) DDIMMs, 3200 MHz, 8GBIT DDR4 Memory 9105    42A     EM6W 
128GB (2x64GB) DDIMMs, 3200 MHz, 16GBIT DDR4
Memory                                            9105    42A     EM6X 
512GB (2x256GB) DDIMMs, 2933 MHz, 16GBIT DDR4
Memory                                            9105    42A     EM78 
Active Memory Mirroring (AMM)                     9105    42A     EM8G 
PCIe Gen3 I/O Expansion Drawer                    9105    42A     EMX0 
AC Power Supply Conduit for PCIe3 Expansion
Drawer                                            9105    42A     EMXA 
PCIe3 6-Slot Fanout Module for PCIe3 Expansion
Drawer                                            9105    42A     EMXF 
PCIe3 6-Slot Fanout Module for PCIe3 Expansion
Drawer                                            9105    42A     EMXG 
PCIe3 6-Slot Fanout Module for PCIe3 Expansion
Drawer                                            9105    42A     EMXH 
1m (3.3-ft), 10Gb E'Net Cable SFP+ Act Twinax
Copper                                            9105    42A     EN01 
3m (9.8-ft), 10Gb E'Net Cable SFP+ Act Twinax
Copper                                            9105    42A     EN02 
5m (16.4-ft), 10Gb E'Net Cable SFP+ Act Twinax
Copper                                            9105    42A     EN03 
PCIe2 4-Port (10Gb+1GbE) SR+RJ45 Adapter          9105    42A     EN0S 
PCIe2 4-port (10Gb+1GbE) Copper SFP+RJ45 Adapter  9105    42A     EN0U 
PCIe2 2-port 10/1GbE BaseT RJ45 Adapter           9105    42A     EN0W 
PCIe3 32Gb 2-port Fibre Channel Adapter           9105    42A     EN1A 
PCIe3 16Gb 4-port Fibre Channel Adapter           9105    42A     EN1C 
PCIe3 16Gb 4-port Fibre Channel Adapter           9105    42A     EN1E 
PCIe3 2-Port 16Gb Fibre Channel Adapter           9105    42A     EN1G 
PCIe4 32Gb 2-port Optical Fibre Channel Adapter   9105    42A     EN1J 
PCIe3 16Gb 2-port Fibre Channel Adapter           9105    42A     EN2A 


188 GB IBM i NVMe Load Source Namespace size      9105    42A     ENS1 
393 GB IBM i NVMe Load Source Namespace size      9105    42A     ENS2 
200 GB IBM i NVMe Load Source Namespace size      9105    42A     ENSA 
400 GB IBM i NVMe Load Source Namespace size      9105    42A     ENSB 




Power Enterprise Pools 2.0 Enablement             9105    42A     EP20 
Deactivation of LPM (Live Partition Mobility)     9105    42A     EPA0 
One CUoD Static Processor Core Activation for
EPGC                                              9105    42A     EPFC 
One CUoD Static Processor Core Activation for
EPGD                                              9105    42A     EPFD 
One CUoD Static Processor Core Activation for
EPGM                                              9105    42A     EPFM 
16-core Typical 3.10 to 4.0 Ghz (max) Power10
Processor                                         9105    42A     EPGC 
24-core Typical 2.75 to 3.90 Ghz (max) Power10
Processor                                         9105    42A     EPGD 
12-core Typical 3.40 to 4.0 Ghz (max) Power10
Processor                                         9105    42A     EPGM 
Horizontal PDU Mounting Hardware                  9105    42A     EPTH 
High Function 9xC19 PDU: Switched, Monitoring     9105    42A     EPTJ 
High Function 9xC19 PDU 3-Phase: Switched,
Monitoring                                        9105    42A     EPTL 
High Function 12xC13 PDU: Switched, Monitoring    9105    42A     EPTN 
High Function 12xC13 PDU 3-Phase: Switched,
Monitoring                                        9105    42A     EPTQ 
Enterprise 1.6 TB SSD PCIe4 NVMe U.2 module for
AIX/Linux                                         9105    42A     ES1E 
Enterprise 1.6 TB SSD PCIe4 NVMe U.2 module for
IBM i                                             9105    42A     ES1F 
Enterprise 3.2 TB SSD PCIe4 NVMe U.2 module for
AIX/Linux                                         9105    42A     ES1G 
Enterprise 3.2 TB SSD PCIe4 NVMe U.2 module for
IBM i                                             9105    42A     ES1H 
Enterprise 800GB SSD PCIe4 NVMe U.2 module for
IBM i                                             9105    42A     ES1K 
Enterprise 800GB SSD PCIe4 NVMe U.2 module for
IBM i                                             9105    42A     ES3A 
Enterprise 1.6 TB SSD PCIe4 NVMe U.2 module for
AIX/Linux                                         9105    42A     ES3B 
Enterprise 1.6 TB SSD PCIe4 NVMe U.2 module for
IBM i                                             9105    42A     ES3C 
Enterprise 3.2 TB SSD PCIe4 NVMe U.2 module for
AIX/Linux                                         9105    42A     ES3D 
Enterprise 3.2 TB SSD PCIe4 NVMe U.2 module for
IBM i                                             9105    42A     ES3E 
Enterprise 6.4 TB SSD PCIe4 NVMe U.2 module for
AIX/Linux                                         9105    42A     ES3F 
Enterprise 6.4 TB SSD PCIe4 NVMe U.2 module for
IBM i                                             9105    42A     ES3G 
387GB Enterprise SAS 4k SFF-2 SSD for AIX/Linux 9105    42A     ES94 
387GB Enterprise SAS 4k SFF-2 SSD for IBM i       9105    42A     ES95 
387GB Enterprise SAS 5xx SFF-2 SSD for AIX/Linux 9105    42A     ESB2 
775GB Enterprise SAS 5xx SFF-2 SSD for AIX/Linux 9105    42A     ESB6 
387GB Enterprise SAS 4k SFF-2 SSD for AIX/Linux 9105    42A     ESBA 
387GB Enterprise SAS 4k SFF-2 SSD for IBM i       9105    42A     ESBB 
775GB Enterprise SAS 4k SFF-2 SSD for AIX/Linux 9105    42A     ESBG 
775GB Enterprise SAS 4k SFF-2 SSD for IBM i       9105    42A     ESBH 
1.55TB Enterprise SAS 4k SFF-2 SSD for AIX/Linux 9105    42A     ESBL 
1.55TB Enterprise SAS 4k SFF-2 SSD for IBM i      9105    42A     ESBM 
S&H - No Charge                                   9105    42A     ESC0 
S&H-b                                             9105    42A     ESC6 
Virtual Capacity Expedited Shipment               9105    42A     ESCT 
iSCSI SAN Load Source Specify for AIX             9105    42A     ESCZ 
571GB 10K RPM SAS SFF-2 HDD 4K for IBM i          9105    42A     ESEU 
600GB 10K RPM SAS SFF-2 HDD 4K for AIX/Linux      9105    42A     ESEV 
1.1TB 10K RPM SAS SFF-2 HDD 4K for IBM i          9105    42A     ESF2 
1.2TB 10K RPM SAS SFF-2 HDD 4K for AIX/Linux      9105    42A     ESF3 
1.7TB 10K RPM SAS SFF-2 HDD 4K for IBM i          9105    42A     ESFS 
1.8TB 10K RPM SAS SFF-2 HDD 4K for AIX/Linux      9105    42A     ESFT 
387GB Enterprise SAS 5xx SFF-2 SSD for AIX/Linux 9105    42A     ESGV 
775GB Enterprise SAS 5xx SFF-2 SSD for AIX/Linux 9105    42A     ESGZ 
931GB Mainstream SAS 4k SFF-2 SSD for AIX/Linux 9105    42A     ESJ0 
931GB Mainstream SAS 4k SFF-2 SSD for IBM i       9105    42A     ESJ1 
1.86TB Mainstream SAS 4k SFF-2 SSD for AIX/Linux 9105    42A     ESJ2 
1.86TB Mainstream SAS 4k SFF-2 SSD for IBM i      9105    42A     ESJ3 
3.72TB Mainstream SAS 4k SFF-2 SSD for AIX/Linux 9105    42A     ESJ4 
3.72TB Mainstream SAS 4k SFF-2 SSD for IBM i      9105    42A     ESJ5 
7.45TB Mainstream SAS 4k SFF-2 SSD for AIX/Linux 9105    42A     ESJ6 
7.45TB Mainstream SAS 4k SFF-2 SSD for IBM i      9105    42A     ESJ7 
931GB Mainstream SAS 4k SFF-2 SSD for AIX/Linux 9105    42A     ESJJ 
931GB Mainstream SAS 4k SFF-2 SSD for IBM i       9105    42A     ESJK 
1.86TB Mainstream SAS 4k SFF-2 SSD for AIX/Linux 9105    42A     ESJL 
1.86TB Mainstream SAS 4k SFF-2 SSD for IBM i      9105    42A     ESJM 
3.72TB Mainstream SAS 4k SFF-2 SSD for AIX/Linux 9105    42A     ESJN 
3.72TB Mainstream SAS 4k SFF-2 SSD for IBM i      9105    42A     ESJP 
7.44TB Mainstream SAS 4k SFF-2 SSD for AIX/Linux 9105    42A     ESJQ 
7.44TB Mainstream SAS 4k SFF-2 SSD for IBM i      9105    42A     ESJR 
387GB Enterprise SAS 5xx SFF-2 SSD for AIX/Linux 9105    42A     ESK1 
775GB Enterprise SAS 5xx SFF-2 SSD for AIX/Linux 9105    42A     ESK3 
387GB Enterprise SAS 4k SFF-2 SSD for AIX/Linux 9105    42A     ESK8 
387GB Enterprise SAS 4k SFF-2 SSD for IBM i       9105    42A     ESK9 
775GB Enterprise SAS 4k SFF-2 SSD for AIX/Linux 9105    42A     ESKC 
775GB Enterprise SAS 4k SFF-2 SSD for IBM i       9105    42A     ESKD 
1.55TB Enterprise SAS 4k SFF-2 SSD for AIX/Linux 9105    42A     ESKG 
1.55TB Enterprise SAS 4k SFF-2 SSD for IBM i      9105    42A     ESKH 
931GB Mainstream SAS 4k SFF-2 SSD for AIX/Linux 9105    42A     ESKK 
931GB Mainstream SAS 4k SFF-2 SSD for IBM i       9105    42A     ESKM 
1.86TB Mainstream SAS 4k SFF-2 SSD for AIX/Linux 9105    42A     ESKP 
1.86TB Mainstream SAS 4k SFF-2 SSD for IBM i      9105    42A     ESKR 
3.72TB Mainstream SAS 4k SFF-2 SSD for AIX/Linux 9105    42A     ESKT 
3.72TB Mainstream SAS 4k SFF-2 SSD for IBM i      9105    42A     ESKV 
7.44TB Mainstream SAS 4k SFF-2 SSD for AIX/Linux 9105    42A     ESKX 
7.44TB Mainstream SAS 4k SFF-2 SSD for IBM i      9105    42A     ESKZ 
Specify AC Power Supply for EXP12SX/EXP24SX
Storage Enclosure                                 9105    42A     ESLA 
ESBB Load Source Specify (387GB SSD SFF-2)        9105    42A     ESLB 
ESBH Load Source Specify (775GB SSD SFF-2)        9105    42A     ESLH 
ESBM Load Source Specify (1.55TB SSD SFF-2)       9105    42A     ESLM 
EXP24SX SAS Storage Enclosure                     9105    42A     ESLS 
Load Source Specify for EC6V (NVMe 1.6 TB SSD
for IBM i)                                        9105    42A     ESLV 
Load Source Specify for EC6X (NVMe 3.2 TB SSD
for IBM i)                                        9105    42A     ESLX 
Load Source Specify for EC6Z (NVMe 6.4 TB SSD
for IBM i)                                        9105    42A     ESLZ 
931GB Mainstream SAS 4k SFF-2 SSD for AIX/Linux 9105    42A     ESMB 
931GB Mainstream SAS 4k SFF-2 SSD for IBM i       9105    42A     ESMD 
1.86TB Mainstream SAS 4k SFF-2 SSD for AIX/Linux 9105    42A     ESMF 
1.86TB Mainstream SAS 4k SFF-2 SSD for IBM i      9105    42A     ESMH 
3.72TB Mainstream SAS 4k SFF-2 SSD for AIX/Linux 9105    42A     ESMK 
3.72TB Mainstream SAS 4k SFF-2 SSD for IBM i      9105    42A     ESMS 
7.44TB Mainstream SAS 4k SFF-2 SSD for AIX/Linux 9105    42A     ESMV 
7.44TB Mainstream SAS 4k SFF-2 SSD for IBM i      9105    42A     ESMX 
775GB Enterprise SAS 4k SFF-2 SSD for AIX/Linux 9105    42A     ESNA 
775GB Enterprise SAS 4k SFF-2 SSD for IBM i       9105    42A     ESNB 
1.55TB Enterprise SAS 4k SFF-2 SSD for AIX/Linux 9105    42A     ESNE 
1.55TB Enterprise SAS 4k SFF-2 SSD for IBM i      9105    42A     ESNF 
283GB 15K RPM SAS SFF-2 4k Block Cached Disk
Drive (IBM i)                                     9105    42A     ESNL 
300GB 15K RPM SAS SFF-2 4k Block Cached Disk
Drive (AIX/Linux)                                 9105    42A     ESNM 
571GB 15K RPM SAS SFF-2 4k Block Cached Disk
Drive (IBM i)                                     9105    42A     ESNQ 
600GB 15K RPM SAS SFF-2 4k Block Cached Disk
Drive (AIX/Linux)                                 9105    42A     ESNR 
300GB 15K RPM SAS SFF-2 4k Block Cached Disk
Drive (Linux)                                     9105    42A     ESRM 
600GB 15K RPM SAS SFF-2 4k Block Cached Disk
Drive (Linux)                                     9105    42A     ESRR 
AIX Update Access Key (UAK)                       9105    42A     ESWK 
387GB Enterprise SAS 5xx SFF-2 SSD for AIX/Linux 9105    42A     ETK1 
775GB Enterprise SAS 5xx SFF-2 SSD for AIX/Linux 9105    42A     ETK3 
387GB Enterprise SAS 4k SFF-2 SSD for AIX/Linux 9105    42A     ETK8 
387GB Enterprise SAS 4k SFF-2 SSD for IBM i       9105    42A     ETK9 
775GB Enterprise SAS 4k SFF-2 SSD for AIX/Linux 9105    42A     ETKC 
775GB Enterprise SAS 4k SFF-2 SSD for IBM i       9105    42A     ETKD 
1.55TB Enterprise SAS 4k SFF-2 SSD for AIX/Linux 9105    42A     ETKG 
1.55TB Enterprise SAS 4k SFF-2 SSD for IBM i      9105    42A     ETKH 
1TB Removable Disk Drive Cartridge                9105    42A     EU01 
RDX 320 GB Removable Disk Drive                   9105    42A     EU08 
Operator Panel LCD Display                        9105    42A     EU0K 
1.5TB Removable Disk Drive Cartridge              9105    42A     EU15 
Cable Ties & Labels                               9105    42A     EU19 
Order Placed Indicator                            9105    42A     EU29 
2TB Removable Disk Drive Cartridge (RDX)          9105    42A     EU2T 
ESJ1 Load Source Specify (931GB SSD SFF-2)        9105    42A     EU41 
ESJ3 Load Source Specify (1.86TB SSD SFF-2)       9105    42A     EU43 
ESJ5 Load Source Specify (3.72TB SSD SFF-2)       9105    42A     EU45 
ESJ7 Load Source Specify (7.45TB SSD SFF-2)       9105    42A     EU47 
RDX USB Internal Docking Station                  9105    42A     EUA0 
RDX USB External Docking Station                  9105    42A     EUA4 
 


Note: Feature EUA4 is not supported in Armenia, Azerbaijan, China, 
India, Japan, Kazakhstan, Kyrgyzstan, Mexico, Saudi Arabia, Taiwan, Turkmenistan, and Uzbekistan. 

 

Standalone USB DVD drive w/cable                  9105    42A     EUA5 
1 core Base Processor Activation (Pools 2.0) for
EPGM - Any OS                                     9105    42A     EUBX 
1 core Base Processor Activation (Pools 2.0) for
EPGC - Any OS                                     9105    42A     EUCK 
1 core Base Processor Activation (Pools 2.0) for
EPGD - Any OS                                     9105    42A     EUCS 
Enable Virtual Serial Number                      9105    42A     EVSN 
BP Post-Sale Services: 1 Day                      9105    42A     SVBP 
IBM Systems Lab Services Post-Sale Services: 1
Day                                               9105    42A     SVCS 
Other IBM Post-Sale Services: 1 Day               9105    42A     SVNN 
1 core Base Processor Activation (Pools 2.0) for
EPGM - Any O/S (Conv from EPFM)                   9105    42A     EUBZ
1 core Base Processor Activation (Pools 2.0) for
EPGC - Any O/S (Conv from EPFC)                   9105    42A     EUCR
1 core Base Processor Activation (Pools 2.0) for
EPGD - Any O/S (Conv from EPFD)                   9105    42A     EUCT

The following are newly announced features on the specific models of the IBM Power 7965 machine type:

Planned Availability Date July 22, 2022

New Feature

                                                  Machine  Model   Feature
Description                                       type     number  number

Rack Content Specify 9105-42A, 9786-42H 4EIA unit 7965     S42     ER3B 

Feature conversions

The existing components being replaced during a model or feature conversion become the property of IBM and must be returned.

Feature conversions are always implemented on a "quantity of one for quantity of one" basis. Multiple existing features may not be converted to a single new feature. Single existing features may not be converted to multiple new features.

The following conversions are available to clients:

Feature conversions for 9105-42A adapter features:

                                                          
From FC:                     To FC:                        Return
                                                           parts

EJ20 - PCIe x16 to CXP       EJ2A - PCIe4 x16 to CXP       No       
Optical or CU converter      Converter Adapter (support  
Adapter for PCIe3 Expansion  AOC)                        
Drawer                                                   
EJ35 - PCIe3 Crypto          EJ37 - PCIe3 Crypto           No       
Coprocessor no BSC 4769      Coprocessor BSC-Gen3 4769   

Feature conversions for 9105-42A cable features:

                                                          
From FC:                     To FC:                        Return
                                                           parts

ECC7 - 3M Optical Cable      ECCX - 3M Active Optical      No       
Pair for PCIe3 Expansion     Cable Pair for PCIe3        
Drawer                       Expansion Drawer            
ECC8 - 10M Optical Cable     ECCY - 10M Active Optical     No       
Pair for PCIe3 Expansion     Cable Pair for PCIe3        
Drawer                       Expansion Drawer            

Feature conversions for 9105-42A processor features:

                                                          
From FC:                     To FC:                        Return
                                                           parts

EPFM - One CUoD Static       EUBZ - 1 core Base            No       
Processor Core Activation    Processor Activation (Pools 
for EPGM                     2.0) for EPGM - Any OS 
                             (Conv from EPFM)      
EPFC - One CUoD Static       EUCR - 1 core Base            No       
Processor Core Activation    Processor Activation (Pools 
for EPGC                     2.0) for EPGC - Any OS 
                             (Conv from EPFC)     
EPFD - One CUoD Static       EUCT - 1 core Base            No       
Processor Core Activation    Processor Activation (Pools 
for EPGD                     2.0) for EPGD - Any OS      
                             (Conv from EPFD)   

Feature conversions for 9105-42A rack-related features:

                                                          
From FC:                     To FC:                        Return
                                                           parts

EMXF - PCIe3 6-Slot Fanout   EMXH - PCIe3 6-Slot Fanout    No       
Module for PCIe3 Expansion   Module for PCIe3 Expansion  
Drawer                       Drawer                      
EMXG - PCIe3 6-Slot Fanout   EMXH - PCIe3 6-Slot Fanout    No       
Module for PCIe3 Expansion   Module for PCIe3 Expansion  
Drawer                       Drawer                      


Back to top

Publications

Top rule

No publications are shipped with the announced product.

IBM Documentation provides you with a single information center where you can access product documentation for IBM systems hardware, operating systems, and server software. Through a consistent framework, you can efficiently find information and personalize your access. See IBM Documentation.

To access the IBM Publications Center Portal, go to the IBM Publications Center website. The IBM Publications Center is a worldwide central repository for IBM product publications and marketing material with a catalog of 70,000 items. Extensive search facilities are provided. A large number of publications are available online in various file formats, which can currently be downloaded.

National language support

Not applicable



Back to top

Services

Top rule

IBM Systems Lab Services

Systems Lab Services offers infrastructure services to help build hybrid cloud and enterprise IT solutions. From servers to storage systems and software, Systems Lab Services can help deploy the building blocks of a next-generation IT infrastructure to empower a client's business. Systems Lab Services consultants can perform infrastructure services for clients online or onsite, offering deep technical expertise, valuable tools, and successful methodologies. Systems Lab Services is designed to help clients solve business challenges, gain new skills, and apply best practices.

Systems Lab Services offers a wide range of infrastructure services for IBM Power servers, IBM Storage systems, IBM Z®, and IBM LinuxONE. Systems Lab Services has a global presence and can deploy experienced consultants online or onsite around the world.

For assistance, contact Systems Lab Services at ibmsls@us.ibm.com.

To learn more, see the IBM Systems Lab Services website.

IBM Consulting™

As transformation continues across every industry, businesses need a single partner to map their enterprise-wide business strategy and technology infrastructure. IBM Consulting is the business partner to help accelerate change across an organization. IBM specialists can help businesses succeed through finding collaborative ways of working that forge connections across people, technologies, and partner ecosystems. IBM Consulting brings together the business expertise and an ecosystem of technologies that help solve some of the biggest problems faced by organizations. With methods that get results faster, an integrated approach that is grounded in an open and flexible hybrid cloud architecture, and incorporating technology from IBM Research® and IBM Watson® AI, IBM Consulting enables businesses to lead change with confidence and deliver continuous improvement across a business and its bottom line.

For additional information, see the IBM Consulting website.

IBM Technology Support Services (TSS)

Get preventive maintenance, onsite and remote support, and gain actionable insights into critical business applications and IT systems. Speed developer innovation with support for over 240 open-source packages. Leverage powerful IBM analytics and AI-enabled tools to enable client teams to manage IT problems before they become emergencies.

TSS offers extensive IT maintenance and support services that cover more than one niche of a client's environment. TSS covers products from IBM and OEMs, including servers, storage, network, appliances, and software, to help clients ensure high availability across their data center and hybrid cloud environment.

For details on available services, see the Technology support for hybrid cloud environments website.

IBM Expert Labs

Expert Labs can help clients accelerate their projects and optimize value by leveraging their deep technical skills and knowledge. With more than 20 years of industry experience, these specialists know how to overcome the biggest challenges to deliver business results that can have an immediate impact.

Expert Labs' deep alignment with IBM product development provides a strategic advantage, as they are often the first in line to get access to new products and features and to gain early visibility into roadmaps. This connection with development enables them to deliver First of a Kind implementations to address unique needs or expand a client's business with a flexible approach that works best for their organization.

For additional information, see the IBM Expert Labs website.

IBM Security® Expert Labs

With extensive consultative expertise on IBM Security software solutions, Security Expert Labs helps clients and partners modernize the security of their applications, data, and workforce. With an extensive portfolio of consulting and learning services, Expert Labs provides project-based and premier support service subscriptions.

These services can help clients deploy and integrate IBM Security software, extend their team resources, and help guide and accelerate successful hybrid cloud solutions, including critical strategies such as zero trust. Remote and on-premises software deployment assistance is available for IBM Cloud Pak® for Security, IBM Security QRadar®/QRoC, IBM Security SOAR/Resilient®, IBM i2®, IBM Security Verify, IBM Security Guardium®, and IBM Security MaaS360®.

For more information, contact Security Expert Labs at sel@us.ibm.com.

For additional information, see the IBM Security Expert Labs website.



Back to top

IBM support

Top rule

For installation and technical support information, see the IBM Support Portal.



Back to top

Additional support

Top rule

IBM Client Engineering for Systems

Client Engineering for Systems is a framework for accelerating digital transformation. It helps you generate innovative ideas and equips you with the practices, technologies, and expertise to turn those ideas into business value in weeks. When you work with Client Engineering for Systems, you bring pain points into focus. You empower your team to take manageable risks, adopt leading technologies, speed up solution development, and measure the value of everything you do. Client Engineering for Systems has experts and services to address a broad array of use cases, including capabilities for business transformation, hybrid cloud, analytics and AI, infrastructure systems, security, and more. Contact Client Engineering at sysgarage@ibm.com.



Back to top

Technical information

Top rule

Specified operating environment

Physical specifications
  • Width (see note 1): 482 mm (18.97 in.)
  • Depth (see note 2): 712 mm (28 in.)
  • Height: 173 mm (6.8 in.)
  • Weight: 43.54 kg (96 lb)

1. The width is measured to the outside edges of the rack-mount bezels. The width of the main chassis is 446 mm (17.6 in.), which fits between the 482.6 mm (19 in.) rack-mounting flanges.

2. The cable management arm with the maximum cable bundle adds 248 mm (9.8 in.) to the depth.

To assure installability and serviceability in non-IBM industry-standard racks, review the installation planning information for any product-specific installation requirements.

Operating environment

Electrical characteristics

  • AC rated voltage and frequency (see note 2): 200--240 V AC at 50 or 60 Hz plus or minus 3 Hz
  • Thermal output, maximum (see note 3): 9,383 BTU/hr
  • Maximum power consumption (see note 3): 2,750 W
  • Maximum kVA (see note 4): 2.835 kVA
  • Phase: Single

1. Redundancy is supported. The Power S1024 has a maximum of four power supplies. There are no specific plugging rules or plugging sequence when you connect the power supplies to the rack PDUs. All the power supplies feed a common DC bus.

2. The power supplies automatically accept any voltage within the published rated-voltage range. If multiple power supplies are installed and operating, the power supplies draw approximately equal current from the utility (electrical supply) and provide approximately equal current to the load.

3. Power draw and heat load vary greatly by configuration. When you plan for an electrical system, it is important to use the maximum values. However, when you plan for heat load, you can use the IBM Systems Energy Estimator to obtain a heat output estimate based on a specific configuration. For more information, see the IBM Systems Energy Estimator website.

4. To calculate the amperage, multiply the kVA by 1,000 and divide that number by the operating voltage.
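
As an illustration only (not part of the specification above), the calculation in note 4 can be applied to the maximum kVA value listed for this server. The short Python sketch below does so for two example operating voltages; the voltages are assumptions chosen from the rated 200--240 V range.

    # Illustrative sketch of the amperage calculation in note 4:
    # amperes = (kVA x 1,000) / operating voltage.
    # MAX_KVA comes from the electrical characteristics list above;
    # the two voltages are example inputs, not additional specifications.

    def amperage(kva: float, voltage: float) -> float:
        return kva * 1000 / voltage

    MAX_KVA = 2.835

    for volts in (200, 240):
        print(f"{volts} V: {amperage(MAX_KVA, volts):.1f} A")
    # Prints roughly 14.2 A at 200 V and 11.8 A at 240 V.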

Environment (operating) (see note 1)

  • ASHRAE class: Allowable A3 (fourth edition)
  • Airflow direction: Recommended front to back
  • Temperature: Recommended 18.0°C--27.0°C (64.4°F--80.6°F); allowable 5.0°C--40.0°C (41.0°F--104.0°F)
  • Low-end moisture: Recommended 9.0°C (15.8°F) dew point; allowable -12.0°C (10.4°F) dew point and 8% relative humidity
  • High-end moisture: Recommended 60% relative humidity and 15°C (59°F) dew point; allowable 85% relative humidity and 24.0°C (75.2°F) dew point
  • Maximum altitude: 3,050 m (10,000 ft)

Allowable environment (nonoperating) (see note 5)

  • Temperature: Recommended 5°C--45°C (41°F--113°F)
  • Relative humidity: Recommended 8% to 85%
  • Maximum dew point: Recommended 27.0°C (80.6°F)

1. IBM provides the recommended operating environment as the long-term operating environment that can result in the greatest reliability and energy efficiency. The allowable operating environment represents the envelope in which the equipment is tested to verify functionality. Because of the stresses that operating in the allowable envelope can place on the equipment, the allowable envelope must be used for short-term operation, not continuous operation. A very limited number of configurations must not operate at the upper bound of the A3 allowable range. For more information, consult your IBM technical specialist.

2. The maximum allowable temperature must be derated by 1°C (1.8°F) per 175 m (574 ft) above 900 m (2,953 ft), up to the maximum allowable elevation of 3,050 m (10,000 ft). A worked example of this derating rule follows these notes.

3. The minimum humidity level is the larger absolute humidity of the -12°C (10.4°F) dew point and the 8% relative humidity. These levels intersect at approximately 25°C (77°F). Below this intersection, the -12°C dew point represents the minimum moisture level; above it, the 8% relative humidity is the minimum. For the upper moisture limit, the limit is whichever of the stated dew point and relative humidity values represents the lower absolute humidity.

4. The following minimum requirements apply to data centers that are operated at low relative humidity:

  • Data centers that do not have ESD floors and where people are allowed to wear non-ESD shoes might want to consider increasing humidity given that the risk of generating 8 kV increases slightly at 8% relative humidity, when compared to 25% relative humidity.
  • All mobile furnishings and equipment must be made of conductive or static dissipative materials and be bonded to ground.
  • During maintenance on any hardware, a properly functioning and grounded wrist strap must be used by any personnel who comes in contact with information technology (IT) equipment.

5. Equipment that is removed from the original shipping container and is installed, but is powered down. The allowable non-operating environment is provided to define the environmental range that an unpowered system can experience short term without being damaged.
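
As a worked illustration of the altitude derating rule in note 2 (a sketch under the stated assumptions, not additional specification text), the following Python fragment computes the derated maximum allowable temperature for a hypothetical installation altitude, starting from the 40.0°C allowable maximum listed above.

    # Illustrative sketch of the derating rule in note 2: above 900 m, reduce
    # the maximum allowable temperature by 1 degree C per 175 m, up to the
    # maximum allowable elevation of 3,050 m. The 40.0 C ceiling is the
    # allowable maximum listed above; the example altitude is an assumption.

    def derated_max_temp_c(altitude_m: float, base_max_c: float = 40.0) -> float:
        if altitude_m > 3050:
            raise ValueError("above the maximum allowable elevation of 3,050 m")
        excess_m = max(0.0, altitude_m - 900.0)
        return base_max_c - excess_m / 175.0

    print(round(derated_max_temp_c(2000), 1))  # about 33.7 degrees C at 2,000 m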

Electromagnetic compatibility compliance: CISPR 22; CISPR 32; CISPR 24; CISPR 35; FCC, CFR 47, Part 15 (US); VCCI (Japan); EMC Directive (EEA); ICES-003 (Canada); ACMA (Australia, New Zealand); CNS 13438 (Taiwan); Radio Waves Act (Korea); Commodity Inspection Law (China); QCVN 118 (Vietnam); MoCI (Saudi Arabia); SI 961 (Israel); EAC (EAEU).

Safety compliance: This product was designed, tested, manufactured, and certified for safe operation. It complies with IEC 60950-1 and/or IEC 62368-1 and where required, to relevant national differences/deviations (ND) to these IEC base standards. This includes, but is not limited to: EN (European Norms including all Amendments under the Low Voltage Directive), UL/CSA (North America bi-national harmonized and marked per accredited NRTL agency listings), and other such derivative certifications according to corporate determinations and latest regional publication compliance standardized requirements.

See the Installation Planning Guide in IBM Documentation for additional detail.

Homologation

This product is not certified for direct connection by any means whatsoever to interfaces of public telecommunications networks. Certification may be required by law prior to making any such connection. Contact an IBM representative or reseller for any questions.

Hardware requirements

Power S1024 system configuration

The minimum Power S1024 initial order must include a processor module, two 16 GB DIMMs (one feature EM6N 32 GB (2 x 16 GB) DDIMM), four power supplies and line cords, an operating system indicator, a cover set indicator, and a Language Group Specify. Also, it must include one of these storage options and one of these network options:

Storage options:

  • For boot from NVMe for AIX/Linux: One NVMe drive slot and one NVMe drive or one PCIe NVMe add-in adapter.
  • For boot from NVMe for IBM i: Two NVMe drive slots and two NVMe drives or two PCIe NVMe add-in adapters.
  • For boot from SAN: An internal NVMe drive and RAID card are not required if feature 0837 (boot from SAN) is selected. An FC adapter must be ordered if feature 0837 is selected.

Network options:

  • One PCIe2 4-port 1 Gb Ethernet adapter
  • One of the supported 10 Gb Ethernet adapters

When AIX or Linux is the primary operating system, the minimum defined initial order configuration is as follows:

  • Op-Panel: #EU0K Operator Panel LCD Display (minimum quantity 1). Optional with AIX/Linux; it always defaults to Qty. 1 but can be deselected for AIX/Linux.
  • Virtualization Engine (must select one option):
    • #5228 PowerVM Enterprise Edition (default 1, minimum 1)
    • #EPA0 Deactivation of LPM (Live Partition Mobility) (minimum 1)
  • Processor Modules (must select one Processor Module option):
    • #EPGM 12-core Typical 3.40 to 4.0 GHz (max) Power10 Processor (minimum 1)
    • #EPGC 16-core Typical 3.10 to 4.0 GHz (max) Power10 Processor (minimum 2)
    • #EPGD 24-core Typical 2.75 to 3.90 GHz (max) Power10 Processor (minimum 2)
  • Processor Module Activations (a minimum of 50% of the CUoD static processor core activations must be ordered; a worked example follows this configuration list):
    • #EPFM One CUoD Static Processor Core Activation for EPGM (minimum 6)
    • #EPFC One CUoD Static Processor Core Activation for EPGC (minimum 16)
    • #EPFD One CUoD Static Processor Core Activation for EPGD (minimum 24)
    • #EUBX 1-core Base Processor Activation (Pools 2.0) for EPGM - Any OS (quantity from 1 to 24); requires Pools 2.0 enablement feature #EP20
    • #EUCK 1-core Base Processor Activation (Pools 2.0) for EPGC - Any OS (quantity from 1 to 32); requires Pools 2.0 enablement feature #EP20
    • #EUCS 1-core Base Processor Activation (Pools 2.0) for EPGD - Any OS (quantity from 1 to 48); requires Pools 2.0 enablement feature #EP20
  • Memory (a minimum of 2 DDIMMs, that is, 1 DIMM feature, is required; select one of the following):
    • #EM6N 32 GB (2 x 16 GB) DDIMMs, 3200 MHz, 8 Gbit DDR4 Memory (minimum 1)
    • #EM6W 64 GB (2 x 32 GB) DDIMMs, 3200 MHz, 8 Gbit DDR4 Memory (minimum 1)
    • #EM6X 128 GB (2 x 64 GB) DDIMMs, 3200 MHz, 16 Gbit DDR4 Memory (minimum 1)
    • #EM6U 256 GB (2 x 128 GB) DDIMMs, 2933 MHz, 16 Gbit DDR4 Memory (minimum 1)
    • #EM78 512 GB (2 x 256 GB) DDIMMs, 2933 MHz, 16 Gbit DDR4 Memory (minimum 1)
  • Active Memory Mirroring: #EM8G Active Memory Mirroring (AMM) (default 0, minimum 0). Optional feature; maximum Qty. 1 per system. Memory mirroring requires a minimum of 8 DDIMMs (4 DIMM features).
  • Storage Backplane: #EJ1Y Storage Backplane with eight NVMe U.2 drive slots (minimum 1). Qty. 1 of the NVMe backplane feature must be ordered except when #0837 or #ESCZ (iSCSI boot) is on the order or when an NVMe PCIe add-in adapter card is used as the load source. Mixing NVMe devices is allowed on each backplane.
  • Bezels (minimum 1 of one of the following; when no NVMe backplane is ordered and no RDX is ordered, the default is #EJUU; when no NVMe backplane is ordered and an RDX is on the order, the default is #EJUW):
    • #EJUU Front IBM Bezel for 16 NVMe-bays Backplane Rack-Mount
    • #EJUV Front OEM Bezel for 16 NVMe-bays Backplane Rack-Mount
    • #EJUW Front IBM Bezel for 16 NVMe-bays and RDX Backplane Rack-Mount
    • #EJUX Front OEM Bezel for 16 NVMe-bays and RDX Backplane Rack-Mount
  • NVMe Devices: #EC7T 800 GB Mainstream NVMe U.2 SSD 4k for AIX/Linux (default 2, minimum 0). For AIX/Linux, the default is Qty. 2; any quantity from 0 to 16 can be ordered.
  • Required LAN adapters (Qty. 1 of these LAN features is required on all initial orders; the default adapter is #EC2U):
    • #EC2U PCIe3 2-Port 25/10Gb NIC&ROCE SR/Cu Adapter (default 1, minimum 1)
    • #EN0W PCIe2 2-port 10/1GbE BaseT RJ45 Adapter (minimum 1)
  • Power Supply: #EB3S AC Power Supply - 1600W for Server (200-240 VAC) (default 4, minimum 4). Each initial order must include all power supplies; power supplies cannot be added later. Only 200--240 V power cords can be used.
  • Power Cables: #6458 Power Cord 4.3 m (14 ft), Drawer to IBM PDU (250V/10A) (default 4, minimum 4). Qty. 4 is required.
  • Language Group: #9300 Language Group Specify - US English (default 1, minimum 1). A Language Specify code is required.
  • Primary Operating System (must select one option):
    • #2146 Primary OS - AIX (minimum 1)
    • #2147 Primary OS - Linux (minimum 1)

Note: The racking approach for the initial order can be an MTM 7965-S42.
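
As an illustration of the 50% static activation minimum noted for the CUoD processor activation features (a sketch that assumes the minimum is half of the installed physical cores, rounded up), the following Python fragment reproduces the minimum activation quantities shown in the configuration list above.

    # Illustrative sketch: at least 50% of the installed physical cores must
    # have CUoD static activations ordered. Core counts and minimum module
    # quantities are taken from the configuration list above.

    import math

    MODULES = {
        "EPGM": {"cores": 12, "min_modules": 1},
        "EPGC": {"cores": 16, "min_modules": 2},
        "EPGD": {"cores": 24, "min_modules": 2},
    }

    for feature, m in MODULES.items():
        installed_cores = m["cores"] * m["min_modules"]
        min_activations = math.ceil(installed_cores / 2)
        print(f"{feature}: {installed_cores} cores installed -> minimum {min_activations} activations")
    # Prints 6 for EPGM, 16 for EPGC, and 24 for EPGD, matching the list above.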

When IBM i is the primary operating system, the minimum defined initial order configuration (applied if no choice is made) is as follows:

  • Op-Panel: #EU0K Operator Panel LCD Display (minimum quantity 1). Mandatory Qty. 1 with IBM i.
  • Virtualization Engine (must select one option):
    • #5228 PowerVM Enterprise Edition (default 1, minimum 1)
    • #EPA0 Deactivation of LPM (Live Partition Mobility) (minimum 1)
  • Processor Modules (must select one Processor Module option):
    • #EPGM 12-core Typical 3.40 to 4.0 GHz (max) Power10 Processor (minimum 1)
    • #EPGC 16-core Typical 3.10 to 4.0 GHz (max) Power10 Processor (minimum 2)
    • #EPGD 24-core Typical 2.75 to 3.90 GHz (max) Power10 Processor (minimum 2)
  • Processor Module Activations (a minimum of 50% of the CUoD static processor core activations must be ordered):
    • #EPFM One CUoD Static Processor Core Activation for EPGM (minimum 6)
    • #EPFC One CUoD Static Processor Core Activation for EPGC (minimum 16)
    • #EPFD One CUoD Static Processor Core Activation for EPGD (minimum 24)
    • #EUBX 1-core Base Processor Activation (Pools 2.0) for EPGM - Any OS (quantity from 1 to 24); requires Pools 2.0 enablement feature #EP20
    • #EUCK 1-core Base Processor Activation (Pools 2.0) for EPGC - Any OS (quantity from 1 to 32); requires Pools 2.0 enablement feature #EP20
    • #EUCS 1-core Base Processor Activation (Pools 2.0) for EPGD - Any OS (quantity from 1 to 48); requires Pools 2.0 enablement feature #EP20
  • Memory (a minimum of 2 DDIMMs, that is, 1 DIMM feature, is required; select one of the following):
    • #EM6N 32 GB (2 x 16 GB) DDIMMs, 3200 MHz, 8 Gbit DDR4 Memory (minimum 1)
    • #EM6W 64 GB (2 x 32 GB) DDIMMs, 3200 MHz, 8 Gbit DDR4 Memory (minimum 1)
    • #EM6X 128 GB (2 x 64 GB) DDIMMs, 3200 MHz, 16 Gbit DDR4 Memory (minimum 1)
    • #EM6U 256 GB (2 x 128 GB) DDIMMs, 2933 MHz, 16 Gbit DDR4 Memory (minimum 1)
    • #EM78 512 GB (2 x 256 GB) DDIMMs, 2933 MHz, 16 Gbit DDR4 Memory (minimum 1)
  • Active Memory Mirroring: #EM8G Active Memory Mirroring (AMM) (default 0, minimum 0). Optional feature; maximum Qty. 1 per system. Memory mirroring requires a minimum of 8 DDIMMs (4 DIMM features).
  • Storage Backplane: #EJ1Y Storage Backplane with eight NVMe U.2 drive slots (minimum 1). Qty. 1 of the NVMe backplane feature must be ordered except when #0837 is on the order or when an NVMe PCIe add-in adapter card is used as the load source. Mixing NVMe devices is allowed on each backplane.
  • Bezels (minimum 1 of one of the following; when no NVMe backplane is ordered and no RDX is ordered, the default is #EJUU; when no NVMe backplane is ordered and an RDX is on the order, the default is #EJUW):
    • #EJUU Front IBM Bezel for 16 NVMe-bays Backplane Rack-Mount
    • #EJUV Front OEM Bezel for 16 NVMe-bays Backplane Rack-Mount
    • #EJUW Front IBM Bezel for 16 NVMe-bays and RDX Backplane Rack-Mount
    • #EJUX Front OEM Bezel for 16 NVMe-bays and RDX Backplane Rack-Mount
  • NVMe Devices: #ES1K Enterprise 800 GB SSD PCIe4 NVMe U.2 module for IBM i (default 2, minimum 0). For IBM i, the default is Qty. 2; any quantity from 0 to 16 can be ordered, except Qty. 1.
  • Required LAN adapters: #EC2U PCIe3 2-Port 25/10Gb NIC&ROCE SR/Cu Adapter (default 1, minimum 1). Qty. 1 of these LAN features is required on all initial orders; the default adapter is #EC2U.
  • Power Supply: #EB3S AC Power Supply - 1600W for Server (200-240 VAC) (default 4, minimum 4). Each initial order must include all power supplies; power supplies cannot be added later. Only 200--240 V power cords can be used.
  • Power Cables: #6458 Power Cord 4.3 m (14 ft), Drawer to IBM PDU (250V/10A) (default 4, minimum 4). Qty. 4 is required.
  • Language Group: #9300 Language Group Specify - US English (default 1, minimum 1). A Language Specify code is required.
  • Primary Operating System: #2145 Primary OS - IBM i (minimum 1). Mandatory feature.
  • System Consoles (must select one System Console feature):
    • #5550 Sys Console On HMC (default 1, minimum 1)
    • #5557 System Console-Ethernet LAN adapter (minimum 1)
  • Data Protection: #0040 Mirrored System Disk Level, Specify Code (default 1, minimum 1). For the IBM i OS only; Qty. 1 system data protection code is required.

Note: The racking approach for the initial order can be an MTM 7965-S42.

Hardware Management Console (HMC) machine code

If the system is ordered with firmware level 1020, or higher, and can be HMC-managed, then the managing HMC must be installed with HMC 10.1.1020.0, or higher.

This level supports only the 7063 hardware appliance or virtual appliances (vHMC) on x86 or PowerVM. The 7042 hardware appliance is not supported.

An HMC is required to manage a Power S1024 server that implements partitioning. Multiple Power8, Power9, and Power10 processor-based servers can be supported by a single HMC running version 10.

Planned HMC hardware and software support:

  • Hardware Appliance: 7063-CR1, 7063-CR2
  • vHMC on x86
  • vHMC on PowerVM based LPAR

If you are attaching an HMC to a new server or adding function to an existing server that requires a firmware update, the HMC machine code may need to be updated because HMC code must always be equal to or higher than the managed server's firmware. Access to firmware and machine code updates is conditioned on entitlement and license validation in accordance with IBM policy and practice. IBM may verify entitlement through customer number, serial number, electronic restrictions, or any other means or methods employed by IBM at its discretion.

To determine the HMC machine code level that is required for the firmware level on any server, use the Fix Level Recommendation Tool (FLRT) on or after the planned availability date for this product. FLRT identifies the correct HMC machine code for the selected system firmware level; see the Fix Level Recommendation Tool website.

If a single HMC is attached to multiple servers, the HMC machine code level must be updated to be at or higher than the server with the most recent firmware level. All prior levels of server firmware are supported with the latest HMC machine code level.
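
The compatibility rule above (the HMC machine code level must be equal to or higher than the firmware level of every server it manages) can be expressed as a simple check. The Python sketch below is illustrative only; the numeric level values are hypothetical placeholders rather than real FLRT output.

    # Illustrative sketch of the rule that a single HMC's machine code level
    # must be at or above the firmware level of every managed server.
    # The level numbers below are hypothetical examples.

    def hmc_level_is_sufficient(hmc_level: int, managed_firmware_levels: list[int]) -> bool:
        return hmc_level >= max(managed_firmware_levels)

    print(hmc_level_is_sufficient(1020, [1010, 1020]))  # True
    print(hmc_level_is_sufficient(1010, [1010, 1020]))  # False: update the HMC first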

For clients installing systems higher than the EIA 29 position (location of the rail that supports the rack-mounted server) in any IBM or non-IBM rack, acquire approved tools outlined in the server specifications section at IBM Documentation.

In situations where IBM service is required and the recommended tools are not available, there could be delays in repair actions.

Software requirements

  • Red Hat Enterprise Linux 9.0, for Power LE, or later
  • Red Hat Enterprise Linux 8.4, for Power LE, or later
  • SUSE Linux Enterprise Server 15 Service Pack 3, or later
  • SUSE Linux Enterprise Server for SAP with SUSE Linux Enterprise Server 15 Service Pack 3, or later
  • Red Hat Enterprise Linux for SAP with Red Hat Enterprise Linux 8.4 for Power LE, or later
  • Red Hat OpenShift Container Platform 4.9, or later

Review the Linux on IBM - Readme first issues website for any known Linux issues or limitations.

If installing IBM i:

  • IBM i 7.5, or later
  • IBM i 7.4 TR6, or later
  • IBM i 7.3 TR12, or later

If installing the AIX operating system LPAR with any I/O configuration (one of these):

  • AIX Version 7.3 with the 7300-00 Technology Level and Service Pack 7300-00-02-2220, or later
  • AIX Version 7.2 with the 7200-05 Technology Level and Service Pack 7200-05-04-2220, or later
  • AIX Version 7.2 with the 7200-04 Technology Level and Service Pack 7200-04-06-2220, or later (planned availability September 16, 2022)

If installing the AIX operating system Virtual I/O only LPAR (one of these):

  • AIX Version 7.3 with the 7300-00 Technology Level and service pack 7300-00-01-2148, or later
  • AIX Version 7.2 with the 7200-05 Technology Level and service pack 7200-05-01-2038, or later
  • AIX Version 7.2 with the 7200-04 Technology Level and Service Pack 7200-04-02-2016, or later
  • AIX Version 7.1 with the 7100-05 Technology Level and Service Pack 7100-05-06-2016, or later

If installing VIOS:

  • VIOS 3.1.3.21

Limitations
  • If IBM i (#2145) is selected as the primary operating system, then feature #0047 (Device Parity RAID-6 All, Specify Code) is not allowed with NVMe devices.
  • There is no physical system port on the scale-out Power10 servers.

Boot requirements

  • If IBM i (#2145) is selected as the primary operating system and SAN boot (#0837) is not selected, one of the load source specify codes for SAS drives or NVMe devices in the Special Features - Initial Orders - Specify codes section must be specified.
  • If IBM i (#2145) is selected and the load source disk unit is not in the system unit (CEC), one of the following specify codes must also be selected:
    • Feature #0719 (Load Source Not in CEC): the load source devices are placed in I/O drawers or on external SAN-attached disk
    • Feature #EHR2 (Load Source Specify): DASD are placed in an EXP24SX SFF Gen2-bay drawer (#ESLS)
    • Feature #0837 (SAN Operating System Load Source Specify)
  • If IBM i (#2145) is selected, one of the following system console specify codes must be selected:
    • Feature #5550 -- System Console on HMC
    • Feature #5557 -- System Console - Internal LAN

Planning information

Cable orders

No cables required.

Security, auditability, and control

This product uses the security and auditability features of host hardware and application software.

The client is responsible for evaluation, selection, and implementation of security features, administrative procedures, and appropriate controls in application systems and communications facilities.



Back to top

Terms and conditions

Top rule

Volume orders

Contact your IBM representative.

Products - terms and conditions

Warranty period

Warranty and additional coverage options: Summary of coverage (1):

  • Warranty period: Three years
  • Service level: IBM CRU and onsite service, 9 hours per day, 5 days per week, next-business-day response

Service upgrade options:

  • Warranty service upgrades: IBM onsite repair, 9 hours per day, 5 days per week, same-day response (2); or 24 hours per day, 7 days per week, same-day response
  • Maintenance service (after the warranty period): IBM onsite repair, with next-business-day and same-day response options
  • IBM hardware maintenance service - committed maintenance (3): Yes

(1) See the coverage details below.
(2) Available in the US and EMEA only.
(3) Not available in the US.

To obtain copies of the IBM Statement of Limited Warranty, contact your reseller or IBM.

IBM parts or features installed during the initial installation of an IBM machine are subject to a full warranty for the period specified by IBM. If a previously installed part or feature is replaced with a new IBM part or feature, the replacement assumes the remainder of the warranty period. If an IBM part or feature is added to a machine without replacing a previously installed part or feature, it is subject to a full warranty. Unless specified otherwise, the warranty period, type of warranty service, and service level of a part or feature are the same as those for the machine in which it is installed.

The IBM solid-state drives (SSDs) and Non-Volatile Memory Express (NVMe) devices described in this document may have a maximum number of write cycles. If an IBM SSD or NVMe device fails before reaching its maximum number of write cycles, it is eligible for replacement during the standard warranty or maintenance period. Devices that have reached this limit may fail to operate according to their specifications and must be replaced at the client's expense. Individual service life varies by device and can be monitored by using operating system commands.

The IBM warranty covers feature number EB4Z. For the warranty terms that apply to feature number EB3Z and lift tools based on the GenieLift GL-8, see the separate warranty terms provided by Genie. These terms are documented in the Genie operator materials on the Genie website.

For clients installing systems above the EIA 29 position (the location of the rail that supports the rack-mounted server) in any IBM or non-IBM rack, obtain the approved tools outlined in the server specifications section in IBM Documentation. In situations where IBM service is required and the recommended tools are not available, repair actions could be delayed.

Extended warranty service

Extended warranty service is not applicable.

Warranty service

If required, IBM provides repair or exchange service depending on the type of warranty service specified for the machine. IBM attempts to resolve problems over the telephone or electronically through an IBM website. Certain machines contain remote support capabilities for direct problem reporting, remote problem determination, and resolution by IBM. The client must follow the problem determination and resolution procedures that IBM specifies. Following problem determination, if IBM determines that onsite service is required, the scheduling of service depends on the time of the client's call, the machine's technology and redundancy, and the availability of parts. If applicable to the client's product, parts that are considered customer replaceable units (CRUs) are provided as part of the machine's standard warranty service.

Service levels are response-time objectives and are not guaranteed. The specified level of warranty service may not be available in all worldwide locations. Additional charges may apply outside IBM's normal service area. Contact your IBM representative or reseller for country-specific and location-specific information.

CRU service

IBM ships replacement CRUs to the client for installation by the client. CRU information and replacement instructions are shipped with the machine, are available on the IBM website, and can be obtained from IBM on request. CRUs are designated as either Tier 1 (mandatory) or Tier 2 (optional).

Tier 1 (mandatory) CRUs

Installation of Tier 1 CRUs is the client's responsibility, as specified in this announcement. If IBM installs a Tier 1 CRU at the client's request, the client will be charged for the installation.

The following parts are designated as Tier 1 CRUs:

  • Bezel
  • Service cover
  • Operator panel
  • Operator panel - LCD
  • Blower
  • RDX docking station
  • RDX cartridge
  • RDX power cable
  • Front USB cable
  • NVMe drive
  • NVMe filler
  • DDIMM retention cover
  • DDIMM filler
  • Air baffle
  • Time-of-day battery
  • TPM card
  • Processor VRM
  • Processor heatsink
  • PCIe adapter
  • Power supply
  • Power distribution signal cable

Tier 2 (optional) CRUs

Clients may install Tier 2 CRUs themselves or request that IBM install them, at no additional charge.

Based on availability, CRUs are shipped for next-business-day (NBD) delivery. IBM specifies, in the materials that are shipped with a replacement CRU, whether the defective CRU must be returned to IBM. When a return is required, return instructions and packaging are shipped with the replacement CRU. If the defective CRU is not returned within 15 days of the client's receipt of the replacement CRU, IBM may charge the client for the replacement.

The following parts are designated as Tier 2 CRUs:

  • Operator panel - LCD cable
  • Blower power cable

CRU and onsite service

At IBM's discretion, the client will either receive the specified CRU service or IBM will repair the failing machine at the client's location and verify its operation. The client must provide a suitable working area that allows disassembly and reassembly of the IBM machine. The area must be clean, well lit, and suitable for the purpose.

The service level is:

  • 9 hours per day, Monday through Friday excluding holidays, next-business-day response. To qualify for next-business-day response, calls must be received by 3:00 p.m. local time.

Warranty service

IBM currently ships machines that contain selected non-IBM parts labeled with an IBM field replaceable unit (FRU) part number. These parts are serviced during the IBM machine warranty period. IBM covers service on these selected non-IBM parts as a convenience to clients, and the normal warranty service procedures for the IBM machine apply to them.

International Warranty Service

International Warranty Service (IWS) allows machines that are eligible for IWS to be relocated, with continued warranty service in any country where the IBM machine is serviced. If a machine is moved to a different country, the machine information must be reported to a Business Partner or an IBM representative.

The warranty service type and service level provided in the servicing country may differ from those provided in the country in which the machine was purchased. Warranty service is provided with the warranty service type and service level that are generally available for the eligible machine type in the servicing country, and the warranty period is that of the country in which the machine was purchased.

The following types of information can be found on the International Warranty Service website:

  • Machine warranty entitlement and eligibility
  • A directory of contacts by country, including technical support contact information
  • Announcement letters

Warranty service upgrades

During the warranty period, warranty service upgrades provide an enhanced level of onsite service for an additional charge. Service levels are response-time objectives and are not guaranteed. For additional information, see the Warranty service section.

IBM attempts to resolve problems over the telephone or electronically through an IBM website. Certain machines contain remote support capabilities for direct problem reporting, remote problem determination, and resolution by IBM. The client must follow the problem determination and resolution procedures that IBM specifies. Following problem determination, if IBM determines that onsite service is required, the scheduling of service depends on the time of the client's call, the machine's technology and redundancy, and the availability of parts.

Maintenance service options

For details about IBM Power Expert Care service and support options, see announcement letter JS22-0008.

Service for non-IBM parts

Under certain conditions, IBM provides service for selected non-IBM parts at no additional charge for machines that are covered by warranty service upgrades or maintenance services.

This service includes hardware problem determination (PD) on the non-IBM parts (for example, adapter cards, PCMCIA cards, disk drives, and memory) installed in an IBM machine, and the labor to replace failing parts at no additional charge.

If IBM has a technical service agreement with the manufacturer of the failing part, or if the failing part is an accepted part (that is, a part with an IBM FRU label), IBM procures and replaces the failing part at no additional charge. For all other non-IBM parts, the client is responsible for procuring the replacement part; installation is performed at no additional charge while the machine is covered by a warranty service upgrade or a maintenance service.

Usage plan machine

No

IBM hourly service rate classification

2

When a type of service involves the exchange of a machine part, the replacement part may not be new, but it will be in good working order.

General terms and conditions

Field-installable features

Yes

Model conversions

No

Machine installation

Customer setup. The client is responsible for installation according to the instructions that IBM provides with the machine.

Graduated program license charges apply

No

Licensed Machine Code

IBM Machine Code is licensed for use by a client on the IBM machine on which it was provided by IBM, under the terms and conditions of the IBM License Agreement for Machine Code, to enable the machine to function in accordance with its specifications and only for the capacity authorized by IBM and acquired by the client. To obtain the agreement, contact your IBM representative. It can also be obtained from the License Agreement for Machine Code and Licensed Internal Code website.

Machines using LMC: type-model 9105-42A

Access to Machine Code updates is conditioned on entitlement and license validation in accordance with IBM policy and practice. IBM may verify entitlement through customer number, serial number, electronic restrictions, or any other means or methods employed by IBM at its discretion.

If a machine no longer functions as warranted and a Machine Code update is available to resolve the problem, the client is responsible for downloading and installing the designated Machine Code changes as IBM specifies. If needed, the client can request that IBM install downloadable Machine Code changes; however, the client may be charged for that service.

Educational allowance

Educational allowance: A reduced charge is available to qualified educational institution clients. The educational allowance may not be added to any other discount or allowance.

The educational allowance is 5% for the products in this announcement letter.



Back to top

Prices

Top rule

For all local charges, contact your IBM representative.

Annual minimum maintenance charges

Not applicable

IBM Global Financing

IBM Global Financing offers competitive financing to credit-qualified clients to assist them in acquiring IT solutions. Offerings include financing for IT acquisition, including hardware, software, and services, from both IBM and other manufacturers or vendors. Offerings (for all client segments: small, medium, and large enterprise), rates, terms, and availability can vary by country. Contact your local IBM Global Financing organization or go to the IBM Global Financing website for more information.

IBM Global Financing offerings are provided through IBM Credit LLC in the United States and other IBM subsidiaries and divisions worldwide to qualified commercial and government clients. Rates are based on a client's credit rating, financing terms, offering type, equipment type and options, and may vary by country. Other restrictions may apply. Rates and offerings are subject to change, extension, or withdrawal without notice.

Financing solutions from IBM Global Financing can help you stretch your budget and affordably acquire the new product. But beyond the initial acquisition, our end-to-end approach to IT management can also help keep your technologies current, reduce costs, minimize risk, and preserve your ability to make flexible equipment decisions throughout the entire technology lifecycle.

Trademarks

IBM Consulting is a trademark of IBM Corporation in the United States, other countries, or both.

IBM, Power, PowerVM, AIX, IBM Cloud, IBM Z, PartnerWorld, IBM Research, IBM Watson, IBM Security, IBM Cloud Pak, QRadar, Resilient, i2, Guardium and MaaS360 are registered trademarks of IBM Corporation in the United States, other countries, or both.

The registered trademark Linux® is used pursuant to a sublicense from the Linux Foundation, the exclusive licensee of Linus Torvalds, owner of the mark on a world­wide basis.

Red Hat and OpenShift are registered trademarks of Red Hat Inc. in the U.S. and other countries.

Other company, product, and service names may be trademarks or service marks of others.

Terms of use

IBM products and services which are announced and available in your country can be ordered under the applicable standard agreements, terms, conditions, and prices in effect at the time. IBM reserves the right to modify or withdraw this announcement at any time without notice. This announcement is provided for your information only. Additional terms of use are located at the Terms of use website.

This announcement letter is an abridged translation of the announcement letter issued by IBM Corporation at the time of announcement.

For the most current information regarding IBM products, consult your IBM representative or reseller, or go to the IBM worldwide contacts page.

IBM Japan