Hardware overview of a single rack

Review the overall system layout and hardware configuration.

IBM Fusion HCI System with service node is available in five different factory configurations. The following table lists the servers and switches in the rack and their positions.

Important: Ensure that IBM Fusion is installed in a restricted access location, such that the area is accessible only to skilled and instructed persons with proper authorization.
| Server / Switch | Description | Rack position |
| --- | --- | --- |
| Compute-only server (Model C00, C04) / Compute-storage server (Model C01, C05, C10, C14) | See Compute-only and compute-storage server models | RU2 to RU7 |
| Service node | See Service node | RU23 |
| (Optional) GPU server (Model G03) | Optional GPU-accelerated servers for AI workloads that use either NVIDIA L40S or NVIDIA H100 NVL GPUs (1-8 GPUs per server). There is no minimum quantity of GPU servers, and there is a maximum of four GPU servers in a single rack. | RU8 to RU13 and RU24 to RU32 |
| (Optional) AFM server | Use the 9155-C10 server without any NVMe drives added to it for AFM features. AFM gateway nodes can generate a large amount of network traffic between themselves and the home system to fetch and synchronize files. To ensure the best performance and cluster stability, these separate nodes give AFM traffic its own physical adapter, separate from the IBM Storage Scale cluster network and from servers that run other application workloads. | RU8 to RU13 and RU24 to RU32 |
| Networks | See Network layout and configuration | RU18 and RU19: 48-port 1 GbE Management Ethernet Switch; RU20 and RU21: 32-port 200 GbE Ethernet high-speed switch |
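
For quick reference, the rack positions in the table above can be captured as a simple lookup. The following is a minimal Python sketch; the unit ranges come from the table, but the dictionary keys, structure, and helper name are illustrative, not an IBM-defined schema:

    # Minimal sketch: rack-unit ranges per component family, from the table above.
    # Keys and structure are illustrative only.
    RACK_LAYOUT = {
        "compute servers":               range(2, 8),    # RU2 to RU7
        "GPU / AFM servers (optional)":  list(range(8, 14)) + list(range(24, 33)),  # RU8-RU13, RU24-RU32
        "1 GbE management switches":     range(18, 20),  # RU18 and RU19
        "200 GbE high-speed switches":   range(20, 22),  # RU20 and RU21
        "service node":                  range(23, 24),  # RU23
    }

    def component_at(ru: int) -> str:
        """Return which component family occupies a given rack unit."""
        for name, units in RACK_LAYOUT.items():
            if ru in units:
                return name
        return "unassigned"

    print(component_at(23))  # -> service node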

To know more about a single IBM Fusion HCI System MTM (Gen 1), see Hardware overview of a single IBM Fusion HCI System MTM.

Hardware configuration of the appliance

You can purchase a rack from IBM or arrange one yourself.

Base configuration:
  • 42U rack
  • 2x Ethernet high-speed switches
  • 2x Ethernet management switches
  • 6x compute-storage servers with 2-10 NVMe drives per server:
    • KVM is connected to the service node
    • Servers in RU2, RU3, and RU4 become the OpenShift control plane servers
    • 6x 32-core servers, 6x 64-core servers, or 3x 32-core servers + 3x 64-core servers
    • 2x 7.68 TB or 2x 3.84 TB NVMe PCIe drives per server. Do not mix the two drive capacities within a rack.
Available options:
  • Additional compute-storage servers to a maximum of 16 (minus any GPU servers)
  • GPU servers with 1-8 GPU adapter cards
  • Increased storage by adding drives to compute-storage servers (the sizing rules are summarized in the sketch after this list):
    • 7.68 TB NVMe PCIe drives per server to a maximum of 10 drives
    • 3.84 TB NVMe PCIe drives per server to a maximum of 10 drives
    • No mixing of 7.68 TB and 3.84 TB drives is allowed within a rack.
  • Increased compute power by adding compute-only servers
  • 32-core compute-only nodes that can be added and configured to provide AFM capabilities
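
The quantity and drive rules above lend themselves to a quick sanity check. The following is a minimal Python sketch, not an IBM tool; the limits are taken from the lists above, and all names are illustrative:

    # Hypothetical configuration check: validates the rack sizing rules above.
    MAX_SERVER_SLOTS = 16         # compute-storage servers, minus any GPU servers
    MAX_GPU_SERVERS = 4           # maximum GPU servers in a single rack
    ALLOWED_DRIVE_TB = {3.84, 7.68}

    def validate_rack(compute_storage: int, gpu_servers: int,
                      drives_per_server: list[int],
                      drive_sizes_tb: set[float]) -> list[str]:
        """Return a list of rule violations for a proposed rack configuration."""
        errors = []
        if gpu_servers > MAX_GPU_SERVERS:
            errors.append(f"{gpu_servers} GPU servers exceeds the maximum of {MAX_GPU_SERVERS}")
        if compute_storage > MAX_SERVER_SLOTS - gpu_servers:
            errors.append("compute-storage servers exceed 16 minus GPU servers")
        for n in drives_per_server:
            if not 2 <= n <= 10:
                errors.append(f"{n} drives per server is outside the 2-10 range")
        if not drive_sizes_tb <= ALLOWED_DRIVE_TB:
            errors.append("unsupported drive capacity")
        if len(drive_sizes_tb) > 1:
            errors.append("7.68 TB and 3.84 TB drives must not be mixed within a rack")
        return errors

    # Example: 6 base servers plus 2 GPU servers, uniform 3.84 TB drives.
    print(validate_rack(6, 2, [2] * 6, {3.84}))   # -> [] (valid)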

Physical configuration

For the physical configuration, limitations, and models, see Family 9155+01 IBM Fusion HCI System.

Power Distribution Unit (PDU) positions in IBM Fusion HCI System

Use this section as guidance for PDU positions in the IBM Fusion HCI System. Additional PDUs might be added depending on the configuration of the system. These additional PDUs are horizontally mounted in the space above PDU 6.

Figure 1. IBM Fusion HCI System PDU positions
Important: There may be more than 6 PDUs for racks that have one or more GPU servers. Those PDUs are added in pairs, horizontally mounted, and placed on top of PDUs 5 and 6.

All the PDUs that are installed in the rack are needed and all of them must be connected to power.

Note: For independent or redundant power feeds, the two power sources must be split between the left and lower (odd-numbered) PDUs and the right and upper (even-numbered) PDUs.
There are two possible models of the PDU and the one you get depends on your power connection:
  • For all single-phase power and for wye-wired three-phase power, there is one PDU feature code (ECJN). See the "Supported PDU power cords for PDU feature code ECJN with Souriau inlet" table in Supported PDU power cords.
  • For delta-wired three-phase power (typically used only in North America), there is a different PDU feature code (ECJQ). See the "Supported PDU power cords for PDU feature code ECJQ with Amphenol inlet" table in Supported PDU power cords.
For more information about the line cords for power connections, see Supported PDU power cords. For more information about power prerequisites for IBM Fusion, see General power information.
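
As a minimal illustration of the odd/even feed split described in the note above (a sketch only; the helper name is illustrative):

    # Odd-numbered PDUs go to one power source, even-numbered PDUs to the other.
    def feed_for_pdu(pdu_number: int) -> str:
        """Map a PDU number to power feed 'A' (odd) or 'B' (even)."""
        return "A" if pdu_number % 2 == 1 else "B"

    for pdu in range(1, 7):
        print(f"PDU {pdu} -> feed {feed_for_pdu(pdu)}")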

To know more about power distribution in IBM Fusion HCI System MTM (Gen 1), see Hardware overview of a single IBM Fusion HCI System.

Drives and usable storage capacities

To calculate the usable storage capacities for drives, use the IBM Storage Modeler (StorM) tool.
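
Usable capacity is lower than raw capacity after erasure coding and other overheads, which is why the StorM tool is the authoritative source. As a rough illustration only, raw capacity is simple arithmetic; the helper below is a hypothetical sketch, not part of StorM:

    # Rough raw-capacity arithmetic only; it does NOT model erasure coding,
    # spare space, or metadata overhead -- use the StorM tool for usable capacity.
    def raw_capacity_tb(servers: int, drives_per_server: int, drive_tb: float) -> float:
        """Total raw NVMe capacity of the rack, in TB."""
        return servers * drives_per_server * drive_tb

    # Example: base configuration of 6 servers with 2x 7.68 TB drives each.
    print(raw_capacity_tb(6, 2, 7.68))   # -> 92.16 TB raw (usable is lower)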

Weight of miscellaneous rack parts

The following table lists the weights of the rack components, including miscellaneous parts such as cables and the Constellation rack:
Table 1. Weight of miscellaneous rack parts
| Component | Weight (lbs) | Weight (kg) | Model |
| --- | --- | --- | --- |
| SN3700V 200GbE switch | 30.8 | 14.0 | S05 |
| SN2201 1GbE switch | 16.3 | 7.4 | S04 |
| SN3700C switch | 27.5 | 12.5 | S01 |
| 7316-TF5 console | 12.0 | 5.5 | TF5 |
| AS4610 switch | 11.8 | 5.4 | S02 |
| SR665 GPU server | 85.5 | 38.9 | G01 |
| SR630 AFM server | 41.9 | 19.0 | F01 |
| SR645-0 32-core 256GB server | 39.1 | 17.8 | C00 |
| SR645-2 32-core 256GB server | 40.2 | 18.3 | C01 with 2 drives |
| SR645-10 32-core 256GB server | 44.6 | 20.3 | C01 with 10 drives |
| SR645-0 64-core 1024GB server | 39.1 | 17.8 | C04 |
| SR645-2 64-core 1024GB server | 40.2 | 18.3 | C05 with 2 drives |
| SR645-10 64-core 1024GB server | 40.2 | 18.3 | C05 with 10 drives |
| SR630 V3 32-core | 45.9 | 20.8 | C10 |
| SR630 V3 64-core | 45.9 | 20.8 | C14 |
| Constellation rack | 398.2 | 181.0 | R42 |
| Intelligent switched PDU+ | 9.5 | 4.3 | n/a |
| All cables, rails, and so on | 395.5 | 179.8 | n/a |
| SR675 V3 GPU server | 72.1 | 32.8 | G03 with 1 GPU |
| SR675 V3 GPU server | 87.5 | 39.8 | G03 with 8 GPUs |
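
To estimate the weight of a configured rack, the per-component weights in Table 1 can be summed. The following is a minimal Python sketch; the weights are copied from the table, but the bill-of-materials helper and the example configuration are illustrative:

    # Minimal sketch: estimate a configured rack's weight from Table 1 values (kg).
    WEIGHTS_KG = {
        "Constellation rack": 181.0,
        "cables and rails": 179.8,
        "SN3700V 200GbE switch": 14.0,
        "SN2201 1GbE switch": 7.4,
        "Intelligent switched PDU+": 4.3,
        "SR645-2 32-core server": 18.3,
    }

    def rack_weight_kg(counts: dict[str, int]) -> float:
        """Sum component weights for a bill of materials."""
        return sum(WEIGHTS_KG[name] * qty for name, qty in counts.items())

    # Example: base configuration with 6x SR645-2 servers, 2x of each switch, 6 PDUs.
    base = {"Constellation rack": 1, "cables and rails": 1,
            "SN3700V 200GbE switch": 2, "SN2201 1GbE switch": 2,
            "Intelligent switched PDU+": 6, "SR645-2 32-core server": 6}
    print(f"{rack_weight_kg(base):.1f} kg")   # -> 539.2 kg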