Hardware overview
Review the overall system layout and hardware configuration.
- Ensure that IBM® Spectrum Fusion is installed in a restricted access location, such that the area is accessible only to skilled and instructed persons with proper authorization.
System layout
The appliance rack details are as follows:
- RU2 to RU7: The control and compute nodes that are available by default. A minimum of three control nodes and three compute nodes, each with a 32-core CPU and 256 GB of memory. Note: RU7 is used as the provisioning node during the network setup installation stage.
- RU8 to RU17: Additional storage or compute nodes that you can purchase to scale up.
- RU18 to RU21: Switches
- RU22: Service console tray
- RU23 and RU24: AFM nodes
- RU25 to RU28: GPU servers
- RU29 to RU32: Additional storage or compute nodes that you can purchase to scale up.
Hardware configuration of compute or storage nodes is as follows:
- Lenovo SR645 server
- 2x AMD EPYC 7302 16C (32C total) 3.0 GHz or 3.3 GHz CPU
- 256 GB RAM (16x 16 GB DIMMs)
- 2x960 GB M.2 OS drives (RAID 1)
- 1x NVIDIA ConnectX-6 dual-port 100 GbE network adapter
- 1x NVIDIA ConnectX-4 dual-port 25 GbE network adapter
- 1x 1 GbE RJ45 4-port OpenShift® adapter
- 2-10x Samsung PM1733 7.68 TB NVMe PCIe 4.0 disks
- NVMe disks are added in pairs
- All storage or compute servers must have the same number of NVMe drives
- The maximum number of compute or storage servers is 20, reduced by the number of GPU servers installed
- 1U height
Hardware configuration of compute-only node is as follows:
- Lenovo SR645 server
- 2x AMD EPYC 7302 16C (32C total) 3.0 GHz or 3.3 GHz CPU
- 256 GB RAM (16x 16 GB DIMMs)
- 2x 960 GB M.2 OS drives (RAID 1)
- 1x NVIDIA ConnectX-6 dual-port 100 GbE network adapter
- 1x NVIDIA ConnectX-4 dual-port 25 GbE network adapter
- 1x 1 GbE RJ45 4-port OpenShift adapter
- Same specifications as the compute or storage server but with zero NVMe disks
- Field upgradeable to add NVMe drives to match the existing compute or storage servers
- 1U height
Hardware configuration
Base configuration:
- 42U rack
- 2x Ethernet high-speed switches
- 2x Ethernet management switches
- 6x storage or compute servers with 2 NVMe drives per server:
  - The server in RU7 is connected to the service console tray
  - Servers in RU2, RU3, and RU4 become the OpenShift control plane servers
  - 32 cores and 256 GB RAM
  - 2x 7.68 TB NVMe PCIe drives
Available options:
- Additional storage or compute servers to a maximum of 20 (minus any GPU servers)
- A pair of 2U GPU servers, each with 3x NVIDIA A100 GPUs
- Increased storage by adding drives to storage or compute servers: 7.68 TB NVMe PCIe drives, up to a maximum of 10 drives per server
- Increased compute power by adding compute-only servers
- AFM (Active File Management) delivered as a pair of servers
Power Distribution Unit (PDU) positions in IBM Spectrum Fusion appliance
Use this section as guidance for PDU positions in the IBM Spectrum Fusion appliance.
Apply the following rules:
- All configurations require PDUs 1 and 2 to be connected to power.
- If more than six storage or compute servers exist, PDUs 3 and 4 must be connected to power.
- If AFM servers exist, PDUs 3 and 4 must be connected to power.
- If more than 14 storage or compute servers exist, PDUs 5 and 6 must be connected to power.
- If GPU servers exist, then PDUs 5 and 6 must be connected to power.
Note: For independent or redundant power feeds, the two power sources must be split between the
left and lower (odd numbered) PDUs and the right and upper (even numbered) PDUs.
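The PDU rules above can be sketched as a small helper function. This is an illustrative sketch only; the function name `required_pdus` and its parameters are hypothetical, not part of the product, and it simply encodes the thresholds stated in the rules:

```python
def required_pdus(storage_or_compute_servers, has_afm=False, has_gpu=False):
    """Return the set of PDU numbers that must be connected to power.

    Encodes the rules above: PDUs 1 and 2 are always required; 3 and 4
    are added past six storage or compute servers or when AFM servers
    exist; 5 and 6 are added past 14 servers or when GPU servers exist.
    """
    pdus = {1, 2}  # required in all configurations
    if storage_or_compute_servers > 6 or has_afm:
        pdus |= {3, 4}
    if storage_or_compute_servers > 14 or has_gpu:
        pdus |= {5, 6}
    return pdus
```

For example, a base configuration with six servers needs only PDUs 1 and 2, while the same configuration with GPU servers also requires PDUs 5 and 6.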
For more information about the line cords for power connections, see Supported PDU power cords. For more information about power prerequisites for IBM Spectrum Fusion, see General power information.
Drives and usable storage capacities
To calculate the usable storage capacities for drives, use the IBM Storage Modeler (StorM) tool.
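Usable capacity depends on the redundancy settings that StorM models, so the tool remains authoritative. As a rough illustration only (the function name and parameters below are hypothetical), the raw, pre-redundancy NVMe capacity follows from the drive rules in this section:

```python
def raw_capacity_tb(servers, drives_per_server, drive_tb=7.68):
    """Raw (pre-redundancy) NVMe capacity in TB.

    Per this section's rules: drives are added in pairs, 2-10 per
    server, and all storage or compute servers carry the same count.
    This is NOT usable capacity; use IBM Storage Modeler for that.
    """
    if drives_per_server % 2 or not 2 <= drives_per_server <= 10:
        raise ValueError("drives are added in pairs, 2-10 per server")
    return servers * drives_per_server * drive_tb
```

For example, a base configuration (six servers, two drives each) has roughly 92 TB of raw NVMe capacity before redundancy overhead.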
Architecture
IBM Spectrum Fusion management software and your workloads are installed on amd64 (64-bit Intel or AMD x86) hardware architecture.