Minimum hardware requirements and precheck

This topic describes the minimum requirements for IBM Storage Scale Erasure Code Edition.

These hardware requirements are for the base operating system and the IBM Storage Scale Erasure Code Edition storage functions. Extra resources are needed when you run the IBM Storage Scale protocol software or other workloads on the IBM Storage Scale Erasure Code Edition storage servers, or to achieve specific performance goals.

The following table lists the limits on the number of IBM Storage Scale Erasure Code Edition storage nodes in a recovery group and in a cluster. Storage nodes in a cluster can be configured as several recovery groups, up to the total storage node limit for the cluster. Every server in a recovery group must have the same CPU, memory, and storage configuration.
Note:
  • In an x86_64 environment, only bare metal servers are allowed as IBM Storage Scale Erasure Code Edition storage servers.
  • Drives with hardware compression enabled are not supported.
  • Drives must have a unique worldwide name (WWN).
  • Drives with volatile cache enabled are not supported. For more information, see Volatile write cache detection.
  • Self-encrypting drive (SED) capable drives are not allowed if they are enrolled, or if they require a key after power-on to be usable.
  • Disk drives in expansion enclosures are not allowed.
  • Drives must be hot-swappable so that they can be replaced independently without shutting down the storage server.
Note: You can use the ece_os_readiness open-source tool to check that your planned IBM Storage Scale Erasure Code Edition servers meet the minimum hardware requirements. This tool is available on the IBM Spectrum® Scale Tools GitHub (https://github.com/IBM/SpectrumScaleTools). Contact IBM® for further details.
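The kinds of checks that such a readiness tool performs can be illustrated with a minimal sketch. This is a hypothetical illustration only, not the ece_os_readiness tool itself; the thresholds are taken from the hardware requirements table in this topic.

```python
# Hypothetical readiness checks; thresholds come from the hardware
# requirements table for x86_64 storage servers in this topic.
MIN_CORES = 16        # single or dual socket x86_64, 16 or more cores in total
MIN_MEMORY_GIB = 64   # for configurations up to 64 drives per node

def check_cpu(cores: int) -> bool:
    """True if the server meets the minimum processor-core count."""
    return cores >= MIN_CORES

def check_memory(mem_gib: float) -> bool:
    """True if the server meets the minimum memory requirement."""
    return mem_gib >= MIN_MEMORY_GIB

def check_unique_wwns(wwns: list[str]) -> bool:
    """True if every drive reports a unique worldwide name (WWN)."""
    return len(wwns) == len(set(wwns))
```

The real tool inspects the running system (CPUs, memory, drives, adapters, and network) rather than taking values as arguments; see the SpectrumScaleTools repository for its actual usage.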
Table 1. IBM Storage Scale Erasure Code Edition hardware requirements for each x86_64 storage server
Hardware Description
CPU architecture

Single or dual socket Intel or AMD x86_64 processor with a total of 16 or more processor cores. AMD CPUs must be EPYC Rome or a newer generation.

Note: Dual socket AMD CPU systems might need tuning of NUMA nodes per socket (NPS) for better performance.
Memory 64 GB or more for configurations up to 64 drives per node.
  • For NVMe configurations, you can use all available memory DIMM sockets to get optimal performance.
  • For server configurations with more than 64 drives per node, contact IBM support for memory requirements.
Server packaging Single server per enclosure. Multi-node server packaging with common hardware components that present a single point of failure across servers is not currently supported.
Operating System See IBM Storage Scale FAQs for details of supported versions.
Drives per storage node A maximum of 64 drives per storage node is supported.
Drives per recovery group A maximum of 700 drives per recovery group is supported. However, an individual declustered array (DA) can contain at most 512 pdisks. At least one DA must contain 12 or more drives, and every DA must have four or more drives.
Note: A DA is a subset of the physical disks within a recovery group that match in size and speed. A recovery group might contain multiple DAs, which are disjoint; that is, a pdisk must belong to exactly one DA. The minimum DA size must be met with each node contributing a uniform number of disks.
Nodes per recovery group A minimum of 3 and maximum of 32 nodes per recovery group is supported.
Storage nodes per cluster A maximum of 256 IBM Storage Scale Erasure Code Edition storage nodes per cluster is supported.
System drive A physical drive is needed for each server’s system disk. The storage should be RAID1 protected, and the capacity should be 100 GB or more.
SAS Data Drives SAS or NL-SAS HDDs or SSDs in JBOD mode, connected to supported SAS host bus adapters. SATA drives and shingled magnetic recording (SMR) drives are currently not supported as data drives.
NVMe Data Drives Enterprise-class NVMe drives with a U.2 form factor, connected to PCIe buses directly or through a PCIe switch. NVMe drives that are connected to SAS host bus adapters are currently not supported as data drives.
Fast Drive Requirement At least one SSD or NVMe drive is needed in each server for IBM Storage Scale Erasure Code Edition logging. The total fast-drive capacity required on each node is at least 500 GB.
Network Adapter Mellanox ConnectX-4, ConnectX-5, ConnectX-6, or ConnectX-7 (Ethernet or InfiniBand).
Network Bandwidth 25 Gbps or more between storage nodes. Higher bandwidth might be needed depending on your workload requirements.
Network Latency Average latency must be less than 1 msec between any two storage nodes.
Network Topology Use a dedicated storage network to achieve maximum performance for your workload. For other workloads, use a separate network.
Note: RoCE is supported only on lossless networks.
SAS Storage Adapters/Controllers
LSI cards that support JBOD mode and can be detected and managed by the StorCLI utility. You can use the following IBM-verified models:
  • 12 Gb/s RAID Controller models: SAS3008, SAS3108, SAS3408, SAS3508, or SAS3516
  • 12 Gb/s Fusion-MPT Host Bus Adapter models: SAS3008, SAS3408, SAS3416, or SAS3616

Dell PowerEdge servers with 12 Gb/s PowerEdge RAID Controllers (PERC H730P Mini, PERC H745 Front SAS, and PERC H755 Front SAS) that can be detected and managed by the PercCLI utility.

Note:
  • The StorCLI utility (for LSI cards) or the PercCLI utility (for Dell cards) is a prerequisite for managing these cards. Do not mix card types in one IBM Storage Scale Erasure Code Edition recovery group because mixing them might cause performance issues.
  • The JBOD connection mode is needed for drives that are used for IBM Storage Scale Erasure Code Edition storage.
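The recovery group and declustered array limits listed in the table above can also be validated when planning a deployment. The following sketch is a hypothetical helper, not part of the product tooling; it encodes only the numeric limits stated in this topic.

```python
def validate_recovery_group(da_sizes: list[int], nodes: int) -> list[str]:
    """Check a planned recovery group against the documented limits.

    da_sizes: number of pdisks in each declustered array (DA)
    nodes:    number of storage nodes in the recovery group
    Returns a list of violations; an empty list means the plan passes.
    """
    problems = []
    if not 3 <= nodes <= 32:
        problems.append("a recovery group needs a minimum of 3 and a maximum of 32 nodes")
    if sum(da_sizes) > 700:
        problems.append("a recovery group supports a maximum of 700 drives")
    if any(size > 512 for size in da_sizes):
        problems.append("an individual DA can contain at most 512 pdisks")
    if any(size < 4 for size in da_sizes):
        problems.append("every DA must have four or more drives")
    if da_sizes and max(da_sizes) < 12:
        problems.append("at least one DA must contain 12 or more drives")
    return problems
```

For example, a plan with two DAs of 12 and 4 pdisks on 6 nodes passes, while a single 8-pdisk DA on 2 nodes fails both the node-count and the 12-drive-DA rules.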