Minimum hardware requirements
This topic describes the minimum requirements for IBM Spectrum Scale Erasure Code Edition.
These hardware requirements are for the base operating system and the IBM Spectrum Scale Erasure Code Edition storage functions. Additional resources are required when running IBM Spectrum Scale protocol software or other workloads on the IBM Spectrum Scale Erasure Code Edition storage servers, or to achieve specific performance goals.
Each IBM Spectrum Scale Erasure Code Edition recovery group must have at least 4 servers, and there is a limit on the number of IBM Spectrum Scale Erasure Code Edition storage nodes in an IBM Spectrum Scale cluster. In this release, there can be up to 128 storage nodes in the cluster. These nodes can be configured as 4 recovery groups of 32 nodes each, as 8 recovery groups of 16 nodes each, or as any other combination totaling 128 or fewer storage nodes. Every server in a recovery group must have the same configuration in terms of CPU, memory, and storage.
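The sizing rules above can be sketched as a small validation. This is an illustrative example only, not part of the product; the limits (at least 4 nodes per recovery group, at most 32 nodes per recovery group, at most 128 storage nodes per cluster) come from this topic, and the function name is hypothetical:

```python
# Illustrative sketch: check a proposed ECE cluster layout against the
# limits stated in this topic. Names and structure are hypothetical.

MIN_NODES_PER_RG = 4      # each recovery group needs at least 4 servers
MAX_NODES_PER_RG = 32     # maximum nodes per recovery group
MAX_STORAGE_NODES = 128   # maximum ECE storage nodes per cluster

def layout_is_valid(recovery_group_sizes):
    """Return True if every recovery group size and the cluster total
    are within the documented ECE limits."""
    if sum(recovery_group_sizes) > MAX_STORAGE_NODES:
        return False
    return all(MIN_NODES_PER_RG <= n <= MAX_NODES_PER_RG
               for n in recovery_group_sizes)

# Combinations mentioned in the text:
print(layout_is_valid([32, 32, 32, 32]))  # 4 recovery groups of 32 nodes
print(layout_is_valid([16] * 8))          # 8 recovery groups of 16 nodes
print(layout_is_valid([3, 32]))           # invalid: one group has fewer than 4 nodes
```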
Note:
- Drives with hardware compression enabled are not supported.
- SED-capable drives are not allowed if they have been enrolled, or if they require a key after power-on to use.
- Disk drives in expansion enclosures are not allowed.
- For SSD and NVMe drives, it is recommended to use a file system block size of 4M or less with the 8+2P or 8+3P erasure codes, and 2M or less with the 4+2P or 4+3P erasure codes.
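The block-size guidance in the last note can be expressed as a small lookup. This sketch only restates the recommendation above (sizes in MiB; the names are hypothetical, not a product API):

```python
# Illustrative sketch of the recommended maximum file system block size
# (in MiB) per erasure code for SSD and NVMe drives, per the note above.
MAX_BLOCK_SIZE_MIB = {
    "8+2P": 4,
    "8+3P": 4,
    "4+2P": 2,
    "4+3P": 2,
}

def block_size_ok(erasure_code, block_size_mib):
    """True if the proposed block size is within the recommendation."""
    return block_size_mib <= MAX_BLOCK_SIZE_MIB[erasure_code]

print(block_size_ok("8+2P", 4))  # within the 4M recommendation
print(block_size_ok("4+2P", 4))  # exceeds the 2M recommendation for 4+2P
```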
CPU architecture | x86 64-bit processor with 8 or more processor cores per socket. Servers should be dual socket with both sockets populated. |
Memory | 64 GB or more for configurations with up to 24 drives per node. |
Server packaging | Single server per enclosure. Multi-node server packaging with common hardware components that provide a single point of failure across servers is not supported at this time. |
Operating system | RHEL 7.5 or 7.6. See the IBM Spectrum™ Scale FAQ for details of supported versions. |
Drives per storage node | A maximum of 24 drives per storage node is supported. |
Drives per recovery group | A maximum of 512 drives per recovery group is supported. |
Nodes per recovery group | A maximum of 32 nodes per recovery group is supported. |
Storage nodes per cluster | A maximum of 128 ECE storage nodes per cluster is supported. |
System drive | A physical drive is required for each server’s system disk. It is recommended that this drive be RAID 1 protected and have a capacity of 100 GB or more. |
SAS host bus adapter (HBA) | LSI SAS HBA, models SAS3108, SAS3508, or SAS3516. Note: The StorCLI utility is a required prerequisite for managing these adapters. |
SAS data drives | SAS or NL-SAS HDDs or SSDs in JBOD mode, connected to a supported SAS host bus adapter. SATA drives are not supported as data drives at this time. |
NVMe data drives | Enterprise-class NVMe drives with a U.2 form factor. |
Fast drive requirement | At least one SSD or NVMe drive is required in each server for IBM Spectrum Scale Erasure Code Edition logging. |
Network adapter | Mellanox ConnectX-4 or ConnectX-5 (Ethernet or InfiniBand). |
Network bandwidth | 25 Gbps or more between storage nodes. Higher bandwidth may be required depending on your workload. |
Network latency | Average latency must be less than 1 msec between any two storage nodes. |
Network topology | To achieve the maximum performance for your workload, a dedicated storage network is recommended. For other workloads, a separate network is recommended but not required. |
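A few of the per-server minimums in the table lend themselves to a quick pre-installation check. The sketch below is illustrative only; the field names are hypothetical and it covers just the numeric limits stated in this topic, not the full hardware compatibility rules:

```python
# Illustrative sketch: compare a server description against some of the
# numeric minimums in the table above. Field names are hypothetical.

def meets_minimums(server):
    """Return a list of documented requirements the server fails."""
    problems = []
    if server.get("cores_per_socket", 0) < 8:
        problems.append("fewer than 8 processor cores per socket")
    if server.get("sockets", 0) < 2:
        problems.append("both sockets should be populated (dual socket)")
    if server.get("memory_gb", 0) < 64:
        problems.append("less than 64 GB of memory")
    if server.get("drives", 0) > 24:
        problems.append("more than 24 drives per storage node")
    if server.get("fast_drives", 0) < 1:
        problems.append("no SSD or NVMe drive for ECE logging")
    if server.get("network_gbps", 0) < 25:
        problems.append("less than 25 Gbps between storage nodes")
    return problems

server = {"cores_per_socket": 10, "sockets": 2, "memory_gb": 128,
          "drives": 24, "fast_drives": 2, "network_gbps": 100}
print(meets_minimums(server))  # empty list: meets the documented minimums
```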