Hardware requirements

You can validate that your hardware meets GPFS requirements by taking the steps outlined in this topic.

  1. Consult the IBM Spectrum Scale FAQ in IBM® Knowledge Center (www.ibm.com/support/knowledgecenter/STXKQY/gpfsclustersfaq.html) for the latest list of:
    • Supported server hardware
    • Tested disk configurations
    • Maximum cluster size
    • Additional hardware requirements for protocols
  2. Provide enough disks to contain the file system. Disks can be:
    • SAN-attached to each node in the cluster
    • Attached to one or more NSD servers
    • A mixture of directly attached disks and disks that are attached to NSD servers
    For additional information, see Network Shared Disk (NSD) creation considerations. A sample NSD stanza file is shown after this list.
  3. During network-based NSD I/O, GPFS passes a large amount of data between its daemons. For NSD server-to-client traffic, it is recommended that you configure a dedicated high-speed network, used solely for GPFS communications, when the following are true:
    • There are NSD disks configured with servers providing remote disk capability.
    • Multiple GPFS clusters are sharing data using NSD network I/O.
    For additional information, see Disk considerations. A sample network setting is shown after this list.
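
For example, NSDs are created from a stanza file that describes each disk and, for network-attached disks, the NSD servers that provide access to it. The following sketch relates to step 2 and is illustrative only; the device path, NSD and server names, and pool assignment are placeholders that must match your own hardware and cluster definition:

    # Sample stanza file, for example /tmp/nsd.stanza (one %nsd stanza per disk)
    %nsd: device=/dev/sdb
      nsd=nsd01
      servers=nsdserver01,nsdserver02
      usage=dataAndMetadata
      failureGroup=1
      pool=system

    # Create the NSDs from the stanza file, then verify them
    mmcrnsd -F /tmp/nsd.stanza
    mmlsnsd

Listing more than one server for a network-attached NSD avoids making a single NSD server a point of failure.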
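
If you configure a dedicated high-speed network for the traffic described in step 3, one way to direct GPFS daemon communication onto it is the subnets configuration attribute, which tells the daemon to prefer addresses on the listed subnets for node-to-node traffic. This is a minimal sketch; the subnet value is a placeholder, and each node needs a static address on that subnet:

    mmchconfig subnets="10.10.10.0"   # prefer the 10.10.10.0 network for GPFS daemon traffic
    mmlsconfig subnets                # display the current setting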

GPFS communications require static IP addresses for each GPFS node. IP address takeover operations that transfer the address to another computer are not allowed for the GPFS network. Other IP addresses within the same computer that are not used by GPFS can participate in IP takeover. To provide availability or additional performance, GPFS can use virtual IP addresses created by aggregating several network adapters using techniques such as EtherChannel or channel bonding.
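
As an illustration of channel bonding, on a Linux node that is managed by NetworkManager a bonded interface for the GPFS network might be created as follows. This is a sketch only; the interface names, bonding mode, and address are placeholders, and the resulting static address is the one you would then use when you define the node to GPFS:

    # Create the bond and add two physical ports to it (names are examples)
    nmcli connection add type bond con-name bond0 ifname bond0 bond.options "mode=802.3ad,miimon=100"
    nmcli connection add type ethernet slave-type bond con-name bond0-port1 ifname eth0 master bond0
    nmcli connection add type ethernet slave-type bond con-name bond0-port2 ifname eth1 master bond0
    # Assign the static address that GPFS will use, then activate the bond
    nmcli connection modify bond0 ipv4.method manual ipv4.addresses 192.0.2.10/24
    nmcli connection up bond0

The 802.3ad mode requires switch support for link aggregation; other bonding modes can be used if the switch does not provide it.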

Cluster Export Services (CES) uses dedicated network addresses to support the SMB, NFS, and Object protocols and to handle failover and failback operations. File and Object clients use these public IP addresses to access data in GPFS™ file systems.
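
For example, CES addresses are added to the cluster as a pool of floating public IP addresses that the protocol nodes host and move during failover and failback. This is a sketch only; the addresses are placeholders, and the available options can vary by release:

    mmces address add --ces-ip 203.0.113.10,203.0.113.11   # add two CES addresses to the pool
    mmces address list                                      # show which node currently hosts each address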

For additional information, see CES network configuration.