Disk I/O
To optimize disk I/O performance, you should consider the following options for NSD servers or other GPFS nodes that are directly attached to a SAN over a Fibre Channel (FC) network:
- The storage server cache settings can impact GPFS performance if not set correctly.
- When the storage server disks are configured for RAID5, the following configuration settings can affect GPFS performance:
  - GPFS block size
  - Maximum I/O size of the Fibre Channel host bus adapter (HBA) device driver
  - Storage server RAID5 stripe size
Note: For optimal performance, the GPFS block size should be a multiple of the maximum I/O size of the FC HBA device driver, and the maximum I/O size of the FC HBA device driver should be a multiple of the RAID5 stripe size. Following these suggestions can avoid the performance penalty of read-modify-write at the storage server for GPFS writes.
Examples of the suggested settings are:
- 8+P RAID5
  - GPFS block size = 512K
  - Storage server RAID5 segment size = 64K (RAID5 stripe size = 512K)
  - Maximum I/O size of FC HBA device driver = 512K
- 4+P RAID5
  - GPFS block size = 256K
  - Storage server RAID5 segment size = 64K (RAID5 stripe size = 256K)
  - Maximum I/O size of FC HBA device driver = 256K
With the example 8+P and 4+P RAID5 settings, the RAID5 parity can be calculated entirely from the data being written, so the storage server does not have to read existing data from disk to compute parity. The maximum I/O size of the FC HBA device driver can be verified using iostat or the storage server performance monitor. In some cases, the device driver may need to be patched to increase the default maximum I/O size.
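As an illustration of the 8+P case, the following sketch shows the stripe-size arithmetic and the standard GPFS commands for listing and creating a file system with a matching block size. The device name gpfs1 and the stanza file path /tmp/nsd.stanza are placeholders, not values from this document:

  # 8+P RAID5 arithmetic: 8 data disks x 64K segment size = 512K stripe size,
  # which matches both the GPFS block size and the HBA maximum I/O size,
  # so each full-block write maps onto exactly one full RAID5 stripe.

  # Report the block size of an existing file system:
  mmlsfs gpfs1 -B

  # Create a file system with a 512K block size:
  mmcrfs gpfs1 -F /tmp/nsd.stanza -B 512K

  # While running a write test, watch the average request size that iostat
  # reports for the underlying disks; it should reflect the HBA maximum I/O size:
  iostat -x 2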
- The GPFS parameter maxMBpS can limit the maximum throughput of an NSD server or of a single GPFS node that is directly attached to the SAN with an FC HBA. The default value is 2048. Change maxMBpS by issuing the mmchconfig command, as shown in the example below. After changing this value, restart GPFS on the affected nodes, then test read and write performance both on a single node and across a large number of nodes.
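A minimal sketch of adjusting maxMBpS follows. The node class name nsdNodes and the value 4096 are placeholders chosen for illustration, not recommendations from this document:

  # Show the current maxMBpS setting:
  mmlsconfig maxMBpS

  # Raise maxMBpS on the NSD server nodes (node class name is a placeholder):
  mmchconfig maxMBpS=4096 -N nsdNodes

  # Restart GPFS on those nodes so the new value takes effect:
  mmshutdown -N nsdNodes
  mmstartup -N nsdNodes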