Issues caused by the unhealthy state of the components used
This section discusses issues caused by the unhealthy state of the components used in the IBM Spectrum Scale stack.
- Suboptimal performance due to failover of NSDs to secondary server - NSD server failure
In a shared storage configuration, failure of an NSD server might result in the failover of its NSDs to the secondary server, if the secondary server is active. This can reduce the total number of NSD servers actively serving the file system, which in turn impacts the file system's performance.
- Suboptimal performance due to failover of NSDs to secondary server - Disk connectivity failure
In a shared storage configuration, disk connectivity failure on an NSD server might result in failover of its NSDs to the secondary server, if the secondary server is active. This can reduce the total number of NSD servers actively serving the file system, which in turn impacts the overall performance of the file system.
- Suboptimal performance due to file system being fully utilized
As a file system nears full utilization, it becomes difficult to find free space for new blocks. This impacts the performance of write, append, and create operations.
- Suboptimal performance due to VERBS RDMA being inactive
IBM Spectrum Scale for Linux supports InfiniBand Remote Direct Memory Access (RDMA) using the Verbs API for data transfer between an NSD client and the NSD server. If InfiniBand (IB) VERBS RDMA is enabled on the IBM Spectrum Scale cluster and there is a drop in file system performance, verify whether the NSD client nodes are using VERBS RDMA for communication with the NSD server nodes. If the nodes are not using RDMA, the communication falls back to the GPFS node's TCP/IP interface, which can cause performance degradation.
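For the two failover scenarios above, one way to check which NSDs have failed over is to inspect the NSD-to-server mapping and the disk availability. The following is a sketch; the file system name `gpfs0` is a placeholder, and the exact output columns vary by release:

```
# List the NSDs belonging to the file system together with their
# defined NSD server lists (gpfs0 is a placeholder name)
mmlsnsd -f gpfs0

# Show the status and availability of each disk in the file system;
# disks whose availability is "down" may indicate a connectivity
# failure on the server path
mmlsdisk gpfs0
```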
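To gauge how close a file system is to full utilization, the `mmdf` command reports capacity and free space per disk and per storage pool (again, `gpfs0` is a placeholder file system name):

```
# Report total, free, and fragmented space for each disk and
# storage pool, plus inode usage, for the file system
mmdf gpfs0
```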
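For the VERBS RDMA case, a starting point is to confirm that RDMA is enabled in the cluster configuration and then inspect the active connections from an NSD client node. This is a sketch; output formats vary by release:

```
# Check whether VERBS RDMA is enabled in the cluster configuration
mmlsconfig verbsRdma

# On an NSD client node, inspect the active connections to other
# nodes; the output indicates the transport in use per connection
mmdiag --network
```

If the configuration shows RDMA enabled but the connections to the NSD server nodes are not using it, the nodes are communicating over the TCP/IP interface, which matches the degradation described above.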
Parent topic: Performance issues