Selecting quorum nodes
To configure a system with effective quorum nodes, follow these rules:
- Select nodes that are likely to remain active.
  - If a node is likely to be rebooted or to require maintenance, do not select that node as a quorum node.
- Select nodes that have different failure points, such as:
  - Nodes that are located in different racks
  - Nodes that are connected to different power panels
- Select nodes on which GPFS administrative and serving functions rely, such as:
  - Network Shared Disk (NSD) servers
- Select an odd number of nodes as quorum nodes
  - Clusters with no tiebreaker disks can be configured with up to nine quorum nodes. Clusters with tiebreaker disks are limited to a maximum of eight quorum nodes.
  - Having a large number of quorum nodes might increase the time that is required for startup and failure recovery.
  - Having more than seven quorum nodes does not guarantee higher availability.
- All quorum nodes must have access to all of the tiebreaker disks.
- The /var/mmfs directory on quorum nodes cannot be on a tmpfs or ramfs file system.
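As an illustration of how quorum nodes are typically designated, the mmchnode command can change the quorum role of existing cluster nodes. The node names below are hypothetical; substitute the nodes you selected by the rules above:

```
# Designate an odd number of nodes, placed in different racks,
# as quorum nodes (node names are examples only).
mmchnode --quorum -N rack1node1,rack2node1,rack3node1

# Verify the quorum designations for the cluster.
mmlscluster
```

These commands must be run with administrative privileges on a node in the GPFS cluster.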
Note: When you add a quorum node by using either mmcrcluster or mmchnode, or when you enable CCR from the server-based cluster configuration method, the system checks whether the /var/mmfs directory on the designated quorum node is on a tmpfs or ramfs file system. If /var/mmfs is on tmpfs or ramfs, the command prints an error message.
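The check that the note describes can be approximated with a small script. This is only a sketch, not the internal GPFS validation; the helper name is invented, and it relies on the file-system type reported by `stat -f`:

```shell
#!/bin/sh
# Sketch of the validation described above: report an error if a
# directory is backed by tmpfs or ramfs. The function name is a
# hypothetical example; the real check is internal to GPFS commands.
check_not_memfs() {
    fstype=$(stat -f -c %T "$1" 2>/dev/null)
    case "$fstype" in
        tmpfs|ramfs)
            echo "ERROR: $1 is on $fstype; use persistent storage" >&2
            return 1 ;;
        *)
            echo "OK: $1 is on $fstype"
            return 0 ;;
    esac
}

# On a quorum node you would check /var/mmfs; "/" is used here only
# because it exists on every system.
check_not_memfs /
```

Running the helper against /var/mmfs on each designated quorum node before invoking mmcrcluster or mmchnode surfaces the problem earlier than the command's own error message.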