I need some clarification on tiebreaker disk usage for quorum, as follows:
1) Can a tiebreaker disk be used to hold user data? In other words, can we use an existing NSD that already contains data as a tiebreaker disk?
2) Can we use an internal (non-shared) disk as a tiebreaker disk?
The problem is that we are going to install a 3-node cluster, and dedicating a SAN-attached disk as a tiebreaker seems expensive if we cannot also use it to store user data or as an NSD for GPFS file systems. In that case, I was considering making all 3 nodes quorum nodes instead of using a tiebreaker disk with quorum on 2 nodes.
I would appreciate your comments and help on this.
2 replies. Latest post: 2007-05-23T18:07:08Z by SystemAdmin
Pinned topic: TieBreaker disk question
gcorneau (accepted answer)
Re: TieBreaker disk question, 2007-05-23T12:43:21Z, in response to SystemAdmin

To answer your questions:
1) Yes, you can (and most installations do) use standard file system NSDs for the tiebreaker function. These can be dataOnly, metadataOnly, or dataAndMetadata disks. The tiebreaker function uses the sectors of the disk reserved for GPFS purposes (similar in concept to the VGDA space on standard AIX volume groups, if you know that concept).
2) No, don't use internal disks for the tiebreaker function. Tiebreaker disks are, by definition, shared: they must be accessible from all quorum nodes. If you used an internal disk, the node hosting that disk would have to be up at all times!
And for a 3-node cluster on GPFS 3.1, I suggest you define all three nodes as quorum nodes and use tiebreaker disks for quorum rather than multi-node quorum. The reason?
With tiebreaker disks, you can lose 2 of the 3 nodes and the file systems remain mounted on the last node. With multi-node quorum, you can lose at most one node.
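The tradeoff above can be sketched numerically. This is just an illustration of the majority rule behind node quorum, assuming the semantics described in this thread; the function names are made up for the example, not GPFS commands or APIs:

```python
# Compare how many quorum-node failures each scheme tolerates.
# Assumes: multi-node quorum needs a strict majority of quorum nodes up,
# while tiebreaker quorum survives down to a single quorum node that can
# still reach a majority of the tiebreaker disks (as the reply describes).

def node_quorum_tolerance(quorum_nodes: int) -> int:
    """Multi-node quorum: a strict majority must stay up."""
    majority = quorum_nodes // 2 + 1
    return quorum_nodes - majority

def tiebreaker_tolerance(quorum_nodes: int) -> int:
    """Tiebreaker quorum: one surviving quorum node is enough."""
    return quorum_nodes - 1

for n in (2, 3):
    print(f"{n} quorum nodes: multi-node tolerates "
          f"{node_quorum_tolerance(n)} failure(s), "
          f"tiebreaker tolerates {tiebreaker_tolerance(n)}")
# 3 quorum nodes: multi-node quorum tolerates 1 failure,
# tiebreaker quorum tolerates 2 -- matching the numbers above.
```

For the 3-node cluster in the question, that is the whole argument: tiebreaker quorum keeps the file systems mounted with only one node left, while multi-node quorum loses the cluster after the second node failure.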
Hope this helps!
IBM System p Advanced Technical Support