Enabling and disabling Persistent Reserve
GPFS can use Persistent Reserve (PR) functionality to improve failover times (with some restrictions).
The following restrictions apply to the use of PR:
- PR is supported on both AIX® and Linux® nodes. However, note the following:
- If the disks have defined NSD servers, then the NSD server nodes must all be running AIX, or they must all be running Linux.
- If the disks are SAN-attached to all nodes, then the SAN-attached nodes in the cluster must all be running AIX, or they must all be running Linux.
- The disk subsystems must support PR.
- GPFS supports a mix of PR disks and other disks. However, failover times improve only if all of the disks in the cluster support PR.
- GPFS supports PR only in the local cluster. Remote mounts must access the disks through an NSD server.
- When you enable or disable PR, you must stop GPFS on all nodes.
- Before enabling PR, make sure all disks are in the same initial state.
- Before enabling PR, disks must be removed from any CCR tiebreaker disk configuration. After PR is enabled, the disks can be added back to the CCR tiebreaker disk configuration (see the sketch after this list).
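Taken together, these restrictions imply a sequence like the following. This is a minimal sketch, not the complete procedure: the tiebreaker NSD names (nsd1, nsd2, nsd3) are hypothetical, and the tiebreakerDisks steps apply only if tiebreaker disks are configured in your cluster.
mmshutdown -a                                # stop GPFS on all nodes before changing the PR setting
mmchconfig tiebreakerDisks=no                # remove the CCR tiebreaker disks, if any are defined
mmchconfig usePersistentReserve=yes          # enable Persistent Reserve
mmchconfig tiebreakerDisks="nsd1;nsd2;nsd3"  # add the tiebreaker disks back (hypothetical NSD names)
mmstartup -a                                 # restart GPFS on all nodes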
To enable Persistent Reserve, enter the following command:
mmchconfig usePersistentReserve=yes
To disable Persistent Reserve, enter the following command:
mmchconfig usePersistentReserve=no
For fast recovery times with Persistent Reserve, you should also set the failureDetectionTime configuration parameter. For fast recovery, a recommended value is 10 seconds. You can set this by issuing the following command:
mmchconfig failureDetectionTime=10
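Because mmchconfig accepts a comma-separated list of Attribute=value pairs, both settings can also be applied in a single invocation, as in this sketch (again assuming GPFS is stopped on all nodes):
mmchconfig usePersistentReserve=yes,failureDetectionTime=10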
To determine whether the disks on the servers and the disks of a specific node have PR enabled, issue the following command from the node:
mmlsnsd -X
The system responds with something similar to the following:
Disk name  NSD volume ID     Device       Devtype  Node name            Remarks
---------------------------------------------------------------------------------------
gpfs10nsd  09725E5E43035A99  /dev/hdisk6  hdisk    k155n14.kgn.ibm.com  server node,pr=yes
gpfs10nsd  09725E5E43035A99  /dev/hdisk8  hdisk    k155n16.kgn.ibm.com  server node,pr=yes
gpfs10nsd  09725E5E43035A99  /dev/hdisk6  hdisk    k155n17.kgn.ibm.com  directly attached pr=yes
If the GPFS daemon has been started on all the nodes in the cluster and the file system has been mounted on all nodes that have direct access to the disks, then pr=yes should appear for every hdisk. If it does not, there is a problem. Refer to the IBM Storage Scale: Problem Determination Guide for additional information on Persistent Reserve errors.
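As a quick check, the mmlsnsd output can be filtered with standard shell tools to flag any disk that lacks PR. This is a minimal sketch; the pattern assumes the device paths and the pr=yes remark format shown in the sample output above:
mmlsnsd -X | grep '/dev/' | grep -v 'pr=yes'    # prints only disk lines without PR enabled
If this pipeline produces no output, PR is enabled on every listed disk.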