Setting up an active-passive GPFS configuration
This example demonstrates how to configure an active-passive GPFS cluster.
To establish an active-passive storage replication GPFS cluster, as shown in Figure 1 of An active-passive GPFS cluster, consider the following configuration:
- Production site
- Consists of:
- Nodes – nodeP001, nodeP002, nodeP003, nodeP004, nodeP005
- Storage subsystems – Storage System P
- LUN IDs and disk volume names – lunP1 (hdisk11), lunP2 (hdisk12), lunP3 (hdisk13), lunP4 (hdisk14)
- Recovery site
- Consists of:
- Nodes – nodeR001, nodeR002, nodeR003, nodeR004, nodeR005
- Storage subsystems – Storage System R
- LUN IDs and disk volume names – lunR1 (hdisk11), lunR2 (hdisk12), lunR3 (hdisk13), lunR4 (hdisk14)
- Establish synchronous PPRC volume pairs by using the copy entire volume option:
lunP1-lunR1 (source-target)
lunP2-lunR2 (source-target)
lunP3-lunR3 (source-target)
lunP4-lunR4 (source-target)
- Create the recovery cluster, selecting nodeR001 as the primary cluster data server node,
nodeR002 as the secondary cluster data server node, and the nodes in the cluster contained
in the file NodeDescFileR. The NodeDescFileR file contains the node descriptors:
nodeR001:quorum-manager
nodeR002:quorum-manager
nodeR003:quorum-manager
nodeR004:quorum-manager
nodeR005
Issue this command:
mmcrcluster -N NodeDescFileR -p nodeR001 -s nodeR002
- Create the GPFS production cluster, selecting nodeP001 as the primary cluster data server node,
nodeP002 as the secondary cluster data server node, and the nodes in the cluster contained
in the file NodeDescFileP. The NodeDescFileP file contains the node descriptors:
nodeP001:quorum-manager
nodeP002:quorum-manager
nodeP003:quorum-manager
nodeP004:quorum-manager
nodeP005
Issue this command:
mmcrcluster -N NodeDescFileP -p nodeP001 -s nodeP002
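The two node descriptor files can be generated with a short shell script. This is a sketch, not part of the GPFS product; only the file contents and the mmcrcluster invocations come from the steps above, and the mmcrcluster commands are echoed rather than executed because they require a live GPFS installation:

```shell
#!/bin/sh
# Sketch: generate the node descriptor files used by mmcrcluster.
# File names and node names match the example configuration above.

# Recovery-site node descriptors (NodeDescFileR)
cat > NodeDescFileR <<'EOF'
nodeR001:quorum-manager
nodeR002:quorum-manager
nodeR003:quorum-manager
nodeR004:quorum-manager
nodeR005
EOF

# Production-site node descriptors (NodeDescFileP)
cat > NodeDescFileP <<'EOF'
nodeP001:quorum-manager
nodeP002:quorum-manager
nodeP003:quorum-manager
nodeP004:quorum-manager
nodeP005
EOF

# The clusters would then be created with (each command run on a node
# belonging to the corresponding site):
echo "mmcrcluster -N NodeDescFileR -p nodeR001 -s nodeR002"
echo "mmcrcluster -N NodeDescFileP -p nodeP001 -s nodeP002"
```

Only the first two fields of each descriptor matter here: the node name and the quorum-manager designation for the first four nodes at each site.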
- At all times the peer clusters must see a consistent image of the mirrored file system's
configuration state contained in the mmsdrfs file. After the initial creation of the file
system, all subsequent updates to the local configuration data must be propagated and imported into
the peer cluster. Execute the mmfsctl syncFSconfig command to resynchronize the
configuration state between the peer clusters after each of these actions in the primary GPFS cluster:
- Addition of disks through the mmadddisk command
- Removal of disks through the mmdeldisk command
- Replacement of disks through the mmrpldisk command
- Modifications to disk attributes through the mmchdisk command
- Changes to the file system's mount point through the mmchfs -T command
To automate the propagation of the configuration state to the recovery cluster, activate and use the syncFSconfig user exit. Follow the instructions in the prolog of /usr/lpp/mmfs/samples/syncfsconfig.sample.
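Short of activating the sample user exit, one way to keep the peer clusters in sync is a small wrapper that runs the configuration-changing command and, on success, issues mmfsctl syncFSconfig. This is a sketch under stated assumptions: fs0 is the file system from this example, and RemoteNodesFile (a file listing contact nodes in the recovery cluster) is an assumed helper file, not something the steps above create:

```shell
#!/bin/sh
# Sketch only: wrap a disk-level GPFS command so that the mmsdrfs
# configuration state is pushed to the peer (recovery) cluster afterwards.
# RemoteNodesFile is an assumed file naming contact nodes at the recovery site.
sync_after() {
    "$@" || return 1    # run the mmadddisk/mmdeldisk/... command as given
    mmfsctl fs0 syncFSconfig -n RemoteNodesFile
}

# Example usage (on a production-cluster node):
#   sync_after mmadddisk fs0 -F NewDiskDescFile
```

The wrapper deliberately skips the resynchronization when the wrapped command fails, so a rejected disk change is never propagated.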
- From a node in the production cluster, start the GPFS daemon on all nodes:
mmstartup -a
- Create the NSDs at the production site. The disk descriptors contained
in the file DiskDescFileP are:
/dev/hdisk11:nodeP001:nodeP002:dataAndMetadata:-1
/dev/hdisk12:nodeP001:nodeP002:dataAndMetadata:-1
/dev/hdisk13:nodeP001:nodeP002:dataAndMetadata:-1
/dev/hdisk14:nodeP001:nodeP002:dataAndMetadata:-1
Issue this command:
mmcrnsd -F DiskDescFileP
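The DiskDescFileP contents can likewise be generated by script. A sketch assuming the colon-separated descriptor format shown above (DiskName:PrimaryServer:BackupServer:DiskUsage:FailureGroup); the mmcrnsd command is echoed because it needs a live cluster:

```shell
#!/bin/sh
# Sketch: build the disk descriptor file for the production-site NSDs.
# Fields: DiskName:PrimaryServer:BackupServer:DiskUsage:FailureGroup
cat > DiskDescFileP <<'EOF'
/dev/hdisk11:nodeP001:nodeP002:dataAndMetadata:-1
/dev/hdisk12:nodeP001:nodeP002:dataAndMetadata:-1
/dev/hdisk13:nodeP001:nodeP002:dataAndMetadata:-1
/dev/hdisk14:nodeP001:nodeP002:dataAndMetadata:-1
EOF

echo "mmcrnsd -F DiskDescFileP"   # run on a production-cluster node
```

Note that mmcrnsd rewrites the descriptor file with the generated NSD names, which is why the same DiskDescFileP can be passed to mmcrfs in the next step.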
- Create the GPFS file system and mount it on all nodes at the production site.
Issue this command:
mmcrfs /gpfs/fs0 fs0 -F DiskDescFileP
Then mount the file system on all nodes:
mmmount fs0 -a