NFS configuration requirements to allow specification of replicas
You must be an administrator to enable, disable, or specify root replicas. The command syntax is:

chnfs -R {on|off|host[+host]}
To specify replicas, the server must be configured with chnfs -R on so that it issues volatile NFSv4 file handles. A file handle is an identifier that NFS servers issue to clients to identify a file or directory on the server. By default, the server issues persistent file handles. Switching between file handle types can cause errors in applications on NFSv4 clients that are actively using the server when the switch is performed. To change the file handle mode with chnfs -R, no file systems can be exported for NFSv4 access. Set the file handle disposition on a newly provisioned NFS server, or at a time when NFS activity can be minimized or stopped. Clients that are actively connected to a server when the mode is changed might need to unmount and remount their NFSv4 mounts. To minimize this disruption, reduce the client mounts to a small number of mounts that mount the top-level directories of an NFSv4 server's exported file space.

The NFSv4 client cannot fail over to replicas with different export access properties. Administrators must make sure that all replicas are specified with the same export access controls and access mode (read-only or read-write). With the possible exception of exported GPFS, it is expected that replicated data will be exported read-only. It is also the administrator's responsibility to maintain the data content at all replica locations. Directory trees and all data content should be kept identical. Updates to data content must be performed in a manner that is most compatible with the applications that use the data.
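The mode change described above can be sketched as the following command sequence. This is an illustrative outline only: it assumes all NFS activity can be interrupted, and exportfs -ua unexports every file system (including NFSv3 exports), so plan for the outage.

```shell
# Quiesce exports first: chnfs -R cannot change the file handle
# mode while any file system is exported for NFSv4 access.
exportfs -ua       # unexport all currently exported file systems

# Switch the server to issuing volatile NFSv4 file handles.
chnfs -R on

# Re-export everything defined in /etc/exports.
exportfs -a
```

Clients that were mounted during the switch may need their NFSv4 mounts unmounted and remounted afterward, as noted above.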
With replicas, you can use the exname export option to hide details of the server's local file system namespace from NFSv4 clients. For more details, see the exportfs command and the /etc/exports file.
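As a sketch of the exname option, the following /etc/exports entry exports a local directory under a different external name. The local path /usr/websrv/pages and the external name /exports/webpages are assumptions for illustration; see the exportfs command documentation for the exact rules relating exname to the NFSv4 pseudo-root.

```shell
# /etc/exports entry: export the local directory /usr/websrv/pages
# to NFSv4 clients under the external name /exports/webpages,
# hiding the server's local file system layout.
/usr/websrv/pages -vers=4,ro,exname=/exports/webpages
```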
You can use the replicas option when exporting cluster file systems such as General Parallel File System (GPFS) to specify multiple NFS server nodes that see the same GPFS view. This is a configuration where exporting the data for read-write access may be valid. However, with read-write replicas, if a replica failover occurs while write operations are in progress, applications performing the writes may encounter unrecoverable errors. Similarly, a mkdir or exclusive file create operation running during a failover may encounter an EEXIST error.
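A sketch of such an entry follows, assuming a GPFS file system mounted at /gpfs/data on cluster nodes nodeA and nodeB (all names here are hypothetical). Because both replica locations refer to the same underlying GPFS data, a read-write export can be valid in this case.

```shell
# /etc/exports entry: two NFS server nodes of the same GPFS
# cluster serve the same data, so clients can fail over between
# nodeA and nodeB even for read-write access.
/gpfs/data -vers=4,rw,replicas=/gpfs/data@nodeA:/gpfs/data@nodeB
```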
For example, suppose serverA wants to export /webpages, and there is a replica of /webpages on serverB in the /backup/webpages directory. The following entry in the /etc/exports file exports /webpages from serverA and informs clients that there is a copy of the file system on serverB at /backup/webpages:

/webpages -vers=4,ro,replicas=/webpages@serverA:/backup/webpages@serverB
Both /webpages on serverA and /backup/webpages on serverB are assumed to be the root directories of their file systems. If serverA had not been listed in the export, it would have been silently added as the first replica location, because the server exporting the data is assumed to be the preferred server for the data it is exporting.

Replicas are used only by the NFSv4 protocol. The above export could also have specified NFSv3 (vers=3:4), but the replication information would not be available to NFSv3 clients. Clients using NFSv3 can still access the information in /webpages on serverA, but they will not fail over to the replica if serverA becomes unavailable.
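On the client side, the difference can be sketched with mounts such as the following. The mount points are hypothetical, and exact mount options vary by client platform; consult your client's mount documentation.

```shell
# NFSv4 mount: the client receives the replica list from the
# export and can fail over to serverB if serverA goes down.
mount -o vers=4 serverA:/webpages /mnt/webpages

# NFSv3 mount of the same export: the data is accessible, but
# no replica information is conveyed and no failover occurs.
mount -o vers=3 serverA:/webpages /mnt/webpages-v3
```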