Linux export considerations
Linux does not allow a file system to be NFS V4 exported unless it supports POSIX ACLs. For more information, see Linux ACLs and extended attributes.
For Linux nodes only, issue the exportfs -ra command to initiate a reread of the /etc/exports file. An fsid value must be assigned to each NFS-exported GPFS file system. For example, an entry in the /etc/exports file might look like this:
/gpfs/dir1 cluster1(rw,fsid=745)
The fsid values must be assigned subject to the following conditions:
- The values must be unique for each file system.
- The values must not change after reboots. The file system should be unexported before any change is made to an already assigned fsid.
- Entries in the /etc/exports file are not necessarily file system roots. You can export multiple directories within a file system. If different directories of the same file system are exported, their fsids must be different. For example, in the GPFS file system /gpfs, if two directories (dir1 and dir2) are exported, the entries might look like this:
/gpfs/dir1 cluster1(rw,fsid=745)
/gpfs/dir2 cluster1(rw,fsid=746)
- If a GPFS file system is exported from multiple nodes, the fsids should be the same on all nodes.
- Define the root of the overall exported file system (also referred to as the pseudo root file system) and the pseudo file system tree. For example, to define /export as the pseudo root and export /gpfs/dir1 and /gpfs/dir2, which are not below /export, run:
mkdir -m 777 /export /export/dir1 /export/dir2
mount --bind /gpfs/dir1 /export/dir1
mount --bind /gpfs/dir2 /export/dir2
In this example, /gpfs/dir1 and /gpfs/dir2 are bound to new names under the pseudo root using the bind option of the mount command. These bind mount points should be explicitly unmounted after GPFS is stopped and bind-mounted again after GPFS is started. To unmount them, use the umount command. For the preceding example, run:
umount /export/dir1; umount /export/dir2
- Edit the /etc/exports file. There must be one line for the pseudo root with fsid=0, and the two exported directories (with their newly bound paths) are entered with their own fsids. For the preceding example:
/export cluster1(rw,fsid=0)
/export/dir1 cluster1(rw,fsid=745)
/export/dir2 cluster1(rw,fsid=746)
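The fsid uniqueness rule above can be spot-checked before reloading the exports. The following is a minimal sketch; the check_fsids function name is illustrative and not part of GPFS or nfs-utils:

```shell
# Print any fsid value that is assigned more than once in an exports file.
# Duplicates violate the uniqueness rule and should be fixed before
# running exportfs -ra.
check_fsids() {
    grep -o 'fsid=[0-9]*' "$1" | sort | uniq -d
}
```

Empty output means every fsid in the file is unique.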
Large installations with hundreds of compute nodes and a few login nodes or NFS-exporting nodes require tuning of the GPFS parameters maxFilesToCache and maxStatCache with the mmchconfig command.
- If the user does not specify values for maxFilesToCache and maxStatCache, the default value of maxFilesToCache is 4000, and the default value of maxStatCache is 1000.
- On upgrades to GPFS 4.1 from GPFS 3.4 or earlier, the existing defaults (1000 for maxFilesToCache and 4000 for maxStatCache) remain in effect.
- If the user specifies a value for maxFilesToCache but does not specify a value for maxStatCache, the default value of maxStatCache changes to 4*maxFilesToCache.
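The defaulting rules above can be expressed as a small helper. This is a sketch for illustration only: effective_max_stat_cache is a hypothetical name, and the constants are the GPFS 4.1 defaults described above.

```shell
# Compute the effective maxStatCache from the values the user specified.
# Pass an empty string for a parameter the user did not set.
effective_max_stat_cache() {
    local mftc=$1 msc=$2
    if [ -n "$msc" ]; then
        echo "$msc"            # an explicit value always wins
    elif [ -n "$mftc" ]; then
        echo $((4 * mftc))     # defaults to 4*maxFilesToCache
    else
        echo 1000              # GPFS 4.1 default when neither is set
    fi
}
```

For example, specifying only maxFilesToCache=10000 (with mmchconfig maxFilesToCache=10000) yields an effective maxStatCache of 40000.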
If you are running SLES 9 SP 1, the kernel defines the sysctl variable fs.nfs.use_underlying_lock_ops, which determines whether the NFS lockd consults the file system when granting advisory byte-range locks. For distributed file systems such as GPFS, this variable must be set to true (the default is false). To verify the current setting, issue:
sysctl fs.nfs.use_underlying_lock_ops
To enable it, set fs.nfs.use_underlying_lock_ops = 1 in /etc/sysctl.conf and reload the settings:
sysctl -p
Because the fs.nfs.use_underlying_lock_ops variable is currently not available in SLES 9 SP 2 or later, when NFS-exporting a GPFS file system, ensure that your NFS server nodes are at the SP 1 level (unless this variable is made available in later service packs).
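On an SP 1 NFS server node, the setting can be made persistent across reboots. A sketch of the /etc/sysctl.conf fragment follows, assuming the standard sysctl.conf mechanism (1 represents true):

```shell
# /etc/sysctl.conf fragment (SLES 9 SP 1 only):
# make NFS lockd consult GPFS when granting advisory byte-range locks
fs.nfs.use_underlying_lock_ops = 1
```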
For additional considerations when NFS exporting your GPFS file system, refer to File system creation considerations.