Linux export considerations

Linux does not allow a file system to be NFS V4 exported unless it supports POSIX ACLs. For more information, see Linux ACLs and extended attributes.

For Linux nodes only, issue the exportfs -ra command to initiate a reread of the /etc/exports file.

Starting with Linux kernel version 2.6, an fsid value must be specified for each GPFS file system that is exported on NFS. For example, the format of the entry in /etc/exports for the GPFS directory /gpfs/dir1 might look like this:
/gpfs/dir1 cluster1(rw,fsid=745)
The administrator must assign fsid values subject to the following conditions:
  1. The values must be unique for each file system.
  2. The values must not change after reboots. The file system should be unexported before any change is made to an already assigned fsid.
  3. Entries in the /etc/exports file are not necessarily file system roots. You can export multiple directories within a file system. In the case of different directories of the same file system, the fsids must be different. For example, in the GPFS file system /gpfs, if two directories are exported (dir1 and dir2), the entries might look like this:
    /gpfs/dir1 cluster1(rw,fsid=745)
    /gpfs/dir2 cluster1(rw,fsid=746)
  4. If a GPFS file system is exported from multiple nodes, the fsids should be the same on all nodes.
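Because duplicate fsid values are an easy mistake when many directories are exported, a quick check before rereading /etc/exports can help. The following is a sketch, assuming a POSIX shell with GNU grep available; /tmp/exports.sample is a throwaway demonstration file, not your real /etc/exports:

```shell
# Sketch: verify that every fsid= value in an exports file is unique
# before running exportfs -ra.
check_fsids() {
    # Prints duplicated fsid=N tokens; empty output means all are unique.
    grep -o 'fsid=[0-9]*' "$1" | sort | uniq -d
}

# Demonstrate on a sample file (example paths, not a real /etc/exports):
cat > /tmp/exports.sample <<'EOF'
/gpfs/dir1 cluster1(rw,fsid=745)
/gpfs/dir2 cluster1(rw,fsid=746)
EOF

if [ -z "$(check_fsids /tmp/exports.sample)" ]; then
    echo "all fsid values are unique"
fi
```

Point check_fsids at /etc/exports on each exporting node; non-empty output names the fsid values that collide.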
Configuring the directories for export with NFSv4 differs slightly from the previous NFS versions. To configure the directories, do the following:
  1. Define the root of the overall exported file system (also referred to as the pseudo root file system) and the pseudo file system tree. For example, to define /export as the pseudo root and export /gpfs/dir1 and /gpfs/dir2 which are not below /export, run:
    mkdir -m 777 /export /export/dir1 /export/dir2
    mount --bind /gpfs/dir1 /export/dir1
    mount --bind /gpfs/dir2 /export/dir2
    In this example, /gpfs/dir1 and /gpfs/dir2 are bound to new names under the pseudo root using the bind option of the mount command. These bind mount points should be explicitly unmounted after GPFS is stopped and bind-mounted again after GPFS is started. To unmount, use the umount command. In the preceding example, run:
    umount /export/dir1; umount /export/dir2
  2. Edit the /etc/exports file. There must be one line for the pseudo root with fsid=0. For the preceding example:
    /export cluster1(rw,fsid=0) 
    /export/dir1 cluster1(rw,fsid=745)
    /export/dir2 cluster1(rw,fsid=746)
    The two exported directories (with their newly bound paths) are entered into the /etc/exports file.
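Since the bind mounts must be recreated after GPFS starts and removed after it stops, a small helper script can keep the two steps together. This is a sketch under the example paths used above (/gpfs/dir1, /gpfs/dir2, /export); the DRY_RUN guard, which defaults to on here, only prints the commands — clear it and run as root on a real NFS server node:

```shell
# Sketch: tie the pseudo-root bind mounts to GPFS start/stop.
# DRY_RUN=1 (the default here) only echoes each command instead of running it.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "$@"; else "$@"; fi; }

start_exports() {
    run mount --bind /gpfs/dir1 /export/dir1
    run mount --bind /gpfs/dir2 /export/dir2
    run exportfs -ra          # reread /etc/exports
}

stop_exports() {
    run exportfs -ua          # unexport everything first
    run umount /export/dir1
    run umount /export/dir2
}

start_exports
```

Call start_exports after GPFS is started and stop_exports before it is stopped, so the bind mounts never outlive the underlying GPFS mounts.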

Large installations with hundreds of compute nodes and a few login nodes or NFS-exporting nodes require tuning of the GPFS parameters maxFilesToCache and maxStatCache with the mmchconfig command.

The general suggestion is to set maxFilesToCache to about 200 on the compute nodes. The login or NFS nodes should set this parameter much higher: maxFilesToCache to 1000 and maxStatCache to 50000.
Note: The stat cache is not effective on the Linux platform. Therefore, you need to set the maxStatCache attribute to a smaller value, such as 512, and the maxFilesToCache attribute to 50000.
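As a sketch, the suggestion above could be applied with mmchconfig. The node class names computeNodes and nfsNodes are hypothetical examples, not predefined classes; the values follow the general suggestion, so on Linux adjust them per the preceding note:

```shell
# Example only: computeNodes and nfsNodes are hypothetical node classes.
mmchconfig maxFilesToCache=200 -N computeNodes

# Login or NFS-exporting nodes (general suggestion; see the Linux note above):
mmchconfig maxFilesToCache=1000,maxStatCache=50000 -N nfsNodes
```

Changes to these attributes take effect after GPFS is restarted on the affected nodes.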
This tuning is required for the GPFS token manager (file locking), which can handle approximately 1,000,000 files in memory. By default, each node holds 5000 tokens, so the token manager must track a total of 5000 * the number of nodes, which exceeds its memory limit on large configurations:
  • If the user does not specify values for maxFilesToCache and maxStatCache, the default value of maxFilesToCache is 4000, and the default value of maxStatCache is 1000.
  • On upgrades to GPFS 4.1 from GPFS 3.4 or earlier, the existing defaults (1000 for maxFilesToCache and 4000 for maxStatCache) remain in effect.
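The arithmetic behind the limit can be illustrated with a quick sketch; the 5000 figure is the sum of the default maxFilesToCache and maxStatCache values listed above, and 300 nodes is an arbitrary example cluster size:

```shell
# Tokens per node with the GPFS 4.1 defaults: maxFilesToCache + maxStatCache.
tokens_per_node=$((4000 + 1000))   # 5000
nodes=300                          # example cluster size
total=$((tokens_per_node * nodes))
echo "total tokens: $total"        # exceeds the ~1,000,000 the token manager handles
```

At 300 nodes the default settings already imply 1,500,000 tokens, which is why the compute nodes' maxFilesToCache should be lowered on large installations.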

If the user specifies a value for maxFilesToCache but does not specify a value for maxStatCache, the default value of maxStatCache changes to 4*maxFilesToCache.

If you are running SLES 9 SP 1, the kernel defines the sysctl variable fs.nfs.use_underlying_lock_ops, which determines whether NFS lockd consults the file system when granting advisory byte-range locks. For distributed file systems such as GPFS, this variable must be set to true (the default is false).

You can query the current setting by issuing the command:
sysctl fs.nfs.use_underlying_lock_ops 
Alternatively, you can add the record fs.nfs.use_underlying_lock_ops = 1 to /etc/sysctl.conf. The record must be applied after the node is initially booted, and after each reboot, by issuing the command:
sysctl -p

Because the fs.nfs.use_underlying_lock_ops variable is currently not available in SLES 9 SP 2 or later, when NFS-exporting a GPFS file system, ensure that your NFS server nodes are at the SP 1 level (unless this variable is made available in later service packs).

For additional considerations when NFS exporting your GPFS file system, refer to File system creation considerations.