CES NFS limitations

Describes the limitations of the IBM Storage Scale CES NFS protocol.

IBM CES NFS stack limitations

CES NFS limitations are described here:
  • Changes to the IBM® CES NFS global configuration are not dynamic. NFS services automatically restart during the execution of the mmnfs export load and mmnfs config change commands. During this time, an NFS client with a soft mount might lose connectivity. This might result in an application failure on the client node. An NFS client with a hard mount might "stall" during the NFS restart.
  • Whenever NFS is restarted, a grace period will ensue. The NFS grace period is user configurable, and the default NFS grace period is 90 seconds. If NFS global configuration changes are performed sequentially, then NFS services will be restarted multiple times, leading to a cumulative extended grace period. This might prevent NFS clients from reclaiming their locks, possibly leading to an application failure on the client node.
  • A maximum of 4000 NFS connections per protocol node is recommended.
  • The maximum number of NFS exports supported per protocol cluster is 1000.
  • Exporting symbolic links is not supported in CES NFS.
  • If NFS starts before a file system is mounted, exports from that file system might not be available. To make these exports available, restart the NFS service (nfs-ganesha), as shown in the sketch after this list.
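
The following sketch illustrates how the grace period might be inspected or adjusted, and how CES NFS can be restarted on a single protocol node, for example after a file system was mounted late. The node name protocolnode1 and the value 60 are placeholders, and the GRACE_PERIOD attribute name should be verified against the mmnfs command documentation for your release:

    # View the current CES NFS global configuration, including the grace period
    mmnfs config list

    # Change the grace period (illustrative value); this restarts NFS services
    mmnfs config change GRACE_PERIOD=60

    # Restart CES NFS on one protocol node (placeholder node name)
    mmces service stop NFS -N protocolnode1
    mmces service start NFS -N protocolnode1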

NFS protocol node limitations

When mounting an NFSv3 file system on a protocol node, the Linux® kernel lockd daemon registers with the rpcbind, preventing the CES NFS lock service from taking effect. If you need to mount an NFSv3 file system on a CES NFS protocol node, use the -o nolock mount option to prevent invoking the Linux kernel lockd daemon.
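
As a sketch, an NFSv3 mount on a protocol node with the kernel lockd daemon suppressed might look like the following; the server name, export path, and mount point are placeholders:

    mount -t nfs -o vers=3,nolock nfsserver.example.com:/remote/export /mnt/nfs3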

NFS Ganesha protocol server client limitations

  • Microsoft Windows is not a supported or tested client for the CES NFS Ganesha protocol server. If you want to use Windows clients, CES SMB is the recommended protocol.
  • RHEL 7 clients using IPv6 cannot communicate with the quota server on CES nodes (Red Hat Bugzilla 2217437).

Limitations while using nested exports with NFS

Creating nested exports (such as /path/to/folder and /path/to/folder/subfolder) is not recommended because it might lead to serious data consistency issues. Remove the higher-level export that prevents the NFSv4 client from descending through the NFSv4 virtual file system path. If nested exports cannot be avoided, ensure that the export with the common path, called the top-level export, grants all required permissions to the NFSv4 client. Also, an NFSv4 client that mounts the parent export (/path/to/folder) does not see the child export subtree (/path/to/folder/inside/subfolder) unless the same client is also explicitly allowed to access the child export. If the parent folder and its subfolder are both exported, the clients of the subfolder export must also be granted access to the parent export.
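
For example, a sketch of exporting both a parent and a nested child path while also granting the child's client access to the parent export might look like the following. The paths, client name, and access options are placeholders; verify the exact mmnfs export add client syntax for your release:

    # Parent (top-level) export: allow the client that mounts the child export
    mmnfs export add /gpfs/fs0/folder --client "client1.example.com(Access_Type=RO)"

    # Nested (child) export for the same client
    mmnfs export add /gpfs/fs0/folder/subfolder --client "client1.example.com(Access_Type=RW)"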

NFS export considerations for versions prior to NFSv4

For NFS exported file systems, the version of NFS you are running can affect the number of inodes you need to cache, as set by the maxStatCache and maxFilesToCache parameters of the mmchconfig command. The performance of the ls command in NFSv3 depends in part on the caching ability of the underlying file system. Setting the cache large enough prevents rereading inodes to complete an ls command, but puts more CPU load on the token manager.
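
As an illustrative sketch, the cache sizes could be raised on the protocol nodes with mmchconfig; the values and the cesNodes node class are placeholders to be sized for your workload, and these parameters typically take effect only after the GPFS daemon is restarted on the affected nodes:

    mmchconfig maxFilesToCache=100000,maxStatCache=50000 -N cesNodes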

Also, the clocks of all nodes in your GPFS cluster must be synchronized. If this is not done, NFS access to the data, as well as other GPFS file system operations, may be disrupted.
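
For example, assuming chrony is used for time synchronization, the clock offset could be spot-checked on each node; use whatever parallel shell tooling you prefer to run the command cluster-wide:

    # Run on every node in the cluster
    chronyc tracking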

NFS V4 export considerations

For information on NFS V4, refer to NFS Version 4 Protocol and other information found in the Network File System Version 4 (nfsv4) section of the IETF Datatracker website (datatracker.ietf.org/wg/nfsv4/documents).

To export a GPFS file system using NFS V4, there are two file system settings that must be in effect. These attributes can be queried using the mmlsfs command, and set using the mmcrfs and mmchfs commands.
  1. The -D nfs4 flag is required. By default, conventional NFS access is not blocked by concurrent file system reads or writes (the POSIX semantic). NFS V4, however, not only allows its requests to block when conflicting activity is occurring, it requires it. Because this is an NFS V4 specific requirement, it must be set before exporting a file system.
    flag value          description
    ---- -------------- -----------------------------------------------------
     -D  nfs4            File locking semantics in effect
    
  2. The -k nfs4 flag is required. To export a file system by using NFS V4, NFS V4 ACLs must be enabled. Since NFS V4 ACLs are vastly different and affect several characteristics of the file system objects (directories and individual files), they must be explicitly enabled. This is done by specifying -k nfs4.
    
    flag value          description
    ---- -------------- -----------------------------------------------------
     -k  nfs4            ACL semantics in effect
    
  3. For NFS users that belong to more than 16 groups, set MANAGE_GIDS=TRUE; otherwise, those users cannot access NFS exports. See the sketch after this list.
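
As a sketch, the settings above might be verified and applied as follows. The file system name fs0 is a placeholder, and the MANAGE_GIDS attribute name should be verified against the mmnfs command documentation for your release:

    # Check the current file locking (-D) and ACL (-k) semantics
    mmlsfs fs0 -D -k

    # Set NFS V4 file locking and ACL semantics
    mmchfs fs0 -D nfs4 -k nfs4

    # Enable server-side group resolution for users in more than 16 groups
    mmnfs config change MANAGE_GIDS=TRUE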

Considerations for NFS clients

When you use NFS clients in an IBM Storage Scale environment, the following considerations apply.

  • If you mount the same NFS export on one client from two different IBM Storage Scale NFS protocol nodes, data corruption might occur.
  • IBM Storage Scale allows concurrent access to the same file data by using SMB, NFS, and native POSIX access. For concurrent access to the same data, some limitations apply. For more information, see Multiprotocol data access considerations.
  • The NFS protocol version that is used as the default on a client operating system might differ from what you expect. If your client mounts NFSv3 by default and you want to mount NFSv4, you must explicitly specify the relevant NFSv4.0 or NFSv4.1 version in the mount command (see the mount sketch after this list). For more information, see the mount command for your client operating system. If you plan to use NFSv4.1, ensure that the client supports that version.
  • It is recommended to use the CES IP address of the IBM Storage Scale system to mount the NFS export on an NFS client.
    • You can use the mmces address list command to view the CES IPs.
    • You can use the mmlscluster --ces command to determine which nodes host which CES IPs.
  • For items that might affect NFS access, see Authentication limitations.
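
As an example sketch, the CES IP addresses could be listed and an export mounted through one of them with an explicit NFS version; the IP address, export path, and mount point are placeholders:

    # On a protocol node: list CES IPs and see which nodes host them
    mmces address list
    mmlscluster --ces

    # On the NFS client: mount the export through a CES IP with an explicit version
    mount -t nfs -o vers=4.1 192.0.2.10:/gpfs/fs0/export1 /mnt/export1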

CES NFS scaling considerations

When planning for scaling the protocols function, consider the maximum supported or maximum recommended number of protocol nodes and client connections. For detailed information, see Scaling considerations.