NFS mount issues

This topic provides information on how to verify and resolve NFS mount errors.

There are several possible NFS mount error conditions, including:
  • Mount times out
  • NFS mount fails with a No such file or directory error
  • NFSv4 client cannot mount NFS exports

Mount times out

Description

The user is trying to do an NFS mount and receives a timeout error.

Verification

When a timeout error occurs, check the following.

  1. Check to see whether the server is reachable by issuing either or both of the following commands:
    ping <server-ip>
    ping <server-name>
    The expected result is that the server responds.
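    For example, a reachable server answers with replies similar to the following (the address is illustrative):
    ping -c 2 192.0.2.10
    64 bytes from 192.0.2.10: icmp_seq=1 ttl=64 time=0.21 ms
    64 bytes from 192.0.2.10: icmp_seq=2 ttl=64 time=0.19 ms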
  2. Check to see whether portmapper, NFS, and mount daemons are running on the server.
    1. On an IBM Spectrum Scale CES node, issue the following command:
      mmces service list
      The expected result is output that indicates that the NFS service is running, as in this example:
      Enabled services: SMB NFS
      SMB is running, NFS is running

    2. On the NFS server node, issue the following command:
      rpcinfo -p
      The expected result is that portmapper, mountd, and NFS are running as shown in the following sample output.
      program  vers  proto  port  service
      100000      4   tcp    111  portmapper
      100000      3   tcp    111  portmapper
      100000      2   tcp    111  portmapper
      100000      4   udp    111  portmapper
      100000      3   udp    111  portmapper
      100000      2   udp    111  portmapper
      100024      1   udp  53111  status
      100024      1   tcp  58711  status
      100003      3   udp   2049  nfs
      100003      3   tcp   2049  nfs
      100003      4   udp   2049  nfs
      100003      4   tcp   2049  nfs
      100005      1   udp  59149  mountd
      100005      1   tcp  54013  mountd
      100005      3   udp  59149  mountd
      100005      3   tcp  54013  mountd
      100021      4   udp  32823  nlockmgr
      100021      4   tcp  33397  nlockmgr
      100011      1   udp  36650  rquotad
      100011      1   tcp  36673  rquotad
      100011      2   udp  36650  rquotad
      100011      2   tcp  36673  rquotad
      
  3. Check to see whether the firewall is blocking NFS traffic on Linux systems by issuing the following command on the NFS client and the NFS server:
    iptables -L 
    Then check whether any hosts or ports that are involved with the NFS connection are blocked (denied).
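    For example, an entry similar to the following in the iptables -L output (illustrative) would indicate that NFS traffic is being blocked:
    Chain INPUT (policy ACCEPT)
    target     prot opt source               destination
    REJECT     tcp  --  anywhere             anywhere            tcp dpt:nfs reject-with icmp-port-unreachable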

    If the client and the server are running in different subnets, a firewall might also be running on the router.

  4. Check to see whether a firewall on the client or on the router is blocking NFS traffic, using the commands appropriate for the firewall in use.
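    For example, on Linux distributions that use firewalld, you can list the active firewall configuration with:
    firewall-cmd --list-all
    If the NFS service or the NFS ports (for example, 2049/tcp) are not in the allowed list, NFS traffic might be blocked.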

NFS mount fails with a No such file or directory error

Description

The user is trying to do an NFS mount on Linux and receives this message:
No such file or directory

Following are the root causes of this error.

Root cause #1 - Access type is none

An NFS export was created on the server without a specified access type. For security reasons, the default access type is none, so mounting does not work.

Solution

On the NFS server, specify an access type (for example, RW for Read and Write) for the export. If the export has been created already, you can achieve this by issuing the mmnfs export change command. See the following example. The backslash (\) is a line continuation character:
mmnfs export change /mnt/gpfs0/nfs_share1 \
--nfschange "*(Access_Type=RW,Squash=NO_ROOT_SQUASH)"

Verification

To verify the access type that is specified for the export, issue the mmnfs export list command on the NFS server. For example:
mmnfs export list --nfsdefs /mnt/gpfs0/nfs_share1
The system displays output similar to this:


Path                  Delegations Clients Access_Type Protocols Transports Squash         Anonymous_uid Anonymous_gid SecType PrivilegedPort Export_id DefaultDelegation Manage_Gids NFS_Commit
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
/mnt/gpfs0/nfs_share1 none        *       RW          3,4       TCP        NO_ROOT_SQUASH -2            -2            KRB5    FALSE          2         none              FALSE       FALSE

"NONE" indicates the root cause; the access type is none .

"RO" or "RW" indicates that the solution was successful.

Root cause #2 - Protocol version that is not supported by the server

The client requests an NFS protocol version (for example, version 4) that is not enabled for the export on the server.

Solution

On the NFS server, specify the protocol version that the client needs for the export (for example, 3:4). If the export already exists, you can achieve this by issuing the mmnfs export change command. For example:

mmnfs export change /mnt/gpfs0/nfs_share1 --nfschange "* (Protocols=3:4)"

Verification

To verify the protocols that are specified for the export, issue the mmnfs export list command. For example:
mmnfs export list --nfsdefs /mnt/gpfs0/nfs_share1
The system displays output similar to this:

Path                  Delegations Clients Access_Type Protocols Transports Squash         Anonymous_uid Anonymous_gid SecType PrivilegedPort DefaultDelegations Manage_Gids NFS_Commit
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
/mnt/gpfs0/nfs_share1 none        *       RW          3,4       TCP        NO_ROOT_SQUASH -2            -2            SYS     FALSE          none               FALSE       FALSE
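
On the client side, you can reproduce the problem by requesting a specific protocol version explicitly; if the server does not offer that version, the mount fails. For example (the address and mount point are illustrative):
mount -t nfs -o vers=4 <CES_IP_ADDRESS>:/mnt/gpfs0/nfs_share1 /mnt/nfs_share1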

NFSv4 client cannot mount NFS exports

Problem

The NFSv4 client cannot mount NFS exports. The mount command on the client either returns an error or times out. Mounting the same export using NFSv3 succeeds.

Determination

The export is hidden by a higher-level export. Try mounting the server root / and navigating through the directories.
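
For example, on the NFSv4 client (the address and mount point are illustrative):
mount -t nfs4 <CES_IP_ADDRESS>:/ /mnt/nfsroot
ls /mnt/nfsroot
If you can navigate to the export through the server root but cannot mount it directly, a higher-level export is the likely cause.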

Solution

Creating nested exports (such as /path/to/folder and /path/to/folder/subfolder) is not recommended because it can lead to serious data consistency issues. Remove the higher-level export that prevents the NFSv4 client from descending through the NFSv4 virtual file system path.

If nested exports cannot be avoided, ensure that the export with the common path, called the top-level export, grants this NFSv4 client all of the required permissions. Also, an NFSv4 client that mounts the parent export (/path/to/folder) does not see the child export subtree (/path/to/folder/subfolder) unless the same client is explicitly allowed to access the child export as well.

  1. Ensure that the NFS server is running correctly on all of the CES nodes and that the CES IP address used to mount is active in the CES cluster. To check the CES IP addresses and the NFS server status, run:
    mmlscluster --ces
    mmces service list -a
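    The expected result is output similar to the following, showing the NFS service as running on each node (node names are illustrative):
    Enabled services: NFS
    node1.example.com:  NFS is running
    node2.example.com:  NFS is running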
  2. Ensure that the firewall allows NFS traffic to pass through. For this to be possible, the CES NFS service must be configured with explicit NFS ports so that discrete firewall rules can be established. On the client, run:
    rpcinfo -t <CES_IP_ADDRESS> nfs
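    The expected result is a reply similar to the following, which shows that the NFS service is reachable through the firewall:
    program 100003 version 3 ready and waiting
    program 100003 version 4 ready and waiting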
  3. Verify that the NFS client is allowed to mount the export; in NFS terms, this means that an export definition exists that includes this client. To check the NFS export details, enter the following command:
    mmnfs export list --nfsdefs <NFS_EXPORT_PATH>
    The system displays output similar to this:
    
    Path                  Delegations Clients Access_Type Protocols Transports Squash              Anonymous_uid Anonymous_gid SecType PrivilegedPort DefaultDelegations Manage_Gids NFS_Commit
    -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
    /mnt/gpfs0/nfs_share1 none        *       RW          3,4       TCP        NO_ROOT_SQUASH      -2            -2            SYS     FALSE          none               FALSE       FALSE
    On an NFSv3 client, run:
    showmount -e <CES_IP_ADDRESS>
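    The expected result is that the export is listed, for example:
    Export list for <CES_IP_ADDRESS>:
    /mnt/gpfs0/nfs_share1 *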
    On an NFSv4 client: Mount the server virtual file system root /. Navigate through the virtual file system to the export.

If you have a remote cluster environment with an owning cluster and an accessing cluster, and the accessing cluster exports the file system of the owning cluster through CES NFS, IP failback might occur before the remote file systems are mounted. This can cause I/O failures with existing CES NFS client mounts and can also cause new mount requests to fail. To avoid these failures, stop and start CES NFS on the recovered node after you run the mmstartup and mmmount <remote FS> commands. To stop and start CES NFS, use these commands:
mmces service stop nfs
mmces service start nfs
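
Putting the steps together, the recovery sequence on the recovered node looks like this (the remote file system name is a placeholder):
mmstartup
mmmount <remote FS>
mmces service stop nfs
mmces service start nfs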