The Network File System (NFS) is a widely used distributed file system that
enables remote clients to perform file operations on a server. The server
makes its directories or filesystems available to the rest of the world
using an operation known as an export. To access these directories, the
client mounts the exported directories or filesystems into its local
directory hierarchy. Inside the mounted directory, clients access the
remote files as if they were stored locally on the machine. NFS currently
offers three versions for exporting and mounting directories or
filesystems: versions 2, 3, and 4.
In this article, we show you how to use a generic NFS mount to consolidate exporting and mounting of all existing NFS versions into a single, seamless mechanism.

Consider a scenario where the server has exported directory entries for all three versions of NFS. Currently, for the client to access all these entries, it has to mount each of them separately at different mount points. Although NFS version 4 provides a pseudo-tree mechanism that enables a single mount of all NFSv4 exported entries, it applies only to the entries made by that version. The client still has to mount the version 2 and 3 entries separately, along with a single mount for version 4 (if the pseudo-tree exists).
The generic NFS mount utility is essentially a wrapper for the
mount command that enables the user to mount
all possible exported entries from a particular server using a single
command. Since changes on the NFS server are not desirable, the wrapper
internally performs separate mounts on the
client machine that are transparent to the user.
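To make the idea concrete, here is a minimal sketch of the wrapper's core loop. The function and directory names are our own assumptions, not the actual gennfsmount source; the sketch reads "path version" pairs (such as might be derived from the server's export list) and prints, rather than executes, the internal mount commands so it can run without root privileges or an NFS server.

```shell
#!/bin/sh
# Hypothetical sketch of a generic NFS mount wrapper's core loop.
# plan_mounts reads "path version" pairs on stdin and prints the internal
# mount commands the wrapper would issue on the client.
plan_mounts() {
    server="$1"
    v4_done=0
    while read -r path vers; do
        case "$vers" in
            4)  # NFSv4 pseudo-tree: a single mount of the pseudo-root
                # covers all v4 entries, so do it at most once
                if [ "$v4_done" -eq 0 ]; then
                    echo "mount -t nfs4 ${server}:/ /tmp/NFSv4"
                    v4_done=1
                fi ;;
            3)  echo "mount -o vers=3 ${server}:${path} /tmp/NFSv3${path}" ;;
            2)  echo "mount -o vers=2 ${server}:${path} /tmp/NFSv2${path}" ;;
        esac
    done
}
```

A real implementation would also create the temporary directories and execute each command, but the version-based dispatch above is the heart of the mechanism.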
Consider the directory hierarchy on the NFS server as shown in Figure 1:
Figure 1. Directory hierarchy on the server
In this current scenario, the following mounts are
required on the client machine:
- One NFSv4 mount: This mounts the NFSv4 pseudo-tree
(Fileset1 and Fileset2). The NFSv4 pseudo-tree
feature allows the NFSv4 client to perform only one
mount operation for all exported entries in the pseudo-tree.
- Two NFSv3 mounts: These mount Tools and Docs.
- One NFSv2 mount: This mounts Binaries.
Using the generic NFS mount utility, these mounts are reduced to a single mount. Here is the command:
gennfsmount <NFS server> <mountpoint>
The advantages of using the generic NFS mount include:
- Users need to do only one mount operation to access all the information from the server; this was not previously supported.
- Consider a server that exports old installation filesets in versions 2 and 3 and new filesets in version 4. Clients can access all these filesets and perform installations using a single operation.
- Searching for a file on a particular server, or in a particular export directory or filesystem, becomes simpler with this generic utility.
- Because the utility handles automatic segregation of version-based NFS exports, it reduces the users' efforts to manage these different mount points.
- NFS administrators can retain the old NFS version exports on the
server, since they would be available along with the newer version data
items in a single unified view.
Similar output can be obtained using the
automount utility, but it requires
administrative overhead to configure the
automount map files for the desired NFS mounts.
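For comparison, the map files that automount needs for the same mounts might look like the fragments below. The file locations and entry names here are illustrative only; consult your system's auto.master and autofs documentation for the exact syntax.

```
# /etc/auto.master (illustrative): direct the /nfs mount point to a map file
/nfs  /etc/auto.nfs

# /etc/auto.nfs (illustrative): one entry per desired NFS mount
tools     -fstype=nfs,vers=3  myserver:/Tools
binaries  -fstype=nfs,vers=2  myserver:/Binaries
```

Each server and version combination needs its own entry, which is exactly the administrative overhead the generic NFS mount avoids.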
The hostmap feature of
automount claims to
mount all exported entries from a server
without requiring any administrative configuration, but problems have
been identified when it comes to mounting NFSv4 entries using
hostmap. In addition, this operation is
performed for all the servers listed in the /etc/hosts file. No solution
exists for mounting all exported entries of one particular server at a
single mount point.

Overall, the generic NFS mount mechanism is a convenience for the user. Next, let's look at some of the design decisions made when implementing a generic NFS mounter.
Design and implementation details
Let's look at the basic architecture of a generic NFS mount system. The generic NFS mounter internally sends a request to the server asking for all the exported entries.
Figure 2. Generic NFS mount requests all exported entries
On receiving a reply from the server, the algorithm in Listing 1 is followed:
Listing 1. Algorithm for generic NFS mount
Start
  Create temporary directories for all versions.
  Initialize list of mount security flavors.
  Embed this list in each internal mount operation.
  For each item in the export-list Do
    Mount internally for every NFSv2 and NFSv3 export
    Update internal log
  End for
  Mount internally only once for NFSv4 export.
  Update internal log
Stop
The list of security flavors is a comma-separated list
of security methods (sys, krb5, krb5i, and so on) used during
operations under the mount point. The list is used to match the security
method supported by the server and is employed during subsequent system
calls under this mount point. A similar mechanism can be employed for the
NFS version of the corresponding exported entry.
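As a sketch of how the wrapper could embed this list in each internal mount, the snippet below composes the option string for one mount. The flavor names follow nfs(5); note that current Linux clients accept a colon-separated list for the sec= option (older clients take a single flavor), so the separator used when passing the list to mount is an assumption that depends on your platform.

```shell
#!/bin/sh
# Sketch: composing the option string for one internal mount, embedding
# the security-flavor list alongside the NFS version of the export.
flavors="sys:krb5:krb5i"
opts="vers=3,sec=${flavors}"
# The wrapper would then run: mount -o "$opts" <server>:<export> <temp-dir>
echo "mount -o $opts myserver:/Tools /tmp/NFSv3/Tools"
```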
When all the internal mounts are completed, a unification of these internal directories is done using UnionFS as shown in Listing 2.
Listing 2. Unifying internal directories with UnionFS
mount -t unionfs -o dirs=<temp_dir1>[:<temp_dir2>...] none <mount-point>
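The branch order in the dirs= option matters: unionfs gives the leftmost branch the highest priority when file names collide across branches. The snippet below is a small sketch of assembling that option string from the temporary mount directories (the directory names are the ones used elsewhere in this article).

```shell
#!/bin/sh
# Sketch: assembling the unionfs dirs= option from the temporary mount
# directories. NFSv4 is listed first so its entries win on name
# collisions, since unionfs favors the leftmost branch.
branches=""
for d in /tmp/NFSv4 /tmp/NFSv3 /tmp/NFSv2; do
    branches="${branches:+${branches}:}${d}"
done
echo "mount -t unionfs -o dirs=${branches} none /mnt"
```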
Remember the previous scenario? We had one NFSv4 mount for the v4
pseudo-tree, two NFSv3 mounts for Tools and Docs, and one NFSv2 mount for
Binaries on the client machine, and we used the generic NFS mount command
(gennfsmount <NFS server> <mountpoint>) to reduce them to a single mount.
In this case, the following temporary directories are created: /tmp/NFSv2,
/tmp/NFSv3, and /tmp/NFSv4.
These are then combined using unionfs as shown in Listing 3:
Listing 3. Merging using unionfs
mount -t unionfs -o dirs=/tmp/NFSv4:/tmp/NFSv3:/tmp/NFSv2 none /mnt
This results in the following directory hierarchy:
Figure 3. Hierarchy resulting from unionfs
Next, let's look at how we can use the system.
Using the system
In this scenario, the NFS server exports different entries of different NFS versions. At the client side though, only one mount is performed: the generic NFS mount. Figure 4 shows exported entries at the server.
Figure 4. Exports at the server
Here the server exports five NFS entries with different NFS versions. nfs4_A and nfs4_B form an NFSv4 pseudo-tree (/nfs4_A and /nfs4_A/nfs4_B). The remaining three are NFS version 2 and 3 exports.
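On a Linux server, the exports behind such a setup might be declared as follows. Only nfs4_A and nfs4_B appear in the article; the other entries, hosts, and options here are assumptions for illustration, and note that on Linux the NFS protocol version is negotiated at mount time rather than set per export entry (some platforms, such as AIX, accept a per-export version option instead).

```
# /etc/exports (Linux-style, illustrative)
/nfs4_A         *(rw,sync,fsid=0)   # fsid=0 marks the NFSv4 pseudo-root
/nfs4_A/nfs4_B  *(rw,sync)          # appears under the pseudo-root
/Tools          *(rw,sync)          # hypothetical v2/v3 export
```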
Figure 5 shows files existing at the server.
Figure 5. Files existing at the server
In the current scenario, all these files would be available to the client under different mounted directories through individual mount operations. However, the proposed system enables the user to access all these files in a single hierarchy with a single mount operation.
Figure 6 shows the scenario as seen from the client after the generic NFS mount:
Figure 6. Output of generic NFS mount
As you can see, Figure 6 shows multiple internal mounts performed by the generic NFS mounter. The directory where all the NFS mounts are merged is /mnt.
We've shown you the architecture and the mechanism behind a generic NFS mounter, a utility that will undoubtedly help the NFS clients by providing easier, one-point access to the files on the NFS server and by offering a more consolidated view of the NFS space.
- "Unionfs: Bringing Filesystems Together" (Linux Journal, December 2004) gives details on how UnionFS works.
- "Remote computing with a Linux application server farm" (February 2007, developerWorks).
- "Anatomy of Linux flash file systems" (May 2008, developerWorks).
- "Assess system security using a Linux LiveCD" (July 2005, developerWorks).
- The SourceForge "Linux NFS HOWTO" describes best practices for configuring Linux NFS properly in production environments (including server and client configuration, as well as security and performance tuning).