Setting up Network File System (NFS) for IBM® Db2 Warehouse
The easiest way to set up a cluster file system for Db2® Warehouse is to use NFS. For example, you can use NFS with a cluster file system that you set up on network-attached storage (NAS).
Procedure
-
An NAS or NFS share must be exported to all Db2 Warehouse NFS clients using the following export options:
rw,sync,no_root_squash,no_all_squash
For example, if you plan to use the head node as the NFS server, issue the following commands on the head node. For the echo command, specify a clause that consists of a data node IP address followed by (rw,sync,no_root_squash,no_all_squash) for each data node. Separate each of those clauses with a space.
yum -y install nfs-utils
echo "/mnt/clusterfs data_node1_IP_address(rw,sync,no_root_squash,no_all_squash) data_node2_IP_address(rw,sync,no_root_squash,no_all_squash) [... data_nodeN_IP_address(rw,sync,no_root_squash,no_all_squash)]" >> /etc/exports
systemctl start rpcbind nfs-server
systemctl enable rpcbind nfs-server
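Typing the per-node clause by hand for every data node is error-prone. As an illustration only (the helper name and the IP addresses are hypothetical, not part of Db2 Warehouse), a small shell function can generate the /etc/exports entry from a list of data-node IP addresses:

```shell
#!/bin/sh
# build_exports_line: print an /etc/exports entry for the given share,
# appending (rw,sync,no_root_squash,no_all_squash) to each data-node IP.
build_exports_line() {
  share="$1"; shift
  line="$share"
  for ip in "$@"; do
    line="$line ${ip}(rw,sync,no_root_squash,no_all_squash)"
  done
  printf '%s\n' "$line"
}

# Example with two placeholder data-node addresses:
build_exports_line /mnt/clusterfs 10.0.0.2 10.0.0.3
```

Append the printed line to /etc/exports on the NFS server and run exportfs -ra to re-export the shares.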
-
On each data node, issue the following commands to install the NFS client driver:
yum -y install nfs-utils
systemctl start rpcbind
systemctl enable rpcbind
-
On each data node, add the following mount options to /etc/fstab:
rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,hard,intr,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,nolock 0 0
Note:
- If you are using an enterprise NAS storage controller, refer to the vendor documentation for the rsize and wsize settings that provide optimal performance.
- Unlike NFS Version 4, NFS Version 3 is a stateless protocol. This difference has implications when a scale-out NAS controller is used, because with NFS Version 3 a storage cluster fail-over that causes storage cluster IP changes is transparent and does not disrupt established NFS sessions. Therefore, when using the NFS protocol with Db2 Warehouse, you must use NFS Version 3. This is the reason for the vers=3 option.
- Because a Db2 Warehouse MPP deployment employs a shared-nothing architecture, NLM locking must be disabled to avoid the locking overhead and potential locking conflicts. This is the reason for the nolock option.
- An NFS soft mount might give up after the NFS timeout is exceeded and might not correctly return the error back to the caller. This can lead to data corruption in cluster applications, and is the reason for the hard option.
For example, you can use the following shell snippet to make the update (note the use of >> so that the entry is appended to /etc/fstab rather than replacing its existing contents):
cat <<'EOF' >> /etc/fstab
# The NFS/NAS clustered mount for Db2 Warehouse persistent data
hostname_OR_IP_address_of_NAS/NFS_server:NAS/NFS_share_exported /mnt/clusterfs/ nfs rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,hard,intr,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,nolock 0 0
EOF
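Because the fstab entry is a single long line, it is easy to mistype one of the mount options. As a sketch (the helper name, server name, and share path below are hypothetical placeholders), a shell function can assemble the entry from its three variable parts so that the fixed option string is written exactly once:

```shell
#!/bin/sh
# build_fstab_line: print an /etc/fstab entry for the Db2 Warehouse NFS mount,
# using the fixed mount options required by Db2 Warehouse (vers=3, hard, nolock, ...).
build_fstab_line() {
  server="$1"; share="$2"; mountpoint="$3"
  opts="rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,hard,intr,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,nolock"
  printf '%s:%s %s nfs %s 0 0\n' "$server" "$share" "$mountpoint" "$opts"
}

# Example with placeholder server and share names:
build_fstab_line nas01.example.com /export/clusterfs /mnt/clusterfs
```

After the entry is in /etc/fstab on a data node, create the mount point with mkdir -p /mnt/clusterfs and activate the mount with mount /mnt/clusterfs (or mount -a).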
-
Test your setup:
- On the head node, create a file by issuing the following command:
echo "test" > /mnt/clusterfs/testfile
- On a data node, verify that you can read the file. Issue the following command:
cat /mnt/clusterfs/testfile
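The two manual commands above can also be wrapped in a small check that compares the file's content with the expected value and prints a clear result. This is only a sketch (the function name is hypothetical); it assumes the head node has already written the file:

```shell
#!/bin/sh
# verify_shared_file: confirm that a file on the shared mount is readable
# and contains the expected content; print OK or FAIL accordingly.
verify_shared_file() {
  path="$1"; expected="$2"
  if [ "$(cat "$path" 2>/dev/null)" = "$expected" ]; then
    echo "OK: $path"
  else
    echo "FAIL: $path"
    return 1
  fi
}

# Example (run on a data node, after the head node created the file):
#   verify_shared_file /mnt/clusterfs/testfile "test"
```

Running the check on every data node confirms that all nodes see the same shared file system.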