astdenis

Pinned topic Inode Prefetch Tuning

2013-06-27T19:04:37Z

Hi,

We run a CNFS cluster (22 nodes, direct-attached, GPFS 3.5.0.10, metadata and data not sharing LUNs, metadata on 14 RAID1 DC5300 LUNs, 4 MB blocks everywhere). Our users work with directories holding >100000 files. Looking at iohist, I noticed that a single "find" run over such a directory from an NFS client is enough to trigger a large amount of inode prefetch, which in turn makes the metadata LUNs extremely busy (as shown by sar -d). The immediate effect is a general slowdown of all metadata operations (an "ls -l" takes much longer, for example), which impacts all our users. And that's with just one "find"; you can imagine the responsiveness we get when several such "find"s run concurrently.
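For reference, this is roughly how I'm observing it (a sketch; command spellings as on our GPFS 3.5 nodes, run on an affected NSD server):

```shell
# Recent GPFS I/O history: the inode prefetch shows up as a burst of
# small metadata reads against the metadata NSDs.
mmdiag --iohist

# Correlate with per-device utilization; during the prefetch storm the
# metadata LUNs sit near 100% busy (12 samples, 5 s apart).
sar -d 5 12
```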

We are using rather conservative values for maxFilesToCache (1000) and maxStatCache (30000), and I wonder whether it would be of any use to crank those up to values >100000, given that the file system holds 62 million files and many very large directories...
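In case it matters, this is how I'd plan to change them (a sketch; the numbers are placeholders, not values I'm recommending):

```shell
# Check the current settings first.
mmlsconfig maxFilesToCache maxStatCache

# Raise both caches cluster-wide; maxFilesToCache only takes effect
# after GPFS is restarted on each node, so this needs a rolling restart.
mmchconfig maxFilesToCache=100000,maxStatCache=300000
```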

The topic says "inode prefetch tuning". I looked for documentation about the following tunables:

  • inodePrefetchFirstDirblock
  • inodePrefetchThreshold
  • inodePrefetchWindow

but couldn't find any. Could tuning these help improve the responsiveness of the metadata LUNs?
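The only way I've found to even see their current values is to dump the live daemon config (mmfsadm is a service tool, so I treat this as look-but-don't-touch):

```shell
# Dump the running daemon's configuration and filter for the
# inode prefetch knobs mentioned above.
mmfsadm dump config | grep -i inodePrefetch
```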

Thanks.