  • 6 replies
  • Latest Post - 2012-10-09T13:07:51Z by HajoEhlers
Theeraph
110 Posts

Pinned topic: slow performance for ls -> inode locking?

2012-10-05T13:36:42Z
Hi,

GPFS 3.4.0.14 on Linux 3.0.13-0.27 (SUSE)

The BP tried to run the following:
pexec /gpfs/test/iotest.sh &
ls -al /gpfs

And they noticed that it took quite a long time for ls to respond (38 sec), even though /gpfs does not contain many files (4413 used inodes).

Notes:
1. pexec will run this same script on all 7 nodes concurrently.
2. This is the script "iotest.sh":
===
#!/bin/bash

thread=1
for i in `seq -w $thread`; do
  # write ~10G (39000 x 256 KiB blocks); dd prints its statistics on stderr,
  # so redirect stderr (2>>) to capture them in the log
  dd if=/dev/zero of=/gpfs/test/$HOSTNAME.$i bs=256k count=39000 2>> /root/test/iotest-w.log &
  # read back a pre-created ~10G file
  dd if=/gpfs/test/file10G.${HOSTNAME} of=/dev/null bs=256k 2>> /root/test/iotest-r.log &
done
wait
===

I tried commenting out the write test, and the ls -al response is then very fast (1 sec), so it looks like this must have something to do with inode locking (when GPFS creates new inodes).

Is there any way to reduce the response time for ls in this case?

Thank you very much,
Theeraphong
Updated on 2012-10-09T13:07:51Z by HajoEhlers
  • sxiao
    37 Posts

    Re: slow performance for ls -> inode locking?

    2012-10-08T01:33:22Z
    Have you played with the -E and -S settings on the filesystem? They affect the performance of stat() operations. You can change them via the mmchfs command.
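
    For reference, a minimal sketch of checking and changing those attributes; the filesystem device name "gpfsdev" here is a placeholder:

    ===
    # Show the current values of -E (exact mtime) and -S (suppress atime)
    mmlsfs gpfsdev -E -S

    # Relax them: approximate mtime, no atime updates (fewer inode token revokes on stat)
    mmchfs gpfsdev -E no -S yes
    ===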
  • Theeraph
    110 Posts

    Re: slow performance for ls -> inode locking?

    2012-10-08T08:29:13Z
    • sxiao
    • 2012-10-08T01:33:22Z
    Hi,

    1. The current settings are -E yes and -S no, so I changed to -E no; it is the same.

    I then changed to both -E no and -S yes, and it is also the same...

    2. Is this behaviour expected? Or should I open a PMR?

    Thank you very much,
    Theeraphong
  • YuanZhengcai
    9 Posts

    Re: slow performance for ls -> inode locking?

    2012-10-08T08:49:22Z
    "iotest" on all 7 nodes concurrently creating new files inside the same directory "/gpfs/test", and "ls -al /gpfs" on one node found very slow. I want to double check that you ran "ls -al /gpfs" instead of "ls -al /gpfs/test" on the node, is it right? If "ls -al" on the shared directory, there will be token-acquire, dirty data flush, token-relinquish RPC exchange. Otherwise would you please collect debug data during the slowness and open PMR, then we can check what's the root cause.
  • Theeraph
    110 Posts

    Re: slow performance for ls -> inode locking?

    2012-10-08T19:43:47Z
    "iotest" on all 7 nodes concurrently creating new files inside the same directory "/gpfs/test", and "ls -al /gpfs" on one node found very slow. I want to double check that you ran "ls -al /gpfs" instead of "ls -al /gpfs/test" on the node, is it right? If "ls -al" on the shared directory, there will be token-acquire, dirty data flush, token-relinquish RPC exchange. Otherwise would you please collect debug data during the slowness and open PMR, then we can check what's the root cause.
    Hi,

    Both ls -al /gpfs and ls -al /gpfs/test are slow...

    Besides gpfs.snap, what commands and options should we run (e.g. mmtrace, ...)?

    Thank you very much,
    Theeraphong
  • YuanZhengcai
    9 Posts

    Re: slow performance for ls -> inode locking?

    2012-10-09T10:10:16Z
    • Theeraph
    • 2012-10-08T19:43:47Z
    Please have a trace collected on the node where you ran "ls -al". Also make sure the partition holding /tmp has enough free space before enabling the trace; otherwise it may fill up with trace data and affect other processes working on the same partition.

    1. mkdir -p /tmp/mmfs ; mmtrace trace=def
    2. "time ls -al /gpfs"
    3. mmtrace stop
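
    Putting these steps together with the gpfs.snap mentioned earlier, the whole collection sequence might look like this (a sketch; run as root on the node that is slow):

    ===
    df -h /tmp             # confirm there is enough free space for the trace files
    mkdir -p /tmp/mmfs     # default working directory for trace output
    mmtrace trace=def      # start default-level GPFS tracing
    time ls -al /gpfs      # reproduce the slowness and record the elapsed time
    mmtrace stop           # stop tracing; trace files land under /tmp/mmfs
    gpfs.snap              # package cluster data to attach to the PMR
    ===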
  • HajoEhlers
    253 Posts

    Re: slow performance for ls -> inode locking?

    2012-10-09T13:07:51Z
    • Theeraph
    • 2012-10-08T19:43:47Z
    I assume that the information requested by an "ls -la" is stored within the metadata.

    If this is correct, I would say: running an I/O performance test against a slow disk subsystem could IMHO result in slow responses for metadata requests.

    So could you provide the following information as well (a sketch of commands to gather most of it follows the list):
    - What kind of disk subsystem is used? RAID level, type of disks, number of disks?
    - How "large" are the test target directories (ls -ld /gpfs/ /gpfs/test)?
    - Do you have dedicated metadata disks?
    - Have you checked the I/O queues on your test LUNs (assuming SAN storage is used)?
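
    For reference, a hedged sketch of commands that would answer most of these questions; the filesystem device name "gpfsdev" is a placeholder:

    ===
    mmlsnsd -M                 # NSD-to-device mapping across the cluster
    mmlsdisk gpfsdev           # per-disk layout: which disks hold metadata vs. data
    ls -ld /gpfs /gpfs/test    # directory sizes (number of entries drives ls cost)
    iostat -x 2                # run during the test: per-LUN queue depth and utilization
    ===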

    cheers
    Hajo