I have two different disk systems running GPFS, each of them backing its own big file system. Both use 4 NSD servers with 8 Gbps FC connections to each disk subsystem. Each LUN/NSD contains about 10 physical disks. The first disk system is a DDN SA2900 with RAID 6; the other is made up of two IBM DCS3700s with RAID 5.
The DDN servers' network interfaces are 10 Gbps, while the DCS3700 servers have a 4x1 Gbps bond (in both cases the network performs as expected in an iperf test).
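The iperf check was along these lines (hostnames are placeholders; note that with a 4x1 Gbps bond a single TCP stream typically uses only one slave interface, so parallel streams are needed to see the full 4 Gbps):

[root@nsdserver ~]# iperf -s
[root@client ~]# iperf -c nsdserver -P 4 -t 30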
Checking the fibre directly with a dd command, we get good values in both cases:
[root@xxxx~]# dd if=/dev/sdd of=/dev/null bs=512K count=50240
50240+0 records in
50240+0 records out
26340229120 bytes (26 GB) copied, 44.5139 s, 592 MB/s

The figures are very similar with bs=1024K, and both disk subsystems are in the 550-600 MB/s range.
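To check that several LUNs can be read at once without the raw rate collapsing, a parallel read along these lines can be used (the device names sdd-sdg are just examples for four of the LUNs):

[root@xxxx ~]# for d in sdd sde sdf sdg; do dd if=/dev/$d of=/dev/null bs=1M count=10240 & done; wait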
But when we look at the aggregated throughput of the servers under GPFS, the DDN shows about 500-700 MB/s, while the file system backed by the DCS3700 tops out at 200 MB/s running the same number of jobs.
The configuration is the same on all the servers; the only difference is the network (10 Gbps vs 4x1 Gbps bonding).
maxMBpS is set to 1600 on the servers (it was set to 3200 with the same result) and 200 on the client nodes.
I do not know whether, with this configuration, setting ignorePrefetchLUNCount might help increase the rate.
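If it is worth a try, the change would be something like this (node names are placeholders; as far as I know -i applies the value immediately, but correct me if this parameter needs a daemon restart):

[root@xxxx ~]# mmchconfig ignorePrefetchLUNCount=yes -i -N nsd1,nsd2,nsd3,nsd4
[root@xxxx ~]# mmlsconfig ignorePrefetchLUNCount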
Any ideas about this?