mmgetlocation
For IBM Storage Scale 4.2.3, mmgetlocation supports the -Y option.
Synopsis
mmgetlocation {[-f filename] | [-d directory]}
[-r {1|2|3|all}]
[-b] [-L] [-l] [-Y] [--lessDetails]
[-D [diskname,diskname,...]]
[-N [nodename,nodename,...]]
Parameters
- -f filename
- Specifies the file whose block location you want to query. The value must be an absolute file path. For a single file, the system displays the block or chunk information and the file block summary information.
- -d directory
- Specifies the directory whose block location you want to query. All files under <directory> are checked and summarized together. <directory> must be an absolute directory path. The system displays one block summary for each file and one directory summary with the block information. The -f and -d options are mutually exclusive. Note: The subdirectories under <directory> are not checked.
- -r {1|2|3|all}
- Specifies the replicas that you want to query for the block location. For example, 2 means replica 1 and replica 2. By default, the value is set to all.
- -b
- Determines whether the block location is reported as a file system block or as an FPO chunk (file system block size * blockGroupFactor). By default, the value is set to no.
- -L
- Displays the detailed information (NSD ID and NSD failure group) for each block or chunk. This option affects only the block information output for a single file. It is not applicable when the -d option is specified.
- -l
- Lists the NSDs and the total number of replicas on each NSD in the summary of the file or directory.
- -Y
- Displays the output, including headers, as colon-separated fields.
- -D {NSD[,NSD...]}
- Displays only the file block or chunk information and the summary for the specified NSDs.
- -N {Node[,Node...]}
- Displays only the file block or chunk information and the summary for the specified nodes.
- --lessDetails
- When the -d option is specified, does not display the file summary for each file in <directory>. When the -f option is specified, does not display the block details.
Notes
- Tested only on Linux®.
- Does not recursively process subdirectories when the -d option is specified.
- For FPO, if both -D and -N are specified, the -N option must specify only one node because no two NSDs in FPO belong to the same node.
- For mmgetlocation -Y, the system displays the output in the following formats (a parsing sketch follows these notes):
- a. mmgetlocation:fileSummary:filepath:blockSize:metadataBlockSize:dataReplica:metadataReplica:storagePoolName:allowWriteAffinity:writeAffinityDepth:blockGroupFactor: (-Y -L specified)
- b. mmgetlocation:fileDataInfor:chunkIndex:offset:NSDName:NSDServer:diskID:failureGroup:reserved:NSDName:NSDServer:diskID:failureGroup:reserved:NSDName:NSDServer:diskID:failureGroup:reserved:
If there are 2 or 3 replicas, the fields NSDName:NSDServer:diskID:failureGroup:reserved: are repeated for each replica. If the -L option is not specified, the diskID and failureGroup values are blank.
- c. mmgetlocation:fileDataSummary:path:replicaIndex:nsdServer:nsdName:blocks: (-l specified)
mmgetlocation:fileDataSummary:path:replicaIndex:nsdServer:blocks: (-l not specified)
If there is more than one NSD for a replica, each NSD is output as a separate line.
- d. mmgetlocation:dirDataSummary:path:replicaIndex:nsdServer:nsdName:blocks: (-l specified)
Note: If the value of nsdName in a line is all, the -l option was not given. So, for the -f option, the output consists of formats a, b, and c. For the -d option, the output consists of format c for each file, followed by format d.
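The colon-separated -Y records are intended to be parsed by scripts. The following is a minimal Python sketch (not a supported interface) that tallies the fileDataSummary records described above for one file; it assumes the sample script location shown in the Examples section, that -l is not specified, and the record layout listed in these notes.

# Minimal sketch: run mmgetlocation with -Y (without -l) and tally the
# per-replica block/chunk counts from the fileDataSummary records.
import subprocess
from collections import defaultdict

# Path taken from the Examples section of this topic.
MMGETLOCATION = "/usr/lpp/mmfs/samples/fpo/mmgetlocation"

def replica_summary(path):
    """Return {(replicaIndex, nsdServer): blocks} for one file."""
    out = subprocess.run([MMGETLOCATION, "-f", path, "-Y"],
                         capture_output=True, text=True, check=True).stdout
    totals = defaultdict(int)
    for line in out.splitlines():
        fields = line.split(":")
        # Format c: mmgetlocation:fileDataSummary:path:replicaIndex:nsdServer:blocks:
        if len(fields) >= 6 and fields[1] == "fileDataSummary" and fields[3].isdigit():
            totals[(int(fields[3]), fields[4])] += int(fields[5])
    return totals

if __name__ == "__main__":
    for (replica, server), blocks in sorted(replica_summary("/sncfs/file1G").items()):
        print(f"replica {replica}: {blocks} chunk(s) on {server}")

Based on the summary shown in the first example below, this sketch would report 8 chunks on c3m3n03 for replica 1, and 4 chunks each on c3m3n04 and c3m3n02 for replicas 2 and 3.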
Examples
- /usr/lpp/mmfs/samples/fpo/mmgetlocation -f /sncfs/file1G
[FILE /sncfs/file1G INFORMATION]
FS_DATA_BLOCKSIZE : 1048576 (bytes)
FS_META_DATA_BLOCKSIZE : (bytes)
FS_FILE_DATAREPLICA : 3
FS_FILE_METADATAREPLICA : 3
FS_FILE_STORAGEPOOLNAME : fpodata
FS_FILE_ALLOWWRITEAFFINITY : yes
FS_FILE_WRITEAFFINITYDEPTH : 1
FS_FILE_BLOCKGROUPFACTOR : 128
chunk(s)# 0 (offset 0) : [data_c3m3n03_sdd c3m3n03] [data_c3m3n02_sdc c3m3n02] [data_c3m3n04_sdc c3m3n04]
chunk(s)# 1 (offset 134217728) : [data_c3m3n03_sdd c3m3n03] [data_c3m3n04_sdc c3m3n04] [data_c3m3n02_sdc c3m3n02]
chunk(s)# 2 (offset 268435456) : [data_c3m3n03_sdd c3m3n03] [data_c3m3n02_sdc c3m3n02] [data_c3m3n04_sdc c3m3n04]
chunk(s)# 3 (offset 402653184) : [data_c3m3n03_sdd c3m3n03] [data_c3m3n04_sdc c3m3n04] [data_c3m3n02_sdc c3m3n02]
chunk(s)# 4 (offset 536870912) : [data_c3m3n03_sdd c3m3n03] [data_c3m3n02_sdc c3m3n02] [data_c3m3n04_sdc c3m3n04]
chunk(s)# 5 (offset 671088640) : [data_c3m3n03_sdd c3m3n03] [data_c3m3n04_sdc c3m3n04] [data_c3m3n02_sdc c3m3n02]
chunk(s)# 6 (offset 805306368) : [data_c3m3n03_sdd c3m3n03] [data_c3m3n02_sdc c3m3n02] [data_c3m3n04_sdc c3m3n04]
chunk(s)# 7 (offset 939524096) : [data_c3m3n03_sdd c3m3n03] [data_c3m3n04_sdc c3m3n04] [data_c3m3n02_sdc c3m3n02]
[FILE: /sncfs/file1G SUMMARY INFO]
replica1:
c3m3n03: 8 chunk(s)
replica2:
c3m3n04: 4 chunk(s)
c3m3n02: 4 chunk(s)
replica3:
c3m3n04: 4 chunk(s)
c3m3n02: 4 chunk(s)
From the summary at the end of the output, you can see that, for the file /sncfs/file1G,
8 chunks of the 1st replica are located on the node c3m3n03,
the 8 chunks of the 2nd replica are located on the nodes c3m3n04 and c3m3n02,
and the 8 chunks of the 3rd replica are located on the nodes c3m3n04 and c3m3n02.
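As a quick cross-check, the number of chunks per replica follows directly from the reported block size, the blockGroupFactor, and the file size. A minimal sketch of the arithmetic in Python, using the values from the output above (the 1 GiB file size is implied by the file name file1G and by the 8 reported chunks):

# Chunk-size and chunk-count arithmetic for the example above.
import math

block_size = 1048576              # FS_DATA_BLOCKSIZE, in bytes (1 MiB)
block_group_factor = 128          # FS_FILE_BLOCKGROUPFACTOR
file_size = 1024 * 1024 * 1024    # /sncfs/file1G, 1 GiB

chunk_size = block_size * block_group_factor          # 134217728 bytes (128 MiB), matches the chunk offsets
chunks_per_replica = math.ceil(file_size / chunk_size)  # 8, matches the summary

print(chunk_size, chunks_per_replica)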
- /usr/lpp/mmfs/samples/fpo/mmgetlocation -d /sncfs/t2 -L -Y
mmgetlocation:fileDataSummary:path:replicaIndex:nsdServer:blocks:
mmgetlocation:fileDataSummary:/sncfs/t2/_partition.lst:1:c3m3n04:1:
mmgetlocation:fileDataSummary:/sncfs/t2/_partition.lst:2::1:
mmgetlocation:fileDataSummary:/sncfs/t2/_partition.lst:3::1:
mmgetlocation:fileDataSummary:path:replicaIndex:nsdServer:blocks:
mmgetlocation:fileDataSummary:path:replicaIndex:nsdServer:blocks:
mmgetlocation:fileDataSummary:/sncfs/t2/part-r-00000:1:c3m3n04:2:
mmgetlocation:fileDataSummary:path:replicaIndex:nsdServer:blocks:
mmgetlocation:fileDataSummary:/sncfs/t2/part-r-00002:1:c3m3n04:2:
mmgetlocation:fileDataSummary:path:replicaIndex:nsdServer:blocks:
mmgetlocation:fileDataSummary:/sncfs/t2/part-r-00001:1:c3m3n02:2:
mmgetlocation:dirDataSummary:path:replicaIndex:nsdServer:blocks:
mmgetlocation:dirDataSummary:/sncfs/t2/:1:c3m3n04:5:
mmgetlocation:dirDataSummary:/sncfs/t2/:1:c3m3n02:2:
- /usr/lpp/mmfs/samples/fpo/mmgetlocation -f /sncfs/file1G -Y -L
mmgetlocation:fileSummary:filepath:blockSize:metadataBlockSize:dataReplica:metadataReplica:storagePoolName:allowWriteAffinity:writeAffinityDepth:blockGroupFactor:
mmgetlocation:fileSummary:/sncfs/file1G:1048576::3:3:fpodata:yes:1:128:
mmgetlocation:fileDataInfor:chunkIndex:offset:NSDName:NSDServer:diskID:failureGroup:reserved:NSDName:NSDServer:diskID:failureGroup:reserved:NSDName:NSDServer:diskID:failureGroup:reserved:
mmgetlocation:fileDataInfor:0:0):data_c3m3n03_sdd:c3m3n03:5:3,0,0::data_c3m3n02_sdc:c3m3n02:3:1,0,0::data_c3m3n04_sdc:c3m3n04:9:2,0,0::
mmgetlocation:fileDataInfor:1:134217728):data_c3m3n03_sdd:c3m3n03:5:3,0,0::data_c3m3n04_sdc:c3m3n04:9:2,0,0::data_c3m3n02_sdc:c3m3n02:3:1,0,0::
mmgetlocation:fileDataInfor:2:268435456):data_c3m3n03_sdd:c3m3n03:5:3,0,0::data_c3m3n02_sdc:c3m3n02:3:1,0,0::data_c3m3n04_sdc:c3m3n04:9:2,0,0::
mmgetlocation:fileDataInfor:3:402653184):data_c3m3n03_sdd:c3m3n03:5:3,0,0::data_c3m3n04_sdc:c3m3n04:9:2,0,0::data_c3m3n02_sdc:c3m3n02:3:1,0,0::
mmgetlocation:fileDataInfor:4:536870912):data_c3m3n03_sdd:c3m3n03:5:3,0,0::data_c3m3n02_sdc:c3m3n02:3:1,0,0::data_c3m3n04_sdc:c3m3n04:9:2,0,0::
mmgetlocation:fileDataInfor:5:671088640):data_c3m3n03_sdd:c3m3n03:5:3,0,0::data_c3m3n04_sdc:c3m3n04:9:2,0,0::data_c3m3n02_sdc:c3m3n02:3:1,0,0::
mmgetlocation:fileDataInfor:6:805306368):data_c3m3n03_sdd:c3m3n03:5:3,0,0::data_c3m3n02_sdc:c3m3n02:3:1,0,0::data_c3m3n04_sdc:c3m3n04:9:2,0,0::
mmgetlocation:fileDataInfor:7:939524096):data_c3m3n03_sdd:c3m3n03:5:3,0,0::data_c3m3n04_sdc:c3m3n04:9:2,0,0::data_c3m3n02_sdc:c3m3n02:3:1,0,0::
mmgetlocation:fileDataSummary:path:replicaIndex:nsdServer:blocks:
mmgetlocation:fileDataSummary:/sncfs/file1G:1:c3m3n03:8:
mmgetlocation:fileDataSummary:/sncfs/file1G:2:c3m3n04:4:
mmgetlocation:fileDataSummary:/sncfs/file1G:2:c3m3n02:4:
mmgetlocation:fileDataSummary:/sncfs/file1G:3:c3m3n04:4:
mmgetlocation:fileDataSummary:/sncfs/file1G:3:c3m3n02:4:
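The fileDataInfor records above repeat the NSDName:NSDServer:diskID:failureGroup:reserved: group once per replica, as described in the Notes section. The following is a minimal Python sketch of how those groups could be unpacked from a captured -Y -L output; the file name mmgetlocation_YL.out is a hypothetical capture, and the trailing parenthesis after the offset field is stripped only because it appears in the sample output above.

# Minimal sketch: unpack per-replica NSD placement from fileDataInfor records
# in captured "mmgetlocation -f <file> -Y -L" output (layout per the Notes).

def chunk_placements(lines):
    """Yield (chunk_index, offset, [(nsd, server, disk_id, failure_group), ...])."""
    for line in lines:
        fields = line.split(":")
        if len(fields) < 4 or fields[1] != "fileDataInfor" or not fields[2].isdigit():
            continue  # skip headers and other record types
        chunk = int(fields[2])
        offset = int(fields[3].rstrip(")"))  # the sample output shows a trailing ')'
        replicas = []
        rest = fields[4:]
        # Each replica contributes 5 fields: NSDName, NSDServer, diskID,
        # failureGroup, reserved.
        while len(rest) >= 5 and rest[0]:
            replicas.append((rest[0], rest[1], rest[2], rest[3]))
            rest = rest[5:]
        yield chunk, offset, replicas

if __name__ == "__main__":
    with open("mmgetlocation_YL.out") as fh:  # hypothetical capture of the -Y -L output
        for chunk, offset, replicas in chunk_placements(fh):
            servers = ", ".join(server for _, server, _, _ in replicas)
            print(f"chunk {chunk} (offset {offset}): {servers}")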
- For IBM Spectrum Scale versions earlier than 4.2.2.0, perform the following steps to get the block location of files.
1. cd /usr/lpp/mmfs/samples/fpo/
g++ -g -DGPFS_SNC_FILEMAP -o tsGetDataBlk -I/usr/lpp/mmfs/include/ tsGetDataBlk.C -L/usr/lpp/mmfs/lib/ -lgpfs
2. ./tsGetDataBlk <filename> -s 0 -f <data-pool-block-size * blockGroupFactor> -r 3
3. Check the output of the program tsGetDataBlk:
[root@gpfstest2 sncfs]# /usr/lpp/mmfs/samples/fpo/tsGetDataBlk /sncfs/test -r 3
File length: 1073741824, Block Size: 2097152
Parameters: startoffset:0, skipfactor: META_BLOCK, length: 1073741824, replicas 3
numReplicasReturned: 3, numBlksReturned: 4, META_BLOCK size: 268435456
Block 0 (offset 0) is located at disks: 2 4 6
Block 1 (offset 268435456) is located at disks: 2 4 6
Block 2 (offset 536870912) is located at disks: 2 4 6
Block 3 (offset 805306368) is located at disks: 2 4 6
4. In the above example, the block size of the data pool is 2 MB and the blockGroupFactor of the
data pool is 128, so the META_BLOCK (or chunk) size is 2 MB * 128 = 256 MB. Each output line represents one chunk.
For example, Block 0 above is located on the disks with disk IDs 2, 4, and 6 for the 3 replicas.
To find the nodes on which the three replicas of Block 0 are located, check the mapping between
disk IDs and nodes by using mmlsdisk (the 9th column is the disk ID of the NSD) and mmlsnsd:
[root@gpfstest2 sncfs]# mmlsdisk sncfs -L
disk driver sector failure holds holds avail- storage
name type size group metadata data status ability disk id pool remarks
------------ -------- ------ ----------- -------- ----- ------- --------- ------- --------- ---------
node1_sdb nsd 512 1 Yes No ready up 1 system desc
node1_sdc nsd 512 1,0,1 No Yes ready up 2 datapool
node2_sda nsd 512 1 Yes No ready up 3 system
node2_sdb nsd 512 2,0,1 No Yes ready up 4 datapool
node6_sdb nsd 512 2 Yes No ready up 5 system desc
node6_sdc nsd 512 3,0,1 No Yes ready up 6 datapool
node7_sdb nsd 512 2 Yes No ready up 7 system
node7_sdd nsd 512 4,0,2 No Yes ready up 8 datapool
node11_sdb nsd 512 3 Yes No ready up 9 system desc
node11_sdd nsd 512 1,1,1 No Yes ready up 10 datapool desc
node9_sdb nsd 512 3 Yes No ready up 11 system
node9_sdd nsd 512 2,1,1 No Yes ready up 12 datapool
node10_sdc nsd 512 4 Yes No ready up 13 system desc
node10_sdf nsd 512 3,1,1 No Yes ready up 14 datapool
node12_sda nsd 512 4 Yes No ready up 15 system
node12_sdb nsd 512 4,1,2 No Yes ready up 16 datapool
[root@gpfstest2 sncfs]# mmlsnsd
File system Disk name NSD servers
---------------------------------------------------------------------------
sncfs node1_sdb gpfstest1.cn.ibm.com
sncfs node1_sdc gpfstest1.cn.ibm.com
sncfs node2_sda gpfstest2.cn.ibm.com
sncfs node2_sdb gpfstest2.cn.ibm.com
sncfs node6_sdb gpfstest6.cn.ibm.com
sncfs node6_sdc gpfstest6.cn.ibm.com
sncfs node7_sdb gpfstest7.cn.ibm.com
sncfs node7_sdd gpfstest7.cn.ibm.com
sncfs node11_sdb gpfstest11.cn.ibm.com
sncfs node11_sdd gpfstest11.cn.ibm.com
sncfs node9_sdb gpfstest9.cn.ibm.com
sncfs node9_sdd gpfstest9.cn.ibm.com
sncfs node10_sdc gpfstest10.cn.ibm.com
sncfs node10_sdf gpfstest10.cn.ibm.com
sncfs node12_sda gpfstest12.cn.ibm.com
sncfs node12_sdb gpfstest12.cn.ibm.com
The three replicas of Block 0 are located in disk id 2 (NSD name node1_sdc, node name is
gpfstest1.cn.ibm.com), disk id 4 (NSD name node2_sdb, node name is gpfstest2.cn.ibm.com),
and disk id 6 (NSD name node6_sdc, node name is gpfstest6.cn.ibm.com). Check each block of
the file to see whether the blocks are located correctly. If any blocks are not located correctly,
fix the data locality.
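To automate this lookup, the disk ID to NSD name and node mapping can be built by parsing captured output of mmlsdisk <fs> -L and mmlsnsd. The following is a minimal Python sketch that assumes the column layout shown in the example output above (the disk ID in the 9th column of mmlsdisk -L) and that each capture contains only the command output, with no shell prompt; the file names mmlsdisk.out and mmlsnsd.out are hypothetical captures of the two commands, and this is not a supported parsing interface.

# Minimal sketch: map GPFS disk IDs to NSD names and NSD server nodes by
# parsing captured output of "mmlsdisk <fs> -L" and "mmlsnsd".

def parse_mmlsdisk(path):
    """Return {disk_id: nsd_name} from captured 'mmlsdisk <fs> -L' output."""
    disk_by_id = {}
    with open(path) as fh:
        for line in fh:
            fields = line.split()
            # Data rows: NSD name, driver type "nsd", ..., disk ID in column 9.
            if len(fields) >= 9 and fields[1] == "nsd" and fields[8].isdigit():
                disk_by_id[int(fields[8])] = fields[0]
    return disk_by_id

def parse_mmlsnsd(path):
    """Return {nsd_name: nsd_server} from captured 'mmlsnsd' output."""
    server_by_nsd = {}
    with open(path) as fh:
        for line in fh:
            fields = line.split()
            # Data rows: <file system> <NSD name> <NSD servers>.
            if len(fields) == 3 and not fields[0].startswith("-"):
                server_by_nsd[fields[1]] = fields[2]
    return server_by_nsd

if __name__ == "__main__":
    disk_by_id = parse_mmlsdisk("mmlsdisk.out")    # hypothetical capture
    server_by_nsd = parse_mmlsnsd("mmlsnsd.out")   # hypothetical capture

    # Disk IDs reported by tsGetDataBlk for Block 0 in the example above.
    for disk_id in (2, 4, 6):
        nsd = disk_by_id.get(disk_id, "?")
        print(f"disk id {disk_id}: NSD {nsd} on node {server_by_nsd.get(nsd, '?')}")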