We have many clusters, and they are usually very similar. But when I created the file systems for a new cluster I was building, I had an issue where one of the file systems would not mount. It was named mm and its mount point was /mm, but when you try to mount it, it gives the oddest error message (below) and won't mount.
Below I have run mmlsfs on the file system, shown that it is not mounted (df -h), and then tried to mount it.
My question is: what record stores the individual information for how a file system is built? I would like to look at it (or them) and see whether it has the incorrect mount point recorded somewhere.
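For reference, this is how I have been checking what GPFS itself thinks the mount point is, plus the place I understand the cluster configuration record to live (the path /var/mmfs/gen/mmsdrfs is my assumption from the documentation; please correct me if that is not the right file to look at):

```shell
# Ask GPFS which default mount point it has recorded for this file system
mmlsfs mm -T

# As I understand it, the cluster-wide configuration data is kept in the
# mmsdrfs file on the configuration servers; look for this file system's entries
grep ':mm:' /var/mmfs/gen/mmsdrfs
```

These commands obviously require a node in the GPFS cluster; the grep pattern just assumes the file system name appears as a colon-delimited field in that file.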
Here is the output (with the error message at the bottom):
[root@rso-isfile001a ~]# mmlsfs mm
flag value description
------------------- ------------------------ -----------------------------------
-f 8192 Minimum fragment size in bytes
-i 512 Inode size in bytes
-I 16384 Indirect block size in bytes
-m 1 Default number of metadata replicas
-M 2 Maximum number of metadata replicas
-r 1 Default number of data replicas
-R 2 Maximum number of data replicas
-j cluster Block allocation type
-D nfs4 File locking semantics in effect
-k all ACL semantics in effect
-n 128 Estimated number of nodes that will mount file system
-B 262144 Block size
-Q none Quotas enforced
none Default quotas enabled
--filesetdf No Fileset df enabled?
-V 13.01 (184.108.40.206) File system version
--create-time Wed May 15 16:40:40 2013 File system creation time
-u Yes Support for large LUNs?
-z No Is DMAPI enabled?
-L 4194304 Logfile size
-E Yes Exact mtime mount option
-S No Suppress atime mount option
-K whenpossible Strict replica allocation option
--fastea Yes Fast external attributes enabled?
--inode-limit 11591168 Maximum number of inodes
-P system;mm_pool Disk storage pools in file system
-d nsd7;nsd107 Disks in file system
--perfileset-quota no Per-fileset quota enforcement
-A yes Automatic mount option
-o none Additional mount options
-T /mm Default mount point
--mount-priority 0 Mount priority
[root@rso-isfile001a ~]# df -h
Filesystem Size Used Avail Use% Mounted on
251G 6.7G 232G 3% /
tmpfs 12G 8.0K 12G 1% /dev/shm
/dev/sda1 485M 33M 427M 8% /boot
/dev/asthin 512G 2.3M 512G 1% /asthin
/dev/cnfs 10G 2.3M 10G 1% /cnfs
/dev/dbashare 82G 11M 82G 1% /dbashare
/dev/em 21T 131G 21T 1% /em
/dev/emthin 21T 2.3M 21T 1% /emthin
/dev/isthin 26T 2.3M 26T 1% /isthin
/dev/mmthin 7.3T 2.3M 7.3T 1% /mmthin
/dev/nethome 300G 469M 300G 1% /nethome
/dev/pd 25T 2.3M 25T 1% /pd
/dev/pdthin 17T 2.3M 17T 1% /pdthin
/dev/pgthin 615G 2.3M 615G 1% /pgthin
/dev/vmexports 6.6T 211G 6.4T 4% /vmexports
[root@rso-isfile001a ~]# mmmount mm
Fri Jul 5 13:42:00 GMT 2013: mmmount: Mounting file systems ...
mount: mount point gpfs does not exist
mmmount: Command failed. Examine previous error messages to determine cause.
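In case it is relevant: the workaround I am considering (but have not yet tried) is to re-register the mount point with mmchfs, assuming its -T option sets the default mount point the way I read the man page:

```shell
# Make sure the mount point directory actually exists on this node
mkdir -p /mm

# Re-set the recorded default mount point for the file system
mmchfs mm -T /mm

# Then retry the mount
mmmount mm
```

I would rather understand where the bad "gpfs" mount point is coming from before blindly rewriting it, though, which is why I am asking where the record is kept.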