Topic
  • 4 replies
  • Latest Post - 2013-05-17T14:11:52Z by rpbala
rpbala
9 Posts

Pinned topic File system mount issue

2013-05-16T11:32:16Z

Hello,

I have set up a test server and tried to mount GPFS file systems on it, but I am getting an I/O error as shown below. These are the steps I performed:

1. Flashed the storage LUNs from PROD to the TEST server.

2. Installed the same version of GPFS on the TEST server as on PROD.

3. Created a cluster with a name different from the PROD cluster name and registered the license.

4. Imported the file systems using mmimportfs (with the mmexportfs output file from PROD).

5. After that I tried to start GPFS, but I get the following error for all the file systems, and none of them can be mounted:

Thu May 16 14:44:20.892 2013: Node 127.0.0.1 (dev01) appointed as manager for dev1.
Thu May 16 14:44:21.064 2013: Global NSD disk, nsddev1, not found.
Thu May 16 14:44:21.065 2013: Failed to read a file system descriptor.
Thu May 16 14:44:21.066 2013: File system manager takeover failed.
Thu May 16 14:44:21.065 2013: Input/output error
Thu May 16 14:44:21.066 2013: File System dev1 unmounted by the system with return code 212 reason code 5
Thu May 16 14:44:21.065 2013: The current file system manager failed and no new manager will be appointed.
Thu May 16 14:44:21.066 2013: Failed to open dev1.
Thu May 16 14:44:21.067 2013: Input/output error
Thu May 16 14:44:21.066 2013: Command: err 5: mount dev1
Thu May 16 14:44:21.067 2013: Input/output error
mount: /dev/dev1: can't read superblock
Thu May 16 14:44:21 UDT 2013: finished mounting /dev/dev1
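
For reference, the steps above correspond roughly to the command sequence below (a sketch; the node, cluster, and output file names are hypothetical, and the LUN flash step is storage-vendor specific):

```shell
# On PROD: export the file system definitions to a file.
# Note: mmexportfs removes the exported definitions from the source
# cluster's configuration (see its man page).
mmexportfs all -o export.out

# Flash / point-in-time copy the storage LUNs from PROD to TEST
# (vendor-specific step, outside of GPFS).

# On TEST: create the new cluster, accept the license, and import
# the definitions produced by mmexportfs.
mmcrcluster -N test01:quorum-manager -p test01 -C testcluster
mmchlicense server --accept -N test01
mmimportfs all -i export.out

# Start GPFS and attempt the mounts.
mmstartup -a
mmmount all -a
```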

 

Did anything go wrong here? Please advise whether the above steps are wrong, and help me resolve the issue.

Thanks in advance.


  • gcorneau
    160 Posts

    Re: File system mount issue

    2013-05-16T13:42:46Z

    > 1. Flash storage luns from PROD to TEST server.

    Did you run "mmfsctl suspend" before initiating the Flash operation?  This is highly recommended as it flushes all in-memory buffers/etc to disk to put the file system into a consistent on-disk state.  Use "mmfsctl resume" after the copy to resume I/O operations.

    Does the file system definition show up properly on the TEST cluster?

    # mmlsfs

    Is the GPFS NSD found by the operating system?

    # mmdevdiscover

    If this is AIX, I'd also check the "lspv" output.  Do you see "nsddev1" in the VG column?

    Is the Flashed disk available in read/write mode to the TEST node?

    If you have a support contract, then if none of this helps, give them a call.
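
    The checks above can be gathered into one sequence (a sketch; "dev1" is taken from the log earlier in the thread, and the suspend/resume step applies on PROD before and after the flash):

    ```shell
    # On PROD, before initiating the flash: flush buffers and quiesce I/O,
    # then resume after the copy completes.
    mmfsctl dev1 suspend
    # ... perform the flash / point-in-time copy here ...
    mmfsctl dev1 resume

    # On TEST: does the imported file system definition look right?
    mmlsfs all

    # Can GPFS discover the NSDs through the operating system?
    mmdevdiscover

    # On AIX: do the hdisks show the expected NSD/VG association?
    lspv
    ```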

  • rpbala
    9 Posts

    Re: File system mount issue

    2013-05-17T08:15:14Z
    • gcorneau
    • 2013-05-16T13:42:46Z


    Hello,

    Thanks for your support.

    I have resolved the issue by taking the "mmexportfs all -o export.out" output from another server in the same PROD cluster and using that output file to import ("mmimportfs all -i export.out") on the test server. Now when I start the GPFS cluster, it is able to mount all the file systems. There may have been some issue when I issued the mmexportfs command earlier.

    But I just want to clarify: after running mmexportfs on that server, I noticed that the entries related to the file systems were removed from /var/mmfs/gen/mmsdrfs. Is that normal, and will it impact the PROD server? Please advise.

    Thanks.

  • gcorneau
    160 Posts

    Re: File system mount issue

    2013-05-17T14:02:27Z
    • rpbala
    • 2013-05-17T08:15:14Z


    The behavior you're seeing from "mmexportfs" is correct (take a peek at the man page), it will remove the file system definition(s) from the original source cluster.

    What you need to use is the "mmfsctl syncFSconfig" command which will extract (but not remove) the file system definition from the source cluster and import it into the destination cluster.  This is discussed in the GPFS Advanced Administration Guide in the section on setting up Disaster Recovery configurations:

    http://publib.boulder.ibm.com/infocenter/clresctr/vxrx/topic/com.ibm.cluster.gpfs.v3r5.0.7.gpfs200.doc/bl1adv_flashxmp.htm

    Note that the IBM term (FlashCopy) can also apply to other vendors' point-in-time copy methods for storage LUNs.

    You absolutely need to get the file system definitions back into the Production cluster.
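
    A sketch of that non-destructive flow (the remote.nodes file name is hypothetical; it would list contact nodes of the destination cluster, one per line -- see the mmfsctl man page for the exact syntax):

    ```shell
    # On the PROD cluster: push the dev1 file system definition to the
    # recovery (TEST) cluster without removing it from PROD.
    mmfsctl dev1 syncFSconfig -n remote.nodes
    ```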

  • rpbala
    9 Posts
    ACCEPTED ANSWER

    Re: File system mount issue

    2013-05-17T14:11:52Z
    • gcorneau
    • 2013-05-17T14:02:27Z


    Thank you so much for your help.