Topic
  • 14 replies
  • Latest Post - ‏2014-12-09T19:28:43Z by oester
Rajbatra
Rajbatra
21 Posts

Pinned topic GPFS Disk Descriptor

‏2010-02-24T18:15:07Z |
Hi,
We are using GPFS 3.2.1 in our environment. Please let us know the following:

1. What is a disk descriptor?

2. If an NSD's disk descriptor is corrupted, how can we recover it?

3. What should we analyze to check whether a disk descriptor has been corrupted?

4. What are the symptoms of disk descriptor corruption?

5. Which logs should we check to identify the exact area of a disk descriptor problem?
Regards
RajBatra
Updated on 2011-12-03T14:59:04Z by gpfs@us.ibm.com
  • dlmcnabb
    dlmcnabb
    1012 Posts

    Re: GPFS Disk Descriptor

    ‏2010-02-24T20:00:58Z  
    The first few sectors of a disk that has been given to GPFS hold descriptors of the IDs the disk can have.

    Sector 2 has the NSD id in it which GPFS can match with a GPFS disk name in the /var/mmfs/gen/mmsdrfs file. This is written when mmcrnsd is run.

    Sector 1 has a "FS unique id" which is assigned when the NSD disk becomes part of a filesystem. This id is matched in the Filesystem Descriptor (FSDesc) to a GPFS disk name. This is written on mmcrfs, mmadddisk, or mmrpldisk.

    Sectors 8+ have a copy of the FSDesc, but it may not be the most current copy. This is written on mmcrfs, mmadddisk, or mmrpldisk. A small subset (1, 3, 5, or 6) of the disks in the filesystem have the most current version of the FSDesc. These are called the "quorum" or "desc" disks, and can be seen in mmlsdisk -L output.

    When GPFS starts up or is told that there are disk changes, it scans all the disks it has locally attached to see which ones have which NSD ids. (There is a hint file from the last search in /var/mmfs/gen/nsdmap). If it does not see an NSD id on a disk it assumes it is not a GPFS disk. A mount request will check again that the physical disk it sees has the correct NSD id and also if it has the correct "FS unique id" from the most recent FSDesc.

    You can see the descriptors on a physical disk (GPFS AIX 3.2.1.9 or later, Linux 3.2.1.16/3.3.0.2 or later) using:
    
    mmfsadm test readdescraw /dev/$devname
    
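    On levels that lack readdescraw, the same sectors can be inspected with standard tools. This is a minimal sketch, assuming 512-byte sectors; /dev/sdX is a placeholder for the NSD's local device name:

    ```shell
    # Dump sector 1 (FS unique id) and sector 2 (NSD id) as hex.
    # /dev/sdX is a placeholder -- substitute the real device name.
    dev=/dev/sdX
    dd if="$dev" bs=512 skip=1 count=2 2>/dev/null | od -A d -t x1 | head -20
    ```

    If these sectors are all zeros, or show a foreign signature such as another filesystem's superblock, that is consistent with the clobbering scenarios listed below.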


    There have been many ways in which customers have clobbered these sectors.
    1) Creating an EXT filesystem on top of a GPFS disk.
    2) Linux install on a new machine that can see the disks with options that allow it to "take" the disk as a swap space, or disk to install system images on.
    3) Create an AIX Logical Volume over a raw disk.
    4) Assign the disk to an Oracle raw volume on which a database is created.
    5) Run mmcrnsd specifying -v no so that GPFS skips the check which would tell you the disk is already in use. This puts a new NSD id on the disk and the old identity is lost.
    6) Run mmcrfs, mmadddisk, mmrpldisk specifying -v no so that GPFS skips the check which would tell you the disk is already in use. This clobbers the FS unique id and the FSDesc on the disk and may scatter other user data or system metadata around the disk.

    GPFS will not notice that the descriptor sectors have been clobbered until the next time it needs to mount the filesystem.

    The only one of these that is recoverable is 5, which only clobbered the NSD id (assuming the disk was not then used in a filesystem). Only attempt this recovery under the supervision of a GPFS expert.

    If you are very lucky (and are extremely careful) and still have a good nsdmap file or a backed-up version of the mmsdrfs file, you can either force a rewrite of the old NSD id back to sector 2, or change the mmsdrfs file manually to use the new NSD id for the existing GPFS disk name. The mmsdrfs file may have a "(free)" line for the mmcrnsd command that was issued; that needs to be cleaned up as well.
    Lesson: Never use "-v no" on mmcrnsd, mmcrfs, mmadddisk, or mmrpldisk. You will then get an appropriate warning about the disk usage. This protects against specifying the wrong physical disk because they have different names on different nodes, typos, and various other forgetful things that humans do.

    The only useful purpose of "-v no" is for situations in which you were able to run mmdeldisk -p or mmdelfs -p successfully, but because some disks were unavailable, GPFS couldn't clobber the old descriptor sectors saying they were no longer a part of some filesystem. But use it extremely carefully in this situation too.
  • Rajbatra
    Rajbatra
    21 Posts

    Re: GPFS Disk Descriptor

    ‏2010-02-26T10:01:18Z  
    Hi dlmcnabb,

    Thanks for sharing this good information. We recently faced this problem on one NSD in one of our mount points.

    We got the following errors in the GPFS and server logs. We checked the storage and the server logs; both are OK, and we did not find any error related to the storage disks.
    We don't know why this problem occurred when the storage disks are OK, and we have not found a root cause. Can you share your views if you have faced a problem like this, so that we can avoid this situation in the future?

    GPFS logs
    =======
    Thu Feb 11 15:17:40.066 2010: Command: mount wgdata1
    Thu Feb 11 15:17:40.075 2010: Volume label of disk gpfs668nsd is corrupt.
    Thu Feb 11 15:17:40.088 2010: Disk failure. Volume wgdata1. rc = 48. Physical volume gpfs668nsd.
    Thu Feb 11 15:17:40.090 2010: File System wgdata1 unmounted by the system with return code 5 reason code 0
    Thu Feb 11 15:17:40.091 2010: Input/output error
    Thu Feb 11 15:17:40.090 2010: Failed to open wgdata1.
    Thu Feb 11 15:17:40.091 2010: Input/output error
    Thu Feb 11 15:17:40.090 2010: Command: err 666: mount wgdata1
    Thu Feb 11 15:17:40.091 2010: Input/output error

    OS Logs

    ===========

    Feb 11 14:42:46 dd008 mmfs: Error=MMFS_DISKFAIL, ID=0x9C6C05FA, Tag=7588300: Disk failure. Volume wgdata1. rc = 48. Physical volume gpfs668nsd
    Feb 11 14:42:46 dd008 mmfs: Error=MMFS_SYSTEM_UNMOUNT, ID=0xC954F85D, Tag=7588301: Unrecoverable file system operation error. Status code 5. Volume wgdata1
    ===================
    Regards
    RajBatra
  • dlmcnabb
    dlmcnabb
    1012 Posts

    Re: GPFS Disk Descriptor

    ‏2010-02-26T15:50:23Z  
    • Rajbatra
    • ‏2010-02-26T10:01:18Z
    Those errors indicate that while the NSD id said this was a certain GPFS disk, the FS unique id or the FS Descriptor on that disk were invalid (clobbered), or did not match the GPFS disk or filesystem name.

    It is hard to tell which was corrupted, but they are inconsistent and therefore GPFS refuses to use that disk in the filesystem.

    Show me the "mmfsadm test readdescraw /dev/$devicename" output for that disk. If this is on Linux, then make sure you have 3.2.1.16 or later so that it reads the raw disk sectors. Earlier versions of GPFS may have shown Linux buffered sectors which may no longer match what is physically on the disk.

    Or attach the raw output from:

    Linux: dd if=/dev/$devicename count=1000 iflag=direct > raw.$devicename
    AIX:   dd if=/dev/r$devicename count=1000 > raw.$devicename

    I can run readdescraw from the raw file.
  • Rajbatra
    Rajbatra
    21 Posts

    Re: GPFS Disk Descriptor

    ‏2010-03-02T13:36:35Z  
    Hi dlmcnabb,

    Thanks for your reply. I am attaching the sde_1.readdescraw file with the readdescraw output for disk sde.

    The GPFS version here is 3.2.1-14.

    We ran the following command to see which LUNs carry this NSD id:

    root@csmmg02 ~# dsh -a tspreparedisk -S | grep C0A8026D47441615
    dd016.geopic.com: bash: tspreparedisk: command not found
    dd003.geopic.com: C0A8026D47441615 /dev/sde generic
    dd017.geopic.com: bash: tspreparedisk: command not found
    dd010.geopic.com: C0A8026D47441615 /dev/sde generic
    dd007.geopic.com: C0A8026D47441615 /dev/sde generic
    dd014.geopic.com: C0A8026D47441615 /dev/sde generic
    dd012.geopic.com: C0A8026D47441615 /dev/sde generic
    dd005.geopic.com: C0A8026D47441615 /dev/sde generic
    dd008.geopic.com: C0A8026D47441615 /dev/sde generic
    ddns001.geopic.com: C0A8026D47441615 /dev/sdh generic
    ddns003.geopic.com: C0A8026D47441615 /dev/sdh generic
    ddns002.geopic.com: C0A8026D47441615 /dev/sdz generic
    ddns005.geopic.com: C0A8026D47441615 /dev/sdh generic
    ddns007.geopic.com: C0A8026D47441615 /dev/sdc generic
    ddns006.geopic.com: C0A8026D47441615 /dev/sdh generic
    ddns004.geopic.com: C0A8026D47441615 /dev/sdh generic
    ddns008.geopic.com: C0A8026D47441615 /dev/sdi generic
    ddjs001.geopic.com: C0A8026D47441615 /dev/sdj generic
    dd011.geopic.com: C0A8026D47441615 /dev/sde generic

    Regards
    RajBatra
  • dlmcnabb
    dlmcnabb
    1012 Posts

    Re: GPFS Disk Descriptor

    ‏2010-03-02T23:06:30Z  
    • Rajbatra
    • ‏2010-03-02T13:36:35Z
    So it looks like something stomped on sector 1 which has the FileSystem's identity id in it. With a little hex editing we can construct a new sector 1 and dd it back into place.

    To make sure you have the right disk, please verify that in /var/mmfs/gen/mmsdrfs, the gpfs668nsd line has NSDid C0A8026D47441615.

    Send me the raw dd of the first 64 sectors of the bad disk and one of the good disks in that filesystem (gpfs669nsd for example). I can use the good one to get an example of sector 1, and then modify it for you to put on the bad disk.

    What you have to worry about is what stomped on this disk, and if it stomped on that sector, what else on the disk was overwritten. Sectors 2 and 8-30 seem to be OK, but I cannot vouch for anything else.

    When you get this disk back up, I would recommend putting it in "suspend" state:
    
    mmchdisk wgdata1 suspend -d gpfs668nsd
    

    so that nothing new is allocated on it. Any data there is still readable, and updates in place will continue OK.

    Since the disk holds metadata on it, you should run offline mmfsck to see if any of it was damaged:
    
    mmfsck wgdata1 -v -n > mmfsck.out 2>&1
    

    Then we can check the output for the extent of any damage. If it looks OK to fix, then run mmfsck with -y -v to fix it.

    Then you can see what files have data on that disk using:
    
    mmfileid wgdata1 -d :gpfs668nsd > suspect.files
    

    Owners of those files should look at these files to make sure something has not overwritten parts of them. Or you can retrieve backup copies of these files and compare them.
    When you are happy with the data use mmchdisk to "resume" the disk.
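    The recovery sequence above can be sketched as one small shell wrapper (an illustration only: the mm* commands are the GPFS administration commands quoted above, while the function name and parameterization are mine):

    ```shell
    # Sketch of the post-repair checks: suspend the disk, run an offline
    # fsck in report-only mode, then list files with data on the disk.
    recover_checks() {
        fs=$1       # filesystem name, e.g. wgdata1
        disk=$2     # GPFS disk name, e.g. gpfs668nsd
        mmchdisk "$fs" suspend -d "$disk"           # stop new allocations
        mmfsck "$fs" -v -n > mmfsck.out 2>&1        # offline check, no repairs
        mmfileid "$fs" -d ":$disk" > suspect.files  # files touching this disk
        # After reviewing mmfsck.out (re-running mmfsck with -y -v if a fix
        # looks safe) and verifying the suspect files, resume the disk:
        # mmchdisk "$fs" resume -d "$disk"
    }
    ```

    Keeping the disk suspended until the mmfsck output and the suspect files have been reviewed matches the advice above: the data stays readable, but nothing new is allocated there.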
  • Rajbatra
    Rajbatra
    21 Posts

    Re: GPFS Disk Descriptor

    ‏2010-03-03T10:03:07Z  
    Hi dlmcnabb,

    You found an error in the filesystem descriptor:
    it looks like something stomped on sector 1, which has the FileSystem's identity id in it.

    One thing I'd like to know: wgdata1 had been working fine for the last year, and then suddenly the gpfs668nsd filesystem identity was corrupted while gpfs668nsd itself shows OK. What is the reason for the NSD disk descriptor corruption? Can you tell us?

    gpfs668nsd: uid C0A8026B:47443B47, status InUse, availability OK,

    I will share with you good and bad nsd details.
    Regards
    RajBatra
  • Rajbatra
    Rajbatra
    21 Posts

    Re: GPFS Disk Descriptor

    ‏2010-03-05T13:18:29Z  
    Hi dlmcnabb,

    Please find attached the good-disk and bad-disk descriptors and the output of mmfsck -v -n.

    Regards
    RajBatra
  • dlmcnabb
    dlmcnabb
    1012 Posts

    Re: GPFS Disk Descriptor

    ‏2010-03-09T00:05:04Z  
    • Rajbatra
    • ‏2010-03-05T13:18:29Z
    Sorry, I need the raw disk sectors, not the output of readdescraw:

    dd if=/dev/$devname count=64 > raw.$devname

    from the bad disk and one of the good filesystem disks. If your Linux is fairly current, use the "iflag=direct" option on dd.
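    A quick sanity check on the dump before attaching it: with dd's default 512-byte block size, 64 sectors should produce exactly 32768 bytes (a sketch; sdX is a placeholder device name):

    ```shell
    dev=sdX   # placeholder -- substitute the bad (or good) disk's device name
    dd if=/dev/$dev count=64 > raw.$dev 2>/dev/null
    wc -c < raw.$dev   # should report 32768 (64 sectors x 512 bytes) on a real disk
    ```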
  • SystemAdmin
    SystemAdmin
    2092 Posts

    Re: GPFS Disk Descriptor

    ‏2011-12-02T19:51:07Z  
    • dlmcnabb
    • ‏2010-02-24T20:00:58Z
    dlmcnabb, I realize this post is nearly two years old, I hope you don't mind the question:

    We consistently run into a scenario where disks are removed from the filesystem by our automated process, which removes disks that are marked down and cannot be started (for whatever reason). The disks are actually good, as the logs do not indicate any hardware issues; this is mainly the fault of some past assumptions we are quickly correcting. The problem is that sometimes, even after a successful mmdeldisk, we cannot re-add the disks to the filesystem, usually getting an "Invalid argument" error, which we are fairly sure is due to the disks having already been in the filesystem. From your post below, using the -v no option on mmadddisk would not be advisable, or at best is use-at-your-own-risk. The problem we have is that the disks are still good, and we want to add them back. Our disks are partitioned and used in two different GPFS filesystems, and in the other filesystem they are still listed as up/ready. It would seem our only recourse is to use the -v no option.

    Thanks for posting this; I only wish I had seen it earlier. It really helped make sense of what we were seeing: sometimes we could add the disks back without issue, and other times it would fail and we would have to use -v no.

    Thanks for your time,

    Dave
  • dlmcnabb
    dlmcnabb
    1012 Posts

    Re: GPFS Disk Descriptor

    ‏2011-12-02T21:55:02Z  
    
    mmfsadm test readdescraw /dev/$devname > desc.$devname
    


    This will show you if there is an FS descriptor on the disk and what filesystem it belongs to. Only use -v no if you are absolutely sure that disk is really not used in a filesystem anymore.

    The most common error is to use the wrong device name on mmcrnsd because they are named differently on different nodes. On mmcrnsd, you must use the device name that belongs to the first NSD server in the list, not the local node's device name.

    The mmadddisk command will only reject the disk if it has an FS desc on it.
  • SystemAdmin
    SystemAdmin
    2092 Posts

    Re: GPFS Disk Descriptor

    ‏2011-12-03T01:44:49Z  
    • dlmcnabb
    • ‏2011-12-02T21:55:02Z
    That's good news, since we are confident mmdeldisk had been run on the disk and the disk is no longer used by the filesystem. Typically, when this occurs, the NSD has been removed as well, so we would just create a file for mmcrnsd, then run mmadddisk against the edited file with -v no (but only if -v yes fails). Occasionally the process gets halted before it removes the NSD (we are still looking into why), so we either delete the NSDs and start from scratch, or we just build a disk file and run mmadddisk against it.

    I did have one more quick question. After I re-read your post, I noticed you used mmdeldisk -p, but I cannot seem to find any reference to the -p option (man page, admin guide, usage, etc.). I saw a few references on the web, but most of them point back to this thread <g>. We are using 3.2.1-19 on Linux; I wasn't sure if that is an AIX-specific option or what.

    Thanks again,

    Dave
  • gpfs@us.ibm.com
    gpfs@us.ibm.com
    98 Posts

    Re: GPFS Disk Descriptor

    ‏2011-12-03T14:59:04Z  
    The -p option is for recovery from damaged disks. This is something service may direct you to do in appropriate circumstances. You can find a reference to it in the Problem Determination Guide under "Disk media failure" (note the severe warnings associated with it there). It allows for the deletion of the disk WITHOUT MIGRATION OF THE DATA in cases where there is too much damage. You should not use it unless directed to do so by service.
  • chr78
    chr78
    132 Posts

    Re: GPFS Disk Descriptor

    ‏2014-12-09T19:26:20Z  
    • dlmcnabb
    • ‏2010-03-09T00:05:04Z

    Dan, would you mind helping to restore a clobbered NSD descriptor?

    (a berserk controller started to initialize...)

  • oester
    oester
    117 Posts

    Re: GPFS Disk Descriptor

    ‏2014-12-09T19:28:43Z  
    • chr78
    • ‏2014-12-09T19:26:20Z


    Dan has retired from IBM, but if you open up a PMR they can help you through it (speaking from personal experience).

    Sorry to see Dan gone too - he was a great resource for the GPFS community.

    Bob