Pinned topic: Harddisk1 => Defined
3 replies | Latest post 2012-11-08T11:47:34Z by SystemAdmin

SystemAdmin | 2012-11-07T18:02:03Z
I have an IBM 9110-51A pSeries server running AIX 5.3.
It has two SCSI hard disks, hdisk0 and hdisk1 (plus four more on a disk array).
hdisk1 is in the "Defined" state.
When hdisk1 went into the "Defined" state, a new, unknown disk called hdisk6 appeared as "Available".
On the hdisk1 drive bay at the front of the server, the left LED is green and the right one is off.
When I run

[code]lsvg -p rootvg[/code]

I get "hdisk1 removed".
I can't get any response from hdisk1 in diag either, because it shows hdisk1 as missing (M).
I have an HMC where I can get more information.
Can someone help me find and fix the problem so that hdisk1 becomes Available again?
What do I have to do?
I have a new IBM SCSI hard disk if one is needed.

Thank you in advance.
Updated on 2012-11-08T11:47:34Z by SystemAdmin
  • unixgrl
    Re: Harddisk1 => Defined
    2012-11-07T18:26:49Z, in response to SystemAdmin
    It sounds like hdisk1 failed and was replaced by hdisk6 without removing the ODM definition for hdisk1 first.

    If this is what happened: rmdev hdisk6, then rmdev hdisk1. After that, run cfgmgr and the replacement disk should show up as hdisk1; hdisk6 will be gone.

    You may have to force hdisk1 out of rootvg before you can rmdev it, and you'd have to put the new hdisk1 back into rootvg after the cfgmgr.

    Whenever you have a disk failure, you need to remove the disk from whatever volume group it is in, rmdev it from the system, and then replace it.
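
    For a mirrored rootvg the whole sequence would look roughly like the sketch below. It is only an outline using the disk names from your post (unmirrorvg/reducevg is one way to "force" the disk out), so adjust it to what lspv and lsvg actually show on your system:

    [code]
    # unmirrorvg rootvg hdisk1           # drop the mirror copies that live on the failed disk
    # reducevg rootvg hdisk1             # take hdisk1 out of rootvg (may need the PVID or -d -f if it no longer answers)
    # rmdev -dl hdisk6                   # remove both stale ODM definitions
    # rmdev -dl hdisk1
    #   ... physically swap in the replacement disk ...
    # cfgmgr                             # the new disk should come back as hdisk1, Available
    # extendvg rootvg hdisk1             # add it back into rootvg
    # mirrorvg rootvg hdisk1             # remirror (add -S to run the sync in the background)
    # bosboot -ad /dev/hdisk1            # rebuild the boot image on the new disk
    # bootlist -m normal hdisk0 hdisk1   # keep both disks in the normal boot list
    [/code]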
    • SystemAdmin
      Re: Harddisk1 => Defined
      2012-11-07T19:37:06Z, in response to unixgrl
      cfgmgr won't run on the server; it returns an error.
      When I saw the green LED on the front of hdisk1, I pulled the disk out and then put it back into the server. Do you think pulling it out and reinserting it is what caused hdisk6 to appear? (hdisk1 and hdisk6 have the same location code.)
      The HMC shows an error for that hard disk.
      Can you write out the steps I have to follow? (Check whether hdisk1 is mirrored, and recreate the same setup if I have to replace hdisk1 with a new disk, so the server ends up in the same state it was in before the failure. My guess at the mirroring check is in the P.S. below.)
      I can give you the output of any command run on the server.

      Thank you
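
      P.S. For the mirroring check, I assume something like the commands below would show it (lsvg/lslv are the standard commands; please correct me if a different check is needed):

      [code]
      # lsvg -l rootvg    # an LV with PPs = 2 x LPs has two copies, i.e. it is mirrored
      # lslv -m hd5       # shows which physical volume holds each copy of the boot LV
      # lsvg rootvg       # STALE PVs / STALE PPs counters for the whole volume group
      [/code]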
  • SystemAdmin
    Re: Harddisk1 => Defined
    2012-11-08T11:47:34Z, in response to SystemAdmin
    I used the command [code]rmdev -dl hdisk6[/code] to remove hdisk6.
    Any idea what to do next, please?
    These are the results of some commands, from the one of my two servers that has the problem:

    [code]# lsdev -Ccdisk[/code]
    hdisk0 Available 06-08-01-5,0 16 Bit LVD SCSI Disk Drive
    hdisk1 Defined 06-08-01-8,0 16 Bit LVD SCSI Disk Drive
    hdisk2 Available 01-09-02 1814 DS4700 Disk Array Device
    hdisk3 Available 01-09-02 1814 DS4700 Disk Array Device
    hdisk4 Available 01-09-02 1814 DS4700 Disk Array Device
    hdisk5 Available 01-08-02 1814 DS4700 Disk Array Device
    [code]# lscfg -l 'hdisk*'[/code]
    hdisk0 U788C.001.AAA9427-P1-T11-L5-L0 16 Bit LVD SCSI Disk Drive (73400 MB)
    hdisk5 U788C.001.AAA9427-P1-C14-C2-T1-W200500A0B8297BE0-L3000000000000 1814 DS4700 Disk Array Device
    hdisk2 U788C.001.AAA9427-P1-C14-C2-T1-W200500A0B8297BE0-L0 1814 DS4700 Disk Array Device
    hdisk3 U788C.001.AAA9427-P1-C14-C2-T1-W200500A0B8297BE0-L1000000000000 1814 DS4700 Disk Array Device
    hdisk4 U788C.001.AAA9427-P1-C14-C2-T1-W200500A0B8297BE0-L2000000000000 1814 DS4700 Disk Array Device
    [code]# lspv[/code]
    hdisk0 00c05fc0221a5c26 rootvg active
    hdisk2 00c05fc024ec4a05 tsmvg active
    hdisk3 00c05fc024c90231 hbvg
    hdisk4 00c05fc024d0a3a1 medvg active
    hdisk5 00c05fc024e78073 tdlabvg
    [code]# lspv -l hdisk0[/code]
    hdisk0:
    LV NAME LPs PPs DISTRIBUTION MOUNT POINT
    hd1 1 1 00..00..01..00..00 /home
    hd10opt 16 16 00..00..16..00..00 /opt
    hd4 1 1 00..00..01..00..00 /
    hd2 26 26 00..00..26..00..00 /usr
    hd9var 4 4 00..00..04..00..00 /var
    hd3 4 4 00..00..04..00..00 /tmp
    hd5 1 1 01..00..00..00..00 N/A
    hd6 16 16 00..16..00..00..00 N/A
    paging00 16 16 00..00..16..00..00 N/A
    hd8 1 1 00..00..01..00..00 N/A
    [code]# lspv -l hdisk1[/code]
    hdisk1:
    LV NAME LPs PPs DISTRIBUTION MOUNT POINT
    hd1 1 1 00..00..01..00..00 /home
    hd10opt 16 16 00..00..16..00..00 /opt
    fslv00 63 63 00..63..00..00..00 /fixes
    hd4 1 1 00..00..01..00..00 /
    hd2 26 26 00..00..26..00..00 /usr
    hd9var 4 4 00..00..04..00..00 /var
    hd3 4 4 00..00..04..00..00 /tmp
    hd5 1 1 01..00..00..00..00 N/A
    hd6 16 16 00..16..00..00..00 N/A
    paging00 16 16 00..00..16..00..00 N/A
    hd8 1 1 00..00..01..00..00 N/A
    [code]# lspv -l hdisk2[/code]
    hdisk2:
    LV NAME LPs PPs DISTRIBUTION MOUNT POINT
    tsmlv 112 112 24..24..23..24..17 /tsm
    [code]# lspv -l hdisk3[/code]
    0516-010 : Volume group must be varied on; use varyonvg command.
    [code]# lspv -l hdisk4[/code]
    hdisk4:
    LV NAME LPs PPs DISTRIBUTION MOUNT POINT
    medlv3 611 611 93..88..144..193..93 /u3
    medlv1 101 101 00..101..00..00..00 /u1
    medlv2 153 153 100..04..49..00..00 /u2
    [code]# lspv -l hdisk5[/code]
    0516-010 : Volume group must be varied on; use varyonvg command.
    [code]# lsvg[/code]
    rootvg
    hbvg
    medvg
    tdlabvg
    tsmvg
    [code]# lsvg -l rootvg[/code]
    rootvg:
    LV NAME TYPE LPs PPs PVs LV STATE MOUNT POINT
    hd5 boot 1 2 2 closed/stale N/A
    hd6 paging 16 32 2 open/stale N/A
    paging00 paging 16 32 2 open/stale N/A
    hd8 jfs2log 1 2 2 open/stale N/A
    hd4 jfs2 1 2 2 open/stale /
    hd2 jfs2 26 52 2 open/stale /usr
    hd9var jfs2 4 8 2 open/stale /var
    hd3 jfs2 4 8 2 open/stale /tmp
    hd1 jfs2 1 2 2 open/stale /home
    hd10opt jfs2 16 32 2 open/stale /opt
    fslv00 jfs2 63 63 1 closed/syncd /fixes
    [code]# lsvg -p rootvg[/code]
    rootvg:
    PV_NAME PV STATE TOTAL PPs FREE PPs FREE DISTRIBUTION
    hdisk1 removed 546 397 109..30..40..109..109
    hdisk0 active 546 460 109..93..40..109..109
    [code]# lsfs[/code]
    Name Nodename Mount Pt VFS Size Options Auto Accounting
    /dev/hd4 -- / jfs2 262144 -- yes no
    /dev/hd1 -- /home jfs2 262144 -- yes no
    /dev/hd2 -- /usr jfs2 6815744 -- yes no
    /dev/hd9var -- /var jfs2 1048576 -- yes no
    /dev/hd3 -- /tmp jfs2 1048576 -- yes no
    /proc -- /proc procfs -- -- yes no
    /dev/hd10opt -- /opt jfs2 4194304 -- yes no
    /dev/fwdump -- /var/adm/ras/platform jfs2 -- -- no no
    /dev/cd0 -- /cdrom cdrfs -- ro no no
    /dev/tsmlv -- /tsm jfs2 29360128 rw no no
    /dev/medlv1 -- /u1 jfs2 26476544 rw no no
    /dev/medlv2 -- /u2 jfs2 40108032 rw no no
    /dev/medlv3 -- /u3 jfs2 160169984 rw no no
    /dev/tdlablv4 -- /u4 jfs2 -- rw no no
    /dev/tdlablv5 -- /u5 jfs2 -- rw no no
    /dev/tsmlv -- /tsm jfs2 29360128 rw no no
    /dev/fslv00 -- /fixes jfs2 -- rw no no
    [code]# df -g[/code]
    Filesystem GB blocks Free %Used Iused %Iused Mounted on
    /dev/hd4 0,12 0,05 63% 4984 30% /
    /dev/hd2 3,25 1,12 66% 44152 15% /usr
    /dev/hd9var 0,50 0,35 31% 1446 2% /var
    /dev/hd3 0,50 0,27 46% 5809 9% /tmp
    /dev/hd1 0,12 0,12 1% 53 1% /home
    /proc - - - - - /proc
    /dev/hd10opt 2,00 1,20 41% 22687 8% /opt
    /dev/tsmlv 14,00 4,68 67% 19 1% /tsm
    /dev/medlv1 12,62 6,11 52% 72128 5% /u1
    /dev/medlv2 19,12 5,58 71% 24 1% /u2
    /dev/medlv3 76,38 7,14 91% 21884 2% /u3
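
    From the output it looks to me like rootvg is mirrored across hdisk0 and hdisk1 (every rootvg LV except fslv00 shows PPs = 2 x LPs and is marked stale), and hdisk1 is still in the volume group but in the "removed" state. If the original disk is actually healthy, is something along these lines the right direction? This is just my guess at the commands, so please correct me:

    [code]
    # mkdev -l hdisk1          # try to bring the Defined disk back to Available
    # chpv -v a hdisk1         # clear the "removed" state on the physical volume
    # varyonvg rootvg          # let the volume group pick the disk up again
    # syncvg -v rootvg         # resync the stale partitions
    # bosboot -ad /dev/hdisk1  # refresh the boot image once the disk is back
    [/code]

    And if the disk really is dead, I suppose the unmirror / reducevg / rmdev / replace / remirror path from the earlier reply is the way to go instead.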