4 replies Latest Post - ‏2012-10-09T13:02:40Z by LinuxL0ver
LinuxL0ver
4 Posts

Pinned topic GPFS filesystem not mounting on second node

‏2012-10-09T06:43:52Z |
Hi,
I just configured a two-node GPFS cluster by following, word for word, the link below:

https://www.ibm.com/developerworks/mydeveloperworks/blogs/mhhaque/entry/install_configure_gpfs_3_2_on_red_hat_enterprise_linux_5_41?lang=en
The GPFS filesystem mounts on node1 (where the NSD disks are local disks),

but the GPFS filesystem /orabin does not mount on node2.

The following messages appear in node2's log files:

node2:
Sat Oct 6 09:06:19.015 2012: Global NSD disk, oradata, not found.
Sat Oct 6 09:06:19.016 2012: Disk failure. Volume orabin. rc = 19. Physical volume oradata.
Sat Oct 6 09:06:19.015 2012: File System orabin unmounted by the system with return code 19 reason code 0
Sat Oct 6 09:06:19.016 2012: No such device
Sat Oct 6 09:06:19.015 2012: Failed to open orabin.
Sat Oct 6 09:06:19.016 2012: No such device
Sat Oct 6 09:06:19.015 2012: Command: err 666: mount orabin
Sat Oct 6 09:06:19.016 2012: No such device
One thing I want to clarify: the disks I used (sdb, sdc, sdd and sde) are local disks of node1 only and do not come from a SAN.

Can these be made available on node2 after mounting, or is it mandatory that the disks be available from the SAN to all nodes?
Updated on 2012-10-09T13:02:40Z at 2012-10-09T13:02:40Z by LinuxL0ver
  • SystemAdmin
    2092 Posts

    Re: GPFS filesystem not mounting on second node

    ‏2012-10-09T09:53:06Z  in response to LinuxL0ver
    The instructions you followed require the LUNs to be visible/attached to both hosts. Otherwise your configuration makes little sense: node1 becomes a single point of failure, defeating the high-availability purpose of GPFS.

    However, if you do not need high availability you can change the configuration. You will need to remove the tie-breaker disk configuration and change the NSD configuration of your LUNs to add node1 as the NSD server. Your configuration will work in this case with some caveats: performance on node2 will be limited by your network speed, and if node1 is down your filesystem will be unavailable.
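    A rough sketch of those two changes, assuming the tiebreaker disks were configured as in the blog post; the NSD name here is hypothetical, and the exact mmchnsd descriptor syntax varies by GPFS release, so check the man pages before running anything:

    ```shell
    # Stop GPFS on all nodes before reconfiguring (the daemon must be down
    # for tiebreaker changes, and the filesystem unmounted for mmchnsd).
    mmshutdown -a

    # Remove the tiebreaker disk configuration.
    mmchconfig tiebreakerDisks=no

    # Make node1 the NSD server for the disk; "nsd_sdb" is a hypothetical NSD name.
    mmchnsd "nsd_sdb:node1"

    # Restart GPFS on all nodes.
    mmstartup -a
    ```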
  • YuanZhengcai
    9 Posts

    Re: GPFS filesystem not mounting on second node

    ‏2012-10-09T10:02:14Z  in response to LinuxL0ver
    If the disk is not directly attached to all nodes and you want all nodes to be able to mount the filesystem, you need to specify an NSD server list when creating the new NSD.

    From the man page of mmcrnsd:

    DiskName:ServerList::DiskUsage:FailureGroup:DesiredName:StoragePool

    ServerList
    If you do not define a ServerList, GPFS assumes that the disk is SAN-attached to all nodes in the cluster. If all nodes in the cluster do not have access to the disk, or if the file system to which the disk belongs is to be accessed by other GPFS clusters, you must specify a ServerList.
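    To make the field layout concrete, here is a small bash illustration that splits a hypothetical descriptor line (the disk, server, and NSD names are made up) into the fields named in that man-page excerpt:

    ```shell
    # A hypothetical NSD descriptor: /dev/sdb served by node1, with example
    # values for the optional fields after the empty third position.
    desc="/dev/sdb:node1::dataAndMetadata:1:nsd_sdb:system"

    # Split on ':' into the fields from the mmcrnsd descriptor format.
    IFS=':' read -r disk servers unused usage fgroup name pool <<< "$desc"

    echo "DiskName=$disk"        # DiskName=/dev/sdb
    echo "ServerList=$servers"   # ServerList=node1
    echo "DiskUsage=$usage"      # DiskUsage=dataAndMetadata
    ```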
  • LinuxL0ver
    4 Posts

    Re: GPFS filesystem not mounting on second node

    ‏2012-10-09T11:03:18Z  in response to LinuxL0ver
    Hi YuanZhengcai & markus_b,

    Thanks for your valuable feedback; this is exactly the information I was looking for. Actually I am new to GPFS and just want to practice with it, and there is no SAN available in my test environment. Now I will try to configure it using an NSD server as you suggested.
  • LinuxL0ver
    4 Posts

    Re: GPFS filesystem not mounting on second node

    ‏2012-10-09T13:02:40Z  in response to LinuxL0ver
    Hi folks,
    I am able to mount the GPFS filesystem on both nodes by using the steps below.

    I created a descriptor file using the following format:
    DeviceName:PrimaryNSDServer:SecondaryNSDServer:DiskUsage:FailureGroup

    # vi descfile
    /dev/hdc:node1::dataAndMetadata:-1

    # mmcrnsd -F /tmp/descfile -v no

    # mmcrfs /orabin orabin -F descfile -A yes -B 512k

    # mmmount all -a

    # df -h
    /dev/orabin 8.0G 152M 7.9G 2% /orabin

    Now the filesystem /orabin is mounted on both nodes.
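    For anyone following along, two commands that can confirm the result; this is a sketch, and the output formats vary by GPFS release:

    ```shell
    # Show which node, if any, serves each NSD over the network.
    mmlsnsd -m

    # List the nodes on which the orabin filesystem is currently mounted.
    mmlsmount orabin -L
    ```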