5 replies | Latest post 2013-01-24T23:22:12Z by SystemAdmin
SystemAdmin
Pinned topic Remote cluster over SAN

2013-01-18T14:46:15Z
I've seen it referenced in a few docs, but I've only seen procedures for IP. Is there a doc on setting up mounting a remote cluster's filesystems over SAN? I have two clusters at two separate sites. We have a SAN that stretches between the sites, but the IP network does not have VLANs that stretch between them. The cluster at the primary site has a private, non-routable VLAN that it uses for dedicated cluster communication, while application traffic goes through a separate routable VLAN. The setup is the same at the other site. Is there a way the LUNs from site A can be presented to site B, with site B using the SAN for data traffic and the IP network only for talking to A for locking, etc.? Thanks!
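For reference, the usual multi-cluster (remote mount) setup sequence looks roughly like this; all cluster names, key-file paths, node names, and filesystem names below are placeholders:

```shell
# On clusterB (the accessing cluster): generate the cluster's key pair
mmauth genkey new

# On clusterA (the owning cluster): authorize clusterB and grant it
# access to the filesystem (here called gpfsfs0)
mmauth add clusterB.example.com -k /tmp/clusterB_id_rsa.pub
mmauth grant clusterB.example.com -f gpfsfs0

# On clusterB: define the remote cluster, its contact nodes, and the
# remote filesystem, then mount it
mmremotecluster add clusterA.example.com -n nodeA1,nodeA2 -k /tmp/clusterA_id_rsa.pub
mmremotefs add rgpfsfs0 -f gpfsfs0 -C clusterA.example.com -T /remote/gpfs
mmmount rgpfsfs0
```

Whether the data I/O then goes over the SAN or over TCP/IP depends on whether the accessing nodes can see the NSDs locally, which is the question the rest of this thread works through.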
Updated on 2013-01-24T23:22:12Z by SystemAdmin
  • SystemAdmin

    Re: Remote cluster over SAN

    2013-01-22T14:26:10Z, in response to SystemAdmin
    At this point I'm still just trying to get the two clusters talking (and hoping GPFS will use the disks on the back end). I've run all of the auth commands, set up the remote clusters, and run the remote-fs command. When I try to do the mount, the foreign cluster sends its internal IP address rather than the public/routable one. On both clusters I have the daemon IPs set to the internal networks to keep cluster chatter on the private VLAN, and the admin interfaces set to the public IPs, so the two clusters should be able to talk. Any thoughts on why the node from clusterB is telling the managers on clusterA its internal address and not the public one?
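    The address a remote cluster sees is the daemon interface, not the admin interface. A sketch of how to check and (hypothetically) change it; the node and hostname below are placeholders, and the node must be stopped first:

```shell
# Show which interface each daemon is registered on
# ("Daemon node name" / IP columns in the output)
mmlscluster

# Hypothetical fix: move one node's daemon interface to its
# routable hostname (GPFS must be down on that node)
mmshutdown -N nodeb1
mmchnode --daemon-interface=nodeb1-public.example.com -N nodeb1
mmstartup -N nodeb1
```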
    • gcorneau

      Re: Remote cluster over SAN

      2013-01-22T15:37:51Z, in response to SystemAdmin
      To use GPFS multi-cluster, you have to define the GPFS nodes on the external interfaces:

      http://publib.boulder.ibm.com/infocenter/clresctr/vxrx/topic/com.ibm.cluster.gpfs.v3r5.0.7.gpfs200.doc/bl1adv_admmcch.htm
      "Each node in the GPFS cluster requiring access to another cluster's file system must be able to open a TCP/IP connection to every node in the other cluster."

      You can then use the subnets parameter to make sure that internal GPFS communication happens across the internal private networks.
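      A sketch of the subnets setting (the subnet addresses and cluster names here are placeholders): nodes are defined by their routable names, and GPFS then prefers the listed private subnets for daemon traffic between nodes that share them.

```shell
# A subnet may be qualified with the cluster name(s) it applies to;
# an unqualified subnet applies to the local cluster.
mmchconfig subnets="10.10.0.0/clusterA.example.com;clusterB.example.com"

# Simpler local-only form:
mmchconfig subnets="10.20.0.0"
```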

      Glen Corneau
      IBM Power Systems Advanced Technical Skills
      • SystemAdmin

        Re: Remote cluster over SAN

        2013-01-24T19:27:47Z, in response to gcorneau
        I have them on routable networks now, and I'm able to remote-mount the filesystem on clusterA from clusterB, but I suspect it's doing it all over IP. I see that the mmnsddiscover command has a -C option to choose other clusters, but that doesn't return anything, nor does the mmlsnsd -F option. The LUNs for clusterA are presented to the clusterB server, and I can see them from the OS, but GPFS doesn't seem to know anything about them as far as I can tell. I suppose I could get away with running this over IP, since it uses two 10Gb adapters in a port channel, but I would really prefer to use the SAN for disk access.
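        To see whether GPFS actually knows about local paths to the remote cluster's NSDs, something like this can help (the cluster name is a placeholder; output formats vary by release):

```shell
# Extended NSD information, including the local device path, if any,
# on the node where this is run
mmlsnsd -X

# Re-run NSD discovery against the remote cluster's NSDs
mmnsddiscover -C clusterA.example.com
```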
        • gcorneau

          Re: Remote cluster over SAN

          2013-01-24T20:39:28Z, in response to SystemAdmin
          If GPFS isn't discovering the disks, it may be because of the device type (this seems to be more prevalent on Linux than on AIX).

          Check out the mmdevdiscover and nsddevices user exit:
          http://publib.boulder.ibm.com/infocenter/clresctr/vxrx/index.jsp?topic=%2Fcom.ibm.cluster.gpfs.v3r5.0.7.gpfs100.doc%2Fbl1adm_uxtnsdds.htm

          Once GPFS discovers the NSDs locally, you can use mmnsddiscover to pick up the proper path to the NSDs.
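          A minimal sketch of such an nsddevices user exit, assuming the LUNs show up as device-mapper multipath devices under /dev/mapper (the name pattern is a placeholder):

```shell
# /var/mmfs/etc/nsddevices -- GPFS sources this script during disk
# discovery. Each line printed names a candidate device (relative to
# /dev) and its device type.
for dev in /dev/mapper/mpath*; do
    [ -e "$dev" ] || continue
    echo "mapper/$(basename "$dev") dmm"   # dmm = device-mapper multipath
done
# return 0 = use only the devices listed above;
# return 1 = also run GPFS's built-in discovery afterwards.
return 0
```

Because GPFS dots the script in rather than executing it, it ends with `return`, not `exit`.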

          Glen Corneau
          IBM Power Systems Advanced Technical Skills
        • SystemAdmin

          Re: Remote cluster over SAN

          2013-01-24T23:22:12Z, in response to SystemAdmin
          You don't have to guess. mmlsdisk -m will show which disk path GPFS is using on a given node. mmdevdiscover will show the list of block devices (and their types) that GPFS will look at. tspreparedisk -s will show the list of local block devices with valid GPFS NSD descriptors on them.
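          For example (the filesystem name is a placeholder):

```shell
# Which path each disk of the filesystem uses on this node
# (local device vs. NSD server over IP)
mmlsdisk gpfsfs0 -m

# Block devices (and their types) that GPFS discovery will consider
mmdevdiscover

# Local block devices carrying a valid GPFS NSD descriptor
tspreparedisk -s
```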

          yuri