Server for two GPFS Clusters

iban - 2012-05-08T10:16:52Z
Dear,

We have 6 new GPFS servers, all of them with InfiniBand and Ethernet connections.
The idea is to share these servers between two different GPFS clusters (one using the Ethernet network and the other the InfiniBand connection).
Is it possible to run two GPFS instances on the same node? Is this configuration suitable?

Regards
  • SystemAdmin - 2012-05-08T15:23:03Z, in response to iban
    No. Not without using virtual machines.

    You may have just one instance of GPFS per OS instance.

    But you may have several file systems per GPFS cluster.

    A GPFS cluster may usefully employ both Ethernet and InfiniBand networks.

    See the GPFS planning guide and the Advanced Admin Guide. In particular I notice that the Adv. Admin Guide shows a way to use IB and Ethernet together.
    • iban - 2012-05-09T09:52:41Z, in response to SystemAdmin
      Dear Marc,
      I've been reading the documentation for GPFS version 3.4, but I do not find any reference to running "only one cluster" over both connections (IB and Ethernet).
      Could you please be more specific about which part of the guides I should look at?

      Thanks in advance.
      • bhartner - 2012-05-09T14:04:11Z, in response to iban
        See "Chapter 1. Accessing GPFS file systems from other GPFS clusters" in the advanced admin guide where it discusses use of mmchconfig subnets. (that was from 3.5 docs).
  • iban - 2012-05-10T14:50:28Z, in response to iban
    Dear,
    Thanks for the info. Chapter 1 of the GPFS: Advanced Administration Guide seems to be more related to multi-cluster connections, judging by Figure 3, "Multi-cluster configuration with multiple NSD servers".

    Reading the "Daemon Communication: Using InfiniBand RDMA (x86 Linux Only)" located at http://www.ibm.com/developerworks/wikis/display/hpccentral/GPFS+Network+Communication+Overview

    I understand that I can have a unique cluster, using IB connection if it exists or the default Ethernet connection in the nodes that have no IB.
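
    If that is right, I guess enabling RDMA for the IB-capable nodes would look something like this (only a sketch; the node names and the verbs device/port are examples, not our real values, and I suppose it needs a GPFS restart on those nodes):

      mmchconfig verbsRdma=enable -N srv1,srv2,node001,node002
      mmchconfig verbsPorts="mlx4_0/1" -N srv1,srv2,node001,node002

    while the Ethernet-only nodes keep talking plain TCP/IP.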

    Is this true?

    Regards,
    • SystemAdmin - 2012-05-10T16:09:42Z, in response to iban
      Okay, now let's go back to the beginning. Please clarify or expand on your first post. What are your goals vis-à-vis GPFS? What equipment is available, and what applications are you going to run? What are the requirements and constraints of the applications? Any other requirements or constraints imposed by politics, contracts, or regulations?
      • iban - 2012-05-10T18:29:18Z, in response to SystemAdmin
        OK Marc.

        Current situation:
        1 cluster in production (GPFS 3.2.1-29) with 8 NSD servers: 4 for one large file system of 800 TB [1], the others for 2 small file systems of 100 TB [2] and 50 TB [3]. The servers have 2x FC connections to the disk subsystem and 10 Gb Ethernet to the LAN; there are about 200 clients on 1 Gb Ethernet.
        A network throughput of about 1.5 GB/s is usual.

        New equipment:
        6 servers, each with 2x FC 8 Gbps, 10 Gb Ethernet and 1x IB QDR; 200 nodes with IB and 1 Gb Ethernet connections
        1 PB of disk (2 DS3600 & 2 DS3524)
        Goals:
        Upgrade the cluster to the latest GPFS version (3.4, 3.5?)
        The 6 new servers must replace the 4 that manage [2] and [3]
        Move the small file systems ([2] and [3]) to the new storage system
        Create 2-3 new file systems with high throughput, to be mounted on the IB nodes
        The big file system [1] should be accessible/mounted from the IB nodes too, at a good rate

        Comments:
        The IB nodes and the Ethernet nodes run different applications and are managed by different batch systems.
        The location is the same; there are only a few meters between both systems.
        If you need more info, please let me know.

        Regards.
        • SystemAdmin - 2012-05-10T19:18:21Z, in response to iban
          If you have or can obtain enough hardware, why not put the 6 servers in one GPFS cluster, each able to access any of the disks via FC?
          Use IB for server-to-server communication. Client nodes connect whichever way is convenient. Yes, I'm assuming you have enough switch ports, adapters and switch capacity to do this.
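
          For example (a 3.5-style NSD stanza with made-up device and server names; substitute your own), every disk could list all 6 servers in its server list, so any of them can serve any disk:

            %nsd:
              device=/dev/mapper/lun01
              nsd=nsd_lun01
              servers=srv1,srv2,srv3,srv4,srv5,srv6
              usage=dataAndMetadata
              failureGroup=1

          then feed the stanza file to mmcrnsd -F and build the file systems on top with mmcrfs.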
          • iban - 2012-05-11T14:16:53Z, in response to SystemAdmin
            Hi Marc,
            I do not have FC switches; the servers are directly connected to the disk subsystem chassis, and there are no FC ports available on the large FS [1]. There are enough 1 Gb/10 Gb Ethernet ports, but not enough to upgrade all the nodes to IB connections (no more switches, no more IB HBAs).
            My question is: what is the best solution for this hardware? Configure only one cluster (IB + Ethernet), or two clusters (Ethernet and IB) and then do a remote mount? And what would be the best way to do this?

            Regards,
            • SystemAdmin - 2012-05-11T14:57:17Z, in response to iban
              Sorry, I still don't see/know what constraints we're working with.

              Let's try it this way:
              How many ports of each type on each of your servers and disk controllers?
              No switches between servers and disks?
              You do have sufficient switches and ports to interconnect all of your servers with IB and/or Ethernet?

              Is there a concern that might be addressed by partitioning your servers into two clusters (e.g. security, administration, non-interference, performance)?
              We can go another "round" on this board -- but this forum is not a substitute for hiring a consultant/technician to configure, set up, test, and re-configure as necessary...
              • iban - 2012-05-11T15:25:31Z, in response to SystemAdmin
                Hi Marc,
                Thanks a lot.
                I will open a PMR to ask these questions.

                Regards
                • SystemAdmin - 2012-05-11T15:45:47Z, in response to iban
                  The easiest is to put all servers in the same cluster. That will work and actually gives you more flexibility
                  to reconfigure your file-system-to-disk mappings in the future!

                  OTOH, if you have the disks split so that all disks for filesystem1 are only connected to servers A,B
                  and disks for filesystem2 are on servers C,D,E,F
                  then defining two clusters probably makes sense.

                  BUT you're talking about 3 file systems and 6 servers, and you're not saying how and why you intend to partition disks, servers and so on...
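
                  For completeness: if you do end up with two clusters, the cross-mount itself is the standard multicluster recipe from chapter 1 of the Advanced Admin Guide. Roughly, with placeholder names (eth.cluster owning a file system fs1 that ib.cluster wants to mount, srv1/srv2 as contact nodes, and the exchanged public key files named after their clusters):

                    # on eth.cluster, the cluster that owns fs1
                    mmauth genkey new
                    mmchconfig cipherList=AUTHONLY    # I believe GPFS has to be down to change this
                    mmauth add ib.cluster -k ib.cluster_id_rsa.pub
                    mmauth grant ib.cluster -f /dev/fs1

                    # on ib.cluster, the cluster that wants to mount fs1
                    mmauth genkey new
                    mmchconfig cipherList=AUTHONLY
                    mmremotecluster add eth.cluster -n srv1,srv2 -k eth.cluster_id_rsa.pub
                    mmremotefs add fs1 -f fs1 -C eth.cluster -T /gpfs/fs1
                    mmmount fs1 -a

                  But as I said, with only 6 servers I would first check whether a single cluster does the job.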
                  • iban - 2012-05-11T17:28:41Z, in response to SystemAdmin
                    Hi Marc,
                    File system [1] (800 TB) is managed by 4 servers (2 FC 8 Gbps ports per server, using multipath), directly attached to the disk controllers (2 controllers, 4 FC ports per controller). This file system holds very large files, about 2-4 GB per file.

                    The small file systems, [2] (some projects) and [3] (user homes), are on another disk system managed by the other 4 servers with 2x FC 4 Gbps, using RDAC. This disk system has to be replaced by the new disk system (DS3600 + DS3524) and the new 6 servers (2x FC 8 Gbps + 10 Gb Ethernet + IB), but the new system also has to support the "new IB" nodes with at least 2 new file systems (one for scratch and one for other user homes). The old and new nodes are for different projects with different access, but in some cases we want the first file system [1] to be accessible from the new nodes for interactive use.
                    I understand that the easiest solution would be to set up 2 different clusters, do a remote file system mount from each other, and balance the new servers between the clusters.

                    I only want to know if there is a smart solution.

                    Thanks in advance for your help, Iban