Topic
  • 2 replies
  • Latest Post - 2012-05-17T21:14:06Z by db808
SchelePierre
20 Posts

Pinned topic GPFS 3.4 on DS3512+EXP, single storage pool, different NL-SAS disk types

2012-05-09T07:00:51Z
Hello,
We're currently running GPFS 3.4.0.11 on Red Hat Linux 6.1 NSD server nodes, each node dual-attached (6 Gbps SAS) to two dual-controller DS3512 subsystems with EXP3512 expansions.

In today's environment, we use 8+P+Q RAID6 LUNs of 1TB 7.2k RPM NL-SAS drives, configured as dataOnly NSDs in the default GPFS system storage pool.
We're looking at capacity expansion options; one of them is to add another EXP3512 to the same DS3500 subsystems. We are thinking of putting 8+P+Q 2TB 7.2k RPM disks in the new expansion and adding the resulting RAID6 LUN as a new GPFS NSD in the same default system storage pool.
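For reference, we'd expect to add the new LUN roughly as follows (the device path, server names, failure group, and file system name below are placeholders, and the colon-separated descriptor follows the GPFS 3.4 format as we understand it):

    # new_disk.lst - DiskName:ServerList::DiskUsage:FailureGroup:DesiredName:StoragePool
    /dev/mapper/lun_2tb_01:nsd01,nsd02::dataOnly:1:nsd2tb01:system

    mmcrnsd -F new_disk.lst          # define the NSD
    mmadddisk gpfs0 -F new_disk.lst  # add it to the file system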

My questions thus become:
- Is it advisable / OK practice to populate different disk shelves in the same DS3500 subsystem with different-capacity NL-SAS disks (with the same rotational speed)?
- I couldn't immediately find a straight answer to this: is a (negative) performance difference to be expected when adding a 2TB disk shelf to the current setup, which only uses 1TB disks?
- Considering that these RAID6 LUNs are used as GPFS NSDs: is a performance problem or impact expected when mixing 1TB 8+P+Q NSDs and 2TB 8+P+Q NSDs in the same (default system) storage pool?
- GPFS data replication is not used in our current setup, but are any impacts to be expected from the fact that the NSDs will have different sizes (i.e. the new NSD is twice the size) in this possible future setup?

Thanks,
Pieter
Updated on 2012-05-17T21:14:06Z by db808
  • HajoEhlers
    253 Posts

    Re: GPFS 3.4 on DS3512+EXP, single storage pool, different NL-SAS disk types

    2012-05-09T08:41:37Z
    ...
    > - Is it advisable / OK practice to populate different disk shelves in the same DS3500 subsystem with
    > different-capacity NL-SAS disks (with the same rotational speed)?

    This is a question you should ask your storage vendor.

    > - I couldn't immediately find a straight answer to this: is a (negative) performance
    > difference to be expected when adding a 2TB disk shelf to the current setup, which only
    > uses 1TB disks?

    * Rebuild times are longer, so the whole array might suffer during a rebuild.
    * If you end up using fewer disks, the slowest LUNs will determine the file system speed.
    * The amount of IOPS per TB is only half (rough numbers below).
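
    To illustrate that last point, assuming a 7.2k RPM NL-SAS drive delivers roughly 75 random IOPS regardless of capacity:

    1 TB drive: ~75 IOPS / 1 TB = ~75 IOPS per TB
    2 TB drive: ~75 IOPS / 2 TB = ~38 IOPS per TB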
    > - ... fact that the NSDs will have different sizes (i.e. the new NSD is twice the size) in this
    > possible future setup?

    The 1 TB disk based LUNs will fill up first, and then only the 2 TB disk based LUNs will be used. Thus you are going to lose half of your disk speed (you are back to what you have now), and depending on how many 2 TB disk based LUNs you have, you might create an NSD server bottleneck.

    Example:
    8 NSD servers
    - 8 * 1 TB disk based LUNs
    - 4 * 2 TB disk based LUNs

    1) All LUNs are empty - 12 LUNs can be used - 8 NSD servers can serve
    2) All 1 TB disk based LUNs are full - only 4 LUNs are available - only 4 NSD servers can serve
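
    You can watch this happen with mmdf, which reports free space per NSD (the file system name gpfs0 is a placeholder):

    mmdf gpfs0
    # once the 1 TB based NSDs show no free blocks, new allocations
    # can only go to the 2 TB based NSDs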

    ...
  • db808
    86 Posts

    Re: GPFS 3.4 on DS3512+EXP, single storage pool, different NL-SAS disk types

    2012-05-17T21:14:06Z
    Hi Pieter,

    We have significant experience with the DS3512's larger cousin, the 60-disk DCS3700. It shares the same controller as the DS3512, with most of the premium-cost options bundled, including the "turbo" option. We have also expanded from 1TB disks to 2TB disks in the past.

    There are too many combinations of options to discuss effectively via the forum. You can private message me so we can exchange contact information and have a phone conversation offline. Then you can post the final compromise solution to the forum as the answer.

    To put a stake in the ground, we use a 4MB GPFS block size for our large files. You can get ~240 MB/sec per 10-disk RAID group up to the limit of the controllers ... about 1400 - 1600 MB/sec with the "turbo" option.
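
    Back-of-the-envelope with those numbers:

    ~1500 MB/sec (controller limit) / ~240 MB/sec (per 10-disk RAID group) ≈ 6 RAID groups

    So on the order of six 8+P+Q groups will saturate a controller pair with the "turbo" option.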

    When you add your disks, I would recommend restriping the file system. This will take a while, but will evenly distribute the data across both (old+new), allowing better overall performance. Since the major GPFS allocation scheme is based on the "fullness" of the NSD, you will end up evenly filling up (old + new). This will give you about 12 TB of storage with 2x the performance that you have now. Once you go beyond 12TB, there will be no free space on the 1TB-based NSD, and you will be getting about 1x performance for the next 6TB. Not ideal, but no worse, performance-wise than what you have now.
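
    The restripe itself is a single command once the new NSDs are in the file system. A minimal sketch, assuming the file system is named gpfs0 and the new disk descriptors are in new_disks.lst:

    mmadddisk gpfs0 -F new_disks.lst   # add the new 2TB-based NSDs to the pool
    mmrestripefs gpfs0 -b              # rebalance existing data across all NSDs

    The -b option rebalances all existing files, so expect it to run for quite a while on a multi-TB file system.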

    I would also make an alternative suggestion: what about a DS3524 expansion with 20x 1TB 2.5" disks, configured as 2 x (8+P+Q RAID6)? A little more expensive, but you triple your I/O performance once you restripe, and then maintain that performance until everything is full.

    If capacity is the most important, the 3TB 3.5" disks are also available.

    Regards,
    Dave B.