
Comments (48)

1 kopper commented Permalink

Interesting article... over the next few days I will be working on a V7000-SVC deployment, so I'm waiting for the other articles.

Thanks a lot

2 AngeloBernasconiBlog commented Permalink

Question:
If "With V7000, always configure at least 4 arrays, and if possible, multiples of 4", can we assume it would also be a best practice to have at least 4 MDisks, or a multiple of 4 MDisks, on an SVC?

3 orbist commented Permalink

Angelo, this is less important on SVC, as SVC does not perform the RAID functions, i.e. there is no XOR or mirroring overhead, and so less CPU is required.

 
In levels prior to 6.3 there is a small advantage to having at least 4 mdisks per controller, as this will guarantee all 4 FC ports are being used. In levels 6.3 and later we now round-robin I/O for IBM and high-end storage systems, and so even with 1 mdisk you will be using all 4 FC ports per node.
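To make the port behaviour concrete, here is a tiny illustrative sketch in Python (the port names, the single hypothetical mdisk, and the mapping logic are made up for illustration; this is not the actual SVC scheduling code):

from itertools import cycle

PORTS = ["P1", "P2", "P3", "P4"]   # 4 FC ports per node
mdisks = ["mdisk0"]                # hypothetical single mdisk

# Pre-6.3 style: each mdisk bound to one port, so ports can sit idle
static_map = {m: PORTS[i % len(PORTS)] for i, m in enumerate(mdisks)}
print({static_map[m] for m in mdisks})   # {'P1'} - only 1 port used

# 6.3+ style: every I/O round-robined across all ports
rr = cycle(PORTS)
print({next(rr) for _ in range(8)})      # all 4 ports end up in use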
 
There are other considerations from a config perspective that apply to SVC, which I will cover in the next few posts (vdisks etc.).

4 Montecito commented Permalink

Very nice read. However, I do have a question.

I know that, especially in the case of V7000, the array width should be the same for all the mdisks in a storage pool.

However, if we use Easy Tier and add SSD drives that do not reach the optimal number of 8 [RAID 5], will it cause any performance issue?

Or, because SSDs perform far better than SAS drives, would a 5-drive SSD array compared to an 8-drive SAS array compensate for it?

SSD drives are costly, and putting all our eggs in one basket doesn't seem right.

5 orbist commented Permalink

Montecito,

The reason for *recommending* arrays of the same width in a given storage pool is the volume striping, i.e. every volume created in a pool is striped across all member arrays. Thus, if you have arrays with smaller numbers of component disks, they may pull down the performance of the pool as a whole to that of the lowest-performing array.

In general, unless you are pushing all your arrays to the maximum, you probably won't see much of a difference if some arrays have one or so fewer component disks.

As for Easy Tier, it's a different story. Because Easy Tier determines which extents to put onto the SSD, it doesn't affect the rest of the HDD arrays, i.e. they all still have the same performance capability.

The SSDs are several orders of magnitude better than the HDDs with respect to IOPS. So even a single 1+1 RAID-1 array will give you a significant boost in IOPS and a reduction in latency when used with Easy Tier. Obviously RAID-5 will give you greater capacity value for your SSDs, but remember the penalties above, and you will essentially always be doing mixed read/write workloads to RAID-5.
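A rough way to picture the "lowest-performing array" effect mentioned above (a sketch only; the per-array IOPS figures are invented and extents are assumed to be striped evenly across every member array):

# Pool saturates when its weakest member saturates, because a striped
# volume spreads I/O evenly across all arrays in the pool.
def pool_iops_limit(per_array_iops):
    return min(per_array_iops) * len(per_array_iops)

print(pool_iops_limit([1400, 1400, 1400, 1400]))  # 5600
print(pool_iops_limit([1400, 1400, 1400, 800]))   # 3200 - pulled down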

6 VahricMuhtaryan commented Permalink

We have newly purchased a V7000, and it's nice to discuss and talk with people like you; thanks for the blogs.

We already have EMC and have tested 3PAR, and I have some questions (we have also been SVC and DS 500 users for a long time).

3PAR and EMC push 3+1 RAID-5 for low latency and performance, and 7+1, as you say, for optimal use. On the V7000 I couldn't find anything other than the defaults. Why? Is it because IBM doesn't believe in 3+1, or is there another story? (Our systems are mostly random read- and write-intensive: mail and cloud.)

We mostly have mail servers on Windows 2003/2008, and the default volumes are formatted with the default 4 KB. Can I configure RAID-5 with a 4 KB stripe size? Also, in the GUI (from the Redbook) I did not see any stripe option to select.

Via the V7000 GUI or TPC, can we see whether we are doing full-stripe writes to the V7000 arrays?

If we need to create a number of arrays equal to the number of cores: AMD processors have more cores than Intel, so we could create more arrays matching that core count. Following your logic, if a future Storwize uses 6 cores, do we have to create 6 arrays, or a multiple of 6?

What do you think of traditional RAID versus 3PAR's 'chunklet-based RAID'? It is starting to seem very logical: no need to waste a number of disks as spares (yes, of course, space will still be set aside, but no spare sits idle waiting to come into action), and the flexible configuration (creating mixed RAID groups using the same disks) sounds good. What do you think?

Thank you

7 MatthieuBonnard commented Permalink

Great article Barry,
Just a question for me:
We have a V7000 with 12 x 3 TB NL-SAS drives configured as RAID-6.
Just one "Storage vMotion" (VMware) operation destined for this array has a very big impact on the read latency of all the LUNs on the same array.
On the other hand, a big read operation on the same RAID-6 array doesn't have the same impact on the other LUNs of the array.

Is this really normal behaviour for an NL-SAS RAID-6?
Is one RAID-6 with just 11 disks not enough?
Why does a massive write operation have more impact than a massive read operation?
If I create an SVC pool with 4 of these RAID-6 arrays, do I multiply the performance by 4 (IOPS, latency)?

With a 23 x SAS RAID-5 we never observe this behaviour.
I'm very surprised by the performance difference between SAS and NL-SAS (or between RAID-5 & RAID-6?).

8 orbist commented Permalink

Matthieu,

So, with it being a RAID-6 array, any write will potentially cause the 6x overhead on the disks described above. As you have 9+P+Q (11 disks?), you would want the vMotion I/O to be 9 x 256 KB to guarantee full strides, and thus only have 1 write per disk for each 2304 KB.

Are the source and destination volumes on the same V7000? i.e. if you use vMotion with VAAI (XCOPY) then you can offload the clone to the V7000.

Either way, when it comes down to it you probably have 11 x 100 IOPS, so 1100 IOPS at the disk level. I also benchmark the 2TB NL-SAS at about 125 MB/s when doing sequential writes.

It's almost a catch-22: you want RAID-6 because you have NL-SAS drives, for protection, but NL-SAS does about 100 ops vs 300 ops for 10K SAS, and RAID-6 adds a 6x overhead on writes.

It sounds like you are on the cusp, i.e. RAID-5 could just cope with the vMotion where RAID-6 overloads the disks. One option would be to use the volume throttle feature (CLI only), where you can specify a peak limit on either IOPS or MB/s for each volume; set this temporarily during the vMotion. However, we recently noted that throttling + VAAI doesn't work, i.e. because we don't see the I/O at the upper layers, it can't be throttled. So if you use this option (i.e. throttling), be sure to turn off VAAI.
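The back-of-envelope numbers behind that answer, as a sketch (the ~100 IOPS per NL-SAS drive and the 6-I/O RAID-6 small-write penalty are the rule-of-thumb figures used above, not measurements):

strip_kb    = 256
data_drives = 9                 # 9 data + P + Q = 11 drives
print(strip_kb * data_drives)   # 2304 KB per full-stride write

raw_iops = 11 * 100             # ~1100 IOPS at the disk level
print(raw_iops)

# Small random writes: read old data, old P, old Q; write new data, P, Q
print(raw_iops / 6)             # roughly 183 small random writes/sec worst case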

9 orbist commented Permalink

Vahric,

A few points here:

1. You can configure any array size you want, down to a 2+P IIRC, but you need to use the command line interface for that. The GUI will use the defaults, or I think you can use the -configure storage- button on the -internal storage- page and select the number of component disks. Otherwise, in the CLI: svctask mkarray -level raid5 -drives 0:1:2 -strip 128

2. So you can go down to 128 KB as a strip size, but again only in the CLI.

3. I think EMC recommend 3+P due to the internal 768 KB optimisation. I never got to the bottom of what's magical about 768 KB in their code, but it seems to be a magic number. Hence 3 x 256 KB + parity means full-stride writes for 768 KB incoming I/O.

4. It's very rare for RAID to be striped at smaller than, say, 32 KB, so 4 KB is always going to carry a penalty if you are doing just 4 KB writes: you need to read-modify-write 4 KB in the old, new and parity blocks.

5. When we go to 6 cores in V7000 (SVC CG8 nodes already ship with a 6-core CPU), yes, the optimum will be 6+ arrays so that all the cores are in use. AMD do have more cores per die, but tend to be a generation or so behind on PCIe and memory support, and there are details like cache snooping etc. that can be a nightmare. For now, the SVC-based products will be sticking with Intel.

6. IBM has -chunklet- style RAID with the XIV products, and certainly as we move towards 4 and 5 TB physical drives, traditional RAID starts to look creaky at the edges with respect to rebuild times of several days.
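To illustrate points 3 and 4, here is a simplified sketch of the back-end I/O cost for a 3+P RAID-5 (illustrative only; it ignores partial strides that span several strips and any cache coalescing):

def backend_ios_raid5(write_kb, strip_kb=256, data_drives=3):
    full_stride_kb = strip_kb * data_drives          # 768 KB for 3+P
    if write_kb > 0 and write_kb % full_stride_kb == 0:
        # Full-stride write: parity comes from the new data, no reads needed
        return (write_kb // full_stride_kb) * (data_drives + 1)
    # Small write: read old data + old parity, write new data + new parity
    return 4

print(backend_ios_raid5(4))     # 4 disk I/Os just to land 4 KB
print(backend_ios_raid5(768))   # 4 disk I/Os to land a full 768 KB stride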

10 MatthieuBonnard commented Permalink

Thanks a lot Barry,
I just need your point of view:

First, my theory:
My "11 x NL-SAS RAID-6" is capable of 1100 IOPS. So a single host with a queue depth of 32 and an average response time of 20 ms (on NL-SAS?) is probably capable of outmatching the raw capacity of my array (during a Storage vMotion operation, for example). Just one host is capable of killing the entire array.
I suppose that is the reason I can't observe the same behaviour on my "24 x SAS RAID-5", where a single host can't saturate the entire array.

Then, the solution?
So, if I plan to create a pool with 11 x RAID-5 arrays of 4 x NL-SAS (4 x 3.5" enclosures) for "Enclosure Loss Protection", I protect my arrays from a "single host massive operation" by keeping a little capacity to serve I/O for the other hosts.

Do you think like me?
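The arithmetic behind this theory, as a sketch (the 20 ms response time and queue depth of 32 are the assumptions above; ~100 IOPS per NL-SAS drive is the rule of thumb from earlier comments):

queue_depth = 32
resp_time_s = 0.020                  # 20 ms average response time

host_iops  = queue_depth / resp_time_s
array_iops = 11 * 100                # 11 x NL-SAS at ~100 IOPS each

print(host_iops)                     # 1600 IOPS one host can keep in flight
print(array_iops)                    # ~1100 IOPS the array can deliver
print(host_iops > array_iops)        # True - one host can saturate it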

11 VahricMuhtaryan commented Permalink

Thanks Barry,

From IBM's point of view, which is the better RAID-5 option, 7+1 or 3+1? 3PAR claim they achieve better performance than RAID-10 with 3+1. Is there any workaround/test/case study about this on the IBM site?

Until now I never calculated or imagined that stripe size and application block size could cause problems, because we have a very mixed environment: Windows servers default to 4 KB, vSphere 5 has new 8 KB blocks, and Oracle customers can set other block sizes. So I believe storage vendors must have an option or function to consolidate writes and reads so that they become aligned, or close to aligned, each time. Isn't that right? Or is there any way to monitor what we are doing on the V7000 and how to optimise the alignment?

In the SPC tests I saw that IBM put V7000s behind the SVC to gain maximum IOPS. I wonder which is better: putting the V7000 behind the SVC, or putting the SVC and V7000 on the same layer? Because I believe more controllers and more memory is better than putting the V7000 behind the SVC. Or not?

Thanks

12 orbist commented Permalink

Matthieu,

As you say, a single host should be easily capable of saturating the RAID-6 NL-SAS array.

If you create a single pool with 11 arrays, you will still, in theory, be using all 11 arrays if you create a normal "striped" volume, as each array will be contributing to the volume. If you want to protect other workloads, the only way would be to create multiple pools; then you will only impact volumes in the same pool.

A note also on "Enclosure Loss Protection": this was a concept introduced by FC-AL (DS) systems, where a single internal loop within an enclosure could take down all enclosures.

The V7000 expansion enclosures have no single point of failure. Everything is dual redundant; the only single common component is the midplane, and we carefully designed it to have no active components.

You create strings, or strands, of enclosures, cabled down through each enclosure. You don't have the same complex contra-rotating cabling that was needed with DS4/5 (i.e. FC-AL).

13 orbist commented Permalink

Vahric,

Because the strip size is 256 KB by default (128 KB if manually changed via the CLI mkarray command), you will not see any gain from having 3+P over 7+P UNLESS you are doing large sequential host write operations.

3+P with a strip size of 256 KB would need a 768 KB host write to enable a full stride.
8+P with a strip size of 128 KB would need a 1 MB host write for a full stride, etc.

For random I/O smaller than the strip size, there is no performance advantage to configuring smaller numbers of component disks in an array. The same overhead is still needed, and in fact, if anything, larger numbers of disks may help reduce latency on a busy array (as you have more disks providing the I/O and possibly a lower chance of clashing with I/O from another random workload).

If you have 4 KB and 8 KB OS block sizes, and a general mixed workload, then really the array size is going to be dictated more by how many disks you have, how many spares you want to leave, and what divides as evenly as possible into a number like 4 or 8, etc.

We don't currently support SVC and V7000 being clustered together into the same cluster. A V7000 can be a remote copy target cluster for an SVC (and vice versa), and a V7000 can be storage behind an SVC. They cannot be peer nodes in the same cluster.

With SVC, you want high-performing, highly reliable storage at a reasonable cost. Therefore the V7000 is an IDEAL storage system to put behind SVC. The additional caching in the two layers is like an L1/L2 CPU cache, i.e. you get the most common data hits in the top level (SVC), allowing the V7000 to potentially cache additional data that wouldn't otherwise have stayed in cache.
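On the "what divides evenly" point above, a toy helper along these lines can be handy (purely illustrative; the candidate widths and drive counts are hypothetical):

def even_splits(total_drives, spares, widths=(9, 8, 7, 6, 5, 4)):
    usable = total_drives - spares
    return [(w, usable // w) for w in widths if usable % w == 0]

print(even_splits(24, 0))   # [(8, 3), (6, 4), (4, 6)]
print(even_splits(24, 1))   # []  - 23 drives won't split evenly by these widths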

14 VahricMuhtaryan commented Permalink

Hi,

From your words I understand that the penalty always happens, regardless of 3+1 or 7+1, and yes, the big issue is that we do not have a sequential load. So should we reduce the load by creating bigger arrays like 15+1 or 31+1? I don't know what the limit is on the V7000.

Then, instead of worrying about RAID-5 and the number of disks per array, would using RAID-10 with the V7000 be the better way? I got feedback from another vendor that if you use RAID-10 instead of RAID-5, then maybe you can use fewer disks than with RAID-5.

Regards
VM

15 MatthieuBonnard commented Permalink

Thank you Barry,
We are in the case of multiple V7000s behind an SVC cluster.
I just looked in the SVC Performance Guidelines and saw that you recommend configuring an extent size of 1 GB on the V7000 pools presented to the SVC.
Why this configuration? Is it really a best practice?
Matthieu.