
Comments (23)

1 dcskinner commented:

"but maybe soon we can do something about dual parity raid rebuild times..."

Is this where Dynamic Disk Pooling (DDP) fits into the equation? I was a bit surprised when our rep mentioned it, as I hadn't heard much about it when it was announced. I'm interested to hear what others think about it.

2 orbist commented:

The DS3500 function is subtly different from what we are proposing with Storwize V7000 - real distributed RAID, ...

3 alter-ego commented:

Hi Barry, very nice article.

I have a very simple question; not sure if there's a simple answer.

Let's say I have storage from a competing vendor, and I want to virtualize it using image mode (in principle not using striping or other enhancements). What is the expected degradation (access time, read/write IOPS), if any, compared to the native performance?

Thanks

4 orbist commented:

Alter-ego,

In general there shouldn't be much, if any, impact, depending on the workload.

All writes now go into SVC/V7000 cache, so the response time should be the same (or better, if the virtualized controller was struggling).

For reads: read hits are obviously the same, as they now come from SVC/V7000 cache. For read misses there may be minimal additional latency (~50us) to get through the software stack. The additional hop on the fabric is almost unnoticeable. But in general you have added more cache, so you may see better performance all round, with the dual-layer caching giving you more chance of read cache hits - basically extending the cache capacity across the two systems.

So in summary, you should expect the same performance, or better in most cases, and minimal additional latency on a complete read miss - which is going to be at disk latency anyway (say 5-10ms + 50us).
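The arithmetic behind that answer is easy to check. A minimal sketch, using the figures from the reply above (~50us of added software-stack latency on a miss, 5-10ms for the disk itself); the cache-hit service time and hit rates are illustrative assumptions, not measured values:

```python
def expected_read_latency_us(hit_rate, cache_hit_us=100.0,
                             disk_us=7500.0, svc_overhead_us=50.0):
    """Weighted average read latency in microseconds.

    hit_rate        -- fraction of reads served from SVC/V7000 cache
    cache_hit_us    -- assumed cache-hit service time (hypothetical)
    disk_us         -- assumed back-end disk latency on a miss (~5-10ms)
    svc_overhead_us -- extra software-stack latency added on a miss
    """
    miss_rate = 1.0 - hit_rate
    return hit_rate * cache_hit_us + miss_rate * (disk_us + svc_overhead_us)

# All misses: the 50us overhead barely registers next to disk latency.
print(expected_read_latency_us(0.0))  # 7550.0
# A 50% hit rate from the extra cache layer halves the average.
print(expected_read_latency_us(0.5))  # 3825.0
```

The point the reply makes falls out directly: the fixed 50us overhead is under 1% of a disk-bound miss, while any extra cache hits the second layer provides pull the average down far more than the overhead pushes it up.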

5 alter-ego commented:

Thanks a lot Barry

6 Anand Gopinath commented:

Hi Barry,

I need to build V7000s through the CLI. I'd appreciate your help with the following questions:

1. What is the best practice for creating RAID 5 and RAID 10 mdisks through the CLI? Disks from the same SAS chain, or disks split between chains? Disks from the same enclosure or different enclosures (like enclosure loss protection in the DS5000s)? Are there any performance presets when creating mdisks through the CLI?

2. How can we create RAID 10 arrays while ensuring "RAID chain balance"? Does this mean choosing half the number of disks from each SAS chain? How does the V7000 decide which disks form the mirror pair?

3. Are the hot spares configured in a SAS chain usable only in that chain? What will happen to mdisks spread across the chains?

4. In one of our existing V7000s, we have 7 enclosures: 4 in chain 1 and 3 in chain 2. We have 5 spares configured in chain 1 and 2 spares configured in chain 2. How will this affect the hot spare protection? Do we need to change this config?

7 orbist commented:

Anand,

>> 1. What is the best practice for creating RAID 5 and RAID 10 mdisks through the CLI? Disks from the same SAS chain or disks split between chains?

In all the testing we've done, from a performance perspective it makes no difference which chain or enclosure you use to build the arrays.

For RAID-10 you obviously have an advantage in building one half of the array on one chain and the other half on the other chain. This means you could lose a whole chain and still have access.

>> Disks from the same enclosure or different enclosures (like enclosure loss protection in the DS5000s)?

There is no enclosure loss concept in SAS. The problem with the old DS4/5K was the FC-AL loops, where a single disk could take down the whole loop (enclosure). The V7000 is built with no single point of failure in an enclosure, so both controller nodes have access to every drive via two SAS expanders. So you don't need to spread the disks across the enclosures.

>> Any performance presets when creating mdisks through the CLI?

The CLI has no presets.

>> 2. How can we create RAID 10 arrays while ensuring "RAID chain balance"? Does this mean choosing half the number of disks from each SAS chain? How does the V7000 decide which disks form the mirror pair?

Yes, chain balance means you can lose half the array (one chain) and carry on. The GUI preset will ensure it picks half the disks from each chain. To do this on the CLI, the order in which you specify the drive IDs tells the system which "pairs" to create in the array, i.e.:

mkarray -drives 0:1:2:3

would create a mirror between 0 and 1, between 2 and 3, and so on.

>> 3. Are the hot spares configured in a SAS chain usable only in that chain? What will happen to mdisks spread across the chains?

We recommend one hot spare per chain, in case you have RAID-10s configured as above. Otherwise, if there is only a spare on the other chain, it will be used - but local chain spares have preference.

>> 4. In one of our existing V7000s, we have 7 enclosures: 4 in chain 1 and 3 in chain 2. We have 5 spares configured in chain 1 and 2 spares configured in chain 2. How will this affect the hot spare protection? Do we need to change this config?

The GUI tends to be over-zealous with its sparing. But as said above, we will use any spare that fits in the local chain first, then the other chain. So you should be fine.
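The pairing rule described in that reply (consecutive drive IDs in the `-drives` list form the mirror pairs) can be sketched as follows. This is a minimal illustration of the ordering rule only, not a real V7000 command; `mirror_pairs` is a hypothetical helper:

```python
def mirror_pairs(drives):
    """Return the (primary, mirror) pairs implied by CLI drive order,
    per the rule that mkarray -drives 0:1:2:3 mirrors 0<->1 and 2<->3."""
    ids = [int(d) for d in drives.split(":")]
    if len(ids) % 2:
        raise ValueError("RAID-10 needs an even number of drives")
    # Pair each even-position ID with the ID that follows it.
    return list(zip(ids[0::2], ids[1::2]))

print(mirror_pairs("0:1:2:3"))          # [(0, 1), (2, 3)]

# Chain balance: interleave one drive from each SAS chain per pair,
# e.g. (hypothetically) drives 0-3 on chain 1 and 4-7 on chain 2:
print(mirror_pairs("0:4:1:5:2:6:3:7"))  # [(0, 4), (1, 5), (2, 6), (3, 7)]
```

The second call shows why the ordering matters: interleaving the chains puts each half of every mirror on a different SAS chain, so losing a whole chain leaves one copy of every stripe intact.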

8 JohnNuttle commented:

Any plans for part 3 or maybe a movie deal? :-) Great blog.

9 SamoJohn commented:

Hi Barry,

I have a question for you.

When putting a V7000 behind SVC, should we avoid a double-striping strategy (striping on the V7000 and striping on SVC), or is it OK to have double striping?

With double striping, 256 MB on the V7000 and 1 GB on SVC is fine, right?

Also, I wanted to know whether, with the latest version of SVC (6.4), we still need to create and present mdisks from the V7000 in multiples of the available SVC node ports (like 8 in the case of a 4-node SVC cluster).

Thanks in advance
John

10 SamoJohn commented:

Barry,

In addition to this, I would like to ask another question. In your blogs, you mentioned that the stripe size of the V7000 can be left at 256 MB, while a 1 GB extent size is recommended for SVC.

Does this recommendation of 1 GB apply only to those SVC storage pools that are going to do storage tiering with SSD/HDD drives, or to all SVC storage pools? I am asking because the recently published Redbook on SVC 6.4 best practices (available on the IBM Redbooks website) recommends using a 256 MB extent size on both the SVC and V7000 layers.

I'd appreciate it if you could answer both of my questions.

Many thanks

John
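One reason extent size matters in this trade-off is that an SVC cluster manages a fixed maximum number of extents, so the extent size caps the total manageable capacity. A minimal sketch of that relationship, assuming a 4M (2^22) extent limit per cluster, which is the figure published for SVC systems of this era:

```python
MAX_EXTENTS = 2**22  # assumed per-cluster extent limit (4M extents)

def max_capacity_tib(extent_mib):
    """Maximum manageable capacity in TiB for a given extent size in MiB."""
    return MAX_EXTENTS * extent_mib / (1024 * 1024)

print(max_capacity_tib(256))   # 1024.0 TiB (1 PiB)
print(max_capacity_tib(1024))  # 4096.0 TiB (4 PiB)
```

Under this assumption, a 256 MB extent caps the cluster at 1 PiB while 1 GB allows 4 PiB, which is one reason the larger extent size tends to be suggested at the SVC layer; the Redbook's 256 MB recommendation trades that headroom for finer-grained tiering.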