
Comments (22)

1 dcskinner commented Permalink

"but maybe soon we can do something about dual parity raid rebuild times... " <div>&nbsp;</div> Is this where Dynamic Disk Pooling (DDP) fits into the equation? I was a bit surprised when our rep mentioned it as I hadn't heard much about it when it was announced. I'm interested to hear what others think about it.

2 orbist commented Permalink

The DS3500 function is subtly different to what we are proposing with Storwize V7000 - real distributed RAID...

3 alter-ego commented Permalink

Hi Barry, very nice article.

I have a very simple question, not sure if there is a simple answer.

Let's say I have storage from a competing vendor, and I want to virtualize it using image mode (in principle not using striping or other enhancements). What is the expected degradation (access time, read/write IOPS), if any, compared to the native performance?

Thanks

4 orbist commented Permalink

Alter-ego,

In general there shouldn't be much, if any, impact, depending on the workload.

All writes now go into SVC/V7000 cache, so the response time should be the same (or better, if the virtualized controller was struggling).

For reads, cache hits are obviously the same, as they are now coming from SVC/V7000 cache. For read misses there may be minimal additional latency (around 50us) to get through the software stack. The additional hop on the fabric is almost unnoticeable. But in general you have added more cache, so you may see better performance all round, with the dual-layer caching giving you more chance of read cache hits - basically extending the cache capacity across the two systems.

So in summary, you should expect the same performance, or better in most cases, and minimal additional latency on a complete read miss - which is going to be at disk latency anyway (say 5-10ms + 50us).
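A rough back-of-the-envelope sketch of the arithmetic above: the 50us stack overhead and 5-10ms disk latency are the figures quoted in the reply, while the cache-hit latency and hit rates are assumed values for illustration only.

    # Rough model of average read latency behind SVC/V7000.
    # Figures: 50us stack overhead on a miss, ~7ms back-end disk latency
    # (both from the comment above); hit rates and cache-hit latency are
    # made-up example values.

    def avg_read_latency_ms(hit_rate, disk_ms=7.0, stack_overhead_ms=0.05,
                            cache_hit_ms=0.2):
        """Average read latency when a fraction of reads hit the front-end cache."""
        miss_ms = disk_ms + stack_overhead_ms
        return hit_rate * cache_hit_ms + (1.0 - hit_rate) * miss_ms

    for hit_rate in (0.0, 0.3, 0.6):
        print(f"hit rate {hit_rate:.0%}: {avg_read_latency_ms(hit_rate):.2f} ms")

Even with a 0% hit rate the added 50us is lost in the noise against the disk latency, and any extra cache hits pull the average down.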

5 alter-ego commented Permalink

Thanks a lot Barry

6 Anand Gopinath commented Permalink

Hi Barry,

I need to build V7000s through the CLI. Request your help in finding answers to the following queries:

1. What is the best practice for creating RAID 5 and RAID 10 mdisks through the CLI? Disks from the same SAS chain or disks split between chains? Disks from the same enclosure or different enclosures (like enclosure loss protection in the DS5000s)? Any performance presets while creating mdisks through the CLI?

2. How can we create RAID 10 arrays while ensuring "RAID chain balance"? Does this mean choosing half the number of disks from each SAS chain? How does the V7000 decide which disks form the mirror pair?

3. Are the hot spares configured in a SAS chain usable only in that chain? What will happen to mdisks spread across the chains?

4. In one of our existing V7000s, we have 7 enclosures: 4 in chain 1 and 3 in chain 2. We have 5 spares configured in chain 1 and 2 spares configured in chain 2. How will this affect the hot spare protection? Do we need to change this config?

7 orbist commented Permalink

Anand,

>> 1. What is the best practice for creating RAID 5 and RAID 10 mdisks through the CLI? Disks from the same SAS chain or disks split between chains?

In all the testing we've done, from a performance perspective it makes no difference which chain or enclosure you use to build the arrays.

For RAID-10 you obviously have an advantage by building one set of the array on one chain and the other set on the other chain. This means you could lose a whole chain and still have access.

>> Disks from the same enclosure or different enclosures (like enclosure loss protection in the DS5000s)?

There is no enclosure loss concept in SAS. The problem with the old DS4K/5K was the FC-AL loops, where a single disk could take down the whole loop (enclosure). The V7000 is built with no single point of failure in an enclosure, so both controller nodes have access to every drive via two SAS expanders. So you don't need to spread the disks down the enclosures.

>> Any performance presets while creating mdisks through the CLI?

The CLI has no presets.

>> 2. How can we create RAID 10 arrays while ensuring "RAID chain balance"? Does this mean choosing half the number of disks from each SAS chain? How does the V7000 decide which disks form the mirror pair?

Yes, chain balance means you can lose half the array (one chain) and carry on. The GUI preset will ensure it picks half the disks from each chain. To do this on the CLI, the order in which you specify the drive IDs tells the system which "pairs" to create in the array, i.e.:

mkarray -drives 0:1:2:3

would create a mirror between 0 and 1, 2 and 3, etc. - as sketched below.

>> 3. Are the hot spares configured in a SAS chain usable only in that chain? What will happen to mdisks spread across the chains?

We recommend one hot spare per chain, in case you have RAID-10s configured like above. Otherwise, if there is only a spare on the other chain it will be used, but local-chain spares have preference.

>> 4. In one of our existing V7000s, we have 7 enclosures: 4 in chain 1 and 3 in chain 2. We have 5 spares configured in chain 1 and 2 spares in chain 2. How will this affect the hot spare protection? Do we need to change this config?

The GUI tends to be over-zealous with its sparing. But as said above, we will use any spare that fits in the local chain first, then the other chain. So you should be fine.
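A small sketch of the pairing rule just described, assuming a hypothetical layout where even-numbered drive IDs sit on SAS chain 1 and odd-numbered IDs on chain 2 (the chain assignments are illustrative, not a V7000 rule):

    # Illustrative sketch (not V7000 code) of the pairing rule above: drives
    # listed in order 0:1:2:3... are mirrored in consecutive pairs, so to get
    # chain balance you interleave drives from the two SAS chains.

    chain1_drives = [0, 2, 4, 6]   # hypothetical drive IDs on SAS chain 1
    chain2_drives = [1, 3, 5, 7]   # hypothetical drive IDs on SAS chain 2

    # Interleave so each mirror pair gets one drive from each chain.
    drive_order = [d for pair in zip(chain1_drives, chain2_drives) for d in pair]

    mirror_pairs = list(zip(drive_order[0::2], drive_order[1::2]))

    print("drive order:", ":".join(str(d) for d in drive_order))
    print("mirror pairs:", mirror_pairs)
    # -> drive order: 0:1:2:3:4:5:6:7
    # -> mirror pairs: [(0, 1), (2, 3), (4, 5), (6, 7)]

Feeding the interleaved order to mkarray, as in the example above, gives every mirror pair one drive from each chain - exactly the chain balance the GUI preset aims for.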

8 JohnNuttle commented Permalink

Any plans for part 3 or maybe a movie deal? :-) Great blog.

9 SamoJohn commented Permalink

Hi Barry,

I have a question for you.

When putting a V7000 behind SVC, should we avoid a double-striping strategy (striping on the V7000 and striping on SVC), or is it OK to have double striping?

With double striping, 256 MB on the V7000 and 1 GB on SVC is fine, right?

Also, I wanted to know whether, with the latest version of SVC (6.4), we still need to create and present mdisks from the V7000 in multiples of the available SVC node ports (like 8 in the case of a 4-node SVC cluster).

Thanks in advance
John

10 SamoJohn commented Permalink

Barry,

In addition to this, I would like to ask another question. In your blogs, you mentioned that the stripe size of the V7000 can be left at 256 MB while a 1 GB extent size is recommended for SVC.

Does this recommendation of 1 GB apply only to those SVC storage pools which will be doing storage tiering with SSD/HDD drives, or to all SVC storage pools? I am asking because the recently published Redbook on SVC 6.4 best practices (available on the IBM Redbooks website) recommends using a 256 MB extent size at both the SVC and V7000 layers.

Appreciate it if you can answer both of my questions.

Many thanks

John

11 orbist commented Permalink

Hi all - sorry for the delays - been busy with V3700 and more...

Part 3 is in the works and should be up soon...

As for the striping questions: I have asked for the Redbook to be amended; the worst thing to do is use the same stripe size at both levels. 99% of the time it won't be an issue, but with stripe over stripe you can actually end up with a vdisk's extents on SVC all coming from the same mdisk on the V7000 if all the starts (stripes) align! (See the sketch below.)

Normal SVC rules apply for zoning storage to SVC. Each node in the SVC cluster must see every mdisk. You can subset the zoning, so you could zone 2 ports on the V7000 to just 2 ports on one node, and so on - I'd not recommend any less than 2 ports/paths zoned up to each SVC node.

Finally, on the SSD Easy Tier pools: if you are running Easy Tier at the SVC level, you may want to consider what you do here - possibly reversing the striping, i.e. 256 MB at SVC and 1 GB at the V7000 for these pools. However, we have found that 1 GB Easy Tier works well for most transactional workloads - and in fact that is the default on DS8000 storage pools with Easy Tier.
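A deliberately simplified toy model of the alignment problem: it assumes both layers stripe round-robin with the same extent size, every V7000 volume starts on array 0, and the vdisk's extents sit at the same offset on every mdisk. The counts and the allocation rule are illustrative assumptions, not the actual SVC/V7000 allocation code.

    # Toy model of stripe-over-stripe alignment with identical extent sizes.
    N_MDISKS = 4          # V7000 volumes presented to SVC as mdisks (assumed)
    N_ARRAYS = 4          # RAID arrays behind each V7000 volume (assumed)

    def backend_array(svc_extent):
        """Map an SVC vdisk extent to the V7000 array it ultimately lives on."""
        offset_on_mdisk = svc_extent // N_MDISKS     # row within the SVC stripe
        return offset_on_mdisk % N_ARRAYS            # aligned starts at V7000 level

    for extent in range(8):
        print(f"SVC extent {extent} -> V7000 array {backend_array(extent)}")

With aligned starts, each run of N_MDISKS consecutive extents lands on a single back-end array, so the double striping stops spreading the I/O; using different extent sizes at the two layers breaks this alignment.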

12 Czanella commented Permalink

Barry, many thanks for sharing this information.
In your "Performance Comparison" chart, you said a SAS 15K drive can do 300-500 IOPS, but the Seagate/Hitachi disk spec sheets say about 150-180 IOPS. Are you considering the benefits of the V7000 infrastructure (cache, processors, ...) in this statement?

13 orbist commented Permalink

Czanella,

Not sure what drive specs you are looking at, but even 3.5" 15K drives have easily done 300 IOPS for many years. The 2.5" drives are faster due to smaller platters (less seek time), and even with QD=1 my tests show (on the Seagate and Hitachi drives we support) at least 300 IOPS for random small-block reads/writes.

14 OlegKorol commented Permalink

Based on my own experience with the IBM Storwize V7000, I can say that 300 IOPS from a SAS 10K RPM drive is real, and not just in the laboratory but also in real-world business applications. Depending on the load profile and performance tuning, even more is possible. I reach peak performance (production environment) on an IBM Storwize V7000 of over 100K IOPS without SSD, on SFF 10K RPM HDDs only - http://www.slideshare.net/it_expert/ibm-storwize-v7000-ultimate-performance-eng

15 orbist commented Permalink

Oleg,

Thanks for the feedback; it's great to know that our lab synthetic workloads are representative of real-life workloads. At > 100K IOPS, I would think you are getting a reasonable amount of cache hits. For 240x 10K drives I internally measure about 72K read and 22K write IOPS, with a mix around 55K IOPS.
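For context, some quick arithmetic on the figures quoted above; the RAID-10 write penalty used here is an assumed value for illustration, not a measured one.

    # Per-drive rates implied by the aggregate figures in the comment above:
    # 240 x 10K RPM drives, ~72K read IOPS and ~22K write IOPS.
    drives = 240
    read_iops_total = 72_000
    write_iops_total = 22_000
    raid10_write_penalty = 2   # assumed: each host write costs ~2 drive writes

    read_iops_per_drive = read_iops_total / drives
    write_iops_per_drive = write_iops_total * raid10_write_penalty / drives

    print(f"~{read_iops_per_drive:.0f} read IOPS per drive")             # ~300
    print(f"~{write_iops_per_drive:.0f} back-end write IOPS per drive")  # ~183

The ~300 read IOPS per drive lines up with the earlier discussion of what a single 10K/15K SAS drive can sustain.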