
Comments (72)

46 3CJR_Óscar_Luis_Rojas_Fernánde commented Permalink

Hi Barry,

We have an SVC+VMware solution that has been a blessing. It takes our old DS4000s and relieves us of their constraints with VMware, including the lack of a true active-active mode and of VAAI support. It also gives us a performance boost that was not really needed, but is always welcome.

We want to activate compression, but we need 6.4 for that, and since we have Site Recovery Manager implemented, that holds us back.

Is there an RPQ, a release date, or maybe a workaround to get VMware with SRM working with SVC 6.4?

Thanks

47 tkim1 commented Permalink

Hi Barry,

A question about VAAI with SVC/V7000: is priority given to server I/Os vs. VAAI I/Os?

48 jstroh commented Permalink

Hi Barry, I have a customer with a V7000 configuration that includes several 300 GB HDDs, two 450 GB SSDs (RAID 10) and one 600 GB HDD (to be used as a hot spare should one of the SSDs fail). Questions: 1) Is this a recommended configuration? 2) How does Easy Tier behave if an SSD is mirrored to an HDD? 3) Would it be better to simply not include the 600 GB HDD in the configuration and not take the performance hit should an SSD fail, knowing the failed SSD needs to be replaced ASAP due to the SPOF? Any insight into this configuration and best practice would be welcome.

49 sas234 commented Permalink

Hi Barry,
For striped volumes on V7000 with AIX 6.1, which is better for performance: 1 LUN (vdisk) with queue_depth=200, or 4 LUNs (vdisks) with queue_depth=50?
What is the minimum number of LUNs (vdisks) needed to get the best performance?
Which is more important for I/O: response time or throughput?
Thank you
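For reference, queue_depth on AIX is a per-hdisk attribute; a minimal sketch of checking and changing it (hdisk4 is a placeholder device name, and -P defers the change until the disk is next varied on or the system reboots):

  # Show the current queue depth on the hdisk backing a V7000 vdisk
  lsattr -El hdisk4 -a queue_depth
  # Change it; -P applies the new value at the next reboot/varyon if the disk is in use
  chdev -l hdisk4 -a queue_depth=50 -P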

50 Dr. Axel Koester commented Permalink

@ Óscar

Did you consider running an SVC stretched cluster with VMware, instead of MetroMirror (which needs to be reversed and activated by Site Recovery Manager, SRM)? VM recovery in a stretched cluster is rather straightforward since storage looks identical in both datacenters, i.e. there is no passive datastore instance. SVC takes care of avoiding split-brain scenarios by preventing datastore access in the "unsafe" or "minority" datacenter. This is done in hardware and requires an independent quorum site for full automation.

Thus there is no dependency between SRM and SVC in a stretched cluster: you can skip the step "reverse the MetroMirror relationship" in SRM. You may still use SRM to define the correct startup sequence of VMs, and maybe check other conditions. But you can use any SVC code level, since there is no "mirror reversal" interaction.

Note that the SVC stretched cluster is more of an HA approach, even though it has many DR elements.

More here: http://pic.dhe.ibm.com/infocenter/svc/ic/index.jsp?topic=%2Fcom.ibm.storage.svc.console.doc%2Fsvc_hasplitclusters_4ru96h.html
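As a rough illustration only (assuming the standard SVC CLI; the mdisk id and quorum index below are placeholders), quorum placement for the independent third site can be checked and adjusted like this:

  # List the quorum disk candidates and see which one is currently active
  svcinfo lsquorum
  # Assign quorum index 2 to an mdisk provisioned at the independent quorum site
  svctask chquorum -mdisk 5 2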

51 orbist commented Permalink

Oscar,

Sorry for the long delay. SRM and 6.4 is supported, as per this technote:
http://www-01.ibm.com/support/docview.wss?uid=ssg1S1004003

52 orbist commented Permalink

tkim,

VAAI vs. host I/O: the only primitive that generates work is Write Same, which internally we implement with less overhead than a normal I/O (no actual data is passed).

Recent drives on V7000 also support Write Same, so we don't even pass the data down to the disk for a Write Same.

XCOPY is just like issuing a migrate or VDM copy command, and it will run at the speed the disks can cope with. If there is host I/O ongoing at the same time, it will be serviced "in order" alongside the generation of the XCOPY I/Os.
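If you want to confirm what the host side sees, a quick check on an ESXi 5.x host (the naa identifier below is a placeholder) is:

  # Show which VAAI primitives (ATS, Clone/XCOPY, Zero/Write Same, Delete) the device reports
  esxcli storage core device vaai status get -d naa.60050768XXXXXXXXXXXXXXXXXXXXXXXX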

53 orbist commented Permalink

jstroh:

1 and 2) Having an HDD spare is obviously a cheaper option than having an SSD spare. However, since it will be mirrored to, it will slow down the Easy Tier I/O if it's taken as a spare. That effectively means it's like turning Easy Tier off again: if the applications (customer/user) have become used to the speed difference, this will cause a noticeable impact.

I'd probably (if it were me) go with option 3, i.e. don't have the HDD as a spare, just take the hit should an SSD fail, and get it fixed/replaced ASAP (while ensuring a good backup strategy is in place). That way you won't have as much of an impact on I/O performance in the event of a failure.

54 orbist commented Permalink

sas234,

Re: 1 vdisk with QD=200 vs. 4 vdisks with QD=50.

If you only have these 1 or 4 vdisks, then the 4-vdisk option is better, as it will make better use of the internal CPU hardware in SVC/V7000. If you need the best performance for these LUNs over everything else (assuming there are more vdisks in the system), I'd still create 4 and glue them together at the OS - that is, if you need more than, say, 50K IOPS in total. A single core is capable of roughly 50K IOPS in V7000 and 100K in SVC.

As for response time vs. IOPS, it's always a trade-off. If you mean which one matters more, that depends on what you want from your system. Some applications live and die by latency, so in those cases optimising for response time is key; other times sheer throughput is all that matters. The two attributes are intrinsically linked, though: better response time generally means more IOPS.

In either case my 1 vs. 4 statements apply.
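As a minimal AIX LVM sketch of "gluing" the 4 vdisks together at the OS (hdisk names, LP counts and the stripe size are placeholders, not a recommendation):

  # Put the four hdisks (one per V7000 vdisk) into one volume group
  mkvg -y datavg hdisk4 hdisk5 hdisk6 hdisk7
  # Spread a logical volume across all four disks (maximum inter-disk allocation)
  mklv -y datalv -e x datavg 256
  # Or create a striped logical volume with a 64 KB stripe size across the four disks
  mklv -y datalv_striped -S 64K datavg 256 hdisk4 hdisk5 hdisk6 hdisk7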

55 sas234 commented Permalink

Hi Barry,

I have been investigating V7000 Easy Tier:
AIX 6.1 TL7, SAP on Oracle DB 11.2 with Advanced Compression on (ca. 1 TB of data), PP size 64 MB.
V7000, 1 pool, 6 x 6 x 300 GB 15K SAS HDD RAID 5, extent size 16 MB, Easy Tier on (evaluation mode), 1 vdisk for log and 1 vdisk for data, queue_depth=20.
So far the average transaction response time is 20-30% lower (before, we were using a DS4700 with 4 x 8 x 147 GB 15K FC RAID 10).
It has been more than a month, and according to the STAT tool there are no hot extents.
Is there any way to "push" extents to become hot? Otherwise my SSD disks will be useless... :(
Thank you.
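For what it's worth, the Easy Tier mode and status can be checked from the CLI (the pool and volume names below are placeholders):

  # The detailed pool view shows the easy_tier and easy_tier_status fields
  svcinfo lsmdiskgrp SAP_POOL
  # The detailed volume view shows the per-volume easy_tier attribute
  svcinfo lsvdisk SAP_DATA01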

56 kgs commented Permalink

Hi Barry,
We currently have 4 standalone V7000s (single I/O groups), and we've just purchased 2 more that we are considering clustering to form a 2-I/O-group cluster. We've successfully formed a test cluster and they seem to work well without too much extra to learn. Initial testing of migration between I/O groups was successful. What we've not been able to find is any documentation or guidance on best practices around this. We know it's well supported in the SVC world, but what is the current thinking about doing this with V7000s? Any opinions, guidance or pointers to docs? Performance or stability concerns?

57 tabin commented Permalink

Hi Barry,

Is there any possibility to reset Easy Tier's statistics so that STAT shows only the results for the last period in which Easy Tier was active? One customer is testing it, starting and stopping Easy Tier many times, but he wants to see statistics only for the currently running period and not for all previous periods.
He also noticed the dpa_heat file is generated not every 24 hours but every seemingly random number of hours :) Sometimes it's 20 hours, sometimes 16 and sometimes over 23. Why is that? This is on SVC v6.4.
And one more thing: does it make sense to implement Easy Tier on volume-copy target volumes? For example, if I make a full volume copy (for test systems) every day, Easy Tier could (but not necessarily would) move the whole volume to SSD, even if the volume includes a lot of blocks that won't be used in the test systems. But Easy Tier could keep those blocks on SSD because they are all written to every day. I think this is especially relevant with incremental FlashCopy.

58 orbist commented Permalink

sas234,

Sounds like your database is doing a good job of spreading the work across the capacity!

If you put SSD in, even though STAT is showing there are no real hot spots, Easy Tier will still move the extents that are the hottest (even if that only means 1 more I/O than all the rest), which may result in improvements - but it sounds like you probably don't need it. Unless you, say, "pin" a vdisk to a single mdisk or similar (i.e. mkvdisk -mdisk X, limiting the striping to one or more specific mdisks within the pool).
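A minimal sketch of that "pinning" (the pool, mdisk and volume names are placeholders):

  # Create a volume whose extents are taken only from one mdisk within the pool
  svctask mkvdisk -mdiskgrp POOL1 -iogrp 0 -mdisk mdisk3 -size 200 -unit gb -name pinned_vol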

59 orbist commented Permalink

kgs,

Multi-I/O-group (clustered) V7000 has a couple of key considerations.

1. Because the SAS disks are physically attached to only one I/O group, you can stripe volumes over disks from multiple I/O groups, but since a volume is owned by a single I/O group, this will result in I/O requests being forwarded between nodes in the cluster.

On the surface this sounds bad, but (assuming you don't have an all-SSD config) V7000 is disk-limited in its IOPS, i.e. it can sustain far more IOPS than the maximum of 240 drives can provide, and testing showed that "lazy" provisioning makes no difference to IOPS - that is, one giant pool with arrays from all I/O groups, and volumes striped over this pool.

Basically, 2 I/O groups gives you 2x the disk performance, i.e. 480 drives' worth.
Going beyond 2 I/O groups may show some limits as we become FC port IOP limited.

2. Bandwidth. The opposite is true for MB/s: because I/O is being forwarded, we are using up FC port bandwidth, so scaling beyond a single I/O group with lazy provisioning means you see no increase in MB/s over a single I/O group.

In this case, rigid provisioning is better. That is, create storage pools from arrays in a single I/O group, and create the volumes in that same I/O group. This does give you a somewhat silo'd approach, and at that point the main benefit over 2 separate V7000 systems is single management, plus the ability to migrate between I/O groups and pools, snapshot to different pools, etc.

Hope this helps
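A minimal sketch of the "rigid" option (pool, mdisk and volume names, extent size and capacities are placeholders): build each pool only from arrays owned by one I/O group, and create its volumes in that same I/O group.

  # Pool made up only of arrays that belong to io_grp0
  svctask mkmdiskgrp -name Pool_iog0 -ext 256 -mdisk mdisk0:mdisk1:mdisk2
  # Volume from that pool, owned by the same I/O group
  svctask mkvdisk -mdiskgrp Pool_iog0 -iogrp io_grp0 -size 500 -unit gb -name vol_iog0_01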

60 orbist commented Permalink

tabin,

The only way I know of to reset heat information is to vdisk-mirror a volume, split off the new copy as the active one, and free up the old extents. This resets the volume back to no history.

No idea why the dpa_heat file is generated at seemingly random intervals - I will find out!

When you volume copy, whether with VDM, MM, etc., if the copy remains, i.e. you are just re-syncing it every day (to the same volume id), then it will keep the history from its previous use. The copy operation itself, however, won't make Easy Tier think those extents are hot: since Easy Tier ignores large and sequential I/O patterns, the copy operations themselves are ignored.
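A minimal sketch of that reset, assuming the standard CLI and placeholder names (add a second copy, wait for it to synchronise, then remove the original copy so the volume keeps its id but ends up on freshly allocated extents with no heat history):

  # Add a second, mirrored copy of the volume in the same (or another) pool
  svctask addvdiskcopy -mdiskgrp POOL1 testvol
  # Once lsvdiskcopy shows the new copy is in sync, drop the original copy (copy 0)
  svctask rmvdiskcopy -copy 0 testvol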