
Comments (72)

1 LeeMc commented Permalink

Hi Barry,

I might be taking something from your next post, but I was wondering whether there are best practices to bear in mind for how large a single storage pool can get before you might consider splitting it into multiple storage pools?

Are there any rules of thumb that we should take into account when designing solutions - for instance, would 100 SAS disks in a single pool (as RAID 5 MDisks) be a good idea or not?

Thanks,
Lee

2 talor commented Permalink

Hi Barry,

There are a number of instances where we perform upgrades (typically adding more disk drives) to existing V7000s, or add more mdisks to an SVC storage pool, and use the SVCTools perl script (balance.pl) to balance vdisk extents across all the mdisks in the storage pool where possible. This is done after adding additional arrays (same drive type / number of drives per array), with the goal being to spread the extents as evenly as possible across all of the mdisks in the pool.

Is balance.pl still the best method to achieve this, or is there a better way? Are there any plans to have this re-striping function built into SVC/V7000 in the future?

Thanks,

Talor.

3 jgrace2 commented Permalink

Hi Barry,
Any chance that SVC can be modified to remove the "SVC and its storage on the core switch" restriction? The core is very expensive real estate.
Thanks,
Joe

4 orbist commented Permalink

Lee,

Part 3 will cover this in detail, but it's really a performance vs. risk decision. The more "disks" available in a pool, the better the random IOPS performance; but the more disks, the more chance of a failure...
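
As a rough back-of-the-envelope sketch of that trade-off (the per-drive IOPS and annual failure rate below are assumptions for illustration, not measured figures):

    # Back-of-the-envelope sketch: IOPS scaling vs. failure exposure as a pool grows.
    # The per-drive IOPS and annual failure rate (AFR) are illustrative assumptions.

    DRIVE_IOPS = 150        # assumed random IOPS per SAS drive
    DRIVE_AFR = 0.02        # assumed 2% annual failure rate per drive

    for drives in (24, 48, 100, 200):
        pool_iops = drives * DRIVE_IOPS                 # aggregate random read IOPS (ignores RAID write penalty)
        p_any_failure = 1 - (1 - DRIVE_AFR) ** drives   # chance at least one drive in the pool fails in a year
        print(f"{drives:4d} drives: ~{pool_iops:6d} IOPS, "
              f"P(>=1 failure/year) ~ {p_any_failure:.0%}")

Random performance grows roughly linearly with spindle count, while the chance of at least one failure in the pool climbs with every drive added - which is the risk side of the decision.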

5 orbist commented Permalink

Talor,

Thanks, and good point. At this time the rebalance script is the best way to redistribute the extents within a pool; the other option is to start a new pool, but there are reasons you may not want to do that.

It is something we'd like to integrate into the product at some point!
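
For anyone wondering what the rebalance is aiming at, here is a minimal sketch of the even-spread calculation; this is illustrative Python, not the actual balance.pl script, and the extent counts are invented:

    # Minimal sketch of the goal of an extent rebalance: after new mdisks are added,
    # work out how many extents each mdisk should hold so the pool is level again.
    # balance.pl does the real work by issuing extent migrations against the cluster.

    extents_per_mdisk = {
        "mdisk0": 4000, "mdisk1": 4000, "mdisk2": 4000,   # original arrays, full of extents
        "mdisk3": 0,    "mdisk4": 0,                      # newly added arrays, still empty
    }

    total = sum(extents_per_mdisk.values())
    target = total // len(extents_per_mdisk)              # even share per mdisk

    for mdisk, count in extents_per_mdisk.items():
        delta = count - target
        action = "give up" if delta > 0 else "receive"
        print(f"{mdisk}: {action} {abs(delta)} extents to reach ~{target}")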

6 orbist commented Permalink

Jgrace,

You can put the storage on another switch, but the real issue is congestion over ISL ports. You need to make sure you have enough trunked ISLs to cater for maximum bandwidth, and that nothing else is sharing the ISLs.

To simplify things we recommend the SVC and storage are on the core, but some customers do have the storage on an edge switch (with SVC still in the core).
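
As a hedged illustration of the sizing point (the port counts and link speeds below are assumptions, not a recommendation):

    # Rough sketch of ISL sizing when SVC and its back-end storage sit on different switches.
    # Port counts and link speeds are assumed for illustration only.

    SVC_PORTS = 8            # e.g. 2 nodes x 4 ports
    PORT_GBPS = 8            # assumed 8 Gbit/s FC ports
    ISL_GBPS = 8             # assumed 8 Gbit/s per ISL

    peak_gbps = SVC_PORTS * PORT_GBPS          # worst case: every SVC port driving traffic across the ISLs
    isls_needed = -(-peak_gbps // ISL_GBPS)    # ceiling division: trunked ISLs with no oversubscription

    print(f"Peak SVC-to-storage demand ~{peak_gbps} Gbit/s -> "
          f"at least {isls_needed} trunked ISLs, dedicated to this traffic")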

7 Piw commented Permalink

Hello Barry,
Are there plans to incorporate host group management into Storwize? Managing ESX farms with lots of hosts, for example, can be a pain when all LUN mapping has to be done host by host.
Also, are there plans for Storwize models with more CPU cores and cache memory? With compression, lots of volumes and some FlashCopy running, a single CPU with 4 cores gets a little taxed.

8 orbist commented Permalink

Piw,

I don't know of any changes to the host object mapping functions in this area.

As for future boxes, while I can't comment on a public forum about what may be coming, your point is noted.

9 al_from_indiana commented Permalink

Barry,

If Easy Tier has been enabled on an SVC node and the hot extents are currently residing on the internal SSDs, is there any way we can force the Easy Tier extents for a particular host back to non-SSD disks?

-Al

10 tkim1 commented Permalink

Hi Barry,

Are there any plans to decrease the number of hours (from 24 hours to something less) for the relocation of data extents with Easy Tier?

11 Piw commented Permalink

Hey,
One more question: are there planned changes to the performance monitoring part? Especially drill-down to volume level and more than the last 5 minutes of data? I hope the answer won't be "buy TPC" :)

12 Sharbu commented Permalink

Piw - to make management of ESX farms easier we normally just create one host object for the cluster (e.g. VMCLUSTER01) and put ALL the HBAs for all of the nodes in there. That makes LUN mappings nice and easy, and removes the problems around making sure the SCSI IDs all tie up, etc.
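
A minimal sketch of that approach is below; the WWPNs, names and SCSI IDs are placeholders, and the svctask syntax is quoted from memory, so check it against your code level before running anything:

    # Illustrative sketch: build one SVC/Storwize host object for an ESX cluster and map
    # each datastore volume to it once, instead of once per ESX node. WWPNs, names and
    # SCSI IDs are placeholders, and the svctask syntax is from memory - verify it first.

    cluster_wwpns = [
        "210000E08B000001",   # esx01 hba0 (placeholder)
        "210000E08B000002",   # esx01 hba1 (placeholder)
        "210000E08B000003",   # esx02 hba0 (placeholder)
        "210000E08B000004",   # esx02 hba1 (placeholder)
    ]
    datastore_volumes = ["VMFS_DS01", "VMFS_DS02"]

    print(f"svctask mkhost -name VMCLUSTER01 -hbawwpn {':'.join(cluster_wwpns)}")
    for scsi_id, vdisk in enumerate(datastore_volumes):
        # mapping against the one cluster host keeps the SCSI ID consistent for every ESX node
        print(f"svctask mkvdiskhostmap -host VMCLUSTER01 -scsi {scsi_id} {vdisk}")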

13 orbist commented Permalink

Al,

For Easy Tier volumes, you'd need to turn off Easy Tier on that volume; then the only way to migrate the extents back to HDD only would be to migrate the volume to another storage pool, and back to this pool if you wanted it back in the original place. Repeat for each volume on that host.
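
Something like the following sketches that per-volume sequence; the pool and volume names are placeholders and the CLI flags are quoted from memory, so treat it as an outline rather than a procedure:

    # Sketch of the sequence described above: disable Easy Tier on the volume, then bounce
    # it through another storage pool and back so its extents land on HDD again.
    # Names are placeholders and the CLI flags are from memory - verify before use.

    host_volumes = ["AL_VOL_01", "AL_VOL_02"]
    home_pool, temp_pool = "POOL_SSD_HDD", "POOL_HDD_ONLY"

    for vdisk in host_volumes:
        print(f"svctask chvdisk -easytier off {vdisk}")                       # stop Easy Tier promoting extents
        print(f"svctask migratevdisk -vdisk {vdisk} -mdiskgrp {temp_pool}")   # push all extents off the SSDs
        # wait for the first migration to complete before moving the volume back
        print(f"svctask migratevdisk -vdisk {vdisk} -mdiskgrp {home_pool}")   # optional: return to the original pool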

14 orbist commented Permalink

tkim,

The purpose of Easy Tier is to look for long-term hotness, i.e. over a number of days, weeks or months this extent is generally accessed more than others. Therefore the 24-hour (initial learning) period is fixed. The system continually monitors and makes daily recommendations to the migration engine. We specifically don't want to react so quickly as to overload the SSDs with MB/s of workload, and at the same time we don't want to wear out the SSDs by thrashing data on and off them.

There are other interesting things down the line with more reactive tiering proposals, but nothing I can discuss on here.
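
A tiny sketch of why the long-term view matters (the access counts and threshold are invented for illustration, not how the real placement algorithm decides):

    # Illustrative only: an extent with one burst looks hot for a day, but averaged over a
    # week it is not worth promoting - reacting daily would thrash it on and off the SSDs.

    daily_io_counts = {
        "extent_A": [9000, 8800, 9100, 8900, 9200, 9000, 8950],   # consistently busy
        "extent_B": [50, 40, 9000, 45, 55, 48, 52],               # one-off spike
    }
    PROMOTE_THRESHOLD = 5000   # invented threshold for the example

    for extent, history in daily_io_counts.items():
        long_term = sum(history) / len(history)
        decision = "promote to SSD" if long_term > PROMOTE_THRESHOLD else "leave on HDD"
        print(f"{extent}: 7-day average {long_term:7.0f} IOs/day -> {decision}")

Judged on its spike day alone, extent_B would have been promoted and then demoted the next day - exactly the churn the longer evaluation window avoids.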

15 orbist commented Permalink

Piw,

The in-built performance monitoring in SVC/Storwize is being continually enhanced. We would like to add extra capabilities to enable better performance monitoring for critical volumes, and a better ability to drill down when a specific host/application reports a problem. There is a roadmap of enhancements that we are working on in this area.

For Storwize we can't just give the "buy TPC" answer; for SVC, with GM (Global Mirror) solutions etc., TPC is a must. Of course TPC will always display far more historical data than we can afford to maintain on a real-time basis.