
Comments (10)

1 StorageBuddhist commented Permalink

The training material I have seen says there's PAM (DRAM) and PAM II (SLC flash). Also the Netapp boxes that take PAM II are all Opteron based, not Intel.

2 orbist commented Permalink

Thanks Jim, should have done some more research - I see now that PAM II is flash based. Makes more sense now.

3 DiegoKesselman commented Permalink

Barry, what about doing some kind of selective/automatic tiering like IBM i V6R1M1 with SSD in SVC? And what about using something like RAMSAN, a bunch of DDR cards acting like disk?

4 jsargent commented Permalink

Barry - In addition to JimKelly's comments, I believe the PAM II cards are configurable to the point of only enabling them for certain volumes. So if some workloads don't benefit from PAM, don't enable it for those volumes... only enable where it makes sense (VDI comes to mind right away). I've seen it have a very significant positive performance impact in certain cases. Cool fact: you can run a simple command on your NetApp system that will provide you with detailed information on how much PAM would help for your existing workload BEFORE you buy.

Very excited to hear more about sub-LUN-level dynamic tiering in the SVC and DS8000... I've got several customers who will be excited as well.

5 orbist commented Permalink

Diego - when we support the sub-lun tiering, the internal SSDs in the nodes themselves will provide a perfect tier0, capable of many hundreds of thousands of I/Os - however if you do have external virtualized SSD controllers you can use them as well.

Josh, thanks for the further info.

6 sfuhrman commented Permalink

We are anxiously awaiting the sub-lun tiering. Not just from IBM but all the other vendors that will have it "real soon".

I've thought about how it could work with SVC... My first thought was something like this, let me know how far off I am :)

Each mdisk group could have a scale value, say 0 to 100, for its performance. This could either be manually entered or the SVC could figure it out based on real performance data.

Each vdisk could have a flag for whether it is "tierable" (on or off). If it is on, the SVC could manage where to place its extents; if it is off it would behave as it always has.

But this would imply a vdisk could span multiple mdisk groups, which it cannot (nor would one typically want to, for failure domain reasons). Though in essence, you HAVE to be comfortable with your data living on two disparate groups of disks for sub-lun tiering to work. So now we either need a "group of groups", or the ability to allow a vdisk to span mdisk groups.
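A rough sketch of that scheme in Python - purely illustrative, with hypothetical names (MdiskGroup, Vdisk, perf_score, tierable, place_extents) standing in for the idea above rather than any real SVC objects or commands:

    # Illustrative only: these classes and fields are made-up names for the
    # commenter's proposal, not SVC objects or CLI commands.
    from dataclasses import dataclass, field

    @dataclass
    class MdiskGroup:
        name: str
        perf_score: int                    # 0 (slowest) .. 100 (fastest), manual or measured
        extents: list = field(default_factory=list)

    @dataclass
    class Vdisk:
        name: str
        tierable: bool                     # off = behave as vdisks do today
        extent_heat: dict = field(default_factory=dict)  # extent id -> I/O count

    def place_extents(vdisk, groups, hot_fraction=0.1):
        """Put the hottest fraction of a tierable vdisk's extents on the
        highest-scoring group and the rest on the lowest-scoring one."""
        if not vdisk.tierable:
            return
        ranked_groups = sorted(groups, key=lambda g: g.perf_score, reverse=True)
        fast, slow = ranked_groups[0], ranked_groups[-1]
        ranked = sorted(vdisk.extent_heat, key=vdisk.extent_heat.get, reverse=True)
        cutoff = max(1, int(len(ranked) * hot_fraction))
        fast.extents += [(vdisk.name, e) for e in ranked[:cutoff]]
        slow.extents += [(vdisk.name, e) for e in ranked[cutoff:]]

    # Example: the 10% hottest extents land on the SSD-backed group.
    ssd = MdiskGroup("ssd_grp", perf_score=95)
    sas = MdiskGroup("sas_grp", perf_score=40)
    db = Vdisk("db_vdisk", tierable=True,
               extent_heat={i: (1000 if i < 5 else 10) for i in range(50)})
    place_extents(db, [ssd, sas])
    print(len(ssd.extents), "hot extents placed on", ssd.name)   # -> 5

Note that the vdisk's extents end up spread across two groups, which is exactly the "group of groups" question the comment raises.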

7 StorageBuddhist commented Permalink

ONTAP/WAFL is unusual in that it does a disproportionately good job of writes (as compared to reads). So PAM II is one way to overcome the read performance limitations. As jsargent says, VDI is an ideal use case. Also, NetApp high-end systems are only incrementally faster than their midrange - as you (Barry) allude to, something in ONTAP clearly can't scale up, so when the workload gets heavy you have to grab any tricks you can find to try to squeeze a few more IOPS.

8 JohnMartin commented Permalink

Solid state devices are indeed relatively expensive beasts (at least in terms of $/GB) and will remain so for at least the next four or five years; in the meantime, effective ways of sharing their benefits amongst as many workloads as possible seems like a good solution.

I'm not sure where the inference that ONTAP can only use two processors comes from; the current high-end machines have 8 cores, the midrange 4, with only the low end having 2 cores. As for why not go to quad core, I strongly suspect it's a matter of bang for buck, and our engineers decided that dual cores had the best balance. Going forward, each ONTAP release improves NetApp's ability to get the most out of those CPU cores, and the FAS/N-Series technology is well positioned to take advantage of that trend.

As far as the "incrementally faster" comment goes, in general each model has about 1.4x to 2x the performance of the model underneath it, depending on the workload, which is "incrementally faster", but they are reasonably large increments.

If you're interested, I'm sure I can arrange an in-depth technology briefing for you so you don't have to dig through the manuals.

Regards
John Martin
Consulting Systems Engineer
NetApp.

9 TuckerJohnson commented Permalink

Customers are most interested in the platform's innovation. This is usually measured by execution over time. Think about the performance boost that ONTAP provides to applications with the Snap technologies. IBM's N series provides some of the best innovation in our product line. PAM is one example of supporting customer needs while protecting their investment in the technology.

10 ThoBu commented Permalink

Hi Barry,

To add some notes here: PAM is really useful for caching metadata and frequently used data (common in virtualized environments/desktops).
The inode-based metadata needs to be cached for fast responses; this is WAFL/ONTAP specific when you compare it to DS8k or SVC. The larger your dataset in one aggregate/volume, the more possible inode hierarchies you get, and therefore the more benefit you get from caching this upper hierarchy :)
We can discuss this in more detail if you like...
Regards,
Thorsten
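A toy model of that effect (purely illustrative numbers, not WAFL's actual on-disk layout): if the metadata forms a tree with a given fan-out, a cold lookup touches one block per level, so keeping the upper levels in PAM saves more disk reads the deeper the hierarchy grows.

    # Toy model only - the fan-out and block counts are assumptions for illustration,
    # not WAFL internals.
    import math

    def metadata_reads(num_blocks, fanout=256, upper_levels_cached=False):
        levels = max(1, math.ceil(math.log(num_blocks, fanout)))
        return 1 if upper_levels_cached else levels   # cached: only the leaf read hits disk

    for n in (10**6, 10**8, 10**10):
        print(f"{n:>12} blocks: {metadata_reads(n)} reads cold vs "
              f"{metadata_reads(n, upper_levels_cached=True)} with upper levels cached")

The larger the dataset, the more levels a cold lookup would have to walk, which matches the point above about bigger aggregates/volumes benefiting more from the cache.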