Comments (20)

1 iampattoiam commented Permalink

I'd love to use SSD to boost I/O but how can you do it and still use dual VIOS for resilience and LPM. It seems more suited to replacing JBOD at this stage, and I haven't seen a locally attached storage for many years now.

2 nagger commented Permalink

If you mean a SAS-connected SSD in the Power machine or a remote I/O drawer then you are right, no LPM, but if you have a SAN-connected Flash System then it provides LUNs and should work fine with SSP and work well with LPM.
I don't see Flash Systems on the current VIOS support list at
http://www14.software.ibm.com/webapp/set2/sas/f/vios/documentation/datasheet.html
but we are about to get the new VIOS 2.2.3 release on 29th Nov 2013 - fingers crossed that Flash is supported at that point. Remember Flash System only arrived this year, so we have to give the VIOS test team some time to support it.

3 nagger commented Permalink

Checked on http://www-03.ibm.com/systems/support/storage/ssic/interoperability.wss
Flash Systems 720/810/820 are supported now with VIOS 2.2.2.2.

4 GU0H_Jorge_Gutierrez commented Permalink

So VIOS 2.2.3.1 is now out and, it being US Thanksgiving Day, I should thank someone for the early present. I did not even have to wait until tomorrow to start downloading it.

Nigel, my department is focusing on moving our monitoring to the IBM Tivoli Monitoring suite, and (at least on VIOS 2.2.2.3) I could not find an easy way to monitor pool usage from Tivoli Monitoring. Do you happen to know if that is possible and, if so, how it could be done?

Thanks!
Jorge.

5 nagger commented Permalink

Hi, thanks for the heads up on the VIOS download - IBM making code available early!!! That must have been a mistake and heads will roll :-) On the Tivoli question: no idea - life is too short to be an expert in everything. Perhaps you could try the ITMpert blog! (That was a joke - don't go looking for it.) Customer pressure in asking for features is what drives development priorities, so demand from your Tivoli support channel that you need Shared Storage Pool stats NOW!! Better yet, raise a PMR that SSP stats are not working or documented. Does ITM have a way for you to forcibly add your own stats? The Ganglia gstat command does this so well for Ganglia and adds graphs to the website 60 seconds later. I have not seen other tools match that feature and they all should! Perhaps you can dump stats into your syslog!! At least the numbers will be saved somewhere.
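For example, here is a minimal sketch of the syslog idea - the cluster name "mycluster" and the lssp flags are assumptions to check against your own VIOS level - run hourly from cron on one VIOS node:

#!/bin/ksh
# Write Shared Storage Pool usage into syslog so the numbers are at least recorded somewhere.
# Assumes a cluster called "mycluster" - adjust the name to your environment.
/usr/ios/cli/ioscli lssp -clustername mycluster 2>&1 | /usr/bin/logger -t SSPSTATS -p user.info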

6 fbux commented Permalink

I've tried to install "CommonAgentSubagent Shared Storage Pool VIOS sub-Agent 6.3.3.1" on my VIOS with SSP4, but it failed with "Failed to install SSP Sub-Agent: VIOS version: 2.2.3 not supported". I think SSP4 & VIOS 2.2.3.1 are a bit too new a release for SysDir.

7 eric_marchal commented Permalink

Thanks Nigel for this helpful topic. Here is my 2-cents (or should I say pence :-) ) contribution:
For the queue depth of the cvdisk, I just modified the ODM default (procedure follows), so any disk created after this will have the optimized queue depth.

You see the default with this odmget command:

[/root] # odmget -q 'attribute = queue_depth AND uniquetype = "disk/vscsi/cvdisk"' PdAt

PdAt:
        uniquetype = "disk/vscsi/cvdisk"
        attribute = "queue_depth"
        deflt = "8"
        values = "1-256,1"
        width = ""
        type = "R"
        generic = "UD"
        rep = "nr"
        nls_index = 12

You modify the default value with:

[/root] # odmget -q 'attribute = queue_depth AND uniquetype = "disk/vscsi/cvdisk"' PdAt |
          sed 's/deflt = "8"/deflt = "16"/' |
          odmchange -o PdAt -q 'attribute = queue_depth AND uniquetype = "disk/vscsi/cvdisk"'

[/root] # savebase

Check the ODM default is effective:

[/root] # odmget -q 'attribute = queue_depth AND uniquetype = "disk/vscsi/cvdisk"' PdAt

PdAt:
        uniquetype = "disk/vscsi/cvdisk"
        attribute = "queue_depth"
        deflt = "16"
        values = "1-256,1"
        width = ""
        type = "R"
        generic = "UD"
        rep = "nr"
        nls_index = 12

K.R.,

Eric.
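P.S. A quick sanity check on the client side (a sketch only - hdisk4 is an invented example name). A newly configured vSCSI disk should pick up the new default:

[/root] # cfgmgr
[/root] # lsattr -El hdisk4 -a queue_depth

Disks that already existed keep their old value and need changing individually; -P applies the change at the next reboot if the disk is busy:

[/root] # chdev -l hdisk4 -a queue_depth=16 -P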

8 Mike_Pete commented Permalink

I did some similar testing and found what seems to be a rather fatal flaw. Most testing seemed to show SSP vSCSI speed is similar to vSCSI with a whole LUN, depending on the number of processes I passed to ndisk. However, at one point I happened to create a snapshot of the SSP LU I was testing and found performance took a nose dive - it slowed by roughly an order of magnitude. After deleting the snapshot, performance rebounded.

9 nagger commented Permalink

Mike_Pete, you are so funny! I don't see this as a "fatal flaw" at all. If you have seen my videos or slides on Shared Storage Pools then you will know that once a virtual disk has been snapshot-ed, further writes to it cause a COW (copy-on-write) of the 64 MB chunk concerned, but from then on it is a regular write for that whole chunk. Actually, I think the performance you reported is pretty good considering what it is doing. The snapshot does not use magic - it has to do some work - and postponing the copy until the write means the snapshot is fast and space efficient. The normal use case for a snapshot is, for example, an OS disk where only a small percentage of the disk space is modified later. In your case, you are running a benchmark that systematically writes to every block, which is admittedly a hard test case for snapshots. Once the COW has been performed for all the disk blocks, performance should return to normal. But thanks for pointing out the downside. It is a good reminder that once you decide it is extremely unlikely you will ever roll back to a snapshot - because you would miss the subsequent work/data/installed applications - it is worth removing the snapshot for two reasons: 1) to free up the disk space and 2) for improved performance. Cheers, Nigel Griffiths.
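P.S. For anyone wondering how to tidy up, something along these lines on the VIOS as padmin removes an unwanted snapshot. The cluster, pool, LU and snapshot names here are invented examples, and the flags are worth checking against "help snapshot" on your VIOS level:

$ snapshot -list -clustername galaxy -spname atlantic
$ snapshot -delete vdisk_aix1_snap1 -clustername galaxy -spname atlantic -lu vdisk_aix1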

10 Mark Steele commented Permalink

NPIV is not supported on 4Gb HBAs.... what do I win?

11 nagger commented Permalink

Hi Mark Steele, correct - NPIV was first supported on POWER when IBM introduced the 8Gb Fibre Channel adapters. You win the first 2013 "I impressed Nigel Griffiths" Award for really knowing your stuff. Well done :-) Feel free to add that to your email trailer.

12 9TUT_Kyunghyun_Kim commented Permalink

Hi Nigel, I am not sure if this is the right place to ask this question. Do you know when SSP will support multiple pools for different storage tiers? We desperately need that feature in our environment.

13 nagger commented Permalink

IBM does not make announcements about future products on a forum. A VIOS can currently only be in one Shared Storage Pool, but if you have a different VIOS or VIOS pair for different workloads, like Production, Internal Apps and Dev/Test, then you could run three different Shared Storage Pools today. Multiple tiers and multiple pools are a known priority requirement and may appear in a future release, but this is not a promise. Judging from the past three years, SSP gets functional additions once a year in Q4, so it is unlikely that new features will appear before the end of 2014. Of course, multiple pools is a good feature, but you may also have expectations about managing pools, like moving an LU between pools and similar things. What management options do you expect when this feature arrives?
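To illustrate the run-separate-pools-today point, the layout would look something like the sketch below. The cluster, pool, disk and host names are invented and the cluster command flags should be double-checked against your VIOS manual; each cluster is created on the first VIOS of that workload's pair and the partner is then added:

$ cluster -create -clustername prod -repopvs hdisk10 -spname prodpool -sppvs hdisk11 hdisk12 -hostname viosprd1
$ cluster -create -clustername apps -repopvs hdisk20 -spname appspool -sppvs hdisk21 hdisk22 -hostname viosapp1
$ cluster -create -clustername test -repopvs hdisk30 -spname testpool -sppvs hdisk31 hdisk32 -hostname viostst1
$ cluster -addnode -clustername prod -hostname viosprd2   (and likewise for the apps and test pairs)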

14 9TUT_Kyunghyun_Kim commented Permalink

Thank you for your answer, Nigel. Moving an LU between pools would certainly be nice. I would, however, be totally happy even if multiple pools arrived without fancy options, since I can manually move an LU between pools anyway. I can wait for those features without any problem, but multiple pools is really needed in our environment.

15 alpharob1 commented Permalink

Could you speak to the interplay between SSP and V7000 SSD Easy Tier? I would expect SSP to defeat Easy Tier. Suppose you re-ran the test with an NPIV LUN attached directly to the client and heated it up over time so Easy Tier kicks in - that LUN would perform far better. I guess what I'm getting at is that spreading the LUN via SSP in 64 MB chunks isn't doing Easy Tier any favors. I would think a radio button for SSP provisioning that provisions an LU without spreading it in 64 MB chunks (carve the LU out of the current SSP physical disk with space, spilling over into the next disk if need be) might be a good thing?