Comments (23)

1 iampattoiam commented Permalink

I'd love to use SSD to boost I/O, but how can you do it and still use dual VIOS for resilience and LPM? It seems more suited to replacing JBOD at this stage, and I haven't seen locally attached storage for many years now.

2 nagger commented Permalink

If you mean a SAS-connected SSD in the Power machine or a remote I/O drawer then you are right, no LPM, but if you have a SAN-connected Flash System then it provides LUNs, should work fine with SSP and work well with LPM.
I don't see Flash Systems on the current VIOS support list at
http://www14.software.ibm.com/webapp/set2/sas/f/vios/documentation/datasheet.html
but we are about to get the new VIOS 2.2.3 release on 29th Nov 2013 - fingers crossed that Flash is supported at that point. Remember Flash System only arrived this year, so we have to give the VIOS test team some time to support it.

3 nagger commented Permalink

Checked on http://www-03.ibm.com/systems/support/storage/ssic/interoperability.wss
The Flash Systems 720/810/820 are supported now with VIOS 2.2.2.2.
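
A quick related sketch: before relying on that support statement you can confirm the level of your own Virtual I/O Server from the padmin shell with the ioslevel command (the output shown is only an example):

$ ioslevel
2.2.2.2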

4 GU0H_Jorge_Gutierrez commented Permalink

So VIOS 2.2.3.1 is now out, and it being US Thanksgiving Day, I should thank someone for the early present. I did not even have to wait until tomorrow to start downloading it.

Nigel, my department is focusing on moving our monitoring to the IBM Tivoli Monitoring suite, and (at least on VIOS 2.2.2.3) I could not find an easy way to monitor the pool usage from Tivoli Monitoring. Do you happen to know if that is possible, and if so, how it could be done?

Thanks!
Jorge.

5 nagger commented Permalink

Hi, thanks for the heads-up on the VIOS download - IBM making code available early!!! That must have been a mistake and heads will roll :-) On the Tivoli question: no idea - life is too short to be an expert in everything. Perhaps you could try the ITMpert blog! (That was a joke, right? Don't go looking for it.) Customer pressure in asking for features is what drives development priorities, so demand from your Tivoli support channel that you need Shared Storage Pool stats NOW!! Better yet, raise a PMR that SSP stats are not working or documented. Does ITM have a way for you to forcibly add stats? The Ganglia gstat command does this so well for Ganglia and adds graphs on this website 60 seconds later. I have not seen other tools match that feature and they all should! Perhaps you can dump stats into your syslog!! At least the numbers will be saved somewhere.
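
As a stop-gap for the syslog idea above, here is a minimal sketch, assuming a cluster called clusterA (a made-up name) and a VIOS level whose lssp command accepts -clustername; add it to the root crontab reached via oem_setup_env (AIX cron does not support the */N step syntax):

# log Shared Storage Pool size/free figures to syslog every 10 minutes
0,10,20,30,40,50 * * * * /usr/ios/cli/ioscli lssp -clustername clusterA | /usr/bin/logger -t sspstats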

6 fbux commented Permalink

I've tried to install "CommonAgentSubagent Shared Storage Pool VIOS sub-Agent 6.3.3.1" on my VIOS with SSP4, but it failed with "Failed to install SSP Sub-Agent: VIOS version: 2.2.3 not supported". I think SSP4 & VIOS 2.2.3.1 are a little too new a release for SysDir.

7 eric_marchal commented Permalink

Thanks Nigel for this helpful topic. Here is my 2-cents (or should I say pence :-) ) contribution:
For the queue depth of the cvdisk, I just modified the ODM default (procedure follows), so any disk created after this will have the optimized queue depth.

You see the default with this odmget command:

[/root] # odmget -q 'attribute = queue_depth AND uniquetype = "disk/vscsi/cvdisk"' PdAt

PdAt:
        uniquetype = "disk/vscsi/cvdisk"
        attribute = "queue_depth"
        deflt = "8"
        values = "1-256,1"
        width = ""
        type = "R"
        generic = "UD"
        rep = "nr"
        nls_index = 12

You modify the default value with:

[/root] # odmget -q 'attribute = queue_depth AND uniquetype = "disk/vscsi/cvdisk"' PdAt |
          sed 's/deflt = "8"/deflt = "16"/' |
          odmchange -o PdAt -q 'attribute = queue_depth AND uniquetype = "disk/vscsi/cvdisk"'

[/root] # savebase

Check the ODM default is effective:

[/root] # odmget -q 'attribute = queue_depth AND uniquetype = "disk/vscsi/cvdisk"' PdAt

PdAt:
        uniquetype = "disk/vscsi/cvdisk"
        attribute = "queue_depth"
        deflt = "16"
        values = "1-256,1"
        width = ""
        type = "R"
        generic = "UD"
        rep = "nr"
        nls_index = 12

K.R.,

Eric.
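
A small follow-on sketch (not part of Eric's procedure, and hdisk1 is just an example name): the ODM default only affects disks configured after the change, so an existing vSCSI hdisk on the client LPAR keeps its old value and can be checked and changed with the standard AIX commands:

# check the current value on an existing client vSCSI disk
lsattr -El hdisk1 -a queue_depth
# change it; -P records the change in the ODM only, so it takes effect
# at the next reboot/cfgmgr, which is useful if the disk is busy
chdev -l hdisk1 -a queue_depth=16 -P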

8 Mike_Pete commented Permalink

I did some similar testing and found what seems to be a rather fatal flaw. Most testing seemed to show SSP vSCSI speed is similar to vSCSI with a whole LUN, depending on the number of processes I passed to ndisk. However, at one point I happened to create a snapshot of the SSP LU I was testing and found performance took a nosedive: it slowed by roughly an order of magnitude. After deleting the snapshot, performance rebounded.

9 nagger commented Permalink

Mike_Pete, you are so funny! I don't see this as a "fatal flaw" at all. If you have seen my videos or slides on Shared Storage Pools then you will know that once an SSP virtual disk is snapshotted, further writes to it cause a COW (copy on write) for that 64 MB chunk, but from then on it is a regular write for that whole chunk. Actually, I think the performance you reported is pretty good considering what it is doing. The snapshot does not use magic - it has to do some work - postponing the copy until the write means the snapshot is fast and space efficient. The normal use case for a snapshot is, for example, an OS disk where only a small percentage of the disk space is modified later. In your case, you are running a benchmark that will systematically write to every block. Admittedly, this is a hard test case for snapshots. Once you have performed the COW for all disk blocks then we should find the performance returns to normal. But thanks for pointing out the downside. It is a good reminder that once you decide it is extremely unlikely you will ever roll back to a snapshot (because you would miss the subsequent work/data/installed applications), it is worth removing the snapshot for two reasons: 1) to free up the disk space and 2) for improved performance. Cheers, Nigel Griffiths.
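
For anyone wanting to tidy up as suggested above, a minimal sketch of the VIOS snapshot housekeeping, using made-up cluster, pool, LU and snapshot names (check the exact syntax on your VIOS level):

# list snapshots in the pool, then delete the one that is no longer needed
$ snapshot -list -clustername clusterA -spname poolA
$ snapshot -delete snap1 -clustername clusterA -spname poolA -lu lu_test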

10 Mark Steele commented Permalink

NPIV is not supported on 4Gb HBAs.... what do I win?

11 nagger commented Permalink

Hi Mark Steele, correct: NPIV was first supported on POWER when IBM introduced the 8Gb Fibre Channel adapters. You win the first 2013 "I impressed Nigel Griffiths" Award for really knowing your stuff. Well done :-) Feel free to add that to your email trailer.

12 9TUT_Kyunghyun_Kim commented Permalink

Hi, Nigel. I am not sure if this is the right place to ask this question. Do you know when SSP will support multiple pools for different storage tiers? We desperately need that feature in our environment.

13 nagger commented Permalink

IBM does not make announcements about future products on a forum. A VIOS can currently only be in one Shared Storage Pool, but if you have a different VIOS or VIOS pair for different workloads, like Production, Internal Apps and Dev/Test, then you could run three different Shared Storage Pools today (a sketch of that workaround follows). Multiple Tiers and Multiple Pools are a known priority requirement and may appear in a future release, but this is not a promise. It seems from the past 3 years that SSP gets functional additions once a year in Q4, so it is unlikely that new features will appear before the end of 2014. Of course, multiple pools is a good feature, but you may have expectations about managing pools, like moving a LU between pools and similar things. What management options do you expect when this feature arrives?
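
A minimal sketch of that one-pool-per-VIOS-pair workaround, using example cluster, pool, disk and host names (run as padmin on each pair; check the syntax on your VIOS level):

# Production pair: its own cluster and pool
$ cluster -create -clustername prod_cl -repopvs hdisk10 -spname prod_sp -sppvs hdisk11 hdisk12 -hostname prodvios1
$ cluster -addnode -clustername prod_cl -hostname prodvios2
# repeat with different cluster/pool names and disks on the Dev/Test pair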

14 9TUT_Kyunghyun_Kim commented Permalink

Thank you for your answer, Nigel. Moving an LU between pools will certainly be nice. I would, however, be totally happy even if multiple pools don't come with fancy options, since I can manually move an LU between pools anyway. I can wait for those features without any problem, but multiple pools is really needed in our environment.

15 alpharob1 commented Permalink

Could you speak to the interplay between SSP and V7000 SSD Easy Tier? I would expect that SSP would stuff up Easy Tier. Suppose you re-ran with an NPIV LUN direct to the client and heated it up over time so Easy Tier kicks in; that LUN should perform far better. I guess what I'm getting at is that the spreading of the LU via SSP in 64 MB chunks isn't doing Easy Tier any favors. I would think a radio button that addresses SSP provisioning, to provision a LUN without spreading it in 64 MB chunks (carve the LUN out of a current SSP physical disk with space, spilling over into the next disk if need be), might be a good thing?

16 nagger commented Permalink

Hi alpharob1,
Sorry, I can't comment on SSP and Easy Tier interference because I don't know how Easy Tier works. I am flattered that you think I am an expert on everything. Reminds me of my first day at IBM when I went to tune an early AIX system and the customer wanted me to fix an IBM Golfball typewriter :-) Your idea seems completely crazy IMHO and cuts totally across getting the maximum I/O rate by spreading every LU across every LUN in the pool. I think LUs spread across, say, many tens of LUNs, and so across 100's of disks with V7000 caching, are going to outperform any LU downgraded to a simple collection of NPIV LUNs without Easy Tier, but it could be interesting switching on Easy Tier - I would not bet on the results. More importantly, SSP will scale. Perhaps we will get to benchmark that some day - say 500 NPIV LUNs against 500 SSP LUs. You are also trying to micro-manage disk space, which is ultra-high-manpower "old school" - a button to convert SSP to dedicated NPIV LUNs is exactly what we don't need for the future. With an SSP supporting many hundreds of virtual machines, I don't think it's an option anyway. It is good to have a debate and a long think about these alternatives.
Cheers, Nigel

17 joyama commented Permalink

Hi Nigel, SSP4's performance is great!
By the way, in the event of a failure of a VIOS, how does SSP4 ensure mirror write consistency?
Does SSP4 have a built-in technology similar to the MWCC of LVM mirroring?

18 nagger commented Permalink

alpharob1 - Jan 28
I think you have forgotten that the SSP allocates "chunks" of disk in 64 MB units, but the disk I/O is still done at the 4 KB level, or larger if the application does larger blocks. A busy VM will create busy regions in the SSP LUNs and Easy Tier will kick in. I don't think we need further magic, and the LU spread across every LUN in the pool is exactly what I want for maximum performance.

19 FlorentMairesse commented Permalink

Hi Nigel

Does SSP4 support hdiskpower devices from EMC VPLEX?

We can't add or replace any hdiskpower device:

$ pv -add -clustername clpurb1 -sp sppurb1 hdiskpower397
get_other_disk_uuid: Device hdiskpower397 is an unsupported third party device.
register_other_disk: Could not create a disk UUID for device hdiskpower397.
ERROR: Specified device, hdiskpower397, is not a supported type.
chcluster: One or more of the given entities is invalid.
PV could not be added to cluster, check PV and cluster status. Also check for network connectivity issues
hdiskpower397

CAA: Invalid input parameter.

Please check the parameters. If the parameters are valid, the error
may be due to configuration or cleanup issues.

If these issues cannot be resolved, please collect debug data and contact
Service Representative for further assistance.

20 nagger commented Permalink

Hi Florent Mairesse, I am told any disk type that is supported by the Virtual I/O Server (VIOS) is also supported for Shared Storage Pools, so that is the check you need to make. For most non-IBM disks, the last time I looked, you may find that IBM refers you to your disk vendor - it is, after all, they who decide whether VIOS is a supported environment. I know that some EMC disk types are supported - I just don't know about the EMC VPLEX or the version or device driver options. This goes for any disk vendor.
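
One quick check worth making from the padmin shell before chasing the vendor, a sketch using the device name from Florent's output: the VIOS chkdev command reports whether a device has the unique ID it needs to be used for virtual storage:

$ chkdev -dev hdiskpower397 -verbose
# a device that reports no UDID/unique ID will generally not be accepted into an SSP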

21 AchimScreen commented Permalink

As I think about starting with SSP, I can't see a way to differentiate several modes in one SSP. For example:
- set 1 is DS8K with failure groups over two storage boxes (site mirroring)
- set 2 is NetApp, unmirrored
- set 3 is some production LUNs mirrored over two NetApp boxes
I was told that should be possible with tiers, but googling for shared storage pool tiers, I find no useful information.
So do I have to use a storage pool with a fixed attribute (either mirrored or unmirrored, either DS8K or NetApp - as we don't want to mix them)?
Or is there (or will there be) some mechanism to differentiate these attributes within the storage pool?

22 Daniel Bevilacqua Meireles commented Permalink

Hi Nigel, how are you?

You said "in SSP4, the read I/O is only done on one mirror copy, so it is the same speed as no mirror". My question is: is there a way to tell the SSP which failure group to use when performing a read I/O? Something like the "sequential mirroring" used by AIX, where there is a distinct primary copy and secondary copy and reads are done using the primary copy?

Best regards!

23 nagger commented Permalink

Daniel Bevilacqua Meireles - There is no way to specify the preferred mirror copy to read. I suspect that a future release will use both mirrors. There is no primary or secondary copy; they are both the same.
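
For readers who have not met the SSP4 failure-group mirroring being discussed, a minimal sketch of how a second copy is added and inspected, using made-up disk and failure-group names (check the exact failgrp syntax on your VIOS 2.2.3 level):

# mirror the whole pool onto a second set of disks as a new failure group
$ failgrp -create -fg FG2: hdisk20 hdisk21
# list the failure groups; both copies are peers, with no primary/secondary distinction
$ failgrp -list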