Comments (32)

1 AnthonyEnglish commented Permalink

Great post, Anthony.

I've been working on tuning some AIX systems and I'm ready to look at the queue depth and FC adapters. Almost all disk goes through dual VIOS and is then presented as vscsi.

Are these recommendations for XIV, and the scripts, also applicable to non-XIV systems running AIX 5.3 or higher?

Do I change queue_depth only on the VIO server, or on the client?

The systems are running SAP/Oracle and connect to SVC and then to DS8000 & DS4800.

2 anthonyv commented Permalink

The script to change your HBA settings (chhba) is generic, so you don't need an XIV. The settings changes reflect XIV's enhanced capabilities, although they should work fine with SVC or DS8000.
The only negative thing I have heard is that changing the max transfer size could cause the HBA to not work after reboot due to DMA starvation. I am waiting for more details on this, as I have not seen it myself.

The script to change queue depth (chxiv) will only work with XIV. For non-XIV you would need to change the hdisk attributes manually, as sketched below. With SVC you could safely wind the queue depth out to 40, but 64 may be too high. Check out this link:
http://publib.boulder.ibm.com/infocenter/svcic/v3r1m0/index.jsp?topic=%2Fcom.ibm.storage.svc.console.doc%2Fsvc_limitingtheqdepth_3kw21n.html

As for changing queue depth settings on VIO clients... you would normally only change the VIO server (not the clients), unless you're running NPIV.
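
A minimal sketch of the manual, non-XIV change (the hdisk name and the value of 40 are illustrative assumptions based on the SVC guidance above, not part of the original reply; the disk must be closed when the attribute is changed, or use -P and pick the change up at the next reboot):

lsattr -El hdisk4 -a queue_depth          # check the current value (hdisk4 is assumed)
chdev -l hdisk4 -a queue_depth=40 -P      # -P defers the change to the next reboot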

3 paulzarnowski commented Permalink

Anthony,
We recently purchased DS3512s for use in our large TSM (AIX) environment, for serial disk storage pools (similar to virtual tape). We also have some DS3400s for TSM database volumes (TSM 6.1, so DB2). Do you know what queue_depth and transfer size would be suitable (or max)? The DS3500s have a "Turbo" option which alters the drive queue_depth from 4 to 16 per drive, so I am guessing this might affect the hdisk queue_depth in AIX.
Thanks.

4 anthonyv commented Permalink

I wish I could offer you more expert advice on the DS3500, but I don't have enough hands-on experience.
The XIV is capable of handling a tremendous number of parallel I/Os, hence the benefit in pushing the queue depth.
This is not the case for the DS3500 (though it is a great little performer in its own right). I will try to do some more research on this, as you're not the first person to ask.

5 MarcoPolo commented Permalink

Anthony,

It is great what you have here. I am about to start working with XIV and your posts are of great help.

Thanks,

MarkD :-)

PS
After I get XIV on two hosts, I will build a cluster and try SAN replication instead of LVM mirroring.

6 anthonyv commented Permalink

Thanks! Glad you found it helpful.
Please let me know how you went with mirroring.

7 dshaw commented Permalink

We recently purchased XIV Storage System 10.2, Model SAN32M-2. My AIX host is 5.3 TL12. I've successfully installed the HAK, but am receiving an error (shown below) while running xiv_attach. I was wondering if others are experiencing issues with the TL12 level. I do see my two MPIO 2810 XIV Disk hdisks on this system. Thanks

 
# xiv_attach
-------------------------------------------------------------------------------
Welcome to the XIV Host Attachment wizard, version 1.5.2.
This wizard will assist you to attach this host to the XIV system.
 
The wizard will now validate host configuration for the XIV system.
Press [ENTER] to proceed.
------------------------------------------------------------------------------
Please choose a connectivity type, [f]c / [i]scsi : f
-------------------------------------------------------------------------------
Please wait while the wizard validates your existing configuration...
The wizard needs to configure the host for the XIV system.
Do you want to proceed? [default: yes ]: yes
Please wait while the host is being configured...
A general error has occured (<type 'exceptions.NameError'>, global name 'shutil' is not defined). See the log file for more details.

8 anthonyv commented Permalink

Hi Donna.

The error you're getting is actually a Python error. The HAK installs xPYV, which is really just Python. The good news is that at your AIX TL level you can just run cfgmgr and it will work just fine... since AIX is reporting 2810 devices, we know this is true. In other words, you don't need xiv_attach. Does xiv_devlist work?
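
A minimal sketch of that approach, assuming the FC zoning is already in place (the grep string simply matches the "MPIO 2810 XIV Disk" description mentioned in the question):

cfgmgr                               # discover the newly zoned LUNs
lsdev -Cc disk | grep "2810 XIV"     # confirm AIX sees them as MPIO 2810 XIV disks
xiv_devlist                          # optionally list them with the HAK tool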

9 dshaw commented Permalink

Hi Anthony,
Thanks for the clarification. Yes, xiv_devlist works and shows my XIV devices.

I think we just had some confusion. We had a consultant who told us we needed to install the HAK. It didn't make a lot of sense to do the xiv_attach when I could already see the XIV disks. I've been working with IBM regarding this issue. Too bad IBM didn't catch that.

Thanks again. Donna

10 anthonyv commented Permalink

Hi Donna.

I am sorry you didn't get the right info first up.
I sent you a small PPT that has the info on when the HAK is no longer needed.
Let me know if there is any more info I can get you...

11 vilito2011 commented Permalink

Hi Anthony,

Great blog.

I have a question about the possibility of changing queue_depth on the VIO server and/or on the client. The following note appears in section 4.8, "SCSI queue depth", of this Redpaper:

http://www.redbooks.ibm.com/abstracts/redp4194.html

"On the virtual I/O client, run the chdev -l hdiskN -a queue_depth=x command, where x is the number that you recorded previously on the Virtual I/O Server. Using this approach, you will have balanced the physical and virtual disk queue depth. Remember that the size of the queue depth will limit the number of devices that you can have on the virtual I/O client SCSI adapter because it partitions the available storage to the assigned resources."

We are tuning a dual-VIOS environment with 14 VIO clients with XIV disks for both rootvg and shared data volume groups, because we are having some issues related to I/O performance.

Should I change queue_depth both on the VIOS and on the clients?

Thanks in advance.

Regards

12 vilito2011 commented Permalink

Hi Anthony,

Could you check my previous post?

I talked about tuning a dual-VIOS environment with 14 VIO clients with XIV disks for both rootvg and shared data volume groups, because we are having some issues related to I/O performance.
Finally, we have changed the two attributes both on the VIOS and on the clients.
If you don't change max_transfer on the clients, you cannot get an LTG size (Dynamic) of 1024 kilobytes in the volume groups.
chdev -l $f -a max_transfer=0x100000
chdev -l $f -a queue_depth=64

Do you think this change is needed?

Thanks in advance
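
A minimal sketch of how those two chdev lines might be wrapped into a loop over the XIV hdisks (an assumption built around the commands quoted above, not part of the original comment; the grep string matches the "MPIO 2810 XIV Disk" description, and -P defers the change to the next reboot so open disks are not a problem):

for f in $(lsdev -Cc disk | grep "2810 XIV" | awk '{print $1}')
do
    chdev -l $f -a max_transfer=0x100000 -P
    chdev -l $f -a queue_depth=64 -P
done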

13 anthonyv commented Permalink

In terms of whether this change is needed, for many applications it is not, as the average block size is rarely greater than 256KB and queues are rarely pushed beyond 32. Changing the settings helps with maximum performance, but with average workloads, I am not convinced they translate to much benefit.
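
One way to check whether the queues are actually being pushed before tuning anything is the per-disk queue section of iostat (the hdisk name, interval and count are assumptions for illustration):

iostat -D hdisk4 5 3    # a steadily growing sqfull count in the queue section
                        # suggests queue_depth is being exhausted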

14 vnarcisi5 commented Permalink

Hi, Anthony. Is there an advantage to attaching XIV FC ports 1 and 3 to the SAN, as opposed to ports 1 and 2? Thanks, Vince Narcisi

15 anthonyv commented Permalink

Great question.
Ports 1 and 2 are on the left-hand Fibre Channel card, and ports 3 and 4 are on the right-hand Fibre Channel card. So, on the assumption that a Fibre Channel card failure could leave a working module with only one working Fibre Channel card, I recommend ensuring you're attached to both cards. That's the advantage of using ports 1 and 3 over using ports 1 and 2.