
Comments (16)

1 oedwards commented

We have seen this occur on multiple TSM servers. Although we also identified the high "No Resource Errors" count, the colleague who logged a PMR with IBM had less success getting their help to identify the problem. You clearly had more luck than we did, so now we can implement these changes as well.

2 niella commented

Thanks for the tips, Chris. I recently spent some time benchmarking TSM backups over a hypervisor Ethernet link and was surprised at how big a role the LPAR entitlement played in throughput.

Secondly, for some inexplicable reason, and even though it had been up and stable for a few days, the link would become so slow as to be entirely unusable!

I eventually dropped the MTU to 32k, where it was reasonably stable, but even there the link just stopped working one day. I will most certainly try out your advice, since I really want to be able to increase the MTU back up to 65k.
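For anyone attempting the same, a rough sketch of checking and raising the MTU on a client LPAR's virtual Ethernet interface (the en0 name and the 65394 figure are assumptions; verify the permitted values on your own system first):

# lsattr -El en0 -a mtu        (show the current MTU)
# lsattr -Rl en0 -a mtu        (list the MTU values this interface will accept)
# chdev -l en0 -a mtu=65394    (raise the MTU; the interface may need to be down, or use -P and reboot)

65394 is the commonly cited virtual Ethernet maximum without VLAN tagging (65390 with it), but trust the lsattr -Rl output over that figure.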

3 cggibbo commented

Thanks for the comment, Owen. Let me know if it helps.

4 cggibbo commented

Hi Neil, hope it helps. Let me know how you go.

Cheers,

Chris

5 Fiberopticpbxtoronto commented

As we know, this can happen on multiple servers, so there is no room for error; I think these are the best tips I have ever seen. Thanks for sharing this article.

6 MuratYildirim commented

Hello Chris,

I want to use an MTU size of 9000 for all LPARs. I have a VIOS and have set the physical adapter's MTU to 9000. I have also set the MTU of every LPAR's virtual adapter to 9000.

The question is: should I also set the MTU of the virtual adapter used to create the SEA on the VIOS to 9000, or does it not matter?

ent0: physical adapter
ent1: virtual adapter

mkvdev -sea ent0 -vadapter ent1 -default ent1 -defaultid 299 -attr ha_mode=auto ctl_chan=ent2

Thank you for your help.

7 cggibbo commented

There's no need to change the MTU size of the virtual adapter associated with the SEA.
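For reference, jumbo frames on the VIOS are typically enabled through the jumbo_frames attribute on the physical adapter and on the SEA device itself, not via an MTU setting on the trunk virtual adapter. A hedged sketch, assuming the SEA created by the mkvdev above came up as ent3:

$ chdev -dev ent0 -attr jumbo_frames=yes    (padmin: the physical adapter)
$ chdev -dev ent3 -attr jumbo_frames=yes    (padmin: the SEA device; ent3 is an assumption)
# chdev -l en0 -a mtu=9000                  (on each client LPAR's interface)

The physical adapter change usually can't be made while the SEA is active over it, so plan for an outage window.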

8 paladium15 commented

Thanks for the tips. I applied this change, but now the Max Allocated value for Huge buffers has reached its Max Buffers ceiling of 128:

Buffer Type          Tiny    Small   Medium  Large   Huge
Min Buffers           512     512     512      96     96
Max Buffers          2048    2048    1024     256    128
Allocated             512     512     512      96     96
Registered            512     512     512      96     96
History
Max Allocated        1412     642     573     101    128
Lowest Registered     502     502     502      91     48

How can I solve that? My backups are slower.

Thanks a lot
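In that output the Huge pool's Max Allocated has hit the Max Buffers limit of 128, which appears to be the top of the permitted range for huge buffers. If so, the remaining option is to pre-allocate the whole pool by raising the minimum to match the maximum, in line with the approach in the post. A sketch only; confirm the permitted range on your adapter before trying it:

# lsattr -Rl ent0 -a min_buf_huge           (confirm 128 is actually settable here)
# chdev -l ent0 -a min_buf_huge=128 -P      (pre-allocate all huge buffers; -P defers the change to the next reboot)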

9 9BNQ_Ruwinda_Fernando commented

Hi Chris,
Could this lead to a situation where a performance improvement follows an AIX-level reboot? Specifically, a 10-hour RMAN backup was reduced to 6 hours after a reboot. This is what the customer told me; we are still working with support on the matter. In general, this customer says he has had several experiences where performance improved drastically after an AIX-level reboot. Do we have an explanation for such a claim in general?
Many thanks.

10 Bijoyendra commented

Hi Chris,

We have a similar issue in our environment and IBM has suggested the same. Our tech team recommends making these changes on the VIO client end only, increasing the buffers for the VIO client virtual Ethernet adapters.

Shouldn't we also do the same with the VIO server virtual Ethernet adapter? Is it OK to have higher values on the VIO client Ethernet adapter than on the VIO server Ethernet adapter?

11 cggibbo commented

Hi Bijoyendra, yes, you may very well need to perform similar tuning for the virtual Ethernet adapters on the VIOS. Use entstat to check for similar buffer shortages and consider increasing the min values where appropriate.
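As a sketch of that check on the VIOS (the trunk adapter name ent4 is an assumption; substitute the virtual adapter(s) backing your SEA):

$ entstat -all ent4 | grep "No Resource Errors"    (from the padmin shell)
$ entstat -all ent4 | grep -p Buffers              (AIX grep -p prints the whole buffer-table paragraph)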

12 AlbertoPre commented

Hi Chris,

I have a similar problem. In the LPAR client I find the same errors you describe, but on the VIO server there is a huge number of Packets Dropped and Hypervisor Send Failures, with no "No Resource Errors" or Hypervisor Receive Failures.

Do you think I should increase the buffers on both the VIO server and the client?

Many thanks,
Alberto

13 cggibbo commented

Hi Alberto, yes, you may very well need to perform similar tuning for the virtual Ethernet adapters on the VIOS. Use entstat to check for similar buffer shortages and consider increasing the min values where appropriate.

14 Jame5.H commented

Hi Chris, we too have a PMR open with IBM for an LPAR running TSM 7.1.1.100 on AIX 7.1.3.4. I stumbled across this issue while waiting for IBM to analyse the 'snaps' we sent them, and I have a quick question.
My TSM LPAR currently has ent0 max_buf_small=4096, and the Max Allocated history has reached that 4096; all other buffer sizes appear OK.
What should be the next natural progression to test? Do I just double it to 8192 and monitor? Should I go up in 1k/2k increments? Is there a general rule of thumb for how much to increase the Max Allocated buffer sizes?
Cheers, James.

15 cggibbo commented

Hi James,

4096 is the maximum you can set for min/max on max_buf_small:

# lsattr -Rl ent0 -a max_buf_small
512...4096 (+1)
# lsattr -Rl ent0 -a min_buf_small
512...4096 (+1)

There's some good information on this in the AIX Performance FAQ under section 9.11.2, Virtual Ethernet Adapter Buffers:

"A buffer shortage can have multiple reasons. One reason would be that the LPAR does not get sufficient CPU resources because the system is heavily utilized or the LPAR significantly over commits its entitlement. Another possibility would be that the number of virtual Ethernet buffers currently allocated might be too small for the amount of network traffic through the virtual Ethernet.

For systems with a heavy network load it is recommended setting the minimum buffers to the same value as the maximum buffers. This will prevent any dynamic buffer allocation and the LPAR will always have all buffers available.

Note: It is recommended to first check the CPU usage of the LPAR before making any virtual Ethernet buffer tuning."

http://www-03.ibm.com/systems/power/software/aix/whitepapers/perf_faq.html

The above may apply to both the client partition and the VIOS virtual Ethernet adapters, so you should also check the entX buffers for the Shared Ethernet Adapter trunk adapters on the VIOS.
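Following the FAQ's min=max advice, the change for this case might look like the following (a sketch only; ent0 and the 4096 value come from the output above, and the adapter must be free or the change deferred with -P):

# chdev -l ent0 -a min_buf_small=4096 -a max_buf_small=4096 -P   (deferred; takes effect at the next reboot)
# lsattr -El ent0 -a min_buf_small -a max_buf_small              (verify after the reboot)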
