Comments (16)

1 oedwards commented:

We have seen this occur on multiple TSM servers, and although we also identified the high "No Resource Errors", a member of our team who logged a PMR with IBM was not as successful in getting them to help identify the problem. At least you had more luck than us, so now we can implement those changes as well.

2 niella commented:

Thanks for the tips, Chris. I recently spent some time benchmarking TSM backups over hypervisor Ethernet and was surprised at how big a role the LPAR entitlement played in the throughput.

Secondly, for some inexplicable reason, and even though the link had been up and stable for a few days, it would become so slow as to be entirely unusable!

I eventually dropped the MTU to 32k, where it was reasonably stable, but even then the link just stopped working one day. Your advice is something I will most certainly try out, since I really want to be able to increase the MTU back up to 65k.
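For reference, the MTU on an AIX virtual Ethernet interface can be inspected and changed with the standard `lsattr`/`chdev` commands. A minimal sketch, in which the interface name `en0` and the MTU value are assumptions for illustration (confirm the supported maximum for virtual Ethernet in your environment first):

```shell
# Check the current MTU on the interface (en0 is an assumed name)
lsattr -El en0 -a mtu

# Raise the MTU for hypervisor-only traffic; 65390 is a value
# commonly used for AIX virtual Ethernet, but verify it against
# your platform documentation before applying it
chdev -l en0 -a mtu=65390
```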

3 cggibbo commented:

Thanks for the comment, Owen. Let me know if it helps.

4 cggibbo commented:

Hi Neil, hope it helps. Let me know how you go.

Cheers.

Chris

5 Fiberopticpbxtoronto commented:

As we know, this can happen on multiple servers, so these are among the best tips I have seen. Thanks for sharing this article.

6 MuratYildirim commented:

Hello Chris,

I want to use an MTU size of 9000 for all LPARs. On the VIOS I have set the physical adapter's MTU to 9000, and I have also set the MTU of every LPAR's virtual adapter to 9000.

The question is: should I also set the MTU of the virtual adapter used to create the SEA on the VIOS to 9000, or does it not matter?

ent0: physical adapter
ent1: virtual adapter

mkvdev -sea ent0 -vadapter ent1 -default ent1 -defaultid 299 -attr ha_mode=auto ctl_chan=ent2

Thank you for your help.

7 cggibbo commented:

There's no need to change the MTU size of the virtual adapter associated with the SEA.
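For jumbo frames end to end, the usual approach is to enable `jumbo_frames` on the physical adapter and on the SEA itself on the VIOS, then set the MTU on each client interface. A sketch only; the adapter names below are assumptions based on Murat's layout, and `ent3` is assumed to be the SEA created by the `mkvdev` command above:

```shell
# On the VIOS: enable jumbo frames on the physical adapter (ent0)
chdev -dev ent0 -attr jumbo_frames=yes

# On the VIOS: enable jumbo frames on the SEA device
# (ent3 is an assumed name for the SEA)
chdev -dev ent3 -attr jumbo_frames=yes

# On each client LPAR: set the MTU on the network interface
chdev -l en0 -a mtu=9000
```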

8 paladium15 commented:

Thanks for the tips. Nevertheless, after applying the change, the Max Allocated value for Huge buffers has now hit its maximum of 128:

Buffer Type          Tiny   Small  Medium  Large  Huge
Min Buffers           512     512     512     96     96
Max Buffers          2048    2048    1024    256    128
Allocated             512     512     512     96     96
Registered            512     512     512     96     96
History
Max Allocated        1412     642     573    101    128
Lowest Registered     502     502     502     91     48

How can I solve this? My backups are slower.

Thanks a lot.
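For context, the per-size receive buffer minimums and maximums on an AIX virtual Ethernet adapter are device attributes that can be inspected and changed with `lsattr`/`chdev`. A sketch under assumptions: `ent0` is an assumed adapter name, and pre-allocating huge buffers at their ceiling is one common response when Max Allocated reaches Max Buffers:

```shell
# Inspect the current buffer attributes on the virtual adapter
# (ent0 is an assumed name)
lsattr -El ent0 | grep buf

# Pre-allocate huge buffers at their upper limit so they are
# always available; 128 is the maximum reported above
chdev -l ent0 -a min_buf_huge=128 -P

# The -P flag defers the change until the next reboot
shutdown -Fr
```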

9 9BNQ_Ruwinda_Fernando commented:

Hi Chris,
Could this lead to a situation where a performance improvement follows an AIX-level reboot? Specifically, a 10-hour RMAN backup was reduced to 6 hours following a reboot. This is exactly what the customer told me. We are still working with support on this matter. In general, this customer says he has had several experiences where performance improved drastically after an AIX-level reboot. Do we have an explanation for such a claim?
Many thanks.

10 Bijoyendra commented:

Hi Chris,

We have a similar issue in our environment and IBM has suggested the same changes. Our tech team recommends making these changes on the VIO client end only, increasing the buffers for the VIO client virtual Ethernet adapters.

Shouldn't we also do the same on the VIO server's virtual Ethernet adapter? Is it OK to have higher values on the VIO client Ethernet adapter than on the VIO server Ethernet adapter?
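One way to judge whether the VIO server side also needs attention is to check for receive failures on each side independently and tune whichever adapter is actually dropping packets. A sketch only, with all adapter names assumed:

```shell
# On the VIO client: look for dropped packets caused by a
# buffer shortage on the virtual adapter (ent0 assumed)
netstat -v ent0 | grep -i "no resource errors"

# On the VIOS: check the SEA and its underlying virtual adapter
# (ent3 is an assumed name for the SEA)
entstat -all ent3 | grep -i "no resource errors"
```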
