Comments (17)

1 MarkChandler commented

Great tip! If only for the very handy use of ftp and a piped command. I hope I can try this out somewhere.
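For anyone reading this thread without the original post handy, the piped ftp transfer being discussed typically looks something like the sketch below. This is an assumption about the general technique, not the author's exact command; the host name and dd sizes are placeholders:

```shell
# Throughput test between two hosts that avoids disk on both ends:
# dd generates zeros on the sending side, and the receiver writes to /dev/null,
# so the figure ftp reports reflects the network path, not storage.
ftp remotehost
ftp> bin
ftp> put "|dd if=/dev/zero bs=64k count=100000" /dev/null
# ftp then reports a summary like: 6553600000 bytes sent in NN seconds
```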

2 rpatters commented

We've been told by support that NIM operations don't support anything past mtu=1500 because tftp doesn't support jumbo frames. Our NIM operations started working again after setting mtu=1500. Is there something else we can do?
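For reference, reverting an interface to the standard Ethernet MTU on AIX is a one-liner. A sketch only; en0 is a placeholder for whichever interface carries the NIM traffic:

```shell
# Check the MTU currently configured on the interface
lsattr -El en0 -a mtu
# Drop back to the standard Ethernet MTU for NIM/tftp operations
chdev -l en0 -a mtu=1500
```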

3 cbartlett commented

Great write-up. This was exactly what I was looking for. On a p750 I went from "6553600000 bytes sent in 62.5 seconds" to "6553600000 bytes sent in 21.09 seconds". rpatters: I stopped using NIM and switched to file-backed virtual devices for installs. So much quicker and easier to manage for me.
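Those two ftp summary lines work out to roughly a 3x improvement. A quick awk one-liner converts them to MiB/s, using the figures from the comment above:

```shell
# ftp reports raw bytes and seconds; convert each run to MiB/s for comparison
awk 'BEGIN { printf "before: %.1f MiB/s\n", 6553600000 / 62.5  / 1048576 }'
awk 'BEGIN { printf "after:  %.1f MiB/s\n", 6553600000 / 21.09 / 1048576 }'
# before: 100.0 MiB/s, after: 296.3 MiB/s
```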

4 AnthonyEnglish commented

Glad to hear it.

Since writing this post I've had inconsistent results when just changing the MTU size for backups to TSM over virtual Ethernet. I'm not sure which other attributes need to be changed. I'm glad your results were more positive.

5 vinodn commented

Can you share more about this tftp/MTU size issue, like the errors or problems you faced? After reading this note I set my NIM server's virtual Ethernet mtu=65280 and initiated a VIO server build; it's running now. Is this what you mean?

6 AnthonyEnglish commented

Yes, what you've done is what I was suggesting in my original blog post. You've seen the comment in this post from rpatters about NIM operations not being supported beyond an MTU of 1500. I'm not sure if this is version dependent.

The difficulty I faced was that with a large MTU (65280) on two virtual Ethernet adapters between a TSM server and client, the performance for some backups was significantly SLOWER than with 1500. I suspected this was because of packet fragmentation, but never found a suitable large MTU size to work with.

The Redbook IBM PowerVM Virtualization Managing and Monitoring has some sections on setting the MTU. See section 3.6.3, Tuning network protocols, and 3.6.4, Payload tuning examples (the MTU only needs to be set on the layer 3 device, e.g. chdev -l en0 -a mtu=9000).

Also, the document on PowerVM 10 Gigabit Ethernet has some tips on virtual Ethernet, even if you're not using 10 Gigabit Ethernet adapters. See Top Tip 7, Large packets with MTU (suggesting 65536, or 65394 or 65390, or at least 9000). Also see Top Tips 3, 4 and 6.

I'd be keen to hear your results.
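Before trusting any throughput numbers, it's worth confirming the MTU actually in effect on both LPARs, since the ODM setting and the live interface can differ. A sketch, with en0 as a placeholder interface:

```shell
# MTU column of netstat shows the value currently in effect per interface
netstat -in
# ODM-configured value for the layer 3 device
lsattr -El en0 -a mtu
# Raise it on the layer 3 device, as in the Redbook example above
chdev -l en0 -a mtu=9000
```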

7 vinodn commented

If you're talking about additional software, it has its own tuning parameters; maybe you can throw more light on those, like buffpool/logpool/tcpwindowsize.

Have you tried sending raw data between two OS instances with MTU 65280? I guess that's what we're discussing here.

8 AnthonyEnglish commented

We used the ftp command listed in the blog post so as to eliminate other software (in this case TSM) as a factor.

As for tuning, I focused on MTU size, but there's a post on Power IT Pro which covers tuning for 10 Gigabit adapters. It doesn't deal directly with virtual Ethernet adapters, but it's still helpful to know.
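On AIX, the usual companions to a larger MTU are the TCP buffer tunables set with the no command. A sketch of the kind of settings such tuning posts discuss; the values are illustrative, not recommendations:

```shell
# Show the current TCP send/receive space defaults
no -a | grep -E "tcp_(send|recv)space"
# Raise them for large-MTU transfers (illustrative values; plain -o changes
# are not persistent across reboots unless made so with -p on newer AIX levels)
no -o tcp_sendspace=524288
no -o tcp_recvspace=524288
```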

9 cbartlett commented

Update - using this method I was able to significantly increase backup performance. We are using Data Protector, and I set up a Media Agent LPAR on each of my p750 frames. I have the LHEA assigned directly to the media agent LPAR and set up routing to our Data Domain. I created an additional Ethernet adapter on each backup client and set up an internal network with MTU set at 64280. A few host file tweaks and we were cook'n. This significantly reduced network traffic over the SEAs and decreased backup times. Successful implementation - thanks!
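For readers wanting to try something similar, the broad shape of such a setup on AIX might look like the sketch below. This is an assumption about the general approach, not cbartlett's actual configuration; the adapter name, addresses, host names, and MTU value are all placeholders:

```shell
# On each backup client LPAR: bring up the extra virtual Ethernet adapter
# (en1 here) on a private backup network with a large MTU
chdev -l en1 -a netaddr=192.168.100.11 -a netmask=255.255.255.0 \
      -a state=up -a mtu=65280
# Steer backup traffic to the media agent over the private network,
# e.g. with an /etc/hosts entry for the media agent's backup-side name
echo "192.168.100.10  mediaagent-bkp" >> /etc/hosts
```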

10 AnthonyEnglish commented

Great to hear. Well done.

Would you be able to share with us the tweaks you refer to? What commands did you run? I'm also curious how you picked that exact MTU value.