About this series
Part 1 of this three-part series (see Resources) on
AIX® networking provides a networking overview and discusses the tools that
help you monitor your hardware. Part 2 covers tuning the Network File System (NFS)
with monitoring utilities, such as
nmon, and it also goes over how to tune with nfso. Part
3 shows you how to monitor network packets and how to use netstat for this
purpose. You'll learn how to tune your network subsystem using the
no utility. This series also expounds on various best
practices of network I/O performance tuning.
The first thing that usually comes to mind when a system administrator hears that
there might be some network contention issues is to run
netstat. This command, which plays a role for the network similar to the one
iostat plays for disk reporting, is a quick and dirty
way of getting an overview of how your network is configured. Unlike iostat, though, the
defaults usually do not give you as much information as you probably would like.
You need to understand the correct usage of
netstat and how best to utilize it when monitoring your system.
netstat is really not a monitoring tool in the sense of
iostat. You can
use other, more suitable tools (discussed later in this article) to help monitor
your network subsystem. At the same time, you can't really start to monitor unless
you have a thorough understanding of the various components related to network
performance. These components include your network adapters, your switches and
routers, and how you are using virtualization on your host logical partitions. If
you determine that you are indeed having a network bottleneck, fixing the problem might
actually lie outside of your immediate host machine. There is little you can do if
the network switch is improperly configured on the other end. Of course, you might
be able to point the network team in the right direction. You should also spend
time gathering overall information about your network. How are you going to be
able to understand how to troubleshoot your network devices unless you really
understand your network? In this article, you'll look at specific AIX network
tracing tools, such as
netpmon, and how they can help
you isolate your bottlenecks.
Finally, no matter which subsystem you are looking to tune, you must think of systems tuning as an ongoing process. As stated before, the best time to start monitoring your systems is at the beginning, before you have any problems and before users start screaming. You must have a baseline of network performance so that you know what the system looks like when it is behaving normally. When making changes, be careful to make only one change at a time so that you can really assess the impact of each change.
Network I/O overview
This section provides an overview of the network as it relates to AIX and covers the physical aspects of the network (device drivers and adapters), the AIX networking stack, and how to make some changes to your adapter.
Understanding the network subsystem, as it relates to AIX, is not an easy undertaking. When examining CPU and memory bottlenecks, there are far fewer areas that you need to examine from a hardware and software perspective. Disk I/O tuning is more complex, as there are many more issues that impact performance, particularly during the architecture and build-out of your systems. In this respect, tuning your network is probably most like tuning your disk I/O, which is not too surprising, as they both relate to I/O. Let's start. Figure 1 illustrates the AIX Transmission Control Protocol/Internet Protocol (TCP/IP) layers.
Figure 1. The AIX TCP/IP layers
You can clearly see there is more to network monitoring than running
netstat and looking for collisions. From the
application layer through the media layer, there are areas that need to be
configured, monitored, and tuned. At this point, you should notice some
similarities between this illustration and the Open Systems Interconnection Basic
Reference Model (OSI Model). The OSI Model has seven layers (bottom to top): physical, data link, network, transport, session, presentation, and application.
Perhaps the most important concept to understand is that on the host machine
each layer communicates with its corresponding layer on the remote machine. The
actual application programs transmit data using either the User Datagram Protocol
(UDP) or the Transmission Control Protocol (TCP) at the transport layer. These
protocols receive the data from whatever application you are using and divide it into
packets. The packets themselves differ, depending on whether they are UDP or TCP
packets. Generally speaking, UDP is faster, while TCP is more reliable. There are
many tunable parameters to look at—you'll get to these parameters
during subsequent phases of the series. You might want to start to familiarize
yourself with the
no command, which is the utility
designed to make the majority of your network changes. From a hardware
perspective, it is critical that you understand the components that need to be
configured appropriately to optimize performance. Though you might work together
with the network teams that manage your switches and routers, it is unlikely that
you will be configuring them, unless you are a small shop or a one-person IT
department. The most important component you will be working with is your network
adapter. In 2007, most of your adapters will probably be some version that
supports Gigabit Ethernet, such as a 10/100/1000 Mbps Ethernet card. There are
several important concepts you will need to work with here.
Maximum Transfer Unit
Maximum Transfer Unit (MTU) is defined as the largest packet that can be sent over a network. The size depends on the type of network. For example, 16 Mbps token ring has a default MTU size of 17914 bytes, while Fiber Distributed Data Interface (FDDI) has a default size of 4352 bytes. Ethernet has a default size of 1500 bytes (9000 with jumbo frames enabled). Larger packets require fewer packet transfers, which results in more efficient bandwidth utilization on your system. An exception to this is if your application performs better with smaller packets. If you are using Gigabit Ethernet, you can use the jumbo frames option. It's important to note that, to support the use of jumbo frames, your switch must also be configured accordingly.
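To see why a larger MTU means fewer transfers, the quick arithmetic below (an illustrative sketch, not an AIX command) compares how many frames it takes to move 1 GB of application data at the standard and jumbo Ethernet MTU sizes, assuming roughly 40 bytes of TCP/IP header per packet:

```shell
#!/bin/sh
# Compare packet counts for moving 1 GB of payload at two MTU sizes.
# Assumes ~40 bytes of TCP/IP header per packet, so each frame carries
# MTU - 40 bytes of payload. Figures are illustrative only.
DATA=1073741824   # 1 GB of application data, in bytes

for mtu in 1500 9000; do
    payload=$((mtu - 40))
    # Round up: number of packets needed to carry DATA bytes of payload
    packets=$(( (DATA + payload - 1) / payload ))
    echo "MTU $mtu: $packets packets"
done
```

At MTU 9000 the same transfer takes roughly one sixth as many packets, which is exactly the per-packet overhead saving jumbo frames are after.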
To change to jumbo frames, use this fastpath:
# smit devices
Then go to Communication > Ethernet > Adapter > Change/show characteristics of an Ethernet adapter. Try to change the Transmit jumbo frames option from "no" to "yes" (see Listing 1).
Listing 1. Characteristics of an Ethernet adapter screen
Change / show characteristics of an Ethernet adapter

Type or select values in entry fields.
Press Enter AFTER making all desired changes.

[TOP]                                              [Entry Fields]
  Ethernet Adapter                                 ent1
  Description                                      Virtual I/O Ethernet >
  Status                                           Available
  Location
  Enable ALTERNATE ETHERNET address                no                  +
  ALTERNATE ETHERNET address                       [0x000000000000]    +
  Minimum Tiny Buffers                                                 +#
  Maximum Tiny Buffers                                                 +#
  Minimum Small Buffers                                                +#
  Maximum Small Buffers                                                +#
  Minimum Medium Buffers                                               +#
  Maximum Medium Buffers                                               +#
  Minimum Large Buffers                                                +#
[MORE...8]

F1=Help     F2=Refresh    F3=Cancel    F4=List
F5=Reset    F6=Command    F7=Edit      F8=Image
F9=Shell    F10=Exit      Enter=Do
Where is the jumbo frames option? In this case, you cannot make the change. The reason is that you are only using the Virtual I/O Ethernet on this system (a topic discussed in more detail later). Remember, you must understand the network on the host you are administering!
Let's check this system (see Listing 2).
Listing 2. Checking the system
Change / show characteristics of an Ethernet adapter

Type or select values in entry fields.
Press Enter AFTER making all desired changes.

                                                   [Entry Fields]
  Ethernet Adapter                                 ent1
  Description                                      10/100/1000 Base-TX P>
  Status                                           Available
  Location                                         1j-08
  RX descriptor queue size                                             +#
  TX descriptor queue size                                             +#
  Software transmit queue size                                         +#
  Transmit jumbo frames                            yes                 +
  Enable hardware TX TCP resegmentation            yes                 +
  Enable hardware transmit and receive checksum    yes                 +
  Media speed                                      Auto_Negotiation    +
  Enable ALTERNATE ETHERNET address                no                  +
  ALTERNATE ETHERNET address                       [0x000000000000]    +
  Apply change to DATABASE only                    no                  +

F1=Help     F2=Refresh    F3=Cancel    F4=List
F5=Reset    F6=Command    F7=Edit      F8=Image
F9=Shell    F10=Exit      Enter=Do
You have now changed the field to support jumbo frames.
Adapters communicate with other devices based on how your media speed is
configured. Though there are other choices, you need to configure your card for
either 100_full_duplex or auto-negotiation. With auto-negotiation, both adapters
attempt to communicate using the highest possible speed. Though you might find in
the documentation that it should be configured this way (IBM® even defaults
it this way on the system), most senior AIX administrators I know prefer to set it
to full duplex, to ensure that you are receiving the fastest possible adapter
speed. If it doesn't work properly, you should then work with the appropriate
network teams to resolve the problems prior to deployment. I would rather take
more time initially than leave the adapter at a setting that might cause
slower speeds as a result of poorly configured switches. The
lsattr command gives you the information that you need.
The en prefix displays your driver parameters, while the ent prefix displays your
hardware parameters. Let's display your hardware parameters (see Listing 3).
Listing 3. Displaying the hardware parameters
testsys:/home/test> lsattr -El ent0
alt_addr        0x000000000000   Alternate Ethernet Address          True
busintr         166              Bus interrupt level                 False
busmem          0xc8030000       Bus memory address                  False
chksum_offload  yes              Enable RX Checksum Offload          True
intr_priority   3                Interrupt priority                  False
ipsec_offload   no               IPsec Offload                       True
large_send      no               Enable TCP Large Send Offload       True
media_speed     Auto_Negotiation Media Speed                         True
poll_link       no               Enable Link Polling                 True
poll_link_timer 500              Time interval for Link Polling      True
rom_mem         0xc8000000       ROM memory address                  False
rx_hog          1000             RX Descriptors per RX Interrupt     True
rxbuf_pool_sz   1024             Receive Buffer Pool Size            True
rxdesc_que_sz   1024             RX Descriptor Queue Size            True
slih_hog        10               Interrupt Events per Interrupt      True
tx_preload      1520             TX Preload Value                    True
tx_que_sz       8192             Software TX Queue Size              True
txdesc_que_sz   512              TX Descriptor Queue Size            True
use_alt_addr    no               Enable Alternate Ethernet Address   True
In this case, your interface is set as auto-negotiate.
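When checking many adapters, it helps to pull just the attribute you care about out of the lsattr output. The sketch below embeds the media_speed line from Listing 3 as sample data so the filter can be shown without access to an AIX box; on a live system you would pipe `lsattr -El ent0` directly into the same awk program:

```shell
#!/bin/sh
# Extract the media_speed attribute from saved `lsattr -El ent0` output.
# The two sample lines below are taken from Listing 3.
lsattr_out='media_speed Auto_Negotiation Media Speed True
large_send no Enable TCP Large Send Offload True'

# lsattr prints "attribute value description settable"; select by column 1
speed=$(printf '%s\n' "$lsattr_out" | awk '$1 == "media_speed" { print $2 }')
echo "Adapter media speed: $speed"
```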
You should also check your firmware levels to make sure they are up-to-date. I've
seen many network problems fixed by updating to the latest levels of firmware. The
lscfg command gives you the firmware information
(see Listing 4).
Listing 4. Using the
lscfg command for firmware information
testsys:/home/test> lscfg -vp | grep -p ROM
  10/100 Mbps Ethernet PCI Adapter II:
        Part Number.................09P5023
        FRU Number..................09P5023
        EC Level....................H10971A
        Manufacture ID..............YL1021
        Network Address.............0002556FC98B
        ROM Level.(alterable).......SCU015
        Product Specific.(Z0).......A5204207
        Device Specific.(YL)........U0.1-P1-I1/E1

  10/100/1000 Base-TX PCI-X Adapter:
        Part Number.................00P3056
        FRU Number..................00P3056
        EC Level....................H11635A
        Manufacture ID..............YL1021
        Network Address.............00096B2E31BD
        ROM Level.(alterable).......GOL002
        Device Specific.(YL)........U0.1-P1/E2
See the Resources section at the end of the article for a link to the most current release information for your adapter. In this case, you're going to find the history for the 10/100/1000 Base-TX PCI-X adapter:
- GOLxxx—This is a table placeholder for future firmware revisions.
- GOL021—This level of firmware corrects the vendor device ID on
EMC Class B adapters so that the adapter is recognized during AIX Network
Installation Management (NIM). Checksum from the AIX
sum command is 38603.
- GOL012—In Open Firmware, there is a very small possibility that the adapter can hang the system when the adapter (hardware) does not function properly during transmit. The change adds a timer so that, after trying for a predetermined time, the transmit operation times out, which prevents the adapter from trying to send packets forever.
- GOL002—If the user selected 10/auto or 100/auto and did not ping the switch first, the open firmware would not change the settings to auto/auto before passing them to AIX. If AIX sees a parameter as 10/auto or 100/auto, it does not understand how to deal with it, and the system stops during ioconfig with a code 607. The updated firmware now changes any of the invalid combinations 10/auto, 100/auto, auto/full, or auto/half to auto/auto, which resolves the problem.
- GOL001—Original (GA) Open Firmware level.
A quick glance at the history shows that you are two levels down from where you should be. You need to look for some downtime to upgrade the firmware, particularly if you've been having some intermittent network problems.
Though the series focuses on tuning in subsequent parts, you might want to start to familiarize yourself with the memory management facility of the network subsystem. What you need to know at this point is that it relates to data structures called mbufs, which are used to store kernel data for incoming and outbound traffic. The buffer sizes themselves can range from 32 to 16384 bytes. They are created by making allocation requests to the Virtual Memory Manager (VMM). In an SMP box, the memory pool is split evenly among the processors. The monitoring section below shows you how to view mbufs. An important point to note is that a processor cannot borrow from another processor's memory pool.
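The mbuf buffer sizes mentioned above are power-of-two classes, which is why they show up later as the "By size" column of netstat -m output. A tiny sketch that enumerates those classes:

```shell
#!/bin/sh
# Enumerate the power-of-two mbuf size classes from 32 to 16384 bytes,
# matching the "By size" column of `netstat -m` output.
sizes=""
s=32
while [ "$s" -le 16384 ]; do
    sizes="$sizes $s"
    s=$((s * 2))
done
echo "mbuf size classes:$sizes"
```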
Two other concepts you should be familiar with are virtual Ethernet and shared Ethernet.
- Virtual Ethernet: Virtual Ethernet, supported on AIX 5.3 on POWER5™, allows for inter-partition, IP-based communication between logical partitions on the same frame. This is done through a virtual I/O switch. The virtual Ethernet adapters themselves are created and configured using the Hardware Management Console (HMC). If you recall, the adapter you tried to change earlier was configured with virtual Ethernet.
- Shared Ethernet: Shared Ethernet is one of the features of Advanced POWER Virtualization. It allows for the use of Virtual I/O Servers (VIOs), where several host machines can actually share one physical network adapter. Typically, this is used in environments that do not require substantial network bandwidth.
While the scope of this series is not virtualization, you should understand that if you are using virtualization, there might be reasons for your bottleneck outside of what you are doing on your host machine. While virtualization is a wonderful thing, be careful not to share too many adapters from your VIO Server, or you might pay a large network I/O penalty. Appropriate monitoring tools should tell you whether you have a problem. Further, you might also want to familiarize yourself with concepts such as Address Resolution Protocol (ARP) and Domain Name System (DNS), which can also impact network performance and reliability in different ways.
Monitoring your network
This section provides an overview of general network monitoring commands and specific AIX tools available to you. Some of the tools allow you to quickly troubleshoot a performance problem, and others capture data for historical trending and analysis.
Let's get back to the old standby, netstat, which displays overall network statistics. Probably one of the most common commands you type in is netstat -in (see Listing 5).
Listing 5. Using netstat with the -in flag
root@lpar7ml162f_pub[/home/u0004773] > netstat -in
Name  Mtu    Network     Address              Ipkts      Ierrs  Opkts   Oerrs  Coll
en1   1500   link#2      2a.188.8.131.52.6    21005666   0      175389  0      0
en1   1500   10.153      10.153.3.7           21005666   0      175389  0      0
en0   1500   link#3      2a.184.108.40.206.5  328241182  0      1189    0      0
en0   1500   172.29.128  172.29.137.205       328241182  0      1189    0      0
lo0   16896  link#1                           62223      0      62234   0      0
lo0   16896  127         127.0.0.1            62223      0      62234   0      0
lo0   16896  ::1                              62223      0      62234   0      0
root@lpar7ml162f_pub[/home/u0004773] >
Here is what the output means:
- Name: Interface name.
- Mtu: Interface Maximum Transfer Unit size.
- Network: The actual network address that the interface connects to.
- Address: MAC and IP address.
- Ipkts: The total number of packets received by the interface.
- Ierrs: The number of errors reported back by the interface.
- Opkts: The number of packets transmitted by the interface.
- Oerrs: The number of error packets transmitted by the interface.
- Coll: The number of collisions on the adapter. If you are using Ethernet, you won't see anything here.
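One common sanity check on these fields is the ratio of input errors to input packets per interface. The sketch below runs that check over embedded sample rows in netstat -in format (the 25 errors on en1 are invented here purely for illustration; the real system in Listing 5 shows zero); on a live system you would pipe `netstat -in | tail -n +2` into the same awk program:

```shell
#!/bin/sh
# Flag interfaces whose input error rate (Ierrs/Ipkts) exceeds 1%,
# using the column layout of `netstat -in` output:
#   Name Mtu Network Address Ipkts Ierrs Opkts Oerrs Coll
# Sample rows; the en1 error count is hypothetical.
netstat_in='en1 1500 10.153 10.153.3.7 1000 25 900 0 0
en0 1500 172.29.128 172.29.137.205 328241182 0 1189 0 0'

flagged=$(printf '%s\n' "$netstat_in" | awk '
    $5 > 0 && ($6 / $5) * 100 > 1 { print $1 }')
echo "Interfaces over 1% input errors: $flagged"
```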
Another useful
netstat flag is the
-m option. This flag allows you to view the kernel malloc
statistics: the mbuf memory requests, including the size of the buffers, the
number in use, and the failures by CPU (see Listing 6).
Listing 6. netstat with the -m flag
root@lpar7ml162f_pub[/home/u0004773] > netstat -m
Kernel malloc statistics:

******* CPU 0 *******
By size   inuse      calls  failed  delayed   free  hiwat  freed
32          194       5203       0        2     62   2620      0
64          484       3926       0        7     28   2620      0
128         309      14913       0        8    875   1310      0
256         392      14494       0       22    136   2620      0
512        2060  261283179       0      261     60   3275      0
1024         31       2714       0        8     25   1310      0
2048        587       1237       0      292      5   1965      0
4096          9       8367       0        2      2    655      0
8192          2         12       0        2      1    327      0
16384       224        354       0       29      2    163      0
32768        48        183       0       13      3     81      0
65536        84        142       0       42      0     81      0
131072        3          4       0        0     51    102      0

******* CPU 1 *******
By size   inuse      calls  failed  delayed   free  hiwat  freed
32           17         96       0        0    111   2620      0
64          295       1214       0        5     25   2620      0
128         151      93806       0        5    713   1310      0
256          83        273       0        5     29   2620      0
512        1577   86936634       0      199     23   3275      0
1024          4         18       0        2      4   1310      0
2048        515        516       0      257      1   1965      0
4096          1        707       0        0      1    655      0
8192          1          1       0        1      4    327      0
16384        32         32       0        4      0    163      0
65536        34         34       0       17      0     81      0
131072        0          0       0        0     44     88      0
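The column to watch in this output is "failed": nonzero values mean mbuf allocation requests were denied. A small sketch that totals that column over sample rows in netstat -m format (the 3 failures in the 512-byte row are invented for illustration; Listing 6 shows none); on a live system you would feed the numeric rows of `netstat -m` into the same awk program:

```shell
#!/bin/sh
# Sum the "failed" column (field 4) of `netstat -m` rows:
#   size inuse calls failed delayed free hiwat freed
# Sample rows; the 3 failures are hypothetical.
netstat_m='32 194 5203 0 2 62 2620 0
512 2060 261283179 3 261 60 3275 0
2048 587 1237 0 292 5 1965 0'

failures=$(printf '%s\n' "$netstat_m" | awk '{ sum += $4 } END { print sum }')
echo "Total failed mbuf allocation requests: $failures"
```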
If you are using Ethernet, you can also use the
entstat command to display device-driver statistics.
This provides a potpourri of information (see Listing 7).
Listing 7. Using the
entstat command to display device-driver statistics
testsys:/home/test> entstat -d en1
-------------------------------------------------------------
ETHERNET STATISTICS (en1) :
Device Type: 10/100 Mbps Ethernet PCI Adapter II (1410ff01)
Hardware Address: 00:02:55:6f:c9:9b
Elapsed Time: 5 days 12 hours 14 minutes 46 seconds

Transmit Statistics:                       Receive Statistics:
--------------------                       -------------------
Packets: 803536                            Packets: 2095253
Bytes: 511099654                           Bytes: 1099945394
Interrupts: 520                            Interrupts: 2074913
Transmit Errors: 0                         Receive Errors: 0
Packets Dropped: 0                         Packets Dropped: 0
                                           Bad Packets: 0
Max Packets on S/W Transmit Queue: 38
S/W Transmit Queue Overflow: 0
Current S/W+H/W Transmit Queue Length: 1

Broadcast Packets: 535                     Broadcast Packets: 997476
Multicast Packets: 2                       Multicast Packets: 5477
No Carrier Sense: 0                        CRC Errors: 0
DMA Underrun: 0                            DMA Overrun: 0
Lost CTS Errors: 0                         Alignment Errors: 0
Max Collision Errors: 0                    No Resource Errors: 0
Late Collision Errors: 0                   Receive Collision Errors: 0
Deferred: 0                                Packet Too Short Errors: 0
SQE Test: 0                                Packet Too Long Errors: 0
Timeout Errors: 0                          Packets Discarded by Adapter: 0
Single Collision Count: 0                  Receiver Start Count: 0
Multiple Collision Count: 0
Current HW Transmit Queue Length: 1

General Statistics:
-------------------
No mbuf Errors: 0
Adapter Reset Count: 0
Adapter Data Rate: 200
Driver Flags: Up Broadcast Running
        Simplex AlternateAddress 64BitSupport
        ChecksumOffload PrivateSegment DataRateSet

10/100 Mbps Ethernet PCI Adapter II (1410ff01) Specific Statistics:
--------------------------------------------------------------------
Link Status : up
Media Speed Selected: Auto negotiation
Media Speed Running: 100 Mbps Full Duplex
Receive Pool Buffer Size: 1024
No Receive Pool Buffer Errors: 0
Receive Buffer Too Small Errors: 0
Entries to transmit timeout routine: 0
Transmit IPsec packets: 0
Transmit IPsec packets dropped: 0
Receive IPsec packets: 0
Receive IPsec packets dropped: 0
Inbound IPsec SA offload count: 0
Transmit Large Send packets: 0
Transmit Large Send packets dropped: 0
Packets with Transmit collisions:
 1 collisions: 0          6 collisions: 0         11 collisions: 0
 2 collisions: 0          7 collisions: 0         12 collisions: 0
 3 collisions: 0          8 collisions: 0         13 collisions: 0
 4 collisions: 0          9 collisions: 0         14 collisions: 0
 5 collisions: 0         10 collisions: 0         15 collisions: 0
testsys:/home/test>
You won't see many collisions, as you'll probably be working in
a switched environment. Look for transmit errors and make sure they are not
increasing too fast. You need to learn to troubleshoot collision and error problems before you even begin
to think about tuning. Alternatively, you can use
netstat -v, which provides similar information.
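Because entstat prints counters as "Name: value" lines, it is easy to sift out the nonzero ones worth investigating. The sketch below does this over a few embedded sample lines (the dropped-packet count of 4 is invented for illustration; Listing 7 shows all zeros); on a live system you would pipe `entstat -d en1` into the same awk filter:

```shell
#!/bin/sh
# Print the names of any nonzero "Name: value" counters from saved
# entstat output. Sample lines; the "Packets Dropped: 4" is hypothetical.
entstat_out='Transmit Errors: 0
Receive Errors: 0
Packets Dropped: 4
No mbuf Errors: 0'

# Split each line on ": " so $1 is the counter name and $2 its value
nonzero=$(printf '%s\n' "$entstat_out" | awk -F': *' '$2 + 0 > 0 { print $1 }')
echo "Counters worth investigating: $nonzero"
```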
Let's look at
netpmon. This tool provides information on CPU usage as it relates
to the network, and it also includes data about network device-driver I/O, Internet
socket calls, and various other statistics. Similar to its trace brethren, such as
filemon, it starts
a trace and runs in the background until you stop it with the
trcstop command. I like
netpmon because it really gives you a detailed
overview of network activity and also captures data for trending and
analysis (though it is not as useful as
nmon for this purpose). Here you'll use a
trace buffer size of 2,000,000 bytes (see Listing 8).
Listing 8. Starting netpmon
root@lpar7ml162f_pub[/etc] > netpmon -T 2000000 -o /tmp/net.out
Wed Sep  5 05:30:27 2007
System: AIX 5.3 Node: lpar7ml162f_pub Machine: 00C22F2F4C00
Run trcstop command to signal end of trace.
Now you'll stop it (see Listing 9).
Listing 9. Stopping netpmon
root@lpar7ml162f_pub[/etc] > trcstop
[netpmon: Reporting started]
[netpmon: Reporting completed]
[   4 traced cpus ]
[ 245.464 secs total preempt time ]
[netpmon: 164.813 secs in measured interval]
root@lpar7ml162f_pub[/etc] >
Let's look at the data. Here is just a small sampling of the output (see Listing 10).
Listing 10. Sample output
# more net.out

Process CPU Usage Statistics:
-----------------------------
                                                   Network
Process (top 20)             PID  CPU Time   CPU %   CPU %
----------------------------------------------------------
UNKNOWN                    15920  151.2735  36.558   0.000
UNKNOWN                     7794  104.8801  25.346   0.000
UNKNOWN                     6876   73.8785  17.854   0.000
UNKNOWN                     5402   50.6225  12.234   0.000
xmwlm                      13934   15.0469   3.636   0.000
-ksh                        5040    0.0371   0.009   0.000
getty                      18688    0.0280   0.007   0.000
sshd:                      28514    0.0224   0.005   0.000
syncd                      10068    0.0212   0.005   0.000
gil                         3870    0.0163   0.004   0.004
swapper                        0    0.0135   0.003   0.000
spray                       5400    0.0085   0.002   0.000
send-mail                  18654    0.0084   0.002   0.000
rmcd                       15026    0.0081   0.002   0.000
ping                        5036    0.0068   0.002   0.000
ksh                        26642    0.0062   0.002   0.000
trcstop                     5404    0.0057   0.001   0.000
rpc.lockd                  22032    0.0052   0.001   0.000
mail                        6872    0.0039   0.001   0.000
IBM.ServiceRMd             28126    0.0032   0.001   0.000
----------------------------------------------------------
Total (all processes)             395.9176  95.681   0.004
Idle time                          70.3216  16.995
========================================================================

First Level Interrupt Handler CPU Usage Statistics:
---------------------------------------------------
                                                   Network
FLIH                              CPU Time   CPU %   CPU %
----------------------------------------------------------
PPC decrementer                    18.4640   4.462   0.000
queued interrupt                    6.2882   1.520   0.000
external device                     0.6343   0.153   0.000
data page fault                     0.0220   0.005   0.000
----------------------------------------------------------
Total (all FLIHs)                  25.4085   6.140   0.000

TCP Socket Call Statistics (by Process):
----------------------------------------
                                   ------ Read -----   ----- Write -----
Process (top 20)             PID   Calls/s   Bytes/s   Calls/s   Bytes/s
------------------------------------------------------------------------
sshd:                      28514      0.47      7754      0.65        40
sshd:                      29596      0.04       596      0.05         3
------------------------------------------------------------------------
Total (all processes)                 0.51      8350      0.70        43
========================================================================

NFSv3 Client RPC Statistics (by Server):
----------------------------------------
Server                     Calls/s
----------------------------------
p650                          0.03
------------------------------------------------------------------------
Total (all servers)           0.03
========================================================================

PROCESS: ping   PID: 5036
reads:                  12
  read sizes (bytes):   avg 192.0   min 192     max 192      sdev 0.0
  read times (msec):    avg 7.927   min 7.136   max 12.806   sdev 1.496
writes:                 12
  write sizes (bytes):  avg 64.0    min 64      max 64       sdev 0.0
  write times (msec):   avg 0.052   min 0.039   max 0.063    sdev 0.007
As you can see, there is little overall network I/O activity going on during this time. The top section is most important, as it really helps you get an understanding of what processes are eating up network I/O time.
lsattr (used earlier to view the hardware parameters)
is another command you will be using frequently to display statistics on your
interfaces. The attributes that you see here are configured using utilities such as the
no command. Let's
display your driver parameters (see Listing 11).
Listing 11. Displaying the driver parameters using lsattr
testsys:/home/testsys> lsattr -El en0
alias4                       IPv4 Alias including Subnet Mask            True
alias6                       IPv6 Alias including Prefix Length          True
arp             on           Address Resolution Protocol (ARP)           True
authority                    Authorized Users                            True
broadcast                    Broadcast Address                           True
mtu             1500         Maximum IP Packet Size for This Device      True
netaddr                      Internet Address                            True
netaddr6                     IPv6 Internet Address                       True
netmask                      Subnet Mask                                 True
prefixlen                    Prefix Length for IPv6 Internet Address     True
remmtu          576          Maximum IP Packet Size for REMOTE Networks  True
rfc1323                      Enable/Disable TCP RFC 1323 Window Scaling  True
security        none         Security Level                              True
state           detach       Current Interface Status                    True
tcp_mssdflt                  Set TCP Maximum Segment Size                True
tcp_nodelay                  Enable/Disable TCP_NODELAY Option           True
tcp_recvspace                Set Socket Buffer Space for Receiving       True
tcp_sendspace                Set Socket Buffer Space for Sending         True
testsys:/home/testsys>
I also like to use the
spray command to troubleshoot
possible problems. The
spray command sends a one-way
stream of packets from your host to a remote host. It shows you the
number of packets sent as well as the packet transfer rate (see Listing 12).
Listing 12. Using the spray command
root@lpar7ml162f_pub[/etc] > /usr/etc/spray lpar8test -c 2000 -l 1400 -d 1
sending 2000 packets of length 1402 to lpar8test ...
        34 packets (1.700%) dropped by lpar8test
        23667 packets/second, 33181234 bytes/second
root@lpar7ml162f_pub[/etc] >
In this example, 2000 packets were sent to the lpar8test host, with a delay of
one microsecond between packets. Each packet was 1400 bytes long. Before using
spray, make sure that the sprayd daemon
is not commented out of
/etc/inetd.conf (it is commented out by default in AIX), and
don't forget to refresh
inetd afterward. If you are seeing a
substantial number of dropped packets, that is obviously not good.
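The drop percentage spray reports is just dropped packets over packets sent. Reproducing the figure from Listing 12 (34 of 2000 dropped) makes the arithmetic explicit:

```shell
#!/bin/sh
# Recompute the spray drop rate from Listing 12: 34 of 2000 packets
# dropped. awk is used because sh arithmetic is integer-only.
sent=2000
dropped=34
pct=$(awk -v s="$sent" -v d="$dropped" 'BEGIN { printf "%.3f", d / s * 100 }')
echo "Drop rate: ${pct}%"
```

This matches the "1.700%" that spray itself printed.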
Finally, let's look at nmon (see Listing 13).
Listing 13. The nmon network view
nmon    p=Partitions    Host=lpar7ml162f_pub    Refresh=2 secs    05:43.15
Network
I/F Name   Recv=KB/s  Trans=KB/s  packin  packout  insize  outsize  Peak->Recv  Trans
   en1           2.1         0.0    46.3      0.0    46.0      0.0         2.1    0.0
   en0          43.8         0.3   575.2      0.5    77.9    674.0        43.8    0.6
   lo0           0.0         0.0     0.0      0.0     0.0      0.0         0.0    0.0
   Total         0.0         0.0   in Mbytes/second
I/F Name     MTU  ierror  oerror  collision  Mbits/s  Description
   en1      1500       0       0          0     2047  Standard Ethernet Network Interface
   en0      1500       0       0          0     2047  Standard Ethernet Network Interface
   lo0     16896       0       0          0        0  Loopback Network Interface
If you've been following the other series on AIX (see
Resources), you know I love
nmon, and you will too, once you start using it. With
nmon (type n after startup), you have a quick
snapshot of everything going on in your network, including adapter details, MTU,
error counters, collisions, and megabit rating.
Further, you also have the ability to
capture data with
nmon. Using the nmon analyzer, you
can produce graphical
reports in Microsoft® Excel spreadsheets. See
Resources for a link to an IBM Wiki for the nmon
manual or for downloads.
This article covered the relative importance of the network I/O subsystem,
defined the AIX network I/O layers, and explained how they relate to the OSI Model. You
learned some best practices for network configuration, changed your Ethernet
settings to support jumbo frames, and viewed interface hardware and driver data.
You even examined the monitoring tools available to you and captured data using
nmon. In the next part of the series, you'll tune NFS, find out more
about monitoring utilities, such as
nmon, and discover how to tune with
nfso.
- Use the RSS feed to request notification for the upcoming articles in this series:
- Optimizing AIX 5L™ performance: Tuning disk performance
- Optimizing AIX 5L performance: Tuning your memory settings
- Optimizing AIX 5L performance: Monitoring your CPU
- Check out other parts in each series:
- AIX Wiki: Get the nmon manual or downloads here.
- Improving database performance with AIX concurrent I/O: Read this white paper for more information on how to improve database performance.
- Power Architecture: High-Performance Architecture with a History: Read this white paper.
- "Power to the People; A history of chip making at IBM" (developerWorks, December 2005): This article covers the IBM power architecture.
- "CPU Monitoring and Tuning" (March, 2002): Learn how standard AIX tools can help you determine CPU bottlenecks.
- nmon analyser -- A free tool to produce AIX performance reports (Steven Atkins, developerWorks, April 2006): You can download nmon analyser from here.
- IBM Redbooks: For a comprehensive guide about the performance monitoring and tuning tools that are provided with AIX 5L Version 5.3, read AIX 5L Practical Performance Tools and Tuning Guide.
- "AIX 5L Version 5.3: What's in it for you?" (Shiv Dutta, developerWorks, June 2005): Learn what features you can benefit from in AIX 5L Version 5.3.
- nmon performance: A free tool to analyze AIX and Linux performance (Nigel Griffiths, developerWorks, February 2006): Read this article for information on nmon.
- IBM Redbooks: AIX 5.3 Performance Management Guide provides application programmers, customer engineers, system engineers, system administrators, experienced end users, and system programmers with complete information about how to perform such tasks as assessing and tuning the performance of processors, file systems, memory disk I/O, NFS, Java™, and communications.
- Operating System and Device Management from IBM provides users and system administrators with complete information that can affect your selection of options when performing such tasks as backing up and restoring the system, managing physical and logical storage, and sizing appropriate paging space.
- Certification: For help in obtaining IBM certification for AIX 5L and the eServer® pSeries®, read IBM Certification Study Guide for eServer p5 and pSeries Administration and Support for AIX 5L Version 5.3.
- IBM Redbooks: The AIX 5L Differences Guide Version 5.3 Edition focuses on the differences introduced in AIX 5L Version 5.3 when compared to AIX 5L Version 5.2.
- Check out other articles and tutorials written by Ken Milberg:
- Popular content: See what AIX and UNIX® content your peers find interesting.
- AIX and UNIX: The AIX and UNIX developerWorks zone provides a wealth of information relating to all aspects of AIX systems administration and expanding your UNIX skills.
- New to AIX and UNIX?: Visit the New to AIX and UNIX page to learn more about AIX and UNIX.
- AIX Wiki: Discover a collaborative environment for technical information related to AIX.
- Search the AIX and UNIX library by topic:
- Safari bookstore: Visit this e-reference library to find specific technical resources.
- developerWorks technical events and webcasts: Stay current with developerWorks technical events and webcasts.
- Podcasts: Tune in and catch up with IBM technical experts.
- Future Tech: Visit Future Tech's site to learn more about their latest offerings.
Get products and technologies
- Microcode downloads: Visit this site to get current release information for your adapter.
- IBM trial software: Build your next development project with software for download directly from developerWorks.
- Participate in the developerWorks blogs and get involved in the developerWorks community.
- Participate in the AIX and UNIX forums: