This article explores the effects of Red Hat Enterprise Linux channel bonding on IBM InfoSphere Streams throughput and latency. It describes how to set up and configure a channel-bonded environment on Red Hat Enterprise Linux systems and what kind of performance improvements InfoSphere Streams applications running in this environment can expect. Target readers are those familiar with InfoSphere Streams and experienced with Linux network interface configuration.
What is channel bonding?
Channel bonding is the practice of splitting a single stream of data across two network interface cards while presenting a single interface to the application layer. Because data can flow over both network interface cards at the same time, this technique can achieve higher data throughput. Channel bonding is available on all Red Hat Enterprise Linux kernels supported by InfoSphere Streams. We tested one channel-bonded setup and explain the results we observed.
See the Resources section for a link to Red Hat Enterprise Linux documentation on channel bonding.
Setting up channel bonding
To achieve the best performance results with channel bonding over 1Gbps connections, consider the following hardware requirements:
- 1Gbps switch or a line card on a larger switch with a low over-subscription ratio
- Two Linux-capable systems with two 1Gbps Ethernet connections
Note: If you're using a blade, use pass-through modules in the chassis. Switch modules may not deliver the best performance because of the algorithms the switch uses to mediate traffic.
In our testing environment, Cisco 6509 switches form the 1Gbps backbone for the private cluster network. To achieve the maximum performance for individual systems, WS-X6748-GE-TX line cards are used. These line cards are high-performance cards (40Gbps per slot), providing near 1Gbps for all 48 ports on the card. For this test, the switch is running Version 12 of Cisco's IOS and the line card is current with its firmware. If your Cisco switch is running NX-OS or you're using another vendor's products, these commands may need modification or may be different altogether. Consult with your network administrator for assistance.
For the performance numbers listed below, no special configuration was applied to the network switch. Although some vendor switches support Link Aggregation Control Protocol (LACP), better results can be obtained with an operating system-only bonding configuration. Modern hardware can reorder out-of-sequence packets without a significant penalty, so relying on the switch hardware to do that work is less important.
Figure 1. InfoSphere Streams channel bonding test configuration
From a software perspective, everything you need is already part of the Red Hat Enterprise Linux 5 distribution. To set up a channel bond in your environment, you adjust the networking configuration on each machine and aggregate the link in the switch. To streamline these directions, we will assume that any server you're about to channel bond has been provisioned with Red Hat Enterprise Linux 5 and has one of its two network interfaces defined at build time. In our example, eth0 was set up initially, and /etc/sysconfig/network-scripts/ifcfg-eth0 looks like Listing 1.
Listing 1. /etc/sysconfig/network-scripts/ifcfg-eth0
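A representative sketch of such a file; the address, netmask, MAC, and DNS values are placeholders standing in for whatever your provisioning process assigned:
DEVICE=eth0
BOOTPROTO=static
ONBOOT=yes
# placeholder values; substitute the real MAC, address, netmask, and DNS server for this host
HWADDR=00:11:22:33:44:55
IPADDR=192.168.1.10
NETMASK=255.255.255.0
DNS1=192.168.1.1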
There are four steps involved in setting up the software for channel bonding:
- Alter ifcfg-eth0 in /etc/sysconfig/network-scripts
- Alter ifcfg-eth1 in /etc/sysconfig/network-scripts
- Create a new ifcfg-bond0 in /etc/sysconfig/network-scripts
- Describe the "bond" to the system
Note: It is best to do this work with the interface down. If you do it while the interface is up, the system will issue a warning because eth0 is now a slave, but you will still get the correct networking configuration; the restart will simply take longer.
Step 1: For ifcfg-eth0, change the BOOTPROTO line to none and remove the lines pertaining to IPADDR, NETMASK, and DNS. This information will move into the ifcfg-bond0 file, so don't lose the important details. The new lines for ifcfg-eth0 are below:
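A minimal sketch of the resulting slave configuration, assuming the bond device will be named bond0 (keep any HWADDR line that provisioning created):
DEVICE=eth0
BOOTPROTO=none
ONBOOT=yes
MASTER=bond0
SLAVE=yes
USERCTL=no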
Step 2: For ifcfg-eth1, we will edit the default unused configuration that the system established at provisioning:
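The unused configuration typically looks something like the following sketch; the exact BOOTPROTO and HWADDR values depend on how the machine was provisioned:
DEVICE=eth1
BOOTPROTO=dhcp
ONBOOT=no
# HWADDR line as created at provisioning time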
Again, we will change the BOOTPROTO line as we did for the eth0 interface and add the three lines beginning with MASTER, SLAVE, and USERCTL. We also need to change ONBOOT to yes.
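The edited file then mirrors the eth0 slave configuration; a sketch, assuming the same bond0 device name:
DEVICE=eth1
BOOTPROTO=none
ONBOOT=yes
MASTER=bond0
SLAVE=yes
USERCTL=no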
Step 3: We will now define the bond via the new ifcfg-bond0 file. Using the same networking information, this file will look like this:
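A sketch of ifcfg-bond0, using placeholder values in place of the address, netmask, and DNS details removed from ifcfg-eth0:
DEVICE=bond0
BOOTPROTO=none
ONBOOT=yes
USERCTL=no
# placeholders; use the address, netmask, and DNS values that were on eth0
IPADDR=192.168.1.10
NETMASK=255.255.255.0
DNS1=192.168.1.1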
Step 4: Since the bond will be considered a new network device with its own kernel module, we need to give Linux some information about this new "device." We will create a file in /etc/modprobe.d called bonding.conf. In the file, we add these two lines:
alias bond0 bonding
options bond0 mode=0 miimon=100
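Once the four files are in place, one straightforward way to activate and verify the bond on Red Hat Enterprise Linux 5 (assuming a brief network interruption on the host is acceptable) is:
service network restart
cat /proc/net/bonding/bond0
The /proc/net/bonding/bond0 file reports the bonding mode, the MII link status, and both slave interfaces, which confirms that the bond came up as intended.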
We implemented a channel-bonded environment using two machines running Red Hat Enterprise Linux release 5.5. Each machine has 8 Intel Xeon CPU X5570 processors running at 2.93 GHz. We took two 1Gbps NICs and channel-bonded them together through a Cisco 6509 switch. See Figure 1.
Since our goal was to test performance, we chose the bonding method called Balance Round Robin (balance-rr). One way to improve throughput with channel-bond mode balance-rr is to change TCP's tolerance for out-of-order delivery. By increasing that tolerance, you reduce how quickly TCP asks for retransmission of packets. The tolerance for out-of-order packets can be adjusted by setting the tcp_reordering sysctl parameter. For this test environment, we set the value to 127, utilizing the following Linux command:
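A sketch of the command, assuming the parameter is set at run time with sysctl (add the same setting to /etc/sysctl.conf to make it persist across reboots):
sysctl -w net.ipv4.tcp_reordering=127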
Setting the value to 127 effectively disables fast retransmissions triggered by out-of-order packets. The drawbacks of doing this are an increased CPU cost for TCP processing, and the TCP connections will generally stay in a congested state.
The throughput test is a simple test with a user-defined operator on one machine sending tuples to a receiving operator on a different machine. The receiving operator simply counts the incoming tuples. In addition to counting, it stores the current timestamp when the first and last tuples are received. The difference between the first and last timestamps gives the time taken to receive all the tuples.
Throughput = (number of tuples) / (time taken to receive the tuples)
IBM achieved the following test results using TCP transport configuration.
Figure 2. InfoSphere Streams throughput for TCP transport
The same test was then run with the InfoSphere Streams applications configured to utilize LLMRUM transport over TCP.
Figure 3. InfoSphere Streams throughput with LLMRUMTCP transport
As indicated by the previous graphs, throughput increased approximately 60 percent with TCP transport and almost 40 percent with LLMRUM over TCP. The gain is smaller with LLM, possibly because LLM's own scheduling interferes with TCP's scheduling of the traffic across the two NICs.
The latency test measures round-trip latency by using a source operator and a sink operator on the same machine and a functor operator on a different machine that simply passes the tuples coming from the source operator on to the sink operator. The latency is calculated as the time when the tuple arrives at the sink operator minus the time the tuple left the source operator. The difference between these times is accurate because the same machine's clock is used for both readings, thereby eliminating any relative clock skew.
Round-trip latency = (time the tuple arrives at the sink operator) - (time the tuple left the source operator), with both times read on the same machine
IBM achieved the following latency results during testing when sending at an approximate rate of 5,000 tuples per second.
Figure 4. InfoSphere Streams latency using TCP transport
Figure 5. InfoSphere Streams latency using LLMRUMTCP transport
As indicated by the graphs, channel bonding has little effect on latency with smaller tuple sizes. With larger tuple sizes, latency increases, possibly due to out-of-order packet re-transmission.
As the experiments above indicate, the key factors affecting performance are tuple size and transport configuration. Even so, in many circumstances channel bonding can be a useful tool for increasing throughput for applications on IBM InfoSphere Streams.
- Learn about Red Hat Enterprise Linux Channel Bonding.
- Learn about InfoSphere Streams and see the InfoSphere Streams wiki.
- Visit the InfoSphere Streams Information Center and check out the product manuals.
- See the InfoSphere area on developerWorks to get the resources you need to advance your skills on InfoSphere products.
- Learn more about Information Management at the developerWorks Information Management zone. Find technical documentation, how-to articles, education, downloads, product information, and more.
- Stay current with developerWorks technical events and webcasts.
- Follow developerWorks on Twitter.
Get products and technologies
- Build your next development project with IBM trial software, available for download directly from developerWorks.
- Now you can use DB2 for free. Download DB2 Express-C, a no-charge version of DB2 Express Edition for the community that offers the same core data features as DB2 Express Edition and provides a solid base to build and deploy applications.
- Participate in the discussion forum.
- Use the InfoSphere Streams Forum to ask questions and share your experiences, solutions, and best practices.
- Check out the developerWorks blogs and get involved in the developerWorks community.