InfiniBand how-to topics
InfiniBand is available on both IBM® x86 and Power Systems™ servers running Linux. IBM InfiniBand adapters can be used to create a high-bandwidth, low-latency communication network for your applications.
The InfiniBand how-to topics are short procedures that provide just the steps you need to complete each task. The topics are listed in the order in which they are typically completed. Hedged example sketches for many of these commands and configuration files appear after the topic list.
- Installing InfiniBand packages on Red Hat 5.x
  Complete the following steps to install InfiniBand packages on any Red Hat Enterprise Linux version 5.
- Installing InfiniBand packages on Red Hat 6.x
  Complete the following steps to install InfiniBand packages on any Red Hat Enterprise Linux version 6.
- Installing InfiniBand on SUSE
  Complete the following steps to install InfiniBand packages on any version of SUSE.
- Identifying RDMA-capable hardware
  Several companies make InfiniBand HCAs and RNICs, and most fall into one of two categories: PCI/PCIe devices and non-PCI/PCIe devices. Each type is identified with a different command.
- Activating drivers for Red Hat 5.x and SUSE
  To activate RDMA drivers on any Red Hat Enterprise Linux version 5 and on any version of SUSE, use one of two commands.
- Activating drivers for Red Hat 6.x
  To activate RDMA drivers on any Red Hat Enterprise Linux version 6, use one of two commands.
- Determining hardware activation
  You can control which RDMA devices are loaded by the openibd script.
- Verifying that ports are active
  Use the ibv_devinfo command to verify that the ports are active.
- Recovering from a failed ibv_devinfo command
  The ibv_devinfo command can fail when modules or hardware drivers fail to load or when libraries are missing.
- Verifying that the nodes can communicate
  Ensure that your nodes can communicate with each other by running a single command on each node.
- Verifying that RDMA is working
  Ensure that RDMA is working between the two nodes by running a single command on each node.
- Discovering the fabric configuration
  Use the ibhosts command to find other hosts with IB HCAs in the fabric topology.
- Using InfiniBand ports with IPoIB
  Complete these procedures to set up IPoIB.
- Setting connected mode on Red Hat Enterprise Linux
  To set connected mode on Red Hat Enterprise Linux, add a line to the interface configuration script.
- Setting connected mode on SUSE
  To set connected mode on SUSE, set the mode of the ib0 interface to connected, and then set the MTU for that interface.
- Setting up IPoIB bonding on Red Hat Enterprise Linux
  Bonding two Red Hat Enterprise Linux systems requires setting up two configuration scripts on a single node.
- Setting up IPoIB bonding on SUSE
  Bonding two SUSE systems requires setting up two configuration scripts on a single node.
- Removing bonding on SUSE systems
  To remove IPoIB bonding on systems running SUSE, you must run a series of commands on both bonded systems.
- Preventing incorrect ARP entries
  The correct way to configure a multi-homed IPoIB host is to use a different PKEY for each IP subnet. In some cases, however, a host has multiple interfaces on the same IP subnet, and you must prevent peers from building incorrect ARP (neighbor) entries.
- Viewing RDMA packet counts
  To view the RDMA packet counts, use the ibdatacounts command.
- Verifying send and receive queue sizes for IPoIB
  Under heavy load, you might see significant packet loss and a resulting drop in throughput; IPoIB is particularly vulnerable to this with UDP traffic. Verify that your send and receive queue sizes are large enough.
- Dynamically enabling IPoIB debug
  You can enable or disable IPoIB debug output dynamically at any time.
- Dynamically enabling Mellanox debug
  You can enable or disable debug output for the Mellanox drivers dynamically at any time.
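Example command sketches

The sketches that follow are illustrative, not definitive: package names, group names, device names (such as ib0), LIDs, and host names vary by distribution release and by fabric, so treat every specific value as an assumption to verify against your distribution's documentation.

To install the packages, Red Hat Enterprise Linux 6 provides a yum package group for the RDMA stack, and diagnostic utilities can be added individually; the group and package names below are typical but not guaranteed for every release:

```
# Red Hat Enterprise Linux 6.x: install the InfiniBand group and
# the common diagnostic utilities.
yum groupinstall "Infiniband Support"
yum install infiniband-diags libibverbs-utils perftest

# SUSE: package names differ by SLES version; these are typical.
zypper install ofed infiniband-diags libibverbs
```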
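To identify RDMA-capable hardware, PCI and PCIe adapters appear in the PCI device list, while non-PCI/PCIe adapters are easiest to spot through the loaded kernel modules; the grep patterns below are assumptions that cover common InfiniBand and Mellanox device strings:

```
# PCI/PCIe HCAs and RNICs show up in the PCI device list.
lspci | grep -i -e infiniband -e mellanox

# Non-PCI/PCIe adapters do not; check the loaded kernel modules instead.
lsmod | grep -e '^ib_' -e '^mlx'
```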
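To activate the drivers, Red Hat Enterprise Linux 5.x and SUSE use the OFED openibd init script, while Red Hat Enterprise Linux 6.x manages the stack through the rdma service; a minimal sketch:

```
# Red Hat Enterprise Linux 5.x and SUSE: start the OFED stack now
# and enable it at boot.
service openibd start
chkconfig openibd on

# Red Hat Enterprise Linux 6.x: the stack is managed by the rdma service.
service rdma start
chkconfig rdma on
```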
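To verify that ports are active, ibv_devinfo reports a state for each port; PORT_ACTIVE means the port is usable, PORT_DOWN usually indicates a cabling problem, and PORT_INIT typically means no subnet manager is running:

```
ibv_devinfo | grep -i state
#         state:                  PORT_ACTIVE (4)
```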
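To verify that the nodes can communicate, ibping exercises the fabric directly; run it as a server on one node and point the other node at the server's port LID (the LID below is a placeholder; read the real one from ibv_devinfo):

```
# Node 1: run ibping as a server.
ibping -S

# Node 2: ping node 1 by its port LID (example LID shown).
ibping 2
```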
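To verify that RDMA is working, the ibv_rc_pingpong utility shipped with libibverbs performs reliable-connection transfers between two nodes; node1 below is a hypothetical host name:

```
# Node 1: start the server side.
ibv_rc_pingpong

# Node 2: connect to node 1 (resolved over the regular IP network).
ibv_rc_pingpong node1
```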
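To discover the fabric configuration, ibhosts lists the channel adapters (Ca nodes) visible to subnet discovery; ibnetdiscover prints the full topology if you also need switches and links:

```
ibhosts          # one line per host channel adapter in the fabric
ibnetdiscover    # full fabric topology, including switches
```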
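To set connected mode, Red Hat systems take a CONNECTED_MODE line in the interface configuration script, while on SUSE the mode can be switched at run time through sysfs; 65520 bytes is the usual connected-mode MTU:

```
# Red Hat: add these lines to /etc/sysconfig/network-scripts/ifcfg-ib0.
CONNECTED_MODE=yes
MTU=65520

# SUSE (run time): switch the mode, then raise the MTU.
echo connected > /sys/class/net/ib0/mode
ip link set ib0 mtu 65520
```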
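To view RDMA packet counts, ibdatacounts queries a port's data counters; the argument syntax, LID, and port number below are assumptions to check against your version's man page, and perfquery is a widely available alternative that reports the same counters for the local port:

```
ibdatacounts 2 1   # data counters for LID 2, port 1 (example values)
perfquery          # counters for the local port
```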
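To verify the IPoIB send and receive queue sizes, read the ib_ipoib module parameters; to enlarge them, set the options when the module loads (the modprobe.d file name below is an assumption):

```
# Current queue sizes.
cat /sys/module/ib_ipoib/parameters/send_queue_size
cat /sys/module/ib_ipoib/parameters/recv_queue_size

# Larger queues, applied the next time ib_ipoib loads; place in,
# for example, /etc/modprobe.d/ib_ipoib.conf.
options ib_ipoib send_queue_size=512 recv_queue_size=512
```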
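To toggle IPoIB or Mellanox driver debug output dynamically, write to the debug_level module parameters through sysfs; these parameters exist only when the modules were built with debugging support, so their presence on your system is an assumption to verify:

```
# IPoIB debug on, then off.
echo 1 > /sys/module/ib_ipoib/parameters/debug_level
echo 0 > /sys/module/ib_ipoib/parameters/debug_level

# Mellanox mlx4 driver debug.
echo 1 > /sys/module/mlx4_core/parameters/debug_level
```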
Parent topic: Performance for Linux on Power Systems servers