Overview of InfiniBand products and networks

If you plan to set up a clustered server configuration by using InfiniBand switches to network your servers, you need to understand the components that are required for the network. The requirements are based on the type of host channel adapter (HCA) that is used to connect to the InfiniBand fabric.

This table shows the required components and supported adapters for setting up your InfiniBand network.
Table 1. Supported InfiniBand components

Adapter
  PCI adapter: GX Dual-port 4x HCA
  GX adapter: GX Dual-port 4x HCA or GX Dual-port 12x HCA
Systems
  PCI adapter: IBM® System p5®, low-end
  GX adapter: IBM System p5, mid-range and high-end
Switches: Topspin 120 Server Switch (7048-120), Topspin 270 Server Switch (7048-270)
Cables: IBM certified cables
Fabric management
  PCI adapter: Topspin Web user interface and Element Manager
  GX adapter: IBM Network Manager
AIX® version: AIX 5L™ version 5.3 with the 5300-03 Recommended Maintenance package
Linux® version
  PCI adapter: SUSE Linux Enterprise Server 9 SP2 with Topspin Enterprise Commercial Stack
  GX adapter: SUSE Linux Enterprise Server 9 SP2 with IBM IB GX HCA driver and OpenIB Gen2 Stack
Note: For the most recent information about cluster offerings, see the Facts and Features Web site, at http://www.ibm.com/servers/eserver/clusters/hardware/factsfeatures.html.
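The table above notes that GX HCAs on Linux use the OpenIB Gen2 stack. As a rough illustration of what that stack exposes to applications, the following C sketch uses the libibverbs device-enumeration calls (part of the OpenIB Gen2 stack) to list the HCAs that the driver has registered. The program is illustrative only, not part of the cluster offering.

    #include <stdio.h>
    #include <infiniband/verbs.h>

    /* List every InfiniBand HCA that the OpenIB Gen2 (libibverbs)
     * stack has registered, printing each device name. */
    int main(void)
    {
        int num_devices;
        struct ibv_device **dev_list = ibv_get_device_list(&num_devices);

        if (dev_list == NULL || num_devices == 0) {
            fprintf(stderr, "No InfiniBand HCAs found.\n");
            return 1;
        }

        for (int i = 0; i < num_devices; ++i)
            printf("HCA %d: %s\n", i, ibv_get_device_name(dev_list[i]));

        ibv_free_device_list(dev_list);
        return 0;
    }

On a system with the Gen2 stack installed, this would typically be compiled with something like cc list_hcas.c -o list_hcas -libverbs (the file name is hypothetical).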
This figure shows servers working in a cluster that is connected through InfiniBand switch networks.
Figure 1. Servers configured in an InfiniBand network