Network switches and configuration
The IBM Fusion HCI is configured with a dual-network architecture designed to separate high-speed data traffic from management traffic.
The two physical networks are:
- 100GbE high-speed network switches (models S01 and S05)
  - Purpose: Storage cluster and application traffic.
  - Functionality: This network is dedicated to high-speed data transfer between the storage cluster and applications, ensuring optimal performance and low latency.
  - Connectivity: The high-speed network is built around a pair of 32-port, 200Gb Ethernet switches configured as a redundant pair using MLAG. Each compute-storage server and GPU server has a 2-port, 100Gb Ethernet adapter; one port connects to the first high-speed switch and the other to the second. These 100GbE connections are reserved for use by the storage cluster. Each compute-storage server and GPU server also has a 2-port, 25Gb Ethernet adapter. Breakout cables split 100GbE switch ports into four 25GbE ports, and one port of the server's 25GbE adapter connects to each high-speed switch. The OpenShift network and workload traffic run through these 25GbE connections.
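As a rough illustration of the cabling described above, the port budget on each high-speed switch can be sketched as follows. The server count used here is a hypothetical example, not a fixed configuration, and the function is an illustrative aid rather than an IBM sizing tool:

```python
import math

BREAKOUT_LANES = 4  # one 100GbE switch port splits into four 25GbE ports

def high_speed_ports_per_switch(num_servers: int) -> int:
    """Estimate ports consumed on ONE high-speed switch.

    Per the cabling plan, each compute-storage or GPU server uses:
      - one dedicated 100GbE port (storage cluster traffic)
      - one 25GbE breakout lane (OpenShift/workload traffic),
        where four lanes share a single physical switch port.
    """
    storage_ports = num_servers                               # one 100GbE port per server
    breakout_ports = math.ceil(num_servers / BREAKOUT_LANES)  # shared 25GbE breakout ports
    return storage_ports + breakout_ports

# Hypothetical example: 8 servers consume 8 + 2 = 10 of the 32 ports per switch
print(high_speed_ports_per_switch(8))  # → 10
```

Because both switches are cabled symmetrically (one adapter port per switch), the same count applies to each member of the MLAG pair.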
- 1GbE management network switches (models S02 and S04)
  - Purpose: Server management and health monitoring.
  - Functionality: This network is used for managing the servers, monitoring their health, and performing administrative tasks, ensuring the overall system's stability and security.
  - Connectivity: The management network is built around a pair of 48-port, 1Gb Ethernet switches. The BMC of each server and the management interfaces of the switches connect through these 1Gb management switches.
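A similar back-of-the-envelope check applies to the management network. The counts below are illustrative assumptions (not a mandated layout): one 1GbE link per server BMC plus one management link per switch, compared against the 48-port capacity stated above:

```python
MGMT_SWITCH_PORTS = 48  # each management switch has 48 1GbE ports

def mgmt_ports_needed(num_servers: int, num_managed_switches: int) -> int:
    """Estimate 1GbE ports needed on the management network:
    one BMC link per server plus one management link per switch."""
    return num_servers + num_managed_switches

# Hypothetical example: 8 servers plus 4 switches (2 high-speed + 2 management)
needed = mgmt_ports_needed(8, 4)
print(needed, needed <= MGMT_SWITCH_PORTS)  # → 12 True
```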