Host attachment
This topic describes the configuration activities that enable hosts to access volumes and to use host-based solutions to manage the storage system. The system supports the following host-attachment protocols:
- Fibre Channel
- NVMe over Fibre Channel (FC-NVMe)
- Fibre Channel over Ethernet (FCoE)
- NVMe over RDMA
- NVMe over TCP
- Serial-attached SCSI (SAS)
- Ethernet iSCSI
- SCSI Fibre Channel Protocol (SCSI-FCP)
For more information about the supported host configurations on each product, see IBM® System Storage Interoperation Center (SSIC).
Hosts that use Fibre Channel connections are attached to the system either directly or through a switched Fibre Channel fabric. Each Fibre Channel port on a node is identified by a worldwide port name (WWPN).
The system does not limit the number of Fibre Channel ports or host bus adapters (HBAs) that each connected host or host partition can have. Your connected hosts are limited only by the number of ports or HBAs that are supported by the multipathing device driver on the host or host partition.
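To show how WWPNs appear from the host side, the following Python sketch reads them from the fc_host class in sysfs. It assumes a Linux host with a standard Fibre Channel driver stack; the sysfs path and value format are generic Linux conventions, not something defined by this system.

```python
# Minimal sketch: list the WWPN of each Fibre Channel host port on a Linux host.
# Assumes the standard fc_host sysfs class exposed by Linux FC HBA drivers;
# the exact paths and formatting can vary by distribution and driver.
from pathlib import Path

FC_HOST_ROOT = Path("/sys/class/fc_host")

def list_wwpns():
    """Return a mapping of fc_host name (for example, 'host3') to its WWPN string."""
    wwpns = {}
    if not FC_HOST_ROOT.is_dir():
        return wwpns  # no FC HBAs present or the driver is not loaded
    for host in sorted(FC_HOST_ROOT.iterdir()):
        port_name = host / "port_name"
        if port_name.is_file():
            # port_name holds a hexadecimal value such as 0x10000090fa8b1234
            wwpns[host.name] = port_name.read_text().strip()
    return wwpns

if __name__ == "__main__":
    for host, wwpn in list_wwpns().items():
        print(f"{host}: {wwpn}")
```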
Refer to the subtopics of this section for information about setting host attachment parameters.
General protocol limitations for iSCSI, NVMe over RDMA, and NVMe over TCP
Host connections through NVMe over RDMA use specialized network switches and require fewer resources than FC-NVMe. RDMA provides higher throughput and lower latency. In addition, RDMA requires less expertise at the storage-networking level than a Fibre Channel implementation, which can reduce overall costs.
The advantage of NVMe over TCP is that, unlike RDMA, it uses the existing Ethernet adapters on the host and the existing network infrastructure.
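As a host-side illustration, the following Python sketch wraps the Linux nvme-cli utility to discover and connect to an NVMe over TCP or NVMe over RDMA target. The target address, service ID, and subsystem NQN are hypothetical values, and the sketch assumes that nvme-cli is installed on the host.

```python
# Minimal sketch: discover and connect to an NVMe over TCP or RDMA target by
# wrapping the Linux nvme-cli utility. The target address (192.0.2.10), the
# service ID (4420), and the subsystem NQN below are illustrative assumptions.
import subprocess

def nvme(*args):
    """Run an nvme-cli command and return its stdout (raises on failure)."""
    result = subprocess.run(["nvme", *args], check=True,
                            capture_output=True, text=True)
    return result.stdout

def discover(transport="tcp", traddr="192.0.2.10", trsvcid="4420"):
    # 'nvme discover' queries the discovery controller for available subsystems.
    return nvme("discover", "-t", transport, "-a", traddr, "-s", trsvcid)

def connect(subnqn, transport="tcp", traddr="192.0.2.10", trsvcid="4420"):
    # 'nvme connect' creates the host-to-subsystem association.
    return nvme("connect", "-t", transport, "-n", subnqn, "-a", traddr, "-s", trsvcid)

if __name__ == "__main__":
    print(discover())
    # connect("nqn.2000-01.com.example:subsystem1")  # hypothetical subsystem NQN
```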
- When multiple ports in the same cage are used for NVMe over RDMA, NVMe over TCP, or both, I/O operations are limited to a total of 200 Gbps per cage (not per adapter or per port), as the example after this list shows.
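As a worked illustration of that limit, the sketch below compares the nominal bandwidth of the ports in a cage with the 200 Gbps cap; the port count and port speed are hypothetical values chosen only to show that the cap applies per cage.

```python
# Illustration of the per-cage limit: aggregate NVMe over RDMA/TCP throughput
# is capped at 200 Gbps per cage, not per adapter or per port.
CAGE_LIMIT_GBPS = 200

def effective_cage_throughput(ports, port_speed_gbps):
    """Nominal bandwidth of the ports versus the enforced per-cage cap."""
    nominal = ports * port_speed_gbps
    return min(nominal, CAGE_LIMIT_GBPS)

# Hypothetical example: four 100 Gbps ports in one cage offer 400 Gbps nominally,
# but NVMe over RDMA/TCP I/O is still limited to 200 Gbps for the cage.
print(effective_cage_throughput(4, 100))  # -> 200
```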