Host attachment

This topic describes configuration activities to enable hosts to access volumes and use host-based solutions to manage the storage system.

You can attach hosts to the system by using the following protocols:
  • Fibre Channel
  • NVMe over Fibre Channel (FC-NVMe)
  • Fibre Channel over Ethernet (FCoE)
  • NVMe over RDMA
  • NVMe over TCP
  • Serial-attached SCSI (SAS)
  • Ethernet iSCSI
  • SCSI Fibre Channel Protocol (SCSI-FCP)

For more information about the supported host configurations on each product, see IBM® System Storage Interoperation Center (SSIC).

Hosts that use Fibre Channel connections are attached to the system either directly or through a switched Fibre Channel fabric. Each port on a node is identified by a worldwide port name (WWPN).
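
The WWPNs that identify the node ports can be compared with what the host sees. The following is a minimal sketch, assuming a Linux host whose Fibre Channel HBA driver populates the standard fc_host sysfs class; it reads each adapter port's port_name attribute, which holds the port's WWPN as a hexadecimal string.

    # List the WWPN of each Fibre Channel host port via Linux sysfs.
    from pathlib import Path

    def list_wwpns():
        """Return a {host: WWPN} mapping for every fc_host the kernel exposes."""
        wwpns = {}
        for host in sorted(Path("/sys/class/fc_host").glob("host*")):
            # port_name holds the WWPN as a hex string, e.g. 0x10000090fa8b4c5d
            wwpns[host.name] = (host / "port_name").read_text().strip()
        return wwpns

    if __name__ == "__main__":
        for host, wwpn in list_wwpns().items():
            print(f"{host}: {wwpn}")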

The system does not limit the number of Fibre Channel ports or host bus adapters (HBAs) that each connected host or host partition can have. Your connected hosts are limited only by the number of ports or HBAs that are supported by the multipathing device driver on the host or host partition.

Refer to the subtopics of this section for information about setting host attachment parameters.

General protocol limitations for iSCSI, NVMe over RDMA, and NVMe over TCP

Host connections through NVMe over RDMA use specialized network switches and require fewer resources than FC-NVMe. RDMA enables higher throughput and lower latency. In addition, RDMA requires less expertise at the storage networking level than a Fibre Channel implementation, which can reduce overall costs.

The advantage of NVMe over TCP is that, unlike NVMe over RDMA, it uses the existing Ethernet adapters on the host and the existing network infrastructure.
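
On the host side, little changes between the two Ethernet-based NVMe transports. The following is a minimal sketch, assuming a Linux host with the nvme-cli utility installed; the target address and NVMe Qualified Name (NQN) are hypothetical placeholders, and only the transport argument differs between NVMe over TCP and NVMe over RDMA.

    # Connect a Linux host to an NVMe-oF target with nvme-cli.
    import subprocess

    def nvme_connect(transport, traddr, subnqn, trsvcid="4420"):
        """Invoke 'nvme connect'; transport is 'tcp' or 'rdma'."""
        subprocess.run([
            "nvme", "connect",
            "-t", transport,  # transport type: tcp or rdma
            "-a", traddr,     # target IP address
            "-s", trsvcid,    # NVMe-oF service port; 4420 is the IANA default
            "-n", subnqn,     # NQN of the target subsystem
        ], check=True)

    if __name__ == "__main__":
        # Hypothetical values; use the ones reported by 'nvme discover'.
        nvme_connect("tcp", "192.0.2.10", "nqn.2000-01.com.example:subsystem1")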

When you use an Ethernet-based connection, you must consider the Ethernet protocol limitations:
  • The behavior of a host that supports both Fibre Channel and Ethernet-based connections to a single volume can be unpredictable and depends on the multipathing software.
  • A maximum of four sessions can come from one iSCSI or NVMe initiator to an Ethernet-based target (a sketch for checking session counts follows the note below).
Note:
  • When multiple ports within the same cage are used for NVMe over RDMA, NVMe over TCP, or both, I/O for these operation types is limited to a total of 200 Gbps per cage (not per adapter or per port).
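
To check the four-session limit that is described above, the per-target session count can be read on the host. The following is a minimal sketch for iSCSI, assuming a Linux host running open-iscsi, which exposes each session's target IQN in the iscsi_session sysfs class.

    # Count active iSCSI sessions per target on a Linux host.
    from collections import Counter
    from pathlib import Path

    def sessions_per_target():
        counts = Counter()
        for session in Path("/sys/class/iscsi_session").glob("session*"):
            # targetname holds the IQN of the target for this session
            counts[(session / "targetname").read_text().strip()] += 1
        return counts

    if __name__ == "__main__":
        for target, count in sessions_per_target().items():
            note = "  <-- exceeds the 4-session maximum" if count > 4 else ""
            print(f"{target}: {count} session(s){note}")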