NVMe over RDMA configuration details for host connections

Follow these configuration details for NVMe over RDMA host connections.

Attach the system to NVMe over RDMA hosts by using the Ethernet ports on the system.

NVMe over RDMA connections route from hosts to the system over the LAN. You must follow these configuration rules:
Each node contains three adapter slots, which support up to three 2-port 25 Gbps RoCE adapters or up to three 2-port 100 Gbps RoCE adapters. A mix of adapter types in one node is allowed, but the total is limited to three RoCE adapters per node.
Note:
  • The 100 Gbps adapter supports iSCSI and NVMe over RDMA host attach protocols.
  • For iSCSI, performance is limited to 25 Gbps per port.
  • When both ports are used for NVMe over RDMA, overall I/O throughput is limited to 100 Gbps per adapter (not per port).

For each Ethernet port on a node, multiple IPv4 addresses and multiple IPv6 addresses can be designated for NVMe over RDMA I/O.
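For illustration only, the following Python sketch uses the standard Linux nvme-cli commands to discover and connect to a subsystem through two IP addresses that are configured on a node Ethernet port. The IP addresses, service port, and subsystem NQN are hypothetical placeholders, not values defined by this document.

```python
# Hypothetical example: discover and connect to an NVMe over RDMA subsystem
# through two IP addresses configured on a node Ethernet port. The addresses
# and NQN below are placeholders, not values from this document.
import subprocess

TARGET_IPS = ["192.0.2.10", "192.0.2.11"]       # placeholder node port IP addresses
SUBSYS_NQN = "nqn.2014-08.org.example:subsys1"  # placeholder subsystem NQN
SVC_PORT = "4420"                               # default NVMe over RDMA service port

for ip in TARGET_IPS:
    # List the subsystems that are reachable through this IP address.
    subprocess.run(["nvme", "discover", "-t", "rdma", "-a", ip, "-s", SVC_PORT],
                   check=True)
    # Establish an NVMe over RDMA association through this IP address.
    subprocess.run(["nvme", "connect", "-t", "rdma", "-a", ip, "-s", SVC_PORT,
                    "-n", SUBSYS_NQN],
                   check=True)
```

Connecting through more than one IP address gives the host multiple paths to the same namespaces, which the multipath support that is described under Host mapping can then manage.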

IP requirements for NVMe over RDMA

IP addresses are used to discover storage volumes and to access those volumes with I/O commands.
  • Each node Ethernet port can be configured on the same subnet with the same gateway, or you can have each Ethernet port on separate subnets and use different gateways.
  • If you are configuring a system to use node Ethernet ports 1 and 2 for NVMe over RDMA I/O, ensure that the overall configuration also meets the system IP requirements that are listed previously.
  • To ensure IP failover for NVMe over RDMA operations, nodes in the same I/O group must be connected to the same physical segments on the same node ports. However, you can configure node Ethernet ports in different I/O groups to use different subnets and different gateways. The sketch after this list shows one way to verify the subnet configuration.
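The subnet rules above can be sanity-checked with a short Python sketch that uses the standard ipaddress module; the addresses and prefix length are hypothetical placeholders.

```python
# Minimal sketch (hypothetical addresses): check whether two port IP addresses
# fall in the same subnet. Ports that share a subnet can share a gateway;
# ports in different I/O groups may use separate subnets and gateways.
import ipaddress

def same_subnet(ip_a: str, ip_b: str, prefix: int) -> bool:
    """Return True if both addresses belong to the same IP network."""
    net_a = ipaddress.ip_interface(f"{ip_a}/{prefix}").network
    net_b = ipaddress.ip_interface(f"{ip_b}/{prefix}").network
    return net_a == net_b

# Port 1 on the two nodes of one I/O group, configured in the same subnet.
print(same_subnet("192.0.2.10", "192.0.2.11", 24))     # True
# A port in a different I/O group may use a separate subnet and gateway.
print(same_subnet("192.0.2.10", "198.51.100.10", 24))  # False
```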

Host mapping

Use this information for specific NVMe over RDMA host mapping information. For additional general host mapping information, see General Ethernet port configuration details for host connections.

NVMe over RDMA hosts support both the native multipath driver and a multipath daemon, when supported by the operating system. However, this capability does not include IBM® AIX® host attachment, because AIX does not support multipath functions.
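On Linux hosts, for example, whether the native NVMe multipath driver is active can be checked by reading the nvme_core module parameter, as in this sketch (Linux-specific; not applicable to AIX).

```python
# Linux-only sketch: report whether the native NVMe multipath driver is
# enabled on this host. The sysfs parameter is exposed by the nvme_core
# kernel module; 'Y' means native multipath is used, 'N' means the host
# relies on a multipath daemon such as dm-multipath instead.
from pathlib import Path

param = Path("/sys/module/nvme_core/parameters/multipath")

if param.exists():
    native = param.read_text().strip() == "Y"
    print("Native NVMe multipath enabled:", native)
else:
    print("nvme_core module not loaded or parameter not exposed")
```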

For more information about host mapping, see Host mapping.

Priority Flow Control (PFC) information

NVMe over RDMA storage controllers support the use of Priority Flow Control (PFC) with RoCE v2 transport. Differentiated Services Code Point (DSCP) tagging is used to implement priority flow control. All RDMA ports are configured in DSCP trust mode and have a single PFC priority (priority 3) enabled. The storage controller tags all outgoing frames with the DSCP value that was obtained from the initiator during connection establishment. To achieve end-to-end PFC configuration, initiators must use a Type of Service (TOS) value of 106 in RDMA connection requests, and Ethernet host ports and Ethernet switch ports must be configured to map this value (DSCP tag 26) to PFC class 3.
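The arithmetic behind these values can be made explicit: the DSCP field occupies the upper six bits of the TOS byte, so a TOS value of 106 carries DSCP 26, which the host and switch ports then map to PFC priority 3. The following short Python sketch only restates that calculation.

```python
# Worked example: the DSCP field is the upper 6 bits of the 8-bit TOS byte,
# so a TOS value of 106 carries DSCP 26. The mapping below is the single
# mapping described in this section (DSCP 26 -> PFC priority 3); it is shown
# here only to make the arithmetic explicit.
TOS = 106

dscp = TOS >> 2      # 106 >> 2 == 26
ecn = TOS & 0b11     # remaining 2 bits are the ECN field

DSCP_TO_PFC_PRIORITY = {26: 3}

print(f"TOS {TOS} -> DSCP {dscp}, ECN bits {ecn}")
print(f"PFC priority: {DSCP_TO_PFC_PRIORITY[dscp]}")
```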