Abstract
By Jack Tedjai, IBM Systems Lab Services
NVMe-oF (Non-Volatile Memory Express over Fabrics) is an exciting new storage network technology that allows you to take full advantage of IBM FlashCore Modules (FCM) from an Ethernet or InfiniBand network. IBM FlashSystem 9100 and IBM Storwize Gen3 support end-to-end NVMe-oF for Fibre Channel solutions as well as RDMA-based Ethernet solutions.
Content
NVMe-oF (Non-Volatile Memory Express over Fabrics) is an exciting new storage network technology that allows you to take full advantage of IBM FlashCore Modules (FCM) from an Ethernet or InfiniBand network.
NVMe-oF is designed to extend the performance of NVMe technology across the network using Remote Direct Memory Access (RDMA). RDMA is direct memory access from the memory of one computer into that of another without involving either one's operating system, which keeps latency low and CPU overhead minimal.

Figure 1: NVMe over Fabrics (NVMe-oF)
IBM FlashSystem 9100 and IBM Storwize Gen3 support end-to-end NVMe-oF for Fibre Channel solutions as well as RDMA-based Ethernet solutions.
iSER (iSCSI Extensions for RDMA)
iSCSI Extensions for RDMA (iSER) is a computer network protocol that extends the Internet Small Computer System Interface (iSCSI) protocol to use RDMA. Most important, iSER uses standard Ethernet switches and comparably priced network interface cards (NICs). That makes it a low-cost solution whose price/performance far exceeds that of conventional Ethernet storage protocols, with throughput that approaches wire speed at 40 Gbps and even 100 Gbps.
The requirements for running iSER are as follows:
- Applications that can use the SCSI and iSCSI layers (in the client example detailed below, this was VMware ESXi 6.7)
- A network capable of carrying RDMA traffic, for example, 25 GbE SR SFP28 optics over 50 µm OM4 multimode fiber (MMF)
- Adapter cards that support RDMA (Ethernet or InfiniBand), for example, the Mellanox ConnectX-4 Lx 25 GbE dual-port adapter (a quick verification sketch follows this list)
- Ethernet switches that support RDMA over Converged Ethernet (RoCE) with Priority-based Flow Control (IEEE 802.1Qbb), for example, the Dell S5048-F switch
- A target that supports iSER, for example, IBM Storwize Gen3 with FlashCore Modules (FCMs)
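Before moving on, it's worth confirming from the ESXi shell that the RDMA adapter cards are actually visible to the host. A minimal check (device names such as vmrdma0 in the output are host-specific):
#esxcli rdma device list
The output pairs each RDMA device (vmrdma0, vmrdma1, and so on) with its uplink vmnic; if the Mellanox ports don't appear here, iSER cannot be configured on them.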
In this client case, the following layers were used (see figure 2):
- Front-end iSCSI for the existing 10 GbE iSCSI network (Juniper network)
- Back-end iSER network for iSER clustering (Dell network, 25 GbE)
- iSER interconnect between the data centers over 10 GbE dark fiber
- iSER VMware network for the new ESXi host deployment

Figure 2: Customer global design
The port location address (see figure 3) is very important for getting the physical connectivity right.
Figure 3: iSER IBM Storwize Gen3 port mapping with canister location address

Figure 4: VMware ESXi 6.7 multipath design
Note: iSER does not support NIC teaming. When configuring port binding, use only one RDMA adapter per vSwitch.
VMware ESXi storage adapter settings:
From the “Configuration” page, click on “Storage Adapters.” Select the device listed under “Mellanox iSCSI over RDMA (iSER) Adapter” and click “Properties” to add the network configuration settings.
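The same steps can be scripted from the ESXi shell. A hedged sketch, assuming the iSER adapter enumerates as vmhba64, vmk1 is the VMkernel port backed by the RDMA uplink, and x.0.5.10 is one of the Storwize iSER port addresses (all three names are placeholders for this example):
#esxcli rdma iser add
(enables the VMware iSER adapter on the host)
#esxcli iscsi networkportal add -A vmhba64 -n vmk1
(binds the single RDMA-backed VMkernel port, per the note above)
#esxcli iscsi adapter discovery sendtarget add -A vmhba64 -a x.0.5.10:3260
(adds the Storwize iSER port as a send-target discovery address)
#esxcli storage core adapter rescan -A vmhba64
(rescans the adapter to discover the volumes)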

Figure 5: ESXi Storage Adapters settings
Note: Double-check the MTU size setting; it must match across the host, every switch in the path and the storage ports.

Figure 6: MTU settings on the Storwize
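If jumbo frames are used, the MTU can also be checked and set from the command line. A hedged sketch of the host-side commands (vSwitch1 and vmk1 are placeholders):
#esxcli network vswitch standard set -v vSwitch1 -m 9000
(sets the vSwitch MTU to 9000)
#esxcli network ip interface set -i vmk1 -m 9000
(sets the VMkernel port MTU to match)
On the Storwize side, lsportip shows the current MTU per port, and cfgportip accepts an -mtu parameter to change it; check the command reference for the exact syntax on your code level.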

Figure 7: VLAN support

Figure 8: VMware ESXi 6.7 host attachment overview from the Storwize

Figure 9: Storwize Gen3 performance test run overview on FlashCore Modules

Figure 10: Setting up the iSER clustering IP addresses from the Service Assistant GUI
The node IP addresses are set with the satask chnodeip command. First clear any existing configuration on ports 7 and 8 of both node canisters (01-1 and 01-2), then assign the clustering IP addresses with their gateways and VLANs:
satask chnodeip -noip -port_id 7 01-1
satask chnodeip -noip -port_id 8 01-1
satask chnodeip -noip -port_id 7 01-2
satask chnodeip -noip -port_id 8 01-2
satask chnodeip -gw x.0.5.1 -ip x.0.5.12 -mask 255.255.255.0 -port_id 7 -vlan 80 01-2
satask chnodeip -gw x.0.6.1 -ip x.0.6.12 -mask 255.255.255.0 -port_id 8 -vlan 81 01-2
satask chnodeip -gw x.0.5.1 -ip x.0.5.13 -mask 255.255.255.0 -port_id 7 -vlan 80 01-1
satask chnodeip -gw x.0.6.1 -ip x.0.6.13 -mask 255.255.255.0 -port_id 8 -vlan 81 01-1
More information can be found in the IBM documentation topic “Setting the IP address for an RDMA-capable Ethernet port.”

Figure 11: sainfo lsnodeip overview
Review the iSER cluster setup with the sainfo lsnodeipconnectivity command; the same information is also available from the GUI:

Figure 12: sainfo lsnodeipconnectivity overview

Figure 13: sainfo lsnodeipconnectivity overview from the GUI
Create the cluster, or add the nodes to an existing cluster, as described in the IBM documentation. List the candidate control enclosures, then add the new enclosure to I/O group 1 by its serial number:
lscontrolenclosurecandidate
addcontrolenclosure -iogrp 1 -sernum 78Y00XX
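Once the enclosure has been added, a quick sanity check is to confirm that the new node canisters have joined the cluster and show as online, for example with:
lsnodecanister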
Finally, create the IP quorum for the HyperSwap cluster:
Figure 14: IP quorum overview
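The IP quorum application shown in figure 14 is generated on the cluster and then run on a host at a third site. A minimal sketch, assuming a quorum host with a Java runtime and IP connectivity to both sites (/dumps/ip_quorum.jar is the file that mkquorumapp generates):
mkquorumapp
Copy /dumps/ip_quorum.jar to the quorum host and start it there:
java -jar ip_quorum.jar
The lsquorum command then lists the IP quorum device once it has connected to the cluster.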
Conclusion
In closing, NVMe-oF offers better performance and efficiency, especially for small I/O:
- Lower latency
- Higher IOPS
- Lower CPU utilization
- Lower power consumption
IBM FlashSystem 9100 and IBM Storwize Gen3 support end-to-end NVMe-oF solutions.
Looking for support?
IBM Systems Lab Services offers infrastructure services to help organizations build the foundation of a smart enterprise. Our storage consultants can help you secure your enterprise with physical and software-defined storage solutions for on-premises, cloud, converged and virtualized environments. Contact Lab Services today to learn more.