Non-Volatile Memory express (NVMe)
The Non-Volatile Memory express (NVMe) transport protocol provides enhanced performance on high-demand IBM FlashSystem® drives.
NVMe is a logical device interface specification for accessing non-volatile storage media. Host hardware and software use NVMe to take full advantage of the parallelism that modern solid-state drives (SSDs) offer.
Compared to the Small Computer System Interface (SCSI) protocol, NVMe improves I/O handling and delivers performance gains such as multiple long command queues and reduced latency. For example, while SCSI has a single command queue, NVMe is designed to support up to 64 thousand queues. In turn, each of those queues can hold up to 64 thousand commands that are processed simultaneously. NVMe also streamlines the command set to only the basic commands that flash technologies need.
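As a rough illustration of the scale of this queuing model, the following Python sketch works through the arithmetic of the protocol-level maximums described above. These are specification limits, not the negotiated values of any particular drive or IBM FlashSystem configuration.

```python
# Theoretical command-queue capacity of NVMe, as described above.
# These figures are protocol maximums, not the limits negotiated by
# any particular drive or IBM FlashSystem configuration.

NVME_MAX_QUEUES = 64 * 1024              # up to 64 thousand queues
NVME_MAX_COMMANDS_PER_QUEUE = 64 * 1024  # up to 64 thousand commands per queue

total_outstanding = NVME_MAX_QUEUES * NVME_MAX_COMMANDS_PER_QUEUE

print(f"NVMe can expose up to {NVME_MAX_QUEUES:,} queues,")
print(f"each holding up to {NVME_MAX_COMMANDS_PER_QUEUE:,} commands,")
print(f"for roughly {total_outstanding:,} outstanding commands in total,")
print("compared to a single command queue for SCSI.")
```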
Depending on the host bus adapter (HBA) support within your storage systems, you can use the NVMe over Fibre Channel protocol or the NVMe over RDMA protocol. For more information, see the IBM System Storage Interoperation Center (SSIC).
NVMe over Fibre Channel
NVMe over Fibre Channel runs concurrently with SCSI traffic by using the same Fibre Channel resources within your storage system.
Every physical Fibre Channel port supports four virtual ports: one for SCSI host connectivity, one for FC-NVMe host connectivity, one for SCSI host failover, and one for FC-NVMe host failover. Every NVMe virtual port supports the functions of NVMe discovery controllers and NVMe I/O controllers. Hosts create associations (NVMe logins) to the discovery controllers to discover volumes or to I/O controllers to complete I/O operations on NVMe volumes. Up to 512 discovery associations are allowed per node, and up to 512 I/O associations are allowed per node. An additional 512 discovery associations and 128 I/O associations are allowed per node during N_Port ID virtualization (NPIV) failover.
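The per-node limits above determine how many hosts can hold NVMe logins at the same time. The following Python sketch is a hypothetical planning check that applies those limits to a proposed FC-NVMe host attachment layout; the function name and inputs are illustrative and are not part of any IBM tool.

```python
# Hypothetical planning check for FC-NVMe associations per node.
# Limits come from the text above: 512 discovery and 512 I/O
# associations per node, plus an additional 512 discovery and 128 I/O
# associations that are allowed only during NPIV failover.

FC_DISCOVERY_ASSOC_PER_NODE = 512
FC_IO_ASSOC_PER_NODE = 512
FC_DISCOVERY_ASSOC_NPIV_FAILOVER = 512
FC_IO_ASSOC_NPIV_FAILOVER = 128


def fits_fc_nvme_limits(discovery_assocs: int, io_assocs: int) -> bool:
    """Return True if the planned association counts fit within the
    steady-state per-node limits (failover headroom is kept separate)."""
    return (discovery_assocs <= FC_DISCOVERY_ASSOC_PER_NODE
            and io_assocs <= FC_IO_ASSOC_PER_NODE)


# Example: 300 hosts, each holding one discovery and one I/O association.
print(fits_fc_nvme_limits(discovery_assocs=300, io_assocs=300))  # True
print(fits_fc_nvme_limits(discovery_assocs=600, io_assocs=300))  # False
```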
NVMe over RDMA
RDMA data transfer uses specialized network switches and requires fewer resources than FC-NVMe. RDMA allows higher throughput and better performance with lower latency. In addition, RDMA requires less expertise at the storage networking level than the Fibre Channel implementation, potentially reducing overall costs.
Every physical Ethernet port supports four virtual ports: one for SCSI host connectivity, one for RDMA host connectivity, one for SCSI host failover, and one for RDMA host failover. Every NVMe virtual port supports the functions of NVMe discovery controllers and NVMe I/O controllers. Hosts create associations (NVMe logins) to the discovery controllers to discover volumes or to I/O controllers to complete I/O operations on NVMe volumes. Up to 128 discovery associations are allowed per port, not including N_Port ID virtualization (NPIV) failover. The number of ports per node depends on your storage system configuration.
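Because the NVMe over RDMA limit is expressed per port, the node-level capacity scales with the number of Ethernet ports in your configuration. The following Python sketch is an illustrative calculation only; the port count is an input that you take from your own storage system configuration.

```python
# Illustrative calculation of NVMe over RDMA discovery-association
# capacity per node. The per-port limit (128, not including NPIV
# failover) comes from the text above; the port count is
# configuration-specific and is supplied by you.

RDMA_DISCOVERY_ASSOC_PER_PORT = 128


def node_discovery_capacity(ethernet_ports_per_node: int) -> int:
    """Return the maximum discovery associations per node, excluding
    the additional associations allowed during failover."""
    return ethernet_ports_per_node * RDMA_DISCOVERY_ASSOC_PER_PORT


# Example: a node with 4 RDMA-capable Ethernet ports.
print(node_discovery_capacity(4))  # 512
```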