Multipath configuration for NVMe over RDMA hosts

Follow multipath configuration recommendations for NVMe over RDMA when you attach Linux® hosts to the system.

Hosts can be configured to work with the traditional Device Mapper or with Native NVMe Multipath. SCSI devices continue to work with Device Mapper in either case. SLES 15 supports Native NVMe Multipath, and it is enabled by default. Native NVMe Multipath is supported on Red Hat Enterprise Linux 8.x, but it is not the default. Native NVMe Multipath supports NVMe ANA (the NVMe equivalent of SCSI ALUA). Device Mapper supports ANA on Red Hat Enterprise Virtualization. With Native NVMe Multipath, every namespace is shown as a single device instance.

Note: On SLES 15, working with Native NVMe Multipath is recommended. On Red Hat Enterprise Linux 8.0 or later, working with Device Mapper is recommended.

Turning Native NVMe Multipath on or off

On SLES 15, Native NVMe Multipath is turned on by default.

To check whether Native NVMe Multipath is on, enter:
# systool -m nvme_core -A multipath
Module = "nvme_core"

    multipath           = "Y"
If multipath = "N" and you want to turn it on, enter:

echo "options nvme_core multipath=Y" > /etc/modprobe.d/50-nvme_core.conf
dracut -f
reboot
If you choose to work with Device Mapper, enter:
echo "options nvme_core multipath=N" > /etc/modprobe.d/50-nvme_core.conf
dracut -f
reboot
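The check above can also be wrapped in a small helper script. This is a minimal sketch, not part of the product tooling: the nvme_multipath_status name is hypothetical, and the optional path argument exists only so the logic can be exercised without NVMe hardware (the real setting is read from /sys/module/nvme_core/parameters/multipath, which systool also reports).

```shell
#!/bin/sh
# Hypothetical helper: report which multipath framework the kernel uses
# for NVMe, based on the nvme_core "multipath" module parameter.
nvme_multipath_status() {
    # Optional override path, for testing without the nvme_core module.
    param="${1:-/sys/module/nvme_core/parameters/multipath}"
    if [ -r "$param" ] && [ "$(cat "$param")" = "Y" ]; then
        echo "native"         # Native NVMe Multipath is on
    else
        echo "device-mapper"  # parameter is N, or module not loaded
    fi
}
```

On a default SLES 15 installation this prints native, matching the systool output shown above.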

Device Mapper or Native NVMe Multipath configuration

Two frameworks are available for multipath configuration: Device Mapper or Native NVMe Multipath.

To configure Device Mapper, complete the following steps:
  1. Edit the /etc/multipath.conf file to include the following code:
    devices {
        device {
            vendor "NVME"
            product "IBM     2145"
            path_grouping_policy "group_by_prio"
            path_selector "round-robin 0"
            prio "ana"
            path_checker "none"
            failback "immediate"
            no_path_retry "queue"
            rr_weight uniform
            rr_min_io_rq "1"
            fast_io_fail_tmo 15
            dev_loss_tmo 600
        }
        device {
            vendor "IBM"
            product "2145"
            path_grouping_policy "group_by_prio"
            path_selector "service-time 0"
            prio "alua"
            path_checker "tur"
            failback "immediate"
            no_path_retry 5
            rr_weight uniform
            rr_min_io_rq "1"
            dev_loss_tmo 120
        }
    }
    defaults {
        user_friendly_names yes
        path_grouping_policy group_by_prio
    }
  2. Run the following commands to enable and start the multipath daemon, then verify that it is running:
    systemctl enable multipathd.service
    systemctl start multipathd.service
    
    # ps -ef | grep -v grep | grep multipath
    root      1616     1  0 Nov21 ?        00:01:14 /sbin/multipathd -d -s
    
  3. Run the following commands to apply configurations:
    multipath -F
    multipath
    multipath -ll
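Steps 2 and 3 can be combined into one script. This is a minimal sketch, assuming root privileges and the multipath-tools package; apply_multipath_config is a hypothetical name, and the DRYRUN switch is an illustration-only convenience so the sequence can be previewed without modifying the system:

```shell
#!/bin/sh
# Hypothetical wrapper around the enable/start/apply sequence above.
apply_multipath_config() {
    run="eval"
    # DRYRUN=1 prints the commands instead of executing them.
    [ "${DRYRUN:-0}" = "1" ] && run="echo"
    $run "systemctl enable multipathd.service"
    $run "systemctl start multipathd.service"
    $run "multipath -F"    # flush all unused multipath maps
    $run "multipath"       # re-create maps from /etc/multipath.conf
    $run "multipath -ll"   # show the resulting path topology
}
```

For example, DRYRUN=1 apply_multipath_config lists the five commands without running them, which is useful for reviewing the procedure before applying it on a production host.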