Multipath configuration for FC-NVMe hosts

Follow FC-NVMe multipath configuration recommendations for successful attachment of Linux® hosts to the system.

Hosts can be configured to work with the traditional Device Mapper or with Native NVMe Multipath. For SCSI devices, the host continues to work with Device Mapper in either case. Native NVMe Multipath is supported on SLES12SP4, SLES15, and Red Hat Enterprise Linux 8.0 or later. Native NVMe Multipath supports NVMe ANA (the NVMe equivalent of SCSI ALUA). Device Mapper supports ANA on Red Hat Enterprise Linux 8.0 or later. With Native NVMe Multipath, each namespace is presented as a single device instance, regardless of the number of paths to it.

Note: On SLES12SP4 and SLES15, Native NVMe Multipath is recommended. On Red Hat Enterprise Linux 8.0 or later, Device Mapper is recommended.
Note: Native NVMe Multipath is not supported on Red Hat Enterprise Linux 7.x or earlier, and the Device Mapper in those releases does not support ANA.
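
To verify that a controller reports ANA, you can inspect its identify data with nvme-cli. The following check is a sketch: /dev/nvme0 is an example name (list the controllers on your host with nvme list), and it assumes an nvme-cli release that prints the cmic and anacap fields in the id-ctrl output:

    nvme id-ctrl /dev/nvme0 | grep -E 'cmic|anacap'

A nonzero anacap value indicates that the controller supports ANA; a nonzero cmic value indicates a multi-port or multi-controller subsystem.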

Turning Native NVMe Multipath on or off

  • On SLES15, Native NVMe Multipath is turned on by default. On SLES12SP4, Native NVMe Multipath is not turned on by default.
    To check whether Native NVMe Multipath is on, enter:
    # systool -m nvme_core -A multipath
    Module = "nvme_core"
    
        multipath           = "Y"
    If multipath = "N" and you want to turn it on, enter:
    
    echo "options nvme_core multipath=Y" > /etc/modprobe.d/50-nvme_core.conf
    dracut -f
    reboot
    Note: On SLES12SP4, if multipath is still not turned on after the reboot, follow these steps:
    1. Edit the file /etc/default/grub.
    2. Append nvme-core.multipath=on at the end of the GRUB_CMDLINE_LINUX_DEFAULT variable.
    3. Save the file.
    4. Run grub2-mkconfig -o /boot/grub2/grub.cfg to apply the new configuration.
    5. Reboot.
    If you choose to work with Device Mapper, enter:
    echo "options nvme_core multipath=N" > /etc/modprobe.d/50-nvme_core.conf
    dracut -f
    reboot
  • On Red Hat Enterprise Linux 8.0 or later, Native NVMe Multipath is not turned on by default.
    To check whether Native NVMe Multipath is enabled, enter:
    # cat /sys/module/nvme_core/parameters/multipath
    
        Y
    If the output is "N" and you want to enable it, use one of the following methods:
    • Using the kernel command line:
      1. Add the nvme_core.multipath=Y option to the kernel command line:
        # grubby --update-kernel=ALL --args="nvme_core.multipath=Y"
      2. On the 64-bit IBM Z architecture, update the boot menu:
        # zipl
        
      3. Reboot the system.
    • Using a module configuration file:
      1. Create the /etc/modprobe.d/nvme_core.conf configuration file with the following content:
        options nvme_core multipath=Y
      2. Back up the initramfs file system:
        # cp /boot/initramfs-$(uname -r).img \
             /boot/initramfs-$(uname -r).bak.$(date +%m-%d-%H%M%S).img
      3. Rebuild the initramfs file system:
        # dracut --force --verbose
      4. Reboot the system.
    If you choose to work with Device Mapper, complete the following steps:
    1. Remove the nvme_core.multipath=Y option from the kernel command line:

      # grubby --update-kernel=ALL --remove-args="nvme_core.multipath=Y"
    2. On the 64-bit IBM Z architecture, update the boot menu:

      # zipl
    3. Remove the options nvme_core multipath=Y line from the /etc/modprobe.d/nvme_core.conf file, if it is present.
    4. Reboot the system.

    For more details about enabling Native NVMe Multipath and Device Mapper on Red Hat Enterprise Linux 8.0 or later, see https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/configuring_device_mapper_multipath/enabling-multipathing-on-nvme-devices_configuring-device-mapper-multipath#doc-wrapper.
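
The two frameworks also present namespaces differently, which gives a quick way to confirm which one is in control. The following checks are a sketch; device names vary by host:

    nvme list-subsys
    cat /sys/module/nvme_core/parameters/multipath
    multipath -ll

With Native NVMe Multipath, nvme list-subsys shows each namespace once, with all of its paths grouped under a single subsystem, and the multipath parameter reads Y. With Device Mapper, each path appears as a separate nvme block device, and multipath -ll shows the aggregated dm-mpath node.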

Device Mapper or Native NVMe Multipath configuration

The multipath configuration file must be set up for the framework that you use: Device Mapper or Native NVMe Multipath.

To configure Device Mapper, complete the following steps:
  1. Edit the /etc/multipath.conf file to include the following code:
    devices {
        device {
            vendor "NVME"
            product "IBM     2145"
            path_grouping_policy "multibus"
            path_selector "round-robin 0"
            prio "ANA"
            path_checker "none"
            failback "immediate"
            no_path_retry "queue"
            rr_weight uniform
            rr_min_io_rq "1"
            fast_io_fail_tmo 15
            dev_loss_tmo 600
        }
    }
    defaults {
        user_friendly_names yes
        path_grouping_policy    group_by_prio
    }
  2. Run the following commands to enable the multipath daemon and validate that it is running:
    systemctl enable multipathd.service
    systemctl start multipathd.service
    
    # ps -ef | grep -v grep | grep multipath
    root      1616     1  0 Nov21 ?        00:01:14 /sbin/multipathd -d -s
    
    On SLES12SP4, after you enable the multipath service, rebuild the initrd with multipath support:
    dracut --force --add multipath
  3. Run the following commands to apply configurations:
    multipath -F
    multipath
    multipath -ll
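
To confirm that multipathd applied the device stanza, you can dump the merged runtime configuration. This is a sketch; the grep pattern is only an example:

    multipathd show config | grep -A 12 '"NVME"'

The output should include the NVME device section with the values from /etc/multipath.conf, such as prio "ANA" and no_path_retry "queue".
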
If you use Native NVMe Multipath, add the following code to the multipath configuration file so that Device Mapper does not claim the NVMe devices:
blacklist {
    device {
        vendor  "NVME"
        product "IBM\s+2145"
    }
}
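
After you reload the maps, a check such as the following (a sketch) should list no NVMe devices under Device Mapper, while the namespaces remain visible through the native NVMe stack:

    multipath -F
    multipath -ll | grep -i nvme
    nvme list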

Device Mapper performance

Testing shows that performance improves when you use multiqueue I/O scheduling, as described in the following steps.
Note: Before you apply these steps, contact a SUSE representative to verify that this option is supported.
For performance improvements, apply multiqueue I/O scheduling with blk-mq (in SLES):
Warning: Do not change the nvme-core.multipath value while Native NVMe Multipath is in use.
  1. Edit the /etc/default/grub file and add dm_mod.use_blk_mq=y and scsi_mod.use_blk_mq=1 to the GRUB_CMDLINE_LINUX_DEFAULT variable, as in the following example:
    GRUB_CMDLINE_LINUX_DEFAULT="BOOTPTimeout=20 BootpWait=20 
    biosdevname=0 powersaved=off resume=/dev/system/swap splash=silent 
    quiet showopts crashkernel=175M,high dm_mod.use_blk_mq=y 
    scsi_mod.use_blk_mq=1 transparent_hugepage=never"
  2. Apply the new configuration:
    swfc178:~ # grub2-mkconfig -o /boot/grub2/grub.cfg
    Generating grub configuration file ...
    Found theme: /boot/grub2/themes/SLE/theme.txt
    Found linux image: /boot/vmlinuz-4.12.14-25.19-default
    Found initrd image: /boot/initrd-4.12.14-25.19-default
    Found linux image: /boot/vmlinuz-4.12.14-23-default
    Found initrd image: /boot/initrd-4.12.14-23-default
    done
  3. Reboot.
  4. Validate that the multiqueue feature is enabled in multipath by checking the multipath -ll output:
    mpatho (eui.880000000000000b0050760071c60044) dm-3 NVME,IBM     2145
    size=1.5G features='3 queue_if_no_path queue_mode mq' hwhandler='0' wp=rw
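
The module parameters can also be read back to confirm that blk-mq is active, assuming that your kernel exposes them (a sketch):

    cat /sys/module/dm_mod/parameters/use_blk_mq
    cat /sys/module/scsi_mod/parameters/use_blk_mq

Both should report Y after the reboot.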