Multipath configuration for FC-NVMe hosts
Follow these FC-NVMe multipath configuration recommendations to successfully attach Linux® hosts to the system.
Hosts can be configured to work with traditional Device Mapper or with Native NVMe Multipath. For SCSI devices, the host continues to use Device Mapper in either case. Native NVMe Multipath is supported on SLES12SP4, SLES15, and Red Hat Enterprise Linux 8.0 or later. Native NVMe Multipath supports NVMe ANA (the NVMe equivalent of SCSI ALUA). Device Mapper supports ANA on Red Hat Enterprise Virtualization. With Native NVMe Multipath, every namespace is shown as a single instance.
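For example, you can check which framework currently owns the NVMe devices and how the paths are grouped (a minimal sketch; the nvme list-subsys command assumes that the nvme-cli package is installed):
  cat /sys/module/nvme_core/parameters/multipath   # Y = Native NVMe Multipath, N = Device Mapper
  nvme list-subsys                                 # with Native NVMe Multipath, all controllers (paths) are grouped under one subsystem
With Native NVMe Multipath, each namespace appears as a single /dev/nvmeXnY device regardless of the number of paths.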
Turning Native NVMe Multipath on or off
- On SLES15, Native NVMe Multipath is turned on by default. On SLES12SP4, Native NVMe Multipath is not turned on by default. To check whether Native NVMe Multipath is on, enter:
  # systool -m nvme_core -A multipath
  Module = "nvme_core"
    multipath = "Y"
  If multipath = "N" and you want to turn it on, enter:
  echo "options nvme_core multipath=Y" > /etc/modprobe.d/50-nvme_core.conf
  dracut -f
  reboot
  Note: On SLES12SP4, if multipath is not turned on even after the reboot, follow these steps (a verification sketch follows the steps):
- Edit the file /etc/default/grub.
- Append nvme-core.multipath=on at the end of the GRUB_CMDLINE_LINUX_DEFAULT variable.
- Save the file.
- Run grub2-mkconfig -o /boot/grub2/grub.cfg to apply the new configuration.
- Reboot.
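After the reboot, you can confirm that the option took effect (a minimal verification sketch that uses the same tools as the checks above):
  grep -o nvme-core.multipath=on /proc/cmdline
  systool -m nvme_core -A multipath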
If you choose to work with Device Mapper, enter:
  echo "options nvme_core multipath=N" > /etc/modprobe.d/50-nvme_core.conf
  dracut -f
  reboot
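After the reboot, the parameter reports N and the NVMe devices are handled by Device Mapper. A minimal check:
  systool -m nvme_core -A multipath   # expect multipath = "N"
  multipath -ll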
- On Red Hat Enterprise Linux 8.0 or later, Native NVMe Multipath is not turned on by default. To check whether Native NVMe Multipath is enabled from the command line, enter:
  # cat /sys/module/nvme_core/parameters/multipath
  Y
  If the output is "N" and you want to enable it, follow one of these procedures (a verification sketch follows the steps):
  - Using the command line:
    - Add the nvme_core.multipath=Y option on the command line:
      # grubby --update-kernel=ALL --args="nvme_core.multipath=Y"
    - On the 64-bit IBM Z architecture, update the boot menu:
      # zipl
    - Reboot the system.
  - Using a module configuration file:
    - Create the /etc/modprobe.d/nvme_core.conf configuration file with the following content:
      options nvme_core multipath=Y
    - Back up the initramfs file system:
      # cp /boot/initramfs-$(uname -r).img \
        /boot/initramfs-$(uname -r).bak.$(date +%m-%d-%H%M%S).img
    - Rebuild the initramfs file system:
      # dracut --force --verbose
    - Reboot the system.
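  Whichever method you use, after the reboot you can verify that the option is active. A minimal sketch (the lsinitrd check applies to the module configuration file method and assumes that the dracut tools are installed):
    cat /sys/module/nvme_core/parameters/multipath   # expect Y
    grep -o nvme_core.multipath=Y /proc/cmdline      # command-line method
    lsinitrd -f /etc/modprobe.d/nvme_core.conf       # module configuration file method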
  If you choose to work with Device Mapper, follow these steps (a verification sketch follows the steps):
  - Remove the nvme_core.multipath=Y option from the command line:
    # grubby --update-kernel=ALL --remove-args="nvme_core.multipath=Y"
  - On the 64-bit IBM Z architecture, update the boot menu:
    # zipl
  - Remove the options nvme_core multipath=Y line from the /etc/modprobe.d/nvme_core.conf file, if it is present.
  - Reboot the system.
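  After the reboot, the parameter reports N again. A minimal check:
    cat /sys/module/nvme_core/parameters/multipath   # expect N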
For more details about enabling Native NVMe Multipath and Device Mapper on Red Hat Enterprise Linux 8.0 or later, see https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/configuring_device_mapper_multipath/enabling-multipathing-on-nvme-devices_configuring-device-mapper-multipath#doc-wrapper.
Device Mapper or Native NVMe Multipath configuration
The multipath configuration file can be set up for either of two frameworks: Device Mapper or Native NVMe Multipath.
- Edit the /etc/multipath.conf file to include the following code:
  devices {
      device {
          vendor "NVME"
          product "IBM 2145"
          path_grouping_policy "multibus"
          path_selector "round-robin 0"
          prio "ANA"
          path_checker "none"
          failback "immediate"
          no_path_retry "queue"
          rr_weight uniform
          rr_min_io_rq "1"
          fast_io_fail_tmo 15
          dev_loss_tmo 600
      }
  }
  defaults {
      user_friendly_names yes
      path_grouping_policy group_by_prio
  }
- Run the following commands to validate that the multipath daemon is running:
  systemctl enable multipathd.service
  systemctl start multipathd.service
  # ps -ef | grep -v grep | grep multipath
  root      1616     1  0 Nov21 ?        00:01:14 /sbin/multipathd -d -s
  After you enable the multipath service on SLES12SP4, rebuild the initrd with multipath support:
  dracut --force --add multipath
- Run the following commands to apply the configuration:
  multipath -F
  multipath
  multipath -ll
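  Optionally, you can also query the running daemon for a summary of its maps and paths (a minimal check using standard multipathd commands):
    multipathd show maps
    multipathd show paths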
If you work with Native NVMe Multipath, exclude the NVMe devices from Device Mapper by adding the following blacklist section to the /etc/multipath.conf file:
  blacklist {
      device {
          vendor "NVME"
          product "IBM\s+2145"
      }
  }
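After you edit the blacklist, reload the daemon so that the change takes effect without a restart:
  multipathd reconfigure
  multipath -ll   # the blacklisted NVMe devices are no longer listed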
Device Mapper performance
- Edit the /etc/default/grub file and add the following text:
  GRUB_CMDLINE_LINUX_DEFAULT="BOOTPTimeout=20 BootpWait=20 biosdevname=0 powersaved=off resume=/dev/system/swap splash=silent quiet showopts crashkernel=175M,high dm_mod.use_blk_mq=y scsi_mod.use_blk_mq=1 transparent_hugepage=never"
- Apply the new configuration:
  swfc178:~ # grub2-mkconfig -o /boot/grub2/grub.cfg
  Generating grub configuration file ...
  Found theme: /boot/grub2/themes/SLE/theme.txt
  Found linux image: /boot/vmlinuz-4.12.14-25.19-default
  Found initrd image: /boot/initrd-4.12.14-25.19-default
  Found linux image: /boot/vmlinuz-4.12.14-23-default
  Found initrd image: /boot/initrd-4.12.14-23-default
  done
- Reboot.
- Validate that the multiqueue feature is enabled in multipath:
  mpatho (eui.880000000000000b0050760071c60044) dm-3 NVME,IBM 2145
  size=1.5G features='3 queue_if_no_path queue_mode mq' hwhandler='0' wp=rw
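You can also confirm the blk-mq settings directly through sysfs (a minimal check; these parameter files mirror the dm_mod.use_blk_mq and scsi_mod.use_blk_mq options that were set in the GRUB configuration and exist on the kernel versions shown above):
  cat /sys/module/dm_mod/parameters/use_blk_mq     # expect Y
  cat /sys/module/scsi_mod/parameters/use_blk_mq   # expect Y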