NVMe hosts that run the Linux operating system
When you install and configure attachments between the system and a host that runs the Linux® operating system, follow specific guidelines.
For other information about NVMe over Fibre Channel (FC-NVMe), such as interoperability requirements, see Configuration limits.
Attachment requirements for hosts that are running the Linux operating system
Ensure that your system meets the requirements for attaching to a host that is running the Linux operating system.
- Follow the HBA vendor's instructions to update to the correct firmware and driver levels.
- Install the required NVM-Express user space tools (nvme-cli) by using one of the following commands:
- For SLES
zypper install nvme-cli
- For RHEL
yum install nvme-cli
- If you are working with an Emulex adapter, verify that the Emulex auto-connect package is installed by running the following command:
rpm -q nvmefc-connect
Note: For SLES 15, RHEL 8.3, and later kernel levels, the nvmefc-connect package is no longer applicable.
Note: When using FC-NVMe on RHEL 9.0, use nvme-cli version 1.16 or later.
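To confirm which nvme-cli version is installed, you can, for example, run:
nvme version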
Configuring the Linux operating system for FC-NVMe hosts
After you ensure that your system meets the requirements for attaching to a Linux host, configure the Linux operating system.
You must install the appropriate host bus adapters with the correct firmware and driver levels that support FC-NVMe.
- Zone the host ports to the NVMe ports on the system.
For more information about how to identify NVMe ports on the system, see the CLI host commands.
- Find the host NVMe Qualified Name (NQN) address (under /etc/nvme/hostnqn).
- On the system, create the NVMe host object by using the host NQN.
svctask mkhost -force -name fc-nvmehost -nqn nqn.2014-08.org.nvmexpress:uuid:449f8291-9c1e-446c-95c1-0942f55fa208 -portset fcnvme -protocol fcnvme -type generic
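For example, the host NQN value that mkhost expects can be read directly from the file on the Linux host (the output shown here matches the placeholder NQN in the example above):
cat /etc/nvme/hostnqn
nqn.2014-08.org.nvmexpress:uuid:449f8291-9c1e-446c-95c1-0942f55fa208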
- Map the relevant volumes to the NVMe host. The same volume cannot be mapped concurrently to both NVMe and SCSI hosts.
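A minimal mapping sketch, assuming the standard mkvdiskhostmap CLI command applies to NVMe hosts and that vdisk0 is a placeholder volume name:
svctask mkvdiskhostmap -host fc-nvmehost vdisk0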
- To discover and connect to NVMe targets, enter the following commands.
- The NVMe Discover command.
nvme discover --transport=fc --traddr=nn-0x$twwnn:pn-0x$twwpn --host-traddr=nn-0x$wwnn:pn-0x$wwpn
The return of this command is the NVMe Discover log page, which contains the target's subsystem NQN.
Discovery Log Number of Records 1, Generation counter 0
=====Discovery Log Entry 0======
trtype: fibre-channel
adrfam: fibre-channel
subtype: nvme subsystem
treq: not required
portid: <>
trsvcid: none
subnqn: nqn.1986-03.com.ibm:nvme:2145.<>.iogroup<>
traddr: nn-0x$twwnn:pn-0x$twwpn
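As a convenience, the subsystem NQN can be captured from the discovery output for use in the connect command; a sketch that assumes a single discovery record:
subnqn=$(nvme discover --transport=fc --traddr=nn-0x$twwnn:pn-0x$twwpn --host-traddr=nn-0x$wwnn:pn-0x$wwpn | awk '/subnqn:/ {print $2}')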
- The NVMe Connect command, which has the same syntax with the addition of the subnqn.
nvme connect --transport=fc --traddr=nn-0x$twwnn:pn-0x$twwpn --host-traddr=nn-0x$wwnn:pn-0x$wwpn -n $subnqn
- The NVMe connect-all command includes the Discover and Connect commands within the same command. This command connects automatically to the subnqn that is presented in the discover command:
nvme connect-all --transport=fc --traddr=nn-0x$twwnn:pn-0x$twwpn --host-traddr=nn-0x$wwnn:pn-0x$wwpn
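After connecting, you can confirm that the subsystem and its namespaces are visible to the host, for example:
nvme list-subsys
nvme list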
- Run the nvme disconnect command to disconnect a specific NVMe device:
nvme disconnect -d /dev/nvme-$x
- Run the nvme disconnect-all command to disconnect all NVMe devices:
nvme disconnect-all
Removing a LUN
- Stop all file access to the LUN that is to be removed and unmount the file system, if applicable. If LVM is used on one or more storage systems from which the LUN is being removed, make sure that no Logical Volumes, Volume Groups, or Physical Volumes still reference the relevant storage systems. For the removal commands, see the following man pages:
- man lvremove
- man vgremove
- man pvremove
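A minimal teardown sketch, assuming a hypothetical volume group vg_example with logical volume lv_example built on the namespace /dev/nvme0n1 (substitute your own names and device):
umount /mnt/example
lvremove /dev/vg_example/lv_example
vgremove vg_example
pvremove /dev/nvme0n1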
- To verify that all files are closed, enter the lsof /dev/nvme*n* command. The output must be empty.
- Unmap the LUNs on the storage side. From the storage system, unmap the volume from the host by using the GUI:
- From the left-side panel, open the Hosts tab.
- Right-click on the hostname and select Modify Volume Mappings.
- Select the volume to be unmapped and click Remove Volume Mapping.
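After the volume is unmapped, you can have the host drop the stale namespace; a sketch assuming /dev/nvme0 is the controller device that presented the namespace:
nvme ns-rescan /dev/nvme0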
Multipath configuration for FC-NVMe hosts
Follow FC-NVMe multipath configuration recommendations for successful attachment of Linux hosts to the system.
Turning Native NVMe Multipath on or off
Hosts can be configured to work with the traditional Device Mapper or with the Native NVMe Multipath as follows:
- On SLES15, Native NVMe Multipath is turned on by default. On SLES12SP4, Native NVMe Multipath is
not turned on by default. To check whether Native NVMe Multipath is on, enter:
# systool -m nvme_core -A multipath
Module = "nvme_core"
multipath = "Y"
If the multipath = "N", and you want to turn it on, enter:
echo "options nvme_core multipath=Y" > /etc/modprobe.d/50-nvme_core.conf
dracut -f
reboot
Note: On SLES12SP4, if the multipath is not turned on even after reboot, then follow these steps:
- Edit the file /etc/default/grub.
- Append nvme-core.multipath=on at the end of the GRUB_CMDLINE_LINUX_DEFAULT variable.
- Save the file.
- Run grub2-mkconfig -o /boot/grub2/grub.cfg to apply the new configuration.
- Reboot.
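After the reboot, you can confirm that the option was added to the kernel command line, for example:
grep -o nvme-core.multipath=on /proc/cmdline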
If you choose to work with Device Mapper, enter:
echo "options nvme_core multipath=N" > /etc/modprobe.d/50-nvme_core.conf
dracut -f
reboot
- On Red Hat Enterprise Linux 8.0 or later, Native NVMe Multipath is not turned on by default. To check whether Native NVMe Multipath is enabled from the command line, enter:
# cat /sys/module/nvme_core/parameters/multipath
Y
If the output is "N", and you want to enable it, follow these steps:
- Using command-line:
- Add the nvme_core.multipath=Y option on the command-line:
# grubby --update-kernel=ALL --args="nvme_core.multipath=Y"
- On the 64-bit IBM Z architecture, update the boot menu:
# zipl
- Reboot the system.
- Using module configuration file:
- Create the /etc/modprobe.d/nvme_core.conf configuration file with the following content:
options nvme_core multipath=Y
- Back up the initramfs file system:
# cp /boot/initramfs-$(uname -r).img \
/boot/initramfs-$(uname -r).bak.$(date +%m-%d-%H%M%S).img
- Rebuild the initramfs file system:
# dracut --force --verbose
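Before rebooting, you can confirm that the rebuilt initramfs contains the configuration file; a sketch assuming the dracut lsinitrd tool is available:
lsinitrd -f /etc/modprobe.d/nvme_core.conf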
- Reboot the system.
If you choose to work with Device Mapper, enter:
- Remove the nvme_core.multipath=Y option from the command-line:
# grubby --update-kernel=ALL --remove-args="nvme_core.multipath=Y"
- On the 64-bit IBM Z architecture, update the boot menu:
# zipl
- Remove the options nvme_core multipath=Y line from the /etc/modprobe.d/nvme_core.conf file, if it is present.
- Reboot the system.
For more details about enabling Native NVMe Multipath and Device Mapper on Red Hat Enterprise Linux 8.0 or later, see Enabling multipathing on NVMe devices.
Device Mapper or Native NVMe Multipath configuration
Two framework options are available to configure the multipath configuration file: Device Mapper or Native NVMe Multipath.
- Edit the /etc/multipath.conf file to include the following code:
devices {
    device {
        vendor "NVME"
        product "IBM 2145"
        path_grouping_policy "multibus"
        path_selector "round-robin 0"
        prio "ANA"
        path_checker "none"
        failback "immediate"
        no_path_retry "queue"
        rr_weight uniform
        rr_min_io_rq "1"
        fast_io_fail_tmo 15
        dev_loss_tmo 600
    }
}
defaults {
    user_friendly_names yes
    path_grouping_policy group_by_prio
}
- Run the following commands to validate that the multipath daemon is running:
systemctl enable multipathd.service
systemctl start multipathd.service
# ps -ef | grep -v grep | grep multipath
root 1616 1 0 Nov21 ? 00:01:14 /sbin/multipathd -d -s
After you enable the multipath service for SLES12SP4, rebuild the initrd with multipath support:
dracut --force --add multipath
- Run the following commands to apply the configuration:
multipath -F
multipath
multipath -ll
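To double-check that the device settings from /etc/multipath.conf are in effect, you can also dump the running multipath configuration and look for the IBM 2145 entry (the exact layout of the dump varies by multipath-tools version), for example:
multipath -t | grep -A 15 2145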
If you use Native NVMe Multipath, exclude the system's NVMe devices from Device Mapper by adding the following blacklist section to the /etc/multipath.conf file:
blacklist {
    device {
        vendor "NVME"
        product "IBM\s+2145"
    }
}
Device Mapper performance
- Edit the /etc/default/grub file and add the following text:
GRUB_CMDLINE_LINUX_DEFAULT="BOOTPTimeout=20 BootpWait=20 biosdevname=0 powersaved=off resume=/dev/system/swap splash=silent quiet showopts crashkernel=175M,high dm_mod.use_blk_mq=y scsi_mod.use_blk_mq=1 transparent_hugepage=never"
- Apply the new configuration:
swfc178:~ # grub2-mkconfig -o /boot/grub2/grub.cfg
Generating grub configuration file ...
Found theme: /boot/grub2/themes/SLE/theme.txt
Found linux image: /boot/vmlinuz-4.12.14-25.19-default
Found initrd image: /boot/initrd-4.12.14-25.19-default
Found linux image: /boot/vmlinuz-4.12.14-23-default
Found initrd image: /boot/initrd-4.12.14-23-default
done
- Reboot.
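After the reboot, you can verify that the multiqueue options took effect; a sketch assuming a kernel that still exposes these module parameters:
cat /proc/cmdline
cat /sys/module/scsi_mod/parameters/use_blk_mq
cat /sys/module/dm_mod/parameters/use_blk_mq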
- Validate that the multiqueue feature is enabled in multipath:
mpatho (eui.880000000000000b0050760071c60044) dm-3 NVME,IBM 2145
size=1.5G features='3 queue_if_no_path queue_mode mq' hwhandler='0' wp=rw