NVMe hosts that run the Linux operating system

When you install and configure attachments between the system and a host that runs the Linux® operating system, follow specific guidelines.

For other specific information about NVMe over Fibre Channel (FC-NVMe), such as interoperability requirements, see Configuration limits.

Attachment requirements for hosts that are running the Linux operating system

Ensure that your system meets the requirements for attaching to a host that is running the Linux operating system.

  1. Follow the HBA vendor's instructions to update to the correct firmware and driver levels.
  2. Install the required NVM Express user-space tools by using one of the following commands:
    • For SLES
      zypper install nvme-cli
    • For RHEL
      yum install nvme-cli
  3. If you are working with an Emulex adapter, verify that Emulex auto-connect is installed. To verify, run the following command:
    rpm -q nvmefc-connect
    Note: For SLES 15, RHEL 8.3, and later kernel levels, the nvmefc-connect package is no longer applicable.
    Note: When you use FC-NVMe on RHEL 9.0, use nvme-cli version 1.16 or later.
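For example, to confirm that the tools are installed and to check their version, you can run commands such as the following (a minimal sketch; both commands are provided by the nvme-cli package):
    rpm -q nvme-cli      # confirm that the package is installed
    nvme version         # report the installed nvme-cli version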

Configuring the Linux operating system for FC-NVMe hosts

After you ensure that your system meets the requirements for attaching to a Linux host, configure the Linux operating system.

You must install the appropriate host bus adapters with the correct firmware and driver levels that support FC-NVMe.
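Zoning and the later discover and connect commands require the WWNN and WWPN values of the host FC ports. As an illustrative check, assuming that the HBA driver is loaded and registered with the FC transport, you can read these values from sysfs:
    cat /sys/class/fc_host/host*/port_name    # host WWPNs, used for zoning and as the pn- value in host-traddr
    cat /sys/class/fc_host/host*/node_name    # host WWNNs, used as the nn- value in host-traddr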

  1. Zone the host ports to the NVMe ports on the system.

    For more information about how to identify NVMe ports on the system, see the CLI host commands.

  2. Find the host NVMe Qualified Name (NQN), which is stored in /etc/nvme/hostnqn.
  3. On the system, create the NVMe host object by using the host NQN.
    svctask mkhost -force -name fc-nvmehost -nqn nqn.2014-08.org.nvmexpress:uuid:449f8291-9c1e-446c-95c1-0942f55fa208 
    -portset fcnvme -protocol fcnvme -type generic
  4. Map the relevant volumes to the NVMe host. The same volume cannot be mapped concurrently to both NVMe and SCSI hosts.
  5. To discover and connect to NVMe targets, enter the following commands.
    1. The NVMe Discover command.
      nvme discover --transport=fc --traddr=nn-0x$twwnn:pn-0x$twwpn 
      --host-traddr=nn-0x$wwnn:pn-0x$wwpn
      This command returns the NVMe Discovery log page, which contains the target's subsystem NQN.
      Discovery Log Number of Records 1, Generation counter 0
      =====Discovery Log Entry 0======
      trtype:  fibre-channel
      adrfam:  fibre-channel
      subtype: nvme subsystem
      treq:    not required
      portid:  <>
      trsvcid: none
      subnqn:  nqn.1986-03.com.ibm:nvme:2145.<>.iogroup<>
      traddr:  nn-0x$twwnn:pn-0x$twwpn
    2. The NVMe Connect command, which uses the same syntax with the addition of the subsystem NQN (subnqn).
      nvme connect --transport=fc --traddr=nn-0x$twwnn:pn-0x$twwpn 
      --host-traddr=nn-0x$wwnn:pn-0x$wwpn -n $subnqn
    3. The NVMe connect-all command combines the Discover and Connect commands in a single command. It connects automatically to the subsystem NQNs that are returned by the discover command:
      nvme connect-all --transport=fc --traddr=nn-0x$twwnn:pn-0x$twwpn 
      --host-traddr=nn-0x$wwnn:pn-0x$wwpn
    4. Run the NVMe disconnect command to disconnect a specific NVMe device:
      nvme disconnect -d /dev/nvme-$x
    5. Run the NVMe disconnect-all command to disconnect all NVMe devices:
      nvme disconnect-all
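The following is a consolidated sketch of the discover and connect sequence. The variable names match the placeholders that are used above: $wwnn and $wwpn are the host port values (from /sys/class/fc_host), and $twwnn and $twwpn are the system's NVMe target port values. The nvme list command, which is part of nvme-cli, is included as a verification step:
    cat /etc/nvme/hostnqn                       # host NQN that was used when creating the host object
    nvme discover    --transport=fc --traddr=nn-0x$twwnn:pn-0x$twwpn --host-traddr=nn-0x$wwnn:pn-0x$wwpn
    nvme connect-all --transport=fc --traddr=nn-0x$twwnn:pn-0x$twwpn --host-traddr=nn-0x$wwnn:pn-0x$wwpn
    nvme list                                   # verify that the mapped volumes appear as namespaces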

Removing a LUN

To remove a LUN from the storage system, perform the following steps:
Attention: Before you remove a LUN from the storage system, ensure that all processes and applications that access the LUN are stopped and that nothing on the host is still accessing it. If any process is still accessing the removed LUN, an emergency shutdown might occur, resulting in loss of access.
  1. Stop all file access to the LUN that is to be removed and unmount the file system, if applicable.
    If LVM is used on the LUN that is being removed, make sure that no logical volumes, volume groups, or physical volumes on it are still present or in use. To remove them, use the commands that are described in the following man pages (see also the sketch after these steps):
    • man lvremove
    • man vgremove
    • man pvremove
  2. To verify that all files are closed, enter the lsof /dev/nvme*n* command. The output must be empty.
  3. Unmap the LUN on the storage side. From the storage system, unmap the volume from the host by using the GUI:
    1. From the left-side panel, open the Hosts tab.
    2. Right-click on the hostname and select Modify Volume Mappings.
    3. Select the volume to be unmapped and click Remove Volume Mapping.
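The following sketch illustrates the host-side cleanup in steps 1 and 2. The mount point /mnt/data, the volume group vg_data, the logical volume lv_data, and the device /dev/nvme0n1 are hypothetical names; substitute your own values:
    umount /mnt/data                      # stop file-system access, if the LUN is mounted (hypothetical mount point)
    lvs && vgs && pvs                     # identify LVM objects that still reside on the LUN
    lvremove /dev/vg_data/lv_data         # hypothetical logical volume on the LUN
    vgremove vg_data                      # hypothetical volume group on the LUN
    pvremove /dev/nvme0n1                 # example namespace device that backs the physical volume
    lsof /dev/nvme*n*                     # must produce no output before the volume is unmapped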

Multipath configuration for FC-NVMe hosts

Follow FC-NVMe multipath configuration recommendations for successful attachment of Linux hosts to the system.

Turning Native NVMe Multipath on or off

Hosts can be configured to work with the traditional Device Mapper or with Native NVMe Multipath, as follows:

  • On SLES15, Native NVMe Multipath is turned on by default. On SLES12SP4, Native NVMe Multipath is not turned on by default.
    To check whether Native NVMe Multipath is on, enter:
    # systool -m nvme_core -A multipath
    Module = "nvme_core"
    
        multipath           = "Y"
    If multipath = "N" and you want to turn it on, enter:
    
    echo "options nvme_core multipath=Y" > /etc/modprobe.d/50-nvme_core.conf
    dracut -f
    reboot
    Note: On SLES12SP4, if multipath is not turned on even after a reboot, follow these steps:
    1. Edit the file /etc/default/grub.
    2. Append nvme-core.multipath=on at the end of the GRUB_CMDLINE_LINUX_DEFAULT variable.
    3. Save the file.
    4. Run grub2-mkconfig -o /boot/grub2/grub.cfg to apply the new configuration.
    5. Reboot.
    If you choose to work with Device Mapper, enter:
    echo "options nvme_core multipath=N" > /etc/modprobe.d/50-nvme_core.conf
    dracut -f
    reboot
  • On Red Hat Enterprise Linux 8.0 or later, Native NVMe Multipath is not turned on by default.
    To check from the command line whether Native NVMe Multipath is enabled, enter:
    # cat /sys/module/nvme_core/parameters/multipath
    
        Y
    If the output is "N" and you want to enable it, follow these steps:
    • Using the command line:
      1. Add the nvme_core.multipath=Y option to the kernel command line:
        # grubby --update-kernel=ALL --args="nvme_core.multipath=Y"
      2. On the 64-bit IBM Z architecture, update the boot menu:
        # zipl
        
      3. Reboot the system.
    • Using the module configuration file:
      1. Create the /etc/modprobe.d/nvme_core.conf configuration file with the following content:
        options nvme_core multipath=Y
      2. Back up the initramfs file system:
        # cp /boot/initramfs-$(uname -r).img \
             /boot/initramfs-$(uname -r).bak.$(date +%m-%d-%H%M%S).img
      3. Rebuild the initramfs file system:
        # dracut --force --verbose
      4. Reboot the system.
    If you choose to work with Device Mapper, follow these steps:
    1. Remove the nvme_core.multipath=Y option from the kernel command line:

      # grubby --update-kernel=ALL --remove-args="nvme_core.multipath=Y"
    2. On the 64-bit IBM Z architecture, update the boot menu:

      # zipl
    3. Remove the options nvme_core multipath=Y line from the /etc/modprobe.d/nvme_core.conf file, if it is present.
    4. Reboot the system.

    For more details about enabling Native NVMe Multipath or Device Mapper on Red Hat Enterprise Linux 8.0 or later, see Enabling multipathing on NVMe devices.
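Whichever method you use, you can confirm the resulting state after the reboot with a quick check such as the following (a sketch; the sysfs path assumes that the nvme_core module is loaded):
    cat /sys/module/nvme_core/parameters/multipath    # Y = Native NVMe Multipath, N = Device Mapper
    cat /proc/cmdline                                  # check whether an nvme_core.multipath option was added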

Device Mapper or Native NVMe Multipath configuration

Two framework options are available for multipath configuration: Device Mapper or Native NVMe Multipath.

To configure Device Mapper, complete the following steps:
  1. Edit the /etc/multipath.conf file to include the following code:
    devices {
        device {
            vendor "NVME"
            product "IBM     2145"
            path_grouping_policy "multibus"
            path_selector "round-robin 0"
            prio "ANA"
            path_checker "none"
            failback "immediate"
            no_path_retry "queue"
            rr_weight uniform
            rr_min_io_rq "1"
            fast_io_fail_tmo 15
            dev_loss_tmo 600
        }
    }
    defaults {
        user_friendly_names yes
        path_grouping_policy    group_by_prio
    }
  2. Run the following commands to enable and start the multipath daemon, and then verify that it is running:
    systemctl enable multipathd.service
    systemctl start multipathd.service
    
    # ps -ef | grep -v grep | grep multipath
    root      1616     1  0 Nov21 ?        00:01:14 /sbin/multipathd -d -s
    
    After you enable the multipath service for SLES12SP4, rebuild initrd with multipath support:
    dracut --force --add multipath
  3. Run the following commands to apply configurations:
    multipath -F
    multipath
    multipath -ll
If you use Native NVMe Multipath, add the following code to the multipath configuration file:
blacklist {
    device {
        vendor  "NVME"
        product "IBM\s+2145"
    }
}
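With Native NVMe Multipath, path handling is done by the NVMe subsystem rather than by multipathd. As a quick verification sketch (the output format varies with the nvme-cli version), you can list the subsystems and namespaces:
    nvme list-subsys     # lists each subsystem NQN with its controllers and paths
    nvme list            # lists the namespaces that are presented as /dev/nvmeXnY devices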

Device Mapper performance

Testing shows that performance is superior when you work with multiqueue I/O scheduling as described in the following steps.
Note: Before you apply these steps, contact a SUSE representative to verify that this option is supported.
For performance improvements, apply multiqueue I/O scheduling with blk-mq (in SLES): 
Warning: Do not change the nvme-core.multipath value when Native NVMe Multipath is in use.
  1. Edit the /etc/default/grub file and add the following text:
    GRUB_CMDLINE_LINUX_DEFAULT="BOOTPTimeout=20 BootpWait=20 
    biosdevname=0 powersaved=off resume=/dev/system/swap splash=silent 
    quiet showopts crashkernel=175M,high dm_mod.use_blk_mq=y 
    scsi_mod.use_blk_mq=1 transparent_hugepage=never"
  2. Apply the new configuration:
    swfc178:~ # grub2-mkconfig -o /boot/grub2/grub.cfg
    Generating grub configuration file ...
    Found theme: /boot/grub2/themes/SLE/theme.txt
    Found linux image: /boot/vmlinuz-4.12.14-25.19-default
    Found initrd image: /boot/initrd-4.12.14-25.19-default
    Found linux image: /boot/vmlinuz-4.12.14-23-default
    Found initrd image: /boot/initrd-4.12.14-23-default
    done
  3. Reboot.
  4. Validate that the multiqueue feature is enabled in multipath:
    mpatho (eui.880000000000000b0050760071c60044) dm-3 NVME,IBM     2145
    size=1.5G features='3 queue_if_no_path queue_mode mq' hwhandler='0' wp=rw
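After the reboot, the following checks are one way to confirm that blk-mq is active (a sketch; these module parameters are exposed only on kernel versions where blk-mq is still optional):
    cat /sys/module/scsi_mod/parameters/use_blk_mq    # expect Y when scsi_mod.use_blk_mq=1 is set
    cat /sys/module/dm_mod/parameters/use_blk_mq      # expect Y when dm_mod.use_blk_mq=y is set
    multipath -ll | grep queue_mode                   # device maps should report queue_mode mq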