Example of multipath I/O devices as physical volumes for LVM2

By default, LVM2 does not consider device-mapper block devices, so multipath I/O devices cannot be used as physical volumes without configuration changes.

Procedure

To enable multipath I/O devices for LVM2, change the devices section of /etc/lvm/lvm.conf as follows:

  1. Add the directory that contains the DM device nodes to the array of directories scanned by LVM2. LVM2 accepts device nodes only within these directories:
    scan = [ "/dev", "/dev/mapper" ]
  2. Add device-mapper volumes as an acceptable block device type:
    types = [ "device-mapper", 16 ]
  3. Modify the filter patterns, which LVM2 applies to the devices found by a scan. The following line instructs LVM2 to accept multipath I/O devices and to reject all other devices.
    Note: If you are also using LVM2 on non-multipath I/O devices, you need to modify this line according to your requirements.
    filter = [ "a|/dev/disk/by-name/.*|", "r|.*|" ]
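Taken together, the three settings form a devices section along these lines. This is a sketch of the relevant part of /etc/lvm/lvm.conf only; keep any other entries that your file already contains:

```
devices {
    # Directories that LVM2 scans for device nodes (step 1)
    scan = [ "/dev", "/dev/mapper" ]
    # Accept device-mapper devices; 16 is the maximum partition count (step 2)
    types = [ "device-mapper", 16 ]
    # Accept multipath I/O devices, reject everything else (step 3)
    filter = [ "a|/dev/disk/by-name/.*|", "r|.*|" ]
}
```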

Results

With the preceding settings, you should be able to use the multipath I/O devices for LVM2. The next steps are similar for all types of block devices.
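The filter is a list of accept ("a") and reject ("r") patterns that LVM2 tries in order; the first pattern that matches a device path decides whether the device is used. The following grep loop is a rough, simplified illustration of that logic (the device paths are sample names, and the anchored grep pattern is a simplification of LVM2's unanchored regex matching):

```shell
# Rough sketch of filter = [ "a|/dev/disk/by-name/.*|", "r|.*|" ]:
# paths under /dev/disk/by-name/ are accepted, everything else is rejected.
for dev in /dev/disk/by-name/36005076303ffc56200000000000010d0 /dev/sda; do
  if printf '%s\n' "$dev" | grep -q '^/dev/disk/by-name/'; then
    echo "accept $dev"
  else
    echo "reject $dev"
  fi
done
```

Running the loop prints accept for the by-name path and reject for /dev/sda.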

Example

The following example shows the steps to create a volume group that is composed of four multipath I/O devices. It assumes that the multipath I/O devices are already configured.

  1. List available multipath I/O devices:
    # multipath -l
    36005076303ffc56200000000000010d2 dm-2 IBM,2107900
    size=5.0G features='1 queue_if_no_path' hwhandler='0' wp=rw
    `-+- policy='round-robin 0' prio=-2 status=enabled
      |- 0:0:24:1087520784 sdc   8:32  active undef running
      `- 1:0:20:1087520784 sdg   8:96  active undef running
    36005076303ffc56200000000000010d1 dm-1 IBM,2107900
    size=5.0G features='1 queue_if_no_path' hwhandler='0' wp=rw
    `-+- policy='round-robin 0' prio=-2 status=enabled
      |- 0:0:24:1087455248 sdb   8:16  active undef running
      `- 1:0:20:1087455248 sdf   8:80  active undef running
    36005076303ffc56200000000000010d0 dm-0 IBM,2107900
    size=5.0G features='1 queue_if_no_path' hwhandler='0' wp=rw
    `-+- policy='round-robin 0' prio=-2 status=enabled
      |- 0:0:24:1087389712 sda   8:0   active undef running
      `- 1:0:20:1087389712 sde   8:64  active undef running
    36005076303ffc56200000000000010d3 dm-3 IBM,2107900
    size=5.0G features='1 queue_if_no_path' hwhandler='0' wp=rw
    `-+- policy='round-robin 0' prio=-2 status=enabled
      |- 0:0:24:1087586320 sdd   8:48  active undef running
      `- 1:0:20:1087586320 sdh   8:112 active undef running
  2. Initialize each device as a physical volume by using pvcreate (a device must be initialized before it can be used for LVM2):
    # pvcreate /dev/mapper/36005076303ffc56200000000000010d0
     Physical volume "/dev/mapper/36005076303ffc56200000000000010d0" successfully created
    Repeat this step for all multipath I/O devices that you intend to use for LVM2.
  3. Create the volume group:
    # vgcreate sample_vg /dev/mapper/36005076303ffc56200000000000010d[0123]
       Volume group "sample_vg" successfully created
    # vgdisplay sample_vg
       --- Volume group ---
       VG Name               sample_vg
       System ID
       Format                lvm2
       Metadata Areas        4
       Metadata Sequence No  1
       VG Access             read/write
       VG Status             resizable
       MAX LV                0
       Cur LV                0
       Open LV               0
       Max PV                0
       Cur PV                4
       Act PV                4
       VG Size               19.98 GB
       PE Size               4.00 MB
       Total PE              5116
       Alloc PE / Size       0 / 0
       Free  PE / Size       5116 / 19.98 GB
       VG UUID               Lmlgx9-2A2p-oZEP-CEH3-ZKqc-yTpY-IVOG6v
    Now you can proceed as usual: create logical volumes, build file systems, and mount the logical volumes.
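The pvcreate and vgcreate steps above can be condensed into a short dry-run script. The WWIDs are the ones from the sample multipath -l output, and the leading echo only prints each command; remove it to actually execute the commands on a real system:

```shell
# WWIDs of the four multipath devices from the sample `multipath -l` output.
wwids="36005076303ffc56200000000000010d0
36005076303ffc56200000000000010d1
36005076303ffc56200000000000010d2
36005076303ffc56200000000000010d3"

# Dry run: echo prints each command instead of executing it.
devs=""
for wwid in $wwids; do
  echo pvcreate "/dev/mapper/$wwid"
  devs="$devs /dev/mapper/$wwid"
done
echo vgcreate sample_vg$devs
```

The script prints one pvcreate command per device, followed by a single vgcreate command that lists all four devices.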

Once configured, the multipath I/O devices and LVM2 volume groups can be made available at boot time. To do this, continue with the following additional steps.

  1. Include the zfcp unit configuration in the distribution configuration; see the documentation of your distribution for how to do this.
  2. Update the IPL record:
    # zipl
    Using config file '/etc/zipl.conf'
    Building bootmap in '/boot/zipl'
    Adding IPL section 'ipl' (default)
    Preparing boot device: dasda (2c1a).
    Done.
  3. Ensure that multipathing and LVM are enabled in the init scripts for your distribution. Consult the distribution documentation for details.
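As a quick check after reboot, the relevant messages can be filtered out of the kernel log. This sketch runs grep over a few sample lines taken from the boot messages shown below; on a live system you would pipe dmesg into the same grep:

```shell
# Sketch: pick the device-mapper lines out of saved boot messages.
# On a live system, use `dmesg | grep device-mapper` instead of the here-doc.
grep 'device-mapper' <<'EOF'
SCSI subsystem initialized
device-mapper: uevent: version 1.0.3
device-mapper: multipath: version 1.1.1 loaded
scsi0 : zfcp
EOF
```

Only the two device-mapper lines are printed.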
After rebooting, you should see messages that report multipath I/O devices and LVM2 volume groups, for example:
SCSI subsystem initialized
...
scsi0 : zfcp
qdio: 0.0.181d ZFCP on SC 10 using AI:1 QEBSM:1 PCI:1 TDD:1 SIGA: W AO
scsi1 : zfcp
qdio: 0.0.191d ZFCP on SC 11 using AI:1 QEBSM:1 PCI:1 TDD:1 SIGA: W AO
...
device-mapper: uevent: version 1.0.3
device-mapper: ioctl: 4.16.0-ioctl (2009-11-05) initialised: dm-devel@redhat.com
device-mapper: multipath: version 1.1.1 loaded
device-mapper: multipath round-robin: version 1.0.0 loaded
device-mapper: multipath queue-length: version 0.1.0 loaded
device-mapper: multipath service-time: version 0.2.0 loaded
...
For each SCSI device, you see output messages, for example:
scsi 1:0:20:1087127568: Direct-Access     IBM      2107900          .280 PQ: 0 ANSI: 5
scsi 1:0:20:1087127568: alua: supports implicit TPGS
scsi 1:0:20:1087127568: alua: port group 00 rel port 233
scsi 1:0:20:1087127568: alua: rtpg failed with 8000002
scsi 1:0:20:1087127568: alua: port group 00 state A supports tousNA
sd 1:0:20:1087127568: Attached scsi generic sg0 type 0
sd 1:0:20:1087127568: [sda] 10485760 512-byte logical blocks: (5.36 GB/5.00 GiB)
sd 1:0:20:1087127568: [sda] Write Protect is off
sd 1:0:20:1087127568: [sda] Mode Sense: ed 00 00 08
sd 1:0:20:1087127568: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
 sda: unknown partition table
sd 1:0:20:1087127568: [sda] Attached SCSI disk