Disk I/O
This section describes the use of HyperPAV and of the Logical Volume Manager.
Using HyperPAV
The I/O throughput of an ECKD DASD device can be improved by using Parallel Access Volumes (PAV) or HyperPAV. This feature is important for the DASDs used for the FileNet® P8 CE file store and for the FileNet P8 CE/PE databases, where a large amount of disk I/O occurs.
The Linux® DASD device driver can use this IBM® System Storage® feature to perform multiple concurrent data transfer operations to or from the same DASD device instead of single data transfers. To use HyperPAV, base and alias devices must be available, which requires System z® Input/Output Configuration Data Set (IOCDS) definitions. With HyperPAV on an IBM System Storage subsystem, the alias devices are not exclusively bound to a particular base device; they are eligible for all base devices in the same logical control unit (LCU). Linux handles HyperPAV alias devices in the same way as normal DASD base devices: they are set online with the chccwdev command or through appropriate udev rules. When listing the DASD devices with the lsdasd command, HyperPAV aliases can be identified by the 'alias' status tag. The use of the HyperPAV aliases is handled entirely by the Linux kernel and is transparent to users.
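As a minimal sketch, an alias device can be set online for the current session with chccwdev; the bus ID used here is taken from the sample listing below and must be adjusted to match the alias definitions in your IOCDS (persistent configuration via udev rules is distribution specific):

```shell
# Set a HyperPAV alias device online for the current session.
# Bus ID 0.0.6ebc is an example; it must match an alias defined in the IOCDS.
chccwdev -e 0.0.6ebc

# Verify: the device should now appear with the 'alias' status tag.
lsdasd 0.0.6ebc
```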
Sample command: lsdasd showing 11 DASD devices and 20 HyperPAV aliases for the database server
# lsdasd
Bus-ID Status Name Device Type BlkSz Size Blocks
==============================================================================
0.0.6ebc alias ECKD
0.0.6ebd alias ECKD
0.0.6ebe alias ECKD
0.0.6ebf alias ECKD
0.0.6ec0 alias ECKD
0.0.6ec1 alias ECKD
0.0.6ec2 alias ECKD
0.0.6ec3 alias ECKD
0.0.6ec4 alias ECKD
0.0.6ec5 alias ECKD
0.0.6fbc alias ECKD
0.0.6fbd alias ECKD
0.0.6fbe alias ECKD
0.0.6fbf alias ECKD
0.0.6fc0 alias ECKD
0.0.6fc1 alias ECKD
0.0.6fc2 alias ECKD
0.0.6fc3 alias ECKD
0.0.6fc4 alias ECKD
0.0.6fc5 alias ECKD
0.0.7215 active dasda 94:0 ECKD 4096 21129MB 5409180
0.0.680d active dasdb 94:4 ECKD 4096 42259MB 10818360
0.0.690c active dasdc 94:8 ECKD 4096 42259MB 10818360
0.0.6e0c active dasdd 94:12 ECKD 4096 42259MB 10818360
0.0.6e0d active dasde 94:16 ECKD 4096 42259MB 10818360
0.0.6e0f active dasdf 94:20 ECKD 4096 21129MB 5409180
0.0.6e10 active dasdg 94:24 ECKD 4096 21129MB 5409180
0.0.6f0c active dasdh 94:28 ECKD 4096 42259MB 10818360
0.0.6f0d active dasdi 94:32 ECKD 4096 42259MB 10818360
0.0.6f0f active dasdj 94:36 ECKD 4096 21129MB 5409180
0.0.6f10 active dasdk 94:40 ECKD 4096 21129MB 5409180
For more information about PAV and HyperPAV, see:
https://www.ibm.com/servers/resourcelink
https://www.ibm.com/docs/en/linux-on-systems?topic=overview-how-improve-performance-pav
https://www.ibm.com/docs/en/linuxonibm/liaag/l0orac00.pdf
Using the Logical Volume Manager
The Linux Logical Volume Manager (LVM) was used to create logical volumes (LVs) from several physical DASD devices. The LVs were defined with striping enabled, so that I/O operations can be parallelized across the physical DASD devices within the LV. This improves performance for sequential reads and writes, and also benefits random disk I/O. LVs are used on the FileNet P8 CE/PE server and on the database server for their large data areas. The underlying volume groups (VGs) were set up with full DASD devices.
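A striped setup of this kind can be sketched with the standard LVM commands. The device names, VG/LV names, and stripe parameters here mirror the sample outputs below for the database data VG, but the commands themselves are an illustration, not taken from the original setup:

```shell
# Initialize the DASD partitions as LVM physical volumes
pvcreate /dev/dasdd1 /dev/dasde1 /dev/dasdh1 /dev/dasdi1

# Create a volume group from the four DASDs
vgcreate ECM_data_vg /dev/dasdd1 /dev/dasde1 /dev/dasdh1 /dev/dasdi1

# Create a striped LV: one stripe per physical volume, 64 KiB stripe size
lvcreate --stripes 4 --stripesize 64k --extents 100%FREE \
         --name ECM_data_lv ECM_data_vg
```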
Sample command: pvscan showing the DASDs assigned to database server VGs
# pvscan
PV /dev/dasdf1 VG DB_log_vg lvm2 [20.63 GiB / 0 free]
PV /dev/dasdj1 VG DB_log_vg lvm2 [20.63 GiB / 0 free]
PV /dev/dasdg1 VG DB_log_vg lvm2 [20.63 GiB / 0 free]
PV /dev/dasdk1 VG DB_log_vg lvm2 [20.63 GiB / 0 free]
PV /dev/dasdd1 VG ECM_data_vg lvm2 [41.27 GiB / 0 free]
PV /dev/dasde1 VG ECM_data_vg lvm2 [41.27 GiB / 0 free]
PV /dev/dasdh1 VG ECM_data_vg lvm2 [41.27 GiB / 0 free]
PV /dev/dasdi1 VG ECM_data_vg lvm2 [41.27 GiB / 0 free]
PV /dev/dasdb1 VG ECM_backup_vg lvm2 [41.27 GiB / 0 free]
PV /dev/dasdc1 VG ECM_backup_vg lvm2 [41.27 GiB / 0 free]
Total: 10 [330.11 GiB] / in use: 10 [330.11 GiB] / in no VG: 0 [0 ]
Sample command: lvdisplay showing the extent mapping of the database data LV
# lvdisplay -m /dev/ECM_data_vg/ECM_data_lv
--- Logical volume ---
LV Name /dev/ECM_data_vg/ECM_data_lv
VG Name ECM_data_vg
LV UUID dtEdy3-TZGY-5dcY-wMv1-Scx1-XWHx-4fzHGv
LV Write Access read/write
LV Status available
# open 1
LV Size 165.06 GiB
Current LE 42256
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 1024
Block device 253:1
--- Segments ---
Logical extent 0 to 42255:
Type striped
Stripes 4
Stripe size 64.00 KiB
Stripe 0:
Physical volume /dev/dasdd1
Physical extents 0 to 10563
Stripe 1:
Physical volume /dev/dasde1
Physical extents 0 to 10563
Stripe 2:
Physical volume /dev/dasdh1
Physical extents 0 to 10563
Stripe 3:
Physical volume /dev/dasdi1
Physical extents 0 to 10563
The above LVM command outputs show the setup for the database server. The database server has three VGs, each with one LV: one for the database data files, one for the database log files, and one for backup purposes. It is common practice to put the database log files and the database data files on separate logical volumes so that they do not interfere with each other. The one-to-one relation between VG and LV implies that each LV has its own DASD devices. The DASDs were alternately selected from two ECKD storage pools, so that each LV can benefit from both storage subsystem internal server caches.
The lvdisplay command for the database data LV shows four stripes, each on its own physical volume (a full DASD in this case). From a performance point of view, it is recommended to set the number of stripes equal to the number of physical volumes in the volume group; hence four LV stripes are used in this example. Furthermore, a stripe size of 64 KiB was chosen.
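The segment listing above can be cross-checked with a little arithmetic: the 42256 logical extents divide evenly across the four stripes.

```shell
# 42256 logical extents spread over 4 stripes gives the extent count per PV
total_le=42256
stripes=4
echo $(( total_le / stripes ))
# prints: 10564  (matching "Physical extents 0 to 10563" on each stripe)
```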
Sample command: pvscan listing the DASDs assigned to the file store VG for the FileNet P8 Content Engine
# pvscan
PV /dev/dasdb1 VG CE_filestor_vg lvm2 [97.82 GiB / 0 free]
PV /dev/dasdc1 VG CE_filestor_vg lvm2 [97.82 GiB / 0 free]
Total: 2 [195.63 GiB] / in use: 2 [195.63 GiB] / in no VG: 0 [0 ]
Sample command: lvdisplay showing the extent mapping of the CE file store LV
# lvdisplay -m
--- Logical volume ---
LV Name /dev/CE_filestor_vg/CE_filestor_lv
VG Name CE_filestor_vg
LV UUID 8xpcgE-Y14U-0hpe-YWPa-Q2wv-Gk6X-FYM9Ji
LV Write Access read/write
LV Status available
# open 1
LV Size 195.63 GiB
Current LE 50082
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 1024
Block device 253:0
--- Segments ---
Logical extent 0 to 50081:
Type striped
Stripes 2
Stripe size 64.00 KiB
Stripe 0:
Physical volume /dev/dasdb1
Physical extents 0 to 25040
Stripe 1:
Physical volume /dev/dasdc1
Physical extents 0 to 25040
The above LVM command outputs show the LV setup for the FileNet P8 CE/PE server. One LV was created for the FileNet P8 CE file store. The LV is striped across two physical volumes, again with a stripe size of 64 KiB.
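The reported LV size is also easy to verify, assuming the LVM default extent size of 4 MiB:

```shell
# LV size check: 50082 logical extents x 4 MiB (the LVM default extent size)
awk 'BEGIN { printf "%.2f GiB\n", 50082 * 4 / 1024 }'
# prints: 195.63 GiB  (matching the "LV Size" field above)
```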
For more information about disk I/O tuning, see:
https://www.ibm.com/developerworks/linux/linux390/perf/tuning_diskio.html#dpo