File system I/O workload

The file system I/O workload was generated with fio.

fio is an I/O tool intended for both benchmarking and stress/hardware verification. For more information, see:
https://git.kernel.dk/?p=fio.git;a=blob_plain;f=README;hb=cf9a74c8bd63d9db5256f1362885c740e11a1fe5
fio supports various types of I/O engines (such as sync or libaio), I/O priorities, throughput limits, and forked or threaded jobs, among other features. It can operate on block devices as well as on files, and it reports a wide range of I/O performance information.
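As an illustration of these options, a minimal fio job file might look as follows. The job name, file path, and parameter values here are hypothetical examples, not the configuration used in the tests described below:

```ini
; Minimal example fio job file (hypothetical values)
[global]
ioengine=sync        ; synchronous I/O engine; libaio would select async I/O
rw=randwrite         ; random write I/O pattern
bs=4k                ; block size per I/O operation

[example-job]
filename=/tmp/fio-testfile   ; fio can also target a block device here
size=256m                    ; total amount of I/O for this job
```

Running `fio` with such a job file as its argument executes the defined jobs and prints the collected performance statistics.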

For our tests, the file system I/O workload was executed in a Linux® on System z® operating system instance running in the z/VM® guest virtual machine that was relocated. The workload was varied to produce specific memory usage patterns, using the following parameters:

  • Use of page cache I/O versus direct I/O

Page cache I/O causes high memory access rates because data is written through the Linux page cache. Page cache I/O is therefore expected to cause a very different relocation behavior than direct I/O, which bypasses the page cache.

  • Use of asynchronous versus synchronous I/O

Asynchronous I/O allows more I/O operations to be in flight concurrently.

  • I/O rate

The I/O rate, specified in KiB/s, limits the maximum amount of data transferred by fio per second, effectively constraining the memory access rates of the fio processes. This provides a means to control the effective memory change rate of the fio processes.
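In a fio job file, these three dimensions correspond to a small set of parameters. The following fragment is a hypothetical illustration of how they might be set; the specific values are chosen for illustration only:

```ini
direct=1          ; 1 = direct I/O (bypass page cache), 0 = page cache I/O
ioengine=libaio   ; asynchronous I/O engine; 'sync' selects synchronous I/O
iodepth=8         ; in-flight I/Os per job (only meaningful for async engines)
rate=12500k       ; cap throughput at 12500 KiB/s per job; omit for unlimited
```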

The file system I/O workload was customized for producing specific loads on memory and I/O resources. The workload parameters are listed in Table 1.

Table 1. File system I/O workload parameters

Parameter                   Name       Value
Number of fio jobs          numjobs    16
File size                   filesize   1 GiB
Block size                  bs         32 KiB
Type of I/O pattern         rw         randwrite
Enable direct I/O           direct     0 (page cache I/O), 1 (direct I/O)
Preset throughput per job   rate       not limited, 25000 KiB/s, 12500 KiB/s, 3125 KiB/s

The workload always consisted of 16 fio jobs, each performing random write operations with a block size of 32 KiB on a 1 GiB file on an ext3 file system.
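Based on the parameters in Table 1, a job file for one of the test variants (direct I/O, 12500 KiB/s per job) might look roughly as follows. The job name and target directory are assumptions for illustration, not details taken from the original test setup:

```ini
; Sketch of one test variant derived from Table 1 (paths are hypothetical)
[global]
numjobs=16        ; 16 fio jobs
filesize=1g       ; 1 GiB file per job
bs=32k            ; 32 KiB block size
rw=randwrite      ; random write I/O pattern
direct=1          ; direct I/O variant; set to 0 for page cache I/O
rate=12500k       ; preset throughput per job; omit for the unlimited case

[fsio]
directory=/mnt/ext3   ; assumed mount point of the ext3 file system
```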