MIO configuration example
You can configure MIO at the application level.
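MIO reads its configuration from environment variables such as MIO_FILES and MIO_STATS, so each application, and even each file that an application opens, can be tuned independently. The following is a minimal sketch in the same csh syntax used in the examples below; the file name test.dat and the program name my_app are illustrative assumptions:
# Apply the trace and stats modules only to files named test.dat
# that are opened by the program started from this shell.
setenv MIO_FILES "test.dat [ trace/stats/kbytes ]"
setenv MIO_STATS my_app.stats
time ./my_app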
OS configuration
The alot_buf application accomplishes the following (a rough shell approximation follows the list):
- Writes a 14 GB file.
- 140 000 sequential writes with a 100 KB buffer.
- Reads the file sequentially with a 100 KB buffer.
- Reads the file backward sequentially with a 100 KB buffer.
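As a point of reference, the write phase and the forward read phase can be approximated from the shell with dd; the backward read has no simple dd equivalent. This is a sketch only, not the actual alot_buf program:
# Write phase: 140 000 sequential writes of 100 KB (14 000 000 KB in total).
dd if=/dev/zero of=/mio/test.dat bs=100k count=140000
# Forward read phase: read the file back sequentially with the same buffer.
dd if=/mio/test.dat of=/dev/null bs=100k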
# vmstat
System Configuration: lcpu=2 mem=512MB
kthr     memory             page              faults        cpu
----- ----------- ------------------------ ------------ -----------
 r  b   avm   fre  re  pi  po  fr  sr  cy  in  sy  cs us sy id wa
 1  1 35520 67055   0   0   0   0   0   0 241  64  80  0  0 99  0
# ulimit -a
time(seconds) unlimited
file(blocks) unlimited
data(kbytes) 131072
stack(kbytes) 32768
memory(kbytes) 32768
coredump(blocks) 2097151
nofiles(descriptors) 2000
# df -k /mio
Filesystem 1024-blocks Free %Used Iused %Iused Mounted on
/dev/fslv02 15728640 15715508 1% 231 1% /mio
# lslv fslv02
LOGICAL VOLUME: fslv02 VOLUME GROUP: mio_vg
LV IDENTIFIER: 000b998d00004c00000000f17e5f50dd.2 PERMISSION: read/write
VG STATE: active/complete LV STATE: opened/syncd
TYPE: jfs2 WRITE VERIFY: off
MAX LPs: 512 PP SIZE: 32 megabyte(s)
COPIES: 1 SCHED POLICY: parallel
LPs: 480 PPs: 480
STALE PPs: 0 BB POLICY: relocatable
INTER-POLICY: minimum RELOCATABLE: yes
INTRA-POLICY: middle UPPER BOUND: 32
MOUNT POINT: /mio LABEL: /mio
MIRROR WRITE CONSISTENCY: on/ACTIVE
EACH LP COPY ON A SEPARATE PV ?: yes
Serialize IO ?: NO
MIO configuration to analyze the application /mio/alot_buf
setenv MIO_DEBUG "OPEN MODULES TIMESTAMP"
setenv MIO_FILES "* [ trace/stats/kbytes ]"
setenv MIO_STATS mio_analyze.stats
time /mio/alot_buf
Note: The output diagnostic file is mio_analyze.stats for the debug data and the trace module data. All values are in kilobytes.
Note: The time command reports the execution time of the program.
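Each MIO_DEBUG keyword can be matched to a marker in the diagnostic output shown below; the comments are an interpretation based on that output:
# OPEN      : report each file that MIO opens ("Opening file test.dat")
# MODULES   : report the module list attached to the file ("modules[18]=...")
# TIMESTAMP : prefix each section of the output with a time stamp ("17:32:22")
setenv MIO_DEBUG "OPEN MODULES TIMESTAMP"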
Result of the analysis
- The execution time is 28:06.
- The MIO diagnostic output file is mio_analyze.stats.
MIO statistics file : Thu May 26 17:32:22 2005
hostname=miohost : with Legacy aio available
Program=/mio/alot_buf
MIO library libmio.a 3.0.0.60 AIX 64 bit addressing built Apr 19 2005 15:07:35
MIO_INSTALL_PATH=
MIO_STATS =mio_analyze.stats
MIO_DEBUG = MATCH OPEN MODULES TIMESTAMP
MIO_FILES =* [ trace/stats/kbytes ]
MIO_DEFAULTS =
MIO_DEBUG OPEN =T
MIO_DEBUG MODULES =T
MIO_DEBUG TIMESTAMP =T
17:32:22
Opening file test.dat
modules[18]=trace/stats/kbytes
trace/stats={mioout}/noevents/kbytes/nointer
aix/nodebug/trunc/sector_size=0/einprogress=60
============================================================================
18:00:28
Trace close : program <-> aix : test.dat : (42000000/1513.95)=27741.92 kbytes/s
demand rate=24912.42 kbytes/s=42000000/(1685.92-0.01)
current size=14000000 max_size=14000000
mode =0640 FileSystemType=JFS2 sector size=4096
oflags =0x302=RDWR CREAT TRUNC
open 1 0.01
write 140000 238.16 14000000 14000000 102400 102400
read 280000 1275.79 28000000 28000000 102400 102400
seek 140003 11.45 average seek delta=-307192
fcntl 2 0.00
close 1 0.00
size 140000
============================================================================
Note:
- 140 000 writes of 102 400 bytes.
- 280 000 reads of 102 400 bytes.
- An overall rate of 27 741.92 KB/s (derived below).
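The rate on the Trace close line is the total traffic divided by the traced I/O time, both taken from the statistics above (the times in seconds appear next to the operation counts):
14 000 000 KB written + 28 000 000 KB read = 42 000 000 KB
238.16 s (write) + 1 275.79 s (read) = 1 513.95 s
42 000 000 KB / 1 513.95 s = 27 741.92 KB/s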
MIO configuration to increase I/O performance
setenv MIO_FILES "* [ trace/stats/kbytes | pf/cache=100m/page=2m/pref=4/stats/direct | trace/stats/kbytes ]"
setenv MIO_DEBUG "OPEN MODULES TIMESTAMP"
setenv MIO_STATS mio_pf.stats
time /mio/alot_buf
- A good way to analyze the I/O of your application is to use the trace | pf | trace module list. This way you can get the performance that the application sees from the pf cache and also the performance that the pf cache sees from the operating system.
- The pf global cache is 100 MB in size. Each page is 2 MB. The number of pages to prefetch is four. The pf cache does asynchronous direct I/O system calls. (The module string is annotated after this list.)
- The output diagnostic file is mio_pf.stats for the debug data, the trace module data, and the pf module data. All values are in kilobytes.
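For reference, here is the same MIO_FILES setting with each part of the module string annotated; the comments restate the options described above and are not part of the syntax:
# trace/stats/kbytes : trace layer between the program and pf, stats in KB
# pf/cache=100m      : pf module with a 100 MB global cache
#    page=2m         : 2 MB cache pages
#    pref=4          : prefetch four pages ahead
#    stats           : write pf statistics to the diagnostic file
#    direct          : use asynchronous direct I/O for the underlying calls
# trace/stats/kbytes : trace layer between pf and the operating system
setenv MIO_FILES "* [ trace/stats/kbytes | pf/cache=100m/page=2m/pref=4/stats/direct | trace/stats/kbytes ]"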
Result of the performance test
- The execution time is 15:41.
- The MIO diagnostic output file is mio_pf.stats.
MIO statistics file : Thu May 26 17:10:12 2005
hostname=uriage : with Legacy aio available
Program=/mio/alot_buf
MIO library libmio.a 3.0.0.60 AIX 64 bit addressing built Apr 19 2005 15:07:35
MIO_INSTALL_PATH=
MIO_STATS =mio_pf.stats
MIO_DEBUG = MATCH OPEN MODULES TIMESTAMP
MIO_FILES =* [ trace/stats/kbytes | pf/cache=100m/page=2m/pref=4/stats/direct | trace/stats/kbytes ]
MIO_DEFAULTS =
MIO_DEBUG OPEN =T
MIO_DEBUG MODULES =T
MIO_DEBUG TIMESTAMP =T
17:10:12
Opening file test.dat
modules[79]=trace/stats/kbytes|pf/cache=100m/page=2m/pref=4/stats/direct|trace/stats/kbytes
trace/stats={mioout}/noevents/kbytes/nointer
pf/nopffw/release/global=0/asynchronous/direct/bytes/cache_size=100m/page_size=2m/prefetch=4/stride=1/stats={mioout}/nointer/noretain/nolistio/notag/noscratch/passthru={0:0}
trace/stats={mioout}/noevents/kbytes/nointer
aix/nodebug/trunc/sector_size=0/einprogress=60
============================================================================
17:25:53
Trace close : pf <-> aix : test.dat : (41897728/619.76)=67603.08 kbytes/s
demand rate=44527.71 kbytes/s=41897728/(940.95-0.01)
current size=14000000 max_size=14000000
mode =0640 FileSystemType=JFS2 sector size=4096
oflags =0x8000302=RDWR CREAT TRUNC DIRECT
open 1 0.01
ill form 0 mem misaligned 0
write 1 0.21 1920 1920 1966080 1966080
awrite 6835 0.20 13998080 13998080 2097152 2097152
suspend 6835 219.01 63855.82 kbytes/s
read 3 1.72 6144 6144 2097152 2097152
aread 13619 1.02 27891584 27891584 1966080 2097152
suspend 13619 397.59 69972.07 kbytes/s
seek 20458 0.00 average seek delta=-2097036
fcntl 5 0.00
fstat 2 0.00
close 1 0.00
size 6836
17:25:53
pf close for test.dat
50 pages of 2097152 bytes 4096 bytes per sector
6840/6840 pages not preread for write
7 unused prefetches out of 20459 : prefetch=4
6835 write behinds
bytes transferred / Number of requests
program --> 14336000000/140000 --> pf --> 14336000000/6836 --> aix
program <-- 28672000000/280000 <-- pf <-- 28567273472/13622 <-- aix
17:25:53
pf close for global cache 0
50 pages of 2097152 bytes 4096 bytes per sector
6840/6840 pages not preread for write
7 unused prefetches out of 20459 : prefetch=0
6835 write behinds
bytes transferred / Number of requests
program --> 14336000000/140000 --> pf --> 14336000000/6836 --> aix
program <-- 28672000000/280000 <-- pf <-- 28567273472/13622 <-- aix
17:25:53
Trace close : program <-> pf : test.dat : (42000000/772.63)=54359.71 kbytes/s
demand rate=44636.36 kbytes/s=42000000/(940.95-0.01)
current size=14000000 max_size=14000000
mode =0640 FileSystemType=JFS2 sector size=4096
oflags =0x302=RDWR CREAT TRUNC
open 1 0.01
write 140000 288.88 14000000 14000000 102400 102400
read 280000 483.75 28000000 28000000 102400 102400
seek 140003 13.17 average seek delta=-307192
fcntl 2 0.00
close 1 0.00
size 140000
============================================================================
Note: The program executes 140 000 writes of 102 400 bytes and 280 000 reads of 102 400 bytes, but the pf module executes 6 836 writes (of which 6 835 are asynchronous writes) of 2 097 152 bytes and 13 622 reads (of which 13 619 are asynchronous reads) of 2 097 152 bytes. The rate is 54 359.71 KB/s.
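The write-side coalescing can be checked with the figures above:
140 000 writes x 102 400 bytes = 14 336 000 000 bytes
14 336 000 000 bytes / 2 097 152 bytes per page = 6 836 pf writes (rounded up)
On the read side, pf transfers slightly fewer bytes (28 567 273 472) than the program requests (28 672 000 000), presumably because some requests are satisfied from pages that are already in the cache.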