LFS
The LFS report provides detailed file system statistics. The following sample shows the content of the report; each part of the report is described below.
F ZFS,QUERY,LFS
IOEZ00438I Starting Query Command LFS. 790
zFS Vnode Op Counts
Vnode Op Count Vnode Op Count
----------------- ---------- ----------------- ----------
efs_hold 0 efs_readdir 12473
efs_rele 0 efs_create 11209
efs_inactive 0 efs_remove 4
efsvn_getattr 71182435 efs_rename 0
efs_setattr 13 efs_mkdir 84
efs_access 64240 efs_rmdir 3
efs_lookup 216423 efs_link 0
efs_getvolume 0 efs_symlink 0
efs_getlength 0 efs_readlink 1208
efs_afsfid 0 efs_rdwr 0
efs_fid 0 efs_fsync 0
efs_vmread 0 efs_waitIO 61121
efs_vmwrite 0 efs_cancelIO 5
efs_clrsetid 0 efs_audit 23
efs_getanode 2498 efs_vmblkinfo 0
efs_readdir_raw 33 efs_convert 0
Average number of names per convert 0
Number of version5 directory splits 0
Number of version5 directory merges 0
Total zFS Vnode Ops 71551772
zFS Vnode Cache Statistics
Vnodes Requests Hits Ratio Allocates Deletes
---------- ---------- ---------- ----- ---------- ----------
29295 766173 716967 93.578% 7 34171
zFS Vnode structure size: 240 bytes
zFS extended vnodes: 13830, extension size 864 bytes (minimum)
Held zFS vnodes: 8 (high 11293)
Open zFS vnodes: 0 (high 5)
Reusable: 29286
Total osi_getvnode Calls: 13495 (high resp 0)
Avg. Call Time: 0.008 (msecs)
Total SAF Calls: 87013 (high resp 0)
Avg. Call Time: 0.001 (msecs)
Remote Vnode Extension Cleans: 0
zFS Fast Lookup Statistics
Buffers Lookups Hits Ratio Neg. Hits Updates
---------- ---------- ---------- ------ ---------- ----------
1000 4660 2452 52.618% 1357 2271
Metadata Caching Statistics
Buffers (K bytes) Requests Hits Ratio Updates PartialWrt
--------- --------- ---------- ---------- ------ ---------- ----------
83484 23848 981046 967961 98.6% 476870 1813
I/O Summary By Type
-------------------
Count Waits Cancels Merges Type
---------- ---------- ---------- ---------- ----------
44579 27968 0 1968 File System Metadata
422 34 0 0 Log File
121373 60255 0 0 User File Data
I/O Summary By Circumstance
---------------------------
Count Waits Cancels Merges Circumstance
---------- ---------- ---------- ---------- ------------
40251 23846 0 1968 Metadata cache read
52102 52101 0 0 User file cache direct read
34 34 0 0 Log file read
0 0 0 0 Metadata cache async delete write
159 4 0 0 Metadata cache async write
0 0 0 0 Metadata cache lazy write
983 983 0 0 Metadata cache sync delete write
0 0 0 0 Metadata cache sync write
68257 7140 0 0 User File cache direct write
19 19 0 0 Metadata cache file sync write
51 0 0 0 Metadata cache sync daemon write
0 0 0 0 Metadata cache aggregate detach write
0 0 0 0 Metadata cache buffer block reclaim write
53 53 0 0 Metadata cache buffer allocation write
4034 4034 0 0 Metadata cache file system quiesce write
4 4 0 0 Metadata cache log file full write
388 0 0 0 Log file write
8 8 0 0 Metadata cache shutdown write
31 31 0 0 Format, grow write
zFS I/O by Currently Attached Aggregate
DASD PAV
VOLSER IOs Mode Reads K bytes Writes K bytes
------ --- ---- ---------- ---------- ---------- ----------
*OMVS.MNT.OMVSSPA.SVT.TOOLS.ZFS
SMMMN0 1 R/O 8007 35880 0 0
*POSIX.CFCIMGKA.ICTROOT
POSIX6 1 R/W 338 2688 7094 28472
*SUIMGKA.HIGHRISK.LTE
SMBRS1 1 R/W 21 488 7342 29920
*POSIX.ZFSFVT.REGFS
POSIX5 1 R/O 7014 28636 0 0
*ZFSAGGR.BIGZFS.FS1
ZFSD33 1 R/W 2306 46992 2403 48032
------ ---------- ---------- ---------- ----------
*TOTALS*
5 17686 114684 16839 106424
Compression calls: 6708 Avg. call time: 2.316
KB input 411216 KB output 59488
Decompression calls: 5892 Avg. call time: 2.190
KB input 48864 KB output 373536
Total number of waits for I/O: 88257
Average I/O wait time: 3.532 (msecs)
IOEZ00025I zFS kernel: MODIFY command - QUERY,LFS completed 791
successfully.
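The cache hit ratios in this report are simply hits divided by requests. As a quick orientation, here is a minimal Python sketch (not part of zFS; the figures are copied from the sample output above) that reproduces the vnode cache, fast lookup, and metadata cache ratios:

```python
# Hit ratio = hits / requests, as printed in the report.
# Figures below are copied from the sample output; substitute your own.

def hit_ratio(hits: int, requests: int) -> float:
    """Return a cache hit ratio as a percentage."""
    return 100.0 * hits / requests if requests else 0.0

print(f"vnode cache:    {hit_ratio(716_967, 766_173):.3f}%")  # 93.578%
print(f"fast lookup:    {hit_ratio(2_452, 4_660):.3f}%")      # 52.618%
print(f"metadata cache: {hit_ratio(967_961, 981_046):.3f}%")  # 98.666% (the report truncates to 98.6%)
```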
| Field name | Contents |
|---|---|
| zFS Vnode Op Counts: | Shows the number of calls to the lower-layer zFS components. One request from z/OS® UNIX typically requires more than one lower-layer call. Note that the output of this report wraps. |
| zFS Vnode Cache Statistics: | Shows the basic performance characteristics of the zFS vnode cache: the number of currently allocated vnodes, the request and hit counts and the resulting hit ratio, and the numbers of vnode allocations and deletions. Also shown are the vnode structure sizes, the counts of held, open, and reusable vnodes, the number and average response time of osi_getvnode and SAF calls, and the number of remote vnode extension cleans. The higher the hit ratio, the better the performance. |
| zFS Fast Lookup Statistics: | Shows the basic performance characteristics of the zFS fast lookup cache. The fast lookup cache is used on the owning system of a zFS sysplex-aware file system to improve the performance of lookup operations. There are no externals for this cache (other than this display). The statistics show the total number of buffers (each 8 K in size), the total number of lookups, the cache hits for lookups, and the hit ratio. The higher the hit ratio, the better the performance. |
| Metadata Caching Statistics: | Shows the basic performance characteristics of the metadata cache. The metadata cache contains all disk blocks that hold metadata, plus any file data for files smaller than 7 K. For files smaller than 7 K, zFS places multiple files in one disk block (a zFS disk block is 8 K bytes). Only the lower metadata management layers have the block fragmentation information, so user file I/O for small files is performed directly through this cache rather than through the user file cache; a brief sketch of this small-file routing appears after the table. The statistics show the total number of buffers (each buffer is 8 K in size), the total kilobytes, the number of requests, the hit ratio of the cache, Updates (the number of times an update was made to a metadata block), and Partial writes (the number of times that only half of an 8 K metadata block needed to be written). The higher the hit ratio, the better the performance. Metadata is accessed frequently in zFS and, for the most part, is contained only in the metadata cache; a hit ratio of 80% or more is therefore typically sufficient. |
| zFS I/O by Currently Attached Aggregate: | The zFS I/O driver is essentially an I/O queue manager (one I/O queue per DASD). It uses Media Manager to issue I/O to VSAM data sets, and it generally sends no more than one I/O per DASD volume to disk at a time. The exception is parallel access volume (PAV) DASD, which often has multiple paths and can perform multiple I/Os in parallel. In this case, zFS divides the number of access paths by two and rounds any fraction up; for example, for a PAV DASD with five paths, zFS issues at most three I/Os at one time to Media Manager. zFS limits the I/O because it uses a dynamic reordering and prioritization scheme that improves performance by reordering the I/O queue on demand, placing high-priority I/Os (for example, I/Os that are currently being waited on) up front; an I/O can be made high priority at any time during its life. This reordering has been proven to provide the best performance, and for PAV DASD, performance tests have shown that sending slightly fewer I/Os than there are available paths lets zFS reorder I/Os and keep paths available for I/Os that become high priority. Because I/Os are queued, they can also be canceled, for example, when a file is written and then immediately deleted. Finally, the zFS I/O driver merges adjacent I/Os into one larger I/O to reduce I/O scheduling resource. This is often done with log file I/Os, because multiple log file I/Os are frequently in the queue at one time and the log file blocks are contiguous on disk; log file pages can therefore be written aggressively (making it less likely that users lose data in a failure) yet batched together for performance when the disk has a high load. A sketch of this queuing scheme follows the table. For each attached aggregate, this section shows the DASD volume (VOLSER), the number of parallel I/Os allowed (DASD PAV IOs), the mount mode (R/O or R/W), and the counts of reads and writes with the kilobytes transferred. |
| Average I/O wait time: | By using this information with the KN report, you can break down zFS response time into the percentage of response time that is spent waiting for I/O; a worked example follows the table. To reduce I/O waits, you can run with larger cache sizes. Small log files (small aggregates) that are heavily updated might result in I/Os to sync metadata to reclaim log file pages, which causes additional I/O waits. Note that this number is not DASD response time; it is affected by DASD response time, but it is not the same. If a thread does not have to wait for an I/O, it has no I/O wait; if a thread has to wait for an I/O while other I/Os are being processed, it might actually wait for more than one I/O (the time in the queue plus the time for the I/O itself). This report, along with RMF DASD reports and the zFS FILE report, can also be used to balance zFS aggregates among DASD volumes to ensure an even I/O spread. |
| Compression calls | The number of compression calls. |
| Decompression calls | The number of decompression calls. |
| Average call time | The average number of milliseconds per compression or decompression call. |
| KB input | The number of kilobytes sent to zEDC cards for compression or decompression calls. |
| KB output | The number of kilobytes returned from zEDC cards for compression or decompression calls. |
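As noted in the Metadata Caching Statistics description, user file I/O for small files goes through the metadata cache rather than the user file cache. The following is a minimal sketch of that routing rule only; the function name and returned strings are illustrative, not zFS interfaces:

```python
DISK_BLOCK_KB = 8          # a zFS disk block is 8K bytes
SMALL_FILE_LIMIT_KB = 7    # files smaller than 7K share disk blocks

def cache_for_file_io(file_size_kb: float) -> str:
    # Small files live in fragmented 8K metadata blocks, so only the lower
    # metadata layers can locate them; their I/O uses the metadata cache.
    if file_size_kb < SMALL_FILE_LIMIT_KB:
        return "metadata cache"
    return "user file cache"

assert cache_for_file_io(4) == "metadata cache"
assert cache_for_file_io(64) == "user file cache"
```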
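The queuing behavior described for zFS I/O by Currently Attached Aggregate can be modeled in a few lines. This is a hedged Python sketch, not the zFS implementation: it shows the PAV parallel-I/O limit (paths divided by two, rounded up), on-demand reordering that moves waited-on I/Os to the front, cancelation of queued I/Os, and merging of adjacent requests into one larger I/O (as happens with contiguous log file blocks). All class and field names are illustrative.

```python
import math
from dataclasses import dataclass

def pav_io_limit(access_paths: int) -> int:
    """zFS sends at most ceil(paths / 2) concurrent I/Os to a PAV DASD."""
    return math.ceil(access_paths / 2)

assert pav_io_limit(5) == 3  # the five-path example from the description

@dataclass
class IORequest:
    block: int                   # starting disk block
    nblocks: int = 1             # request length in blocks
    high_priority: bool = False  # set when a thread begins waiting on it

class IOQueue:
    """Toy per-DASD I/O queue: reorder on demand, cancel, merge neighbors."""

    def __init__(self) -> None:
        self.pending: list[IORequest] = []

    def add(self, req: IORequest) -> None:
        self.pending.append(req)

    def cancel(self, block: int) -> None:
        # Queuing makes cancelation possible, for example when a file
        # is written and then immediately deleted.
        self.pending = [r for r in self.pending if r.block != block]

    def next_io(self) -> IORequest | None:
        if not self.pending:
            return None
        # Reorder: I/Os that are being waited on move to the front.
        self.pending.sort(key=lambda r: not r.high_priority)
        req = self.pending.pop(0)
        # Merge requests that directly follow this one on disk into a
        # single larger I/O (common for contiguous log file blocks).
        merged = True
        while merged:
            merged = False
            for other in self.pending:
                if other.block == req.block + req.nblocks:
                    req.nblocks += other.nblocks
                    self.pending.remove(other)
                    merged = True
                    break
        return req

q = IOQueue()
q.add(IORequest(block=100))                      # log block
q.add(IORequest(block=101))                      # adjacent: will be merged
q.add(IORequest(block=500, high_priority=True))  # a thread is waiting on it
first = q.next_io()                              # block 500 goes first
second = q.next_io()                             # blocks 100-101 as one I/O
assert (first.block, second.nblocks) == (500, 2)
```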
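For the worked example promised in the Average I/O wait time description: the product of the wait count and the average wait time gives the total time threads spent waiting on I/O, which can then be compared against total request time from the KN report. The KN total below is a made-up placeholder; the other figures come from the sample report.

```python
# From the sample report above.
total_io_waits = 88_257
avg_wait_ms = 3.532

total_io_wait_ms = total_io_waits * avg_wait_ms   # ~311,724 ms

# Hypothetical total request time taken from an F ZFS,QUERY,KN report;
# substitute the real figure from your own system.
kn_total_request_ms = 1_500_000.0

io_wait_pct = 100.0 * total_io_wait_ms / kn_total_request_ms
print(f"I/O wait accounts for {io_wait_pct:.1f}% of zFS response time")
```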
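Similarly, the compression counters at the end of the report can be reduced to ratios. A small sketch using only the figures shown in the sample:

```python
# Compression sent 411,216 KB to the zEDC cards and got 59,488 KB back.
comp_in_kb, comp_out_kb = 411_216, 59_488
print(f"compression ratio: {comp_in_kb / comp_out_kb:.2f}:1")     # ~6.91:1

# Decompression expanded 48,864 KB back into 373,536 KB of data.
decomp_in_kb, decomp_out_kb = 48_864, 373_536
print(f"expansion factor:  {decomp_out_kb / decomp_in_kb:.2f}x")  # ~7.64x
```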