LFS

The LFS report provides detailed file system statistics. The following sample shows an example of the content; each part of the report is described in Table 1.

   F ZFS,QUERY,LFS
   IOEZ00438I Starting Query Command LFS. 421
                        zFS Vnode Op Counts
  
   Vnode Op               Count    Vnode Op               Count
   ----------------- ----------    ----------------- ----------
   efs_hold                   0    efs_readdir            67997
   efs_rele                   0    efs_create           1569039
   efs_inactive               0    efs_remove           1945874
   efsvn_getattr        9856523    efs_rename            235320
   efs_setattr               40    efs_mkdir             237359
   efs_access           1656502    efs_rmdir             238004
   efs_lookup          21545682    efs_link              237318
   efs_getvolume              0    efs_symlink           237318
   efs_getlength              0    efs_readlink               0
   efs_afsfid                 0    efs_rdwr                   0
   efs_fid                    0    efs_fsync                  0
   efs_vmread                 0    efs_waitIO                 9
   efs_vmwrite                0    efs_cancelIO               0
   efs_clrsetid               0    efs_audit               5425
   efs_getanode           16640    efs_vmblkinfo              0
   efs_readdir_raw            0    efs_convert                0
  
   Average number of names per convert                        0
   Number of version5 directory splits                      126
   Number of version5 directory merges                       63
 
   Total zFS Vnode Ops                                 37849050
  
                           zFS Vnode Cache Statistics
  
   Vnodes      Requests     Hits    Ratio  Allocates   Deletes
 ---------- ---------- ---------- ----- ---------- ----------
     200000   25908218   22431383  86.580%          0          1
  
   zFS Vnode structure size: 224 bytes
   zFS extended vnodes: 200000, extension size 816 bytes (minimum)
   Held zFS vnodes:        2914 (high      29002) 
   Open zFS vnodes:           0 (high         10) 
   Reusable:             197085
  
   Total osi_getvnode Calls:    3886774 (high resp          0) Avg. Call Time:         0.069 (msecs)
   Total SAF Calls:            11050540 (high resp          1) Avg. Call Time:         0.008 (msecs)
  
  Remote Vnode Extension Cleans                0
                           zFS Fast Lookup Statistics
  
   Buffers     Lookups      Hits    Ratio  Neg. Hits   Updates
   ---------- ---------- ---------- ----- ---------- ----------
         1000          0          0   0.0%          0          0
  
                            Metadata Caching Statistics
  
   Buffers   (K bytes)  Requests     Hits    Ratio   Updates   PartialWrt
   --------- --------- ---------- ---------- ------ ---------- ----------
       32768    262144   77813570   77529130  99.6%   27943073     423524
  
  
                     I/O Summary By Type
                   -------------------
  
   Count       Waits       Cancels     Merges      Type
   ----------  ----------  ----------  ----------  ----------
        33006        7701           0           0  File System Metadata
       680516        1020           0       56366  Log File
           11           1           0           0  User File Data
  
                     I/O Summary By Circumstance
                     ---------------------------
  
   Count       Waits       Cancels     Merges      Circumstance
   ----------  ----------  ----------  ----------  ------------
         7213        6553           0           0  Metadata cache read
            1           1           0           0  User file cache direct read
            4           4           0           0  Log file read
            0           0           0           0  Metadata cache async delete write
            0           0           0           0  Metadata cache async write
            0           0           0           0  Metadata cache lazy write
            0           0           0           0  Metadata cache sync delete write
            0           0           0           0  Metadata cache sync write
           10           0           0           0  User File cache direct write
            1           1           0           0  Metadata cache file sync write
        16981         861           0           0  Metadata cache sync daemon write
            0           0           0           0  Metadata cache aggregate detach write
            0           0           0           0  Metadata cache buffer block reclaim write
            0           0           0           0  Metadata cache buffer allocation write
            0           0           0           0  Metadata cache file system quiesce write
         8811         286           0           0  Metadata cache log file full write
       680512        1016           0       56366  Log file write
            0           0           0           0  Metadata cache shutdown write
            0           0           0           0  Format, grow write
  
                        zFS I/O by Currently Attached Aggregate
  
   DASD   PAV
   VOLSER IOs Mode  Reads       K bytes     Writes      K bytes
   Dataset Name
   ------ --- ----  ----------  ----------  ----------  ----------
   ZFSAGGR.BIGZFS.DHH.FS14.EXTATTR
   ZFSD18   1  R/W          44         344        1831       17224
   ZFSAGGR.BIGZFS.DHH.FS1.EXTATTR
   ZFS121   1  R/W        6509       52056      648750    10276788
   
   ------           ----------  ----------  ----------  ----------
  *TOTALS*
        2                 6553       52400      650581    10294012
     
  
   Total number of waits for I/O:       8722
   Average I/O wait time:               115.334 (msecs)
   IOEZ00025I zFS kernel: MODIFY command - QUERY,LFS completed successfully
Table 1. LFS report sections
Field name Contents
zFS Vnode Op Counts: Shows the number of calls to the lower-layer zFS components. One request from z/OS® UNIX typically requires more than one lower-layer call. Note that the output of this report wraps.
zFS Vnode Cache Statistics: Shows the zFS vnode cache statistics, including the number of currently allocated vnodes and the vnode hit ratio. Allocates and Deletes show requests to create new vnodes (for operations such as create or mkdir) and to delete vnodes (for operations such as remove, or for failed creates or mkdirs). The size of this cache is controlled by the vnode_cache_size parameter and by the demand for zFS vnodes placed by z/OS UNIX. In general, zFS tries to honor the setting of the vnode_cache_size parameter and recycles vnode structures to represent different files.

However, if z/OS UNIX requests more vnodes than zFS has allocated, zFS must allocate additional vnodes to avoid application failures. Held zFS vnodes is the number of vnodes that z/OS UNIX currently requires zFS to keep available for access. high is the largest number of vnodes that z/OS UNIX required of zFS at any one time (the peak). z/OS UNIX also determines when files are opened and closed. Open zFS vnodes is the number of vnodes that represent currently open files; high is the largest number of files open at the same time. Generally, a good hit ratio for this cache is preferable because a miss means initializing the vnode data structures, and initialization requires a read of the object's status from disk. That status is often in the metadata cache, but this is not guaranteed; consequently, a vnode cache lookup miss might require an I/O wait.

The vnode structure size is shown; however, additional data structures anchored from the vnode also take space. Everything added together yields over 1 K of storage per vnode. Consider this when planning the size of this cache. Also note that initializing a vnode does not require an I/O if the object's status information is in the metadata cache; thus a good-sized metadata cache can be as useful as, and often more useful than, an extremely large vnode cache.
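
As a planning aid, the following sketch (Python, illustrative only; the helper name is invented for this example) estimates the minimum storage that a given vnode_cache_size consumes, using the structure and extension sizes shown in the sample report. The "over 1 K per vnode" guidance above also covers the other anchored structures, so treat the result as a lower bound.

    # Lower-bound storage estimate for the zFS vnode cache, based only on the
    # sizes shown in this sample report. Structures anchored from the vnode add
    # more, so the true cost is over 1 K per vnode.
    VNODE_STRUCT_BYTES = 224      # "zFS Vnode structure size" in the report
    VNODE_EXTENSION_BYTES = 816   # minimum "extension size" in the report

    def estimate_vnode_cache_bytes(vnode_cache_size):
        return vnode_cache_size * (VNODE_STRUCT_BYTES + VNODE_EXTENSION_BYTES)

    vnodes = 200_000  # vnode count shown in the sample report
    megabytes = estimate_vnode_cache_bytes(vnodes) / (1024 * 1024)
    print(f"{vnodes} vnodes need at least {megabytes:.0f} MB")  # about 198 MB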

Total osi_getvnode Calls is the number of times zFS called the osi_getvnode interface of z/OS UNIX to get a z/OS UNIX vnode to correspond to a new zFS vnode. Its high resp is the number of calls that took longer than a second to complete. Avg. Call Time is the average number of milliseconds each call took to complete.

Total SAF Calls is the number of calls zFS made to the security product via the SAF interface. high resp is the number of these security calls that took longer than a second to complete. Avg. Call Time is the average number of milliseconds each call took to complete.
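
As a simple illustration of how these counters can be combined (plain arithmetic, not a zFS interface), the approximate total time spent in SAF calls follows from the call count and the average call time in the sample report:

    # Illustrative arithmetic only: approximate the total time zFS spent in
    # SAF calls, using the counters from the sample report above.
    saf_calls = 11_050_540   # "Total SAF Calls"
    avg_call_msecs = 0.008   # "Avg. Call Time" for SAF calls

    total_secs = saf_calls * avg_call_msecs / 1000.0
    print(f"Roughly {total_secs:.1f} seconds spent in SAF calls")  # about 88.4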

zFS Fast Lookup Statistics: Shows the basic performance characteristics of the zFS fast lookup cache. The fast lookup cache is used on the owning system of a zFS sysplex-aware file system to improve the performance of the lookup operation. There are no externals for this cache (other than this display). The statistics show the total number of buffers (each is 8 K in size), the total number of lookups, the cache hits for lookups, and the hit ratio. The higher the hit ratio, the better the performance.
Metadata Caching Statistics: Shows the basic performance characteristics of the metadata cache. The metadata cache holds all disk blocks that contain metadata, and also the file data for files smaller than 7 K. For files smaller than 7 K, zFS places multiple files in one disk block (for zFS, a disk block is 8 K bytes). Only the lower metadata management layers have the block fragmentation information, so I/O for small user files is performed directly through this cache rather than through the user file cache.

The statistics show the total number of buffers (each is 8 K in size), the total bytes, the request rate, the hit ratio of the cache, Updates (the number of times an update was made to a metadata block), and PartialWrt (the number of times that only half of an 8-K metadata block needed to be written). The higher the hit ratio, the better the performance. Metadata is accessed frequently in zFS, and for the most part metadata is contained only in the metadata cache; therefore, a hit ratio of 80% or more is typically sufficient.
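
The derived columns in this section follow directly from the raw counters; the following sketch (Python, illustrative only) reproduces the cache size and hit ratio of the sample metadata cache line:

    # Reproduce the derived metadata cache values from the sample report:
    # total cache size (each buffer is 8 K) and the hit ratio.
    buffers = 32_768
    requests = 77_813_570
    hits = 77_529_130

    k_bytes = buffers * 8                 # 262144 K bytes, as reported
    hit_ratio = 100.0 * hits / requests   # about 99.6%, as reported
    print(f"{k_bytes} K bytes, hit ratio {hit_ratio:.1f}%")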

zFS I/O by Currently Attached Aggregate: The zFS I/O driver is essentially an I/O queue manager (one I/O queue per DASD). It uses Media Manager to issue I/O to VSAM data sets. It generally sends no more than one I/O per DASD volume to disk at one time. The exception is parallel access volume (PAV) DASD. These DASD often have multiple paths and can perform multiple I/Os in parallel. In this case, zFS divides the number of access paths by two and rounds any fraction up. (For example, for a PAV DASD with five paths, zFS issues at most three I/Os at one time to Media Manager.)
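
A minimal sketch of that limit (Python, illustrative only; the function name is invented for this example):

    import math

    # Parallel I/O limit described above: non-PAV DASD gets one I/O at a time;
    # for PAV DASD, zFS uses half the access paths, rounded up.
    def max_parallel_ios(access_paths, pav):
        if not pav:
            return 1
        return math.ceil(access_paths / 2)

    print(max_parallel_ios(5, pav=True))   # 3, matching the example in the text
    print(max_parallel_ios(8, pav=True))   # 4
    print(max_parallel_ios(4, pav=False))  # 1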

zFS limits the I/O because it uses a dynamic reordering and prioritization scheme to improve performance by reordering the I/O queue on demand. Thus, high-priority I/Os (for example, I/Os that are currently being waited on) are placed at the front of the queue. An I/O can be made high priority at any time during its life. This reordering has been shown to provide the best performance; for PAV DASD, performance tests have shown that not sending quite as many I/Os as there are available paths allows zFS to reorder I/Os and leave paths available for I/Os that become high priority.

Another feature of the zFS I/O driver is that queueing I/Os allows them to be canceled; for example, this is done when a file is written and then immediately deleted. Finally, the zFS I/O driver merges adjacent I/Os into one larger I/O to reduce I/O scheduling overhead. This is most often done with log file I/Os, because multiple log file I/Os are frequently in the queue at one time and the log file blocks are contiguous on disk. This allows log file pages to be written aggressively (making it less likely that users lose data in a failure) and yet batched together for performance when the disk has a high load.
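
The merging step can be illustrated with a small coalescing pass (Python; a sketch of the concept, not the zFS implementation):

    # Illustration only (not zFS code): coalesce queued requests whose block
    # ranges are adjacent on disk into one larger I/O, as described for log
    # file I/Os above. Each request is (start_block, block_count).
    def merge_adjacent(queue):
        merged = []
        for start, count in sorted(queue):
            if merged and merged[-1][0] + merged[-1][1] == start:
                prev_start, prev_count = merged[-1]
                merged[-1] = (prev_start, prev_count + count)  # extend previous I/O
            else:
                merged.append((start, count))
        return merged

    # Three contiguous log file writes collapse into a single larger I/O.
    print(merge_adjacent([(100, 2), (102, 2), (104, 4), (300, 1)]))
    # [(100, 8), (300, 1)]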

This section contains the following information:
  • PAV IOs, which shows how many I/Os zFS sends in parallel to Media Manager; non-PAV DASD always shows the value 1.
  • The DASD VOLSER for the primary extent of each aggregate, and the total number of I/Os and bytes read and written.
  • The number of times a thread processing a request had to wait on I/O, and the average wait time in milliseconds.
  • For each zFS aggregate, the name of the aggregate, followed by a line of its statistics.

By using this information with the KN report, you can break down zFS response time into the percentage of response time that is spent waiting on I/O. To reduce I/O waits, you can run with larger cache sizes. Small log files (small aggregates) that are heavily updated might require I/Os to sync metadata in order to reclaim log file pages, resulting in additional I/O waits. Note that this number is not DASD response time; it is affected by DASD response time, but it is not the same. If a thread does not have to wait for an I/O, it has no I/O wait; if a thread has to wait for an I/O while other I/Os are being processed, it might actually wait for more than one I/O (the time in the queue plus the time for the I/O).
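
As a rough worked example (plain arithmetic on the sample values, not a zFS interface), the total time threads spent waiting on I/O can be estimated from the two figures at the end of the report and then compared with the total request time from the KN report:

    # Rough estimate of total I/O wait time from the sample report figures.
    # Comparing this against total request time from the KN report gives the
    # percentage of response time that is I/O wait.
    io_waits = 8_722          # "Total number of waits for I/O"
    avg_wait_msecs = 115.334  # "Average I/O wait time"

    total_wait_secs = io_waits * avg_wait_msecs / 1000.0
    print(f"Approximately {total_wait_secs:.0f} seconds of I/O wait")  # ~1006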

This report, along with RMF™ DASD reports and the zFS FILE report, can also be used to balance zFS aggregates among DASD volumes to ensure an even I/O spread.