I/O statistics

The I/O statistics sections in the nmon recording file contain statistics about disks, disk adapters, Enterprise Storage Server (ESS) disks, disk groups, and file systems.

The following sections in the nmon recording file contain the I/O statistics:
Table 1. Sections for I/O statistics
Section Description
FILE Records statistics for different file I/O operations. These statistics are recorded by default. This section contains the following fields (a parsing sketch follows this list):
iget
Number of inode lookup operations per second.
namei
Number of vnode operations by using the path name per second.
dirblk
Number of 512-byte blocks that are read per second by the directory search routine to locate an entry for a file.
readch
Number of characters that are transferred by using read system calls per second.
writech
Number of characters that are transferred by using write system calls per second.
ttyrawch
Number of raw input characters.
ttycanch
Number of canonical input characters.
ttyoutch
Number of raw output characters.
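Like the other sections in the recording, the FILE section is written as comma-separated lines that begin with the section tag. The following Python sketch extracts the FILE lines from a recording; the file name lpar01.nmon is hypothetical, and the layout assumption (the first FILE line names the fields and every later FILE line carries an interval tag followed by the values) should be verified against your own recording.

import csv

# Minimal sketch: collect the FILE section from an nmon recording.
# "lpar01.nmon" is a hypothetical file name; the header/value layout
# described in the comments is an assumption about the recording format.
def read_file_section(path="lpar01.nmon"):
    header = None          # field names from the first FILE line
    samples = []           # one dict of field -> value per interval
    with open(path, newline="") as fh:
        for row in csv.reader(fh):
            if not row or row[0] != "FILE":
                continue
            if header is None:
                # First FILE line: "FILE,<description>,iget,namei,dirblk,..."
                header = row[2:]
            else:
                # Later FILE lines: "FILE,<interval tag>,<value>,<value>,..."
                values = [float(v) for v in row[2:]]
                samples.append(dict(zip(header, values)))
    return samples

if __name__ == "__main__":
    for sample in read_file_section():
        print(sample.get("readch"), sample.get("writech"))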
DISK Records I/O statistics for each disk. These statistics are recorded by default. For every disk, the same set of DISK* tags is repeated. Disk metrics are displayed in the following generic format (a parsing sketch follows this entry):
DISK*,<Description> <runname>, <diskname1>,...,<disknameN>
where <diskname1>...<disknameN> represents the list of disk names that are present in the LPAR or VIOS. This section contains the following metrics:
DISKBUSY, Disk %Busy
Percentage of time during which the disk is active.
DISKREAD, Disk Read KB/s
Amount of data that is read from the disk in KB per second.
DISKWRITE, Disk Write KB/s
Amount of data that is written to the disk in KB per second.
DISKXFER, Disk transfers per second
Number of transfers per second.
DISKRXFER, Transfers from disk (reads) per second
Number of read transfers per second.
DISKBSIZE, Disk Block Size
Total number of disk blocks that are read and written over the interval.
DISKRIO, Disk IO Reads per second
Number of disk read I/O transfers per second.
DISKWIO, Disk IO Writes per second
Number of disk write I/O transfers per second.
DISKAVGRIO, Disk IO Average Reads KBs/xfer
Average number of KBs that are read from the disk per read I/O operation.
DISKAVGWIO, Disk IO Average Writes KBs/xfer
Average number of KBs that are written to the disk per write I/O operation.
DISKSERV, Disk Service Time msec/xfer
Average disk I/O service time per transfer in milliseconds.
DISKREADSERV, Disk Read Service Time msec/xfer
Average read disk service time per transfer in milliseconds.
DISKWRITESERV, Disk Write Service Time msec/xfer
Average write disk service time per transfer in milliseconds.
DISKWAIT, Disk Wait Queue Time msec/xfer
Average time spent in the disk wait queue per transfer in milliseconds.

If the number of service requests is greater than the disk queue depth, the requests are moved to the wait queue. This metric provides the time that the I/O requests spend in the wait queue.
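Because every DISK* line lists one value per disk, in the same order as the disk names in its header line, per-disk values can be recovered by pairing each data line with its header. The sketch below, under the same layout assumptions and hypothetical file name as the previous example, averages DISKBUSY over all recorded intervals for each disk.

import csv
from collections import defaultdict

# Sketch: average the DISKBUSY values per disk across all intervals.
# Assumes the first DISKBUSY line names the disks and later DISKBUSY
# lines carry one numeric value per disk in the same column order.
def average_disk_busy(path="lpar01.nmon"):
    disks = None
    totals = defaultdict(float)
    counts = defaultdict(int)
    with open(path, newline="") as fh:
        for row in csv.reader(fh):
            if not row or row[0] != "DISKBUSY":
                continue
            if disks is None:
                disks = row[2:]              # header: disk names
                continue
            for disk, value in zip(disks, row[2:]):
                totals[disk] += float(value) # data: % busy per disk
                counts[disk] += 1
    return {d: totals[d] / counts[d] for d in totals if counts[d]}

if __name__ == "__main__":
    for disk, busy in sorted(average_disk_busy().items()):
        print(f"{disk}: {busy:.1f}% busy on average")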

IOADAPT Records disk adapter statistics. These statistics are based on the average of all the disks and MPIO links that are connected to the adapter. These statistics are recorded by default. This section contains the following fields:
<AdapterName>_read-KB/s
Amount of data that is read through the adapter in KB per second.
<AdapterName>_write-KB/s
Amount of data that is written through the adapter in KB per second.
<AdapterName>_xfer-tps
Total I/O transfers on the adapter per second.
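Because the IOADAPT field names embed the adapter name as a prefix (for example, a column such as fcs0_read-KB/s for a hypothetical adapter fcs0), the columns can be grouped back into per-adapter records by splitting on the first underscore. A small Python sketch under that assumption, with made-up values:

# Sketch: group IOADAPT columns such as "fcs0_read-KB/s" back into
# per-adapter records. Column names and the one-header-then-data layout
# are assumptions based on the format described above.
def group_adapter_columns(header_fields, data_fields):
    adapters = {}
    for name, value in zip(header_fields, data_fields):
        adapter, _, metric = name.partition("_")   # "fcs0", "read-KB/s"
        adapters.setdefault(adapter, {})[metric] = float(value)
    return adapters

# Example with made-up values for a hypothetical adapter fcs0:
header = ["fcs0_read-KB/s", "fcs0_write-KB/s", "fcs0_xfer-tps"]
data = ["512.0", "128.0", "40.0"]
print(group_adapter_columns(header, data))
# {'fcs0': {'read-KB/s': 512.0, 'write-KB/s': 128.0, 'xfer-tps': 40.0}}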
BBBD Records disk adapter configuration details. An example of this section follows:
BBBD,000,Disk Adapter Information,<hostname>
BBBD,001,Adapter_number,Name,Disks,Description
BBBD,002,<values corresponding to the header in BBBD,001>
This section is displayed only once in the nmon recording file.
Note: Dynamic configuration changes are not displayed in this section.
These statistics are recorded by default. You can use the -D option to disable the recording of the BBBD section. This section contains the following fields (a parsing sketch follows this entry):
Adapter_number
Index created by the nmon tool for the adapter.
Name
Name of the disk adapter.
Disks
Number of disks that are attached to the disk adapter, including MPIO disks.
Description
Description of the adapter as specified in the Object Data Manager (ODM) catalog.
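Because the BBBD,001 line names the columns and each following BBBD line carries the values for one adapter, the configuration can be turned into a list of records. The following Python sketch does this for the example layout shown above; the file name is hypothetical and the exact BBBD numbering is an assumption.

import csv

# Sketch: turn the BBBD configuration lines into one dict per adapter.
# Assumes "BBBD,001,..." is the column header and every later BBBD line
# holds the values for one adapter, as in the example layout above.
def read_adapter_config(path="lpar01.nmon"):
    columns = None
    adapters = []
    with open(path, newline="") as fh:
        for row in csv.reader(fh):
            if not row or row[0] != "BBBD":
                continue
            if row[1] == "001":
                columns = [c.strip() for c in row[2:]]
            elif columns and row[1] not in ("000", "001"):
                adapters.append(dict(zip(columns, (v.strip() for v in row[2:]))))
    return adapters

if __name__ == "__main__":
    for adapter in read_adapter_config():
        print(adapter.get("Name"), adapter.get("Description"))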
ESS Records IBM® TotalStorage® ESS device statistics. These statistics are recorded by default if ESS disks are present. This section contains the following fields:
ESSREAD, ESS Logical Disks read KB/s
Total KBs of data that is read from ESS virtual paths per second.
ESSWRITE, ESS Logical Disks write KB/s
Total KBs of data written to ESS virtual paths per second.
ESSXFER, ESS Logical Disks Transfers
Total number of transfers per second on the ESS paths.
BBBE Records ESS configuration details for all ESS virtual paths. An example of this section follows:
BBBE, 000, ESS vpath Logical Disks
BBBE, 001, <Config details>
The BBBE, 001 line provides configuration details for each ESS virtual path. This section is recorded only once in the ESS statistics recording. These statistics are recorded by default. You can use the -E option to disable the recording of this section. This section contains the following fields:
Number
Index of the virtual path.
SizeMB
Size of the ESS virtual path in MB.
Name
Name of the virtual path.
Disks
Number of disks that belong to this ESS virtual path.
hdisks
List of disk names that belong to this ESS path.
BBBSSP Records shared storage pool (SSP) information. SSP information is recorded only if SSP statistics collection is enabled. You can use the -y sub=ssp option to enable these statistics.
Note: SSP provides distributed storage access to all VIOS logical partitions in a cluster. All the VIOS logical partitions within a cluster must have access to all the physical volumes in a shared storage pool.
This section contains the following fields:
Cluster Name
Name of the SSP cluster.
Pool Name
Name of the shared storage pool.
Pool Size
SSP size in MBs.
Pool Used
Percentage of SSP usage over the total shared storage size.
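Because Pool Size is reported in MB and Pool Used is a percentage of that size, the absolute usage can be derived directly. A trivial Python illustration with made-up numbers:

# Sketch: derive absolute SSP usage from the BBBSSP fields.
# The numbers below are made up for illustration.
pool_size_mb = 1_048_576          # Pool Size field (MB)
pool_used_pct = 37.5              # Pool Used field (% of pool size)

pool_used_mb = pool_size_mb * pool_used_pct / 100
pool_free_mb = pool_size_mb - pool_used_mb
print(f"used {pool_used_mb:.0f} MB, free {pool_free_mb:.0f} MB")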
BBBSSP_CLIENT Records the SSP client and logical unit mapping statistics. This section contains the following fields:
PoolName_ClientId
SSP name and partition number of the VIOS client, joined by the _ delimiter.
Machine Type Model (MTM)
Machine type and model of the SSP client.
LU Size Used
Physical usage of the logical unit in MB.
Note: A logical unit is a file-backed storage device that is located in the cluster file system of the shared storage pool.
BBBF Fibre Channel adapter Records Fibre Channel adapter statistics in the following format:
BBBF, FC Adapter stats, FCs found, <number of Fibre Channel adapters>, <list of all the Fibre Channel adapter names>
You can use the ^ option to enable the recording of these statistics. This section contains the following fields:
BBBF, FC_<index>, <FC Adapter Name>, fcstat command output
This field is listed for each Fibre Channel adapter.
FC* Fibre Channel adapter utilization Records the utilization statistics of the Fibre Channel adapters. This section contains the following fields:
FCREAD, Fibre Channel Read KB/s, <FCNAMES>
Total data in KB that is read on the adapter per second.
FCWRITE, Fibre Channel Write KB/s
Total amount of data in KB that is written to the adapter per second.
FCXFERIN, Fibre Channel Transfers In/s
Total number of read requests per second on the adapter.
FCXFEROUT, Fibre Channel Transfers Out/s
Total number of write requests per second on the adapter.
BBBVFC, Virtual Fibre Channel Adapter Information Records N_Port ID Virtualization (NPIV) configuration details. NPIV is an industry-standard technology that configures an NPIV-capable Fibre Channel adapter with multiple virtual worldwide port names (WWPNs). This technology is also called virtual Fibre Channel, which is a method to securely share a physical Fibre Channel adapter among multiple Virtual I/O Servers.
The metrics of this section are recorded in the following format:
BBBVFC,<Metrics list>
Run the following command to print all these configuration details:
lsmap -all -npiv
If the Fibre Channel statistics are enabled by using the ^ option, this section is also recorded. This section contains the following fields:
vfchost name
Virtual Fibre Channel host adapter name.
client name
Name of the client partition that has the virtual Fibre Channel adapter.
WWPN
Worldwide port name.
FC Adapter Name
Fibre Channel port name.
NPIV, Virtual FC Adapter, <metrics list> Records NPIV utilization statistics. This section contains the following fields:
<NPIV_Name>_read-KB/s
Total amount of data in KB that is read per second on this adapter.
<NPIV_Name>_write-KB/s
Total amount of data in KB that is written per second to this adapter.
<NPIV_Name>_reads/s
Number of read requests per second on this adapter.
<NPIV_Name>_writes/s
Number of write requests per second on this adapter.
<NPIV_Name>_port_speed
Speed of this adapter in GB per second.
BBBVG Records volume group configuration details in the following format: BBBVG, 000, <Volume group name>, <number of disks>. This section is recorded for each volume group. You can use the -V option to enable the recording of these statistics. This section contains the following fields:
Volume Group Name
Name of the volume group.
Number of Disks
Number of disks or physical volumes that belong to the volume group.
VG* Records volume group statistics. Each metric is recorded for all volume groups that are present on the LPAR. This section contains the following fields:
VGBUSY, Disk Busy Volume Group, <list of volume groups>
Average busy time of all disks in the volume group.
VGREAD, Disk Read KB/s Volume Group, <list of volume groups>
Total data in KB that is read per second by all disks in the volume group.
VGWRITE, Disk Write KB/s Volume Group, <list of volume groups>
Total data in KB written per second by all disks in the volume group.
VGXFER, Disk Xfer Volume Group, <list of volume groups>
Total number of transfers per second by all disks that belong to this volume group.
VGSIZE, Disk Size KB Volume Group, <list of volume groups>
Average size of data, in KB, that is read from or written to the disks in the volume group per transfer.
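Under the same header-then-data layout assumed in the earlier sketches, VGREAD and VGWRITE can be combined to rank volume groups by total throughput. A minimal Python sketch; the file name is hypothetical.

import csv
from collections import defaultdict

# Sketch: sum VGREAD and VGWRITE per volume group to rank volume groups
# by total KB/s moved over the recording. Assumes the first VGREAD and
# VGWRITE lines name the volume groups and later lines carry the values.
def volume_group_throughput(path="lpar01.nmon"):
    headers = {}                      # section tag -> volume group names
    totals = defaultdict(float)       # volume group -> summed KB/s
    with open(path, newline="") as fh:
        for row in csv.reader(fh):
            if not row or row[0] not in ("VGREAD", "VGWRITE"):
                continue
            tag = row[0]
            if tag not in headers:
                headers[tag] = row[2:]
            else:
                for vg, value in zip(headers[tag], row[2:]):
                    totals[vg] += float(value)
    return sorted(totals.items(), key=lambda item: item[1], reverse=True)

if __name__ == "__main__":
    for vg, kbs in volume_group_throughput():
        print(f"{vg}: {kbs:.0f} KB/s (summed over intervals)")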
DG* Records disk group statistics. Multiple disks can be monitored by grouping them. You must create a group configuration file that contains the following information:
<Group_name1> <disk_name1> <disk_name2> ....
<Group_name2> <disk_nameA> <disk_nameB> ... 

In the example, <Group_name1> is the name of the first disk group; <disk_name1> and <disk_name2> are the first and second disks in that group.

You can use the -g option to enable the recording of these statistics. This section contains the following fields:

BBBG, <number>, diskgroupname, <disknames>
The disk group name for which the statistics are recorded and all the disks that belong to this disk group.
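As an example, a disk group file for two hypothetical groups might be prepared as follows before the recording is started with the -g option; the group names, disk names, and file path are all illustrative.

# Sketch: write a disk group configuration file for the -g option.
# Group names, disk names, and the path are illustrative only.
groups = {
    "dbdisks": ["hdisk2", "hdisk3"],
    "logdisks": ["hdisk4"],
}

with open("/tmp/diskgroups.cfg", "w") as fh:
    for group, disks in groups.items():
        # One line per group: "<Group_name> <disk_name1> <disk_name2> ..."
        fh.write(f"{group} {' '.join(disks)}\n")

# The recording would then be started with something like:
#   nmon -f -g /tmp/diskgroups.cfg ...
# (shown here only as an illustration of how the file is consumed)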
Disk Group Records disk group utilization statistics. This section contains the following fields:
DGBUSY, Disk Group Busy, <hostname>, <disk group name>
Average busy time of all disks in the disk group.
DGREAD, Disk Group Read KB/s, <hostname>, <disk group name>
Total data in KB that is read per second by all disks that belong to the disk group.
DGWRITE, Disk Group Write KB/s, <hostname>, <disk group name>
Total data in KB that is written per second by all disks that belong to the disk group.
DGSIZE, Disk Group Block Size KB, <hostname>, <disk group name>
Average size of data, in KB, that is read from or written to the disks in the disk group per transfer.
DGXFER, Disk Group Transfers/s, <hostname>, <disk group name>
Total transfers per second by all disks that belong to this disk group.
JFS* Records the Journaled File System (JFS) statistics. The proc file system is not recorded because it is a pseudo file system. JFS statistics are recorded by default. This section contains the following fields:
JFSFILE, JFS Filespace %Used, <JFS names>
Percentage of file space that is used by this JFS over the total space that is allocated to it.
JFSINODE, JFS Inode %Used, <JFS names>
Percentage of inodes that are used by the JFS over the total inodes present on the LPAR.
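As a final illustration, the JFSFILE lines can be scanned for file systems whose space usage crosses a threshold; the same header-then-data layout and hypothetical file name are assumed as in the earlier sketches.

import csv

# Sketch: report JFS file systems whose %Used value ever exceeds a threshold.
# Assumes the first JFSFILE line names the file systems and later JFSFILE
# lines carry one %Used value per file system in the same column order.
def full_filesystems(path="lpar01.nmon", threshold=90.0):
    names = None
    flagged = set()
    with open(path, newline="") as fh:
        for row in csv.reader(fh):
            if not row or row[0] != "JFSFILE":
                continue
            if names is None:
                names = row[2:]
                continue
            for fs, used in zip(names, row[2:]):
                if used.strip() and float(used) > threshold:
                    flagged.add(fs)
    return sorted(flagged)

if __name__ == "__main__":
    print("File systems over 90% full:", full_filesystems())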