IBM Support

njmon and nimon Listing Tags, Measures & Statistics

How To


Summary

To draw Grafana graphs of the njmon or nimon data from an InfluxDB, you need to know the structure and names of measures and statistics. This blog covers how to find the names and values of these items.
Updated to include the extra IBM Virtual I/O Server (VIOS) measures and statistics (towards the end).

Environment

This blog relates to the new Nigel's Performance Monitoring software (njmon and nimon), which is an open-source project that you can find and download at http://nmon.sourceforge.net/pmwiki.php?n=Site.Njmon
Note:
  • The njmon command outputs JSON data (the meaning of the "J" in the name) and
  • The nimon command outputs InfluxDB Line Protocol data (the meaning of the "I" in the name)
But the statistic names and values are the same.
The only difference between njmon and nimon is the output format. They share common source code to extract the data from the AIX, VIOS and Linux operating systems. The data of both commands can be safely placed in the same InfluxDB database, which, for historic reasons, we call "njmon".
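As a rough illustration of the two formats (the statistic names below are real cpu_util statistics from the AIX sample later in this blog, but the values are just sample numbers), the same data can be parsed from either output to identical statistic names:

```python
import json

# The same "cpu_util" statistics as njmon JSON (a JSON document)...
njmon_json = '{"cpu_util": {"user_pct": 0.484, "kern_pct": 8.375, "idle_pct": 91.140}}'

# ...and as a nimon InfluxDB Line Protocol line: measurement,tags fields
nimon_line = "cpu_util,host=blue,os=AIX user_pct=0.484,kern_pct=8.375,idle_pct=91.140"

json_stats = set(json.loads(njmon_json)["cpu_util"])

fields = nimon_line.split(" ", 1)[1]                  # drop "measurement,tags"
lp_stats = {f.split("=")[0] for f in fields.split(",")}

print(json_stats == lp_stats)                         # same statistic names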
As the developer, I can get "expert blindness" in thinking everyone knows these details. Clearly, that is a mistake I need to fix with this blog. Even I forget some of the statistics that are available.
Here is how to extract the details of the tags, measures and statistics from the output.
What are Tags?
Tags are attached to all the stats. The tags are:
  1. "host" = hostname
  2. "os" = operating system like AIX or one of the flavours of Linux like RHEL, SLES or Ubuntu
  3. "architecture" = the computer CPU type like POWER9 (AIX), ppc64le (Linux), x86-64, arm
  4. "serial_no" = Serial number of the server (if available)
  5. "mtm" = "Machine Type Model", an IBM term. For non-IBM hardware, something similar is used
These Tags are used to select the data from the InfluxDB database:
  • For particular virtual machines (also called hostname, host or server),
  • Group AIX hosts into a list to select from,
  • Group Linux hosts into a list to select from,
  • Select all the virtual machines from a particular server (same serial number),
  • Select the servers of a particular type like POWER9 E950 (9040-MR9 Machine Type Model in IBM speak!)
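A minimal Python sketch of pulling those tags out of a nimon "identity" line (the tag values here are taken from the AIX sample output later in this blog):

```python
# The first token of a nimon Line Protocol line holds the measurement and its tags
line = ('identity,host=blue,os=AIX,architecture=POWER8_COMPAT_mode,'
        'serial_no=78049AA,mtm=IBM-9009-42A hostname="blue"')

tagset = line.split(" ", 1)[0]                    # "identity,host=blue,..."
tags = dict(t.split("=", 1) for t in tagset.split(",")[1:])

print(tags["host"], tags["os"], tags["mtm"])      # blue AIX IBM-9009-42A
```

In Grafana, these same tags become WHERE and GROUP BY clauses on the InfluxDB data.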
What are Measures?
Next, we have "Measures", which is an InfluxDB term. For computer stats, we have measures like the CPUs, memory, disks, networks, file systems, NFS, configuration details, kernel stats and more.
Each measure then has a number of statistics. For CPU utilisation, for example: User, System, Idle and Waiting for I/O. For disks we have: read and write KB/s, transfers per second and block sizes.
We could extract details of the measures and statistics from the njmon or nimon output. 
  1. JSON is easy to parse in Python (using a Python dictionary). njmon outputs the JSON on a single giant line, so the output file needs to be converted to a pretty (human-readable) format if you want to read or edit the JSON - use the line2pretty program to do this.
  2. The nimon output is in InfluxDB Line Protocol format, which is one measure per line and can be tackled via a simple shell script.  We use the nimon output as our starting point.
Use a shell script following these examples:
  On AIX  
$ nimon -s1 -c1 -f  
$ ls -ltr *lp  
-rw-------    1 nag      rtc           39750 Jul 29 10:08 blue_20200729_1008.influxlp  

$   
$ ./nimon_list_stats  blue_20200729_1008.influxlp   >nimon_aix_statistics.txt  

  On Linux  
$ nimon -s1 -c1 -f  
$ ls -ltr *lp  
-rw-rw-r--. 1 nag nag 44340 Jul 10 18:13 silver2_20200710_1813.influxlp  
$   
$ ../nimon_list_stats silver2_20200710_1813.influxlp >nimon_linux_statistics.txt  
$
Here is the "nimon_list_stats" script (it is also in the download package):
# nimon_list_stats  
# 1 parameter = the nimon -f output file in InfluxDB Line protocol format  
# Suggest nimon -s1 -c1 -f  
# nimon_list_stats hostname_datetime.influxlp    

file=$1    
echo File: $file  
echo  
echo Tags '- Used to graph a specific host, group by OS or model type etc.'  

cat $file | head -1 | awk '{ print $1 }' | sed 's/identity,//' | tr ',' '\n' |  awk '{printf("    %s\n", $1 ) }'    

TAGS=$(cat $file | head -1 | sed 's/,/ /' | awk '{ print $2 }')    
TMPFILE=/tmp/$0-$$  

cat $file | sed 's/'$TAGS'//' | sed 's/   $//' >$TMPFILE    
echo  
echo 'MEASURES - Trailing "i" means only an integer allowed, otherwise a "string" or floating point number'  
echo  
for i in $( cat $TMPFILE | awk '{ print $1 }' | sort | uniq )  
do          
    echo Measure: $(echo $i | tr , ' ' )          
    grep ^$i $TMPFILE | tail -1 | \                  
    sed 's/, / /g' | sed 's/@/+/g' | sed 's/IBM,/IBM/g' | sed 's/ /@/' | \
    awk -F@ '{ print $2 }'  |  tr ' ' '!' |tr ',' '\n' |  \
    awk '{printf("    %s\n", $1 ) }' | tr '!' ' '  
done  
rm $TMPFILE  
In the blog, we are not going to cover the details of this straightforward shell script. You will need to have a sample of the nimon output to understand it. A sample for AIX and Linux output is in the download package.
This shell script works with:
  • AIX Korn shell (ksh)
  • Linux bash
Although I noticed a different sort order between the AIX "sort" and Linux "sort" commands.
Note: the measures and statistics for AIX and Linux are very different.
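If shell is not your thing, the core idea of the script can also be sketched in Python. This is a simplified sketch (the sample lines are invented in the style of the AIX output, and it ignores quoted string values containing spaces or commas, which the shell script handles with its sed/tr juggling) that maps each measure to its statistic names:

```python
def list_stats(lp_lines):
    """Map each measurement name to its statistic (field) names,
    given simple nimon InfluxDB Line Protocol lines."""
    measures = {}
    for line in lp_lines:
        head, fields = line.split(" ", 1)       # "measure,tags" and "f1=v1,f2=v2,..."
        measure = head.split(",")[0]
        measures[measure] = [f.split("=")[0] for f in fields.split(",")]
    return measures

sample = [
    "cpu_util,host=blue,os=AIX user_pct=0.484,kern_pct=8.375,idle_pct=91.140",
    "disk_total,host=blue,os=AIX disks=4i,size=523264i,free=87808i",
]
for measure, stats in list_stats(sample).items():
    print("Measure:", measure)
    for name in stats:
        print("   ", name)
```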
What are Sub Measures?
There are also measures and sub measures. While there are disk totals and network totals, njmon/nimon also collects individual disk and network stats. The same goes for CPUs, adapters, volume groups and others. The individual statistics of a measure class (like "cpus" or "disks") are what I call sub measures (like "cpu1", "cpu2" or "hdisk0", "hdisk1"). With njmon these are handled with JSON records within the measure's JSON record. With nimon these are handled with extra Tags like "cpu=cpu1" or "disk=hdisk0".
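Continuing the same parsing idea, a sub measure is just one more tag in the Line Protocol tag set. A small sketch (the lines and values are invented; check your own nimon output for the exact tag name, as the AIX sample later in this blog labels it disk_name):

```python
# Two invented "disks" lines; the extra disk= tag names the sub measure
lines = [
    "disks,host=blue,os=AIX,disk=hdisk0 busy=14.911,xfers=502.499",
    "disks,host=blue,os=AIX,disk=hdisk1 busy=3.100,xfers=12.000",
]
for line in lines:
    tokens = line.split(" ", 1)[0].split(",")        # measurement then tags
    tags = dict(t.split("=", 1) for t in tokens[1:])
    print("Measure:", tokens[0], " Sub measure:", tags["disk"])
```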
How many Measures and Statistics?
For AIX on a small virtual machine (2 CPU cores, SMT=8, 4 disks and 2 networks):
  • 22 Measures
  • 116 Sub measures
  • 1450 Statistics
For Linux on Power on a small virtual machine (2 CPU cores, SMT=8, 1 disk and 1 network):
  • 14 Measures
  • 71 Sub measures
  • 838 Statistics
Why the large difference?
AIX has the excellent Perfstat C library to supply nearly all of the many statistics in a quick and consistent manner. Linux is a mess in this area, with dozens of scattered files (many in /proc), dozens of file formats and generally fewer statistics.
Statistics names and meaning?
For AIX, njmon and nimon use the statistics names as found in the perfstat library. Look in the /usr/include/libperfstat.h header file for a one-line description of most of the stats. The measure names are the perfstat function names used to get the data structures.
For Linux, it is tricky! Some descriptions are found in the manual pages for the related /proc files.
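For example, the Linux cpu_total statistics come from the first "cpu" line of /proc/stat, which is described in the proc(5) manual page. A minimal sketch (the jiffy counter values are invented) of naming those fields the way nimon does:

```python
# First line of /proc/stat: cumulative jiffies per CPU state - see proc(5)
sample = "cpu  74608 2520 24433 1117073 6176 4054 10 0 0 0"

names = ["user", "nice", "sys", "idle", "iowait",
         "hardirq", "softirq", "steal", "guest", "guestnice"]
stats = dict(zip(names, (int(v) for v in sample.split()[1:])))

print(stats["user"], stats["sys"], stats["idle"])
```

Note these are raw cumulative counters; nimon takes the difference between two snapshots to turn them into the per-second utilisation percentages shown in the samples below.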
What makes the output files larger?
If you have tens or hundreds of disks, then the number of statistics grows.
If you have "Top Process" statistics switched on (the -P option) and thousands of processes, then the number of statistics can be massive. One user reported terabytes of data. Note: njmon/nimon ignore processes not using the CPU at all. There is a settable threshold (the -t option).
If on AIX you have exotic extra resources like Spectrum Scale (GPFS), NFS, Power Processor Pools, VIOS virtual resources, VIOS Shared Storage Pools, many adapters, tape drives or Async I/O, then the number of statistics grows. These are automatically discovered and reported.
If on Linux you have exotic extra resources like Spectrum Scale (GPFS), GPUs or NFS, then the number of statistics grows. These are automatically discovered and reported, except that GPU support has to be compiled into the code because it uses a library.
What are the Measurement names?
Here are lists of the measurement names for AIX and then Linux:
AIX Measures:
  1. NFS_totals
  2. NFSv2server
  3. NFSv3server
  4. NFSv4server
  5. config
  6. cpu_details
  7. cpu_dispatch
  8. cpu_logical
  9. cpu_logical_total
  10. cpu_physical
  11. cpu_physical_total
  12. cpu_physical_total_spurr
  13. cpu_syscalls
  14. cpu_util
  15. disk_adapters
  16. disk_total
  17. disks
  18. filesystems
  19. identity
  20. kernel
  21. logicalvolumes
  22. lpar_format1
  23. lpar_format2
  24. memory
  25. memory_page
  26. netbuffers
  27. network_adapters
  28. network_interfaces
  29. network_total
  30. paging_spaces
  31. partition_type
  32. processes
  33. rperf
  34. server
  35. timestamp
  36. uptime
  37. vminfo
  38. volumegroups
Linux Measures:
  1. cpuinfo
  2. cpuinfo_power
  3. cpus
  4. cpu_total
  5. disks
  6. filesystems
  7. identity
  8. lscpu
  9. networks
  10. NFS3server
  11. NFS4server
  12. os_release
  13. ppc64_lparcfg
  14. proc_meminfo
  15. proc_version
  16. proc_vmstat
  17. stat_counters
  18. timestamp
  19. uptime
Download file with script and samples:  nimon_list_stats.zip
Content
  • The shell script
  • Sample AIX nimon output in InfluxDB Line Protocol - blue
  • Example AIX output report - nimon_aix_statistics.txt
  • Sample Linux nimon output in InfluxDB Line Protocol - silver2
  • Example Linux output report - nimon_linux_statistics.txt

With the details of Measures and statistics, I hope you can create more graphs and find the data you want.

What does the script output?
Example AIX output showing Tags, Measure names and Statistic names plus values. Note: much of the output has been trimmed, leaving one example of each Measure.
AIX 7.2 
  $ ./nimon_list_stats  blue_20200729_1008.influxlp  - - - - -    File: blue_20200729_1008.influxlp    Tags - Used to graph a specific host, group by OS or model type etc.      host=blue      os=AIX      architecture=POWER8_COMPAT_mode      serial_no=78049AA      mtm=IBM-9009-42A    MEASURES - Trailing "i" means only an integer allowed, otherwise a "string" or floating point number    Measure: config      partitionname="w3-blue"      nodename="blue"      processorFamily="POWER8_COMPAT_mode"      processorModel="IBM      9009-42A"      machineID="78049AA"      processorMHz=3300.000      pcpu_max=16i      pcpu_online=16i      OSname="AIX"      OSversion="7.2"      OSbuild="Jul 30 2018 09:23:49 1831A_72H"      lcpus=8i      smtthreads=4i      drives=4i      nw_adapter=2i      cpucap_min=10i      cpucap_max=400i      cpucap_desired=0i      cpucap_online=0i      cpucap_weightage=200i      entitled_proc_capacity=1.500      vcpus_min=1i      vcpus_max=4i      vcpus_desired=0i      vcpus_online=2i      processor_poolid=0i      activecpusinpool=16i      cpupool_weightage=200i      sharedpcpu=16i      maxpoolcap=1600i      entpoolcap=1600i      mem_min=2048i      mem_max=32768i      mem_desired=0i      mem_online=16384i      mem_weightage=0i      ams_totiomement=0i      ams_mempoolid=0i      ams_hyperpgsize=0i      expanded_mem_min=2048i      expanded_mem_max=32768i      expanded_mem_desired=0i      expanded_mem_online=16384i      ame_targetmemexpfactor=0i      ame_targetmemexpsize=0i      subprocessor_mode="0x00000000"  Measure: cpu_details      cpus_active=8i      cpus_configured=8i      mhz=3300.000      cpus_description="PowerPC_POWER9"  . . .  Measure: cpu_physical_total      user=2.386      sys=41.283      wait=0.000      idle=56.331  . . .  
Measure: cpu_util      user_pct=0.484      kern_pct=8.375      idle_pct=91.140      wait_pct=0.000      physical_busy=0.133      physical_consumed=0.304      idle_donated_pct=0.000      busy_donated_pct=0.000      idle_stolen_pct=0.000      busy_stolen_pct=0.000      entitlement=1.500      entitlement_pct=20.288      freq_pct=119.428      nominal_mhz=3300.000      current_mhz=3941.113  . . .  Measure: disk_adapters disk_adapter_name=vscsi5      description="Virtual SCSI Client Adapter"      adapter_type="SCSI SAS other"      devices=4i      size_mb=523264i      free_mb=87808i      capable_rate_kbps=0i      bsize=512i      transfers=502.499      rtransfers=0.000      wtransfers=502.499      read_kb=0.000      write_kb=32159.919      read_time=0.000      write_time=0.000      time=14.911  . . .  Measure: disk_total      disks=4i      size=523264i      free=87808i      xrate_read=0.000      xfers=502.499      read_blks=0.000      write_blks=64319.838      time=14.911      rserv=0.000      wserv=85833205.933      rtimeout=0.000      wtimeout=0.000      rfailed=0.000      wfailed=0.000      wq_time=60442401.172      wq_depth=0i  Measure: disks disk_name=hdisk1      description="Virtual SCSI Disk Drive"      vg="rootvg"      blocksize=512i      size_mb=130816i      free_mb=23296i      xrate_read=0.000      xfers=505.481      read_blks=0.000      write_blks=64701.558      read_mbps=0.000      write_mbps=32350.779      busy=14.911      qdepth=0i      rserv_min=0.000      rserv_max=0.000      rserv_avg=0.000      rtimeout=0i      rfailed=0i      wserv_min=245049.000      wserv_max=12068184.000      wserv_avg=1.330      wtimeout=0i      wfailed=0i      wqueue_time_min=427.000      wqueue_time_max=6382287.000      wqueue_time_avg=0.174      avgWQsz=0.000      avgSQsz=0.009      SQfull=26i      wq_depth=0i  . . .  
Measure: filesystems filesystem_name=/      mount="/temp"      device="/dev/fslv04"      size_mb=43008.000      free_mb=43001.113      used_percent=0.016      inode_percent=0.000  . . .  Measure: kernel      pswitch=4542.618      syscall=10548.000      sysread=861.107      syswrite=74.555      sysfork=38.768      sysexec=39.514      readch=1210971.252      writech=157812.933      devintrs=8059.364      softintrs=1122.048      load_avg_1_min=0.793      load_avg_5_min=0.346      load_avg_15_min=0.228      runque=3i      swpque=0i      run_queue=1.500      swp_queue=0.000      bread=0.000      bwrite=0.000      lread=0.000      lwrite=0.000      phread=0.000      phwrite=0.000      runocc_count=2i      swpocc_count=0i      runocc_avg=1i      swpocc_avg=0i      iget=4.473      namei=1265.193  . . .  Measure: logicalvolumes logicalvolume_name=hd1      vgname="rootvg"      open_close=1i      state="Defined=1"      mirror_policy=2i      mirror_write_consistency=1i      write_verify=2i      ppsize_mb=256i      logical_partitions=1i      mirrors=1i      iocnt=0.000      kbreads=0.000      kbwrites=0.000  . . .  Measure: lpar_format1      lpar_name="w3-blue"      min_memory=2048i      max_memory=32768i      memory_region=256i      dispatch_wheel_time=10000000i      lpar_number=27i      lpar_flags=11007i      max_pcpus_in_sys=16i      min_vcpus=1i      max_vcpus=4i      min_lcpus=1i      max_lcpus=32i      minimum_capacity=0.100      maximum_capacity=4.000      capacity_increment=0.010      smt_threads=8i      num_lpars=23i      servpar_id=0i      desired_capacity=1.500      desired_vcpus=2i      desired_memory=16384i      desired_variable_capwt=200i      true_max_memory=32768i      true_min_memory=2048i      ame_max_memory=0i      ame_min_memory=0i      spcm_status=0i      spcm_max=0i  . . .  
Measure: lpar_format2      online_memory=16384i      tot_dispatch_time=52i      pool_idle_time=1553i      dispatch_latency=5000000i      lpar_flags="0x00000056"      pcpus_in_sys=16i      online_vcpus=2i      online_lcpus=8i      pcpus_in_pool=16i      unalloc_capacity=0i      entitled_capacity=1.500      variable_weight=200i      unalloc_weight=0i      min_req_vcpu_capacity=5i      group_id=32795i      pool_id=0i      shcpus_in_sys=16i      max_pool_capacity=16.000      entitled_pool_capacity=16.000      pool_max_time=2745i      pool_busy_time=1267i      pool_scaled_busy_time=1513i      shcpu_tot_time=2745i      shcpu_busy_time=1267i      shcpu_scaled_busy_time=1513i      ent_mem_capacity=17179869184i      phys_mem=17179869184i      vrm_pool_physmem=0i      hyp_pagesize=4096i      vrm_pool_id=-1i      vrm_group_id=-1i      var_mem_weight=0i      unalloc_var_mem_weight=0i      unalloc_ent_mem_capacity=0i      true_online_memory=16384i      ame_online_memory=0i      ame_type=0i      ame_factor=0i      em_part_major_code=0i      em_part_minor_code=1i      bytes_coalesced=0i      bytes_coalesced_mempool=0i      purr_coalescing=0i      spurr_coalescing=0i  . . .  
Measure: memory      virt_total=4521984i      real_total=4194304i      real_free=25460i      real_pinned=616911i      real_inuse=4168844i      pgbad=0.000      pgexct=2835.316      pgins=0.000      pgouts=7992.265      pgspins=0.000      pgspouts=0.000      scans=0.000      cycles=0.000      pgsteals=0.000      numperm=3433289i      pgsp_total=327680i      pgsp_free=324008i      pgsp_rsvd=1280i      real_system=458312i      real_user=3578468i      real_process=277243i      virt_active=693485i      iome=17179869184i      iomu=36417536i      iohwm=39587840i      pmem=17179869184i      comprsd_total=0i      comprsd_wseg_pgs=0i      cpgins=0i      cpgouts=0i      true_size=0i      expanded_memory=0i      comprsd_wseg_size=0i      target_cpool_size=0i      max_cpool_size=0i      min_ucpool_size=0i      cpool_size=0i      ucpool_size=0i      cpool_inuse=0i      ucpool_inuse=0i      real_avail=3335234i      bytes_coalesced=0i      bytes_coalesced_mempool=0i  . . .  Measure: network_adapters network_adapter_name=ent0 network_adapter_type=Virtual      adapter_type="Virtual"      tx_packets=10388.453      tx_bytes=768614.310      tx_interrupts=0.000      tx_errors=0.000      tx_packets_dropped=0.000      tx_queue_size=0.000      tx_queue_len=0.000      tx_queue_overflow=0.000      tx_broadcast_packets=0.000      tx_multicast_packets=0.000      tx_carrier_sense=0.000      tx_DMA_underrun=0.000      tx_lost_CTS_errors=0.000      tx_max_collision_errors=0.000      tx_late_collision_errors=0.000      tx_deferred=0.000      tx_timeout_errors=0.000      tx_single_collision_count=0.000      tx_multiple_collision_count=0.000      rx_packets=23290.145      rx_bytes=34741074.560      rx_interrupts=7728.341      rx_errors=0.000      rx_packets_dropped=0.000      rx_bad_packets=0.000      rx_multicast_packets=0.000      rx_broadcast_packets=32.059      rx_CRC_errors=0.000      rx_DMA_overrun=0.000      rx_alignment_errors=0.000      rx_noresource_errors=0.000      
rx_collision_errors=0.000      rx_packet_tooshort_errors=0.000      rx_packet_toolong_errors=0.000      rx_packets_discardedbyadapter=0.000  . . .  Measure: network_total      networks=2i      ipackets=23293.128      ibytes=34415358.459      ierrors=0.000      opackets=10391.435      obytes=768960.244      oerrors=0.000      collisions=0.000      xmitdrops=0.000  . . .  Measure: paging_spaces paging_space_name=hd6      type="LV"      vgname="rootvg"      lp_size=3i      mb_size=768i      mb_used=4i      io_pending=0i      active=1i      automatic=1i  . . .  Measure: rperf      mtm="IBM-9009-42A"      nominal_mhz=3300.000      cpu_vp=2.000      cpu_entitled=1.500      cpu_consumed=0.304      official_rperf=219.400      official_cpus=8.000      rperf_vp=54.850      rperf_entitlement=41.138      rperf_consumed=8.346  . . .  Measure: server      aix_version=7.200      aix_technology_level=2i      aix_service_pack=2i      aix_build_year=2018i      aix_build_week=32i      serial_no="78049AA"      lpar_number_name="27      w3-blue"      machine_type="IBM-9009-42A"      uname_node="blue"  . . .  Measure: timestamp      datetime="2020-07-29T10:08:53"      UTC="2020-07-29T09:08:53"      snapshot_seconds=1i      snapshot_maxloops=1i      snapshot_loop=0i  . . .  Measure: uptime      days=101i      hours=11i      minutes=59i      users=23i  . . .  
Measure: vminfo      pgexct=3671.819      pgrclm=0.000      lockexct=0.000      backtrks=6.710      pageins=0.000      pageouts=8087.695      pgspgins=0.000      pgspgouts=0.000      numsios=8087.695      numiodone=505.481      zerofills=1557.448      exfills=0.000      scans=0.000      cycles=0.000      pgsteals=0.000      numfrb=25299i      numclient=3433449i      numcompress=0i      numperm=3433449i      maxperm=3647851i      memsizepgs=4194304i      numvpages=693486i      minperm=121595i      minfree=960i      maxfree=1088i      maxclient=3647851i      npswarn=10240i      npskill=2560i      minpgahead=2i      maxpgahead=8i      ame_memsizepgs=0i      ame_numfrb=0i      ame_factor_tgt=0i      ame_factor_actual=0i      ame_deficit_size=0i  . . .  Measure: volumegroups volumegroup_name=rootvg      total_disks=1i      active_disks=1i      total_logical_volumes=13i      opened_logical_volumes=12i      iocnt=501.753      kbreads=0.000      kbwrites=32112.204      variedState=0i  . . .
Example Linux output showing Tags, Measure names and Statistic names plus values. Note: much of the output has been trimmed, leaving one example of each Measure.
Linux Red Hat RHEL 7.7 on POWER
  File: silver2_20200710_1813.influxlp    Tags - Used to graph a specific host, group by OS or model type etc.      timestamp      host=silver2      os=RHEL      architecture=ppc64le      serial_no=067804930      mtm=9009-42A    MEASURES - Trailing "i" means only an integer allowed, otherwise a "string" or floating point number    Measure: cpuinfo cpuinfo_name=proc0      mhz_clock=3300.000  Measure: cpuinfo cpuinfo_name=proc1      mhz_clock=3300.000  . . .  Measure: cpuinfo_power      timebase="512000000"      power_timebase=512000000i      platform="pSeries"      model="IBM      9009-42A"      machine="CHRP IBM      9009-42A"  . . .  Measure: cpus cpu_name=cpu0      user=3.934      nice=0.000      sys=0.984      idle=92.449      iowait=0.000      hardirq=0.000      softirq=0.000      steal=0.000      guest=0.000      guestnice=0.000  . . .  Measure: cpu_total      user=5.440      nice=0.000      sys=0.061      idle=91.619      iowait=0.000      hardirq=0.000      softirq=0.000      steal=0.000      guest=0.000      guestnice=0.000  . . .  Measure: disks disk_name=sda      reads=0.000      rmerge=0.000      rkb=0.000      rmsec=0.000      writes=0.000      wmerge=0.000      wkb=0.000      wmsec=0.000      inflight=0i      busy=0.000      backlog=0.000      xfers=0.000      bsize=802816i  . . .  Measure: filesystems filesystem_name=/dev/mapper/rhel-root      fs_dir="/"      fs_type="xfs"      fs_opts="rw      seclabel      relatime      attr2      inode64      noquota"      fs_freqs=0i      fs_passno=0i      fs_bsize=4096i      fs_size_mb=40568i      fs_free_mb=10819i      fs_used_mb=29749i      fs_full_percent=73.331      fs_avail=10819i      fs_files=20781056i      fs_files_free=20633031i      fs_namelength=255i  . . .  
Measure: identity      hostname="silver2"      fullhostname="silver2"      njmon_command="./nimon_RHEL7_ppc64le_v64 -s1 -c2 -f "      njmon_version="nimon4Linux-v63-+01/07/2020"      username="nag"      userid=1000i      cookie="0xdeadbeef"      fullhostname1="silver2"      lo_IP4="127.0.0.1"      eth0_IP4="9.137.62.12"      lo_IP6="::1"      eth0_IP6="fe80::3871:32ff:fe1b:9604"      compatible="IBM9009-42A"      model="IBM9009-42A"      system-id="IBM067804930"  . . .  Measure: lscpu      architecture="ppc64le"      byte_order="Little Endian"      cpus="32"      online_cpu_list="0-31"      threads_per_core="8"      cores_per_socket="1"      sockets="4"      numa_nodes="1"      model="2.2 (pvr 004e 0202)"      model_name="POWER9 (architected) altivec supported"  . . .  Measure: networks network_name=eth0      ibytes=2330.899      ipackets=34.423      ierrs=0.000      idrop=0.000      ififo=0.000      iframe=0.000      obytes=643.210      opackets=6.885      oerrs=0.000      odrop=0.000      ofifo=0.000      ocolls=0.000      ocarrier=0.000  . . .  
Measure: os_release      name="Red Hat Enterprise Linux Server"      version="7.7 (Maipo)"      pretty_name="Red Hat Enterprise Linux Server 7.7 (Maipo)"      version_id="7.7"  Measure: ppc64_lparcfg      lparcfg_version="1.9"      serial_number="IBM      067804930"      system_type="IBM      9009-42A"      partition_id=3i      BoundThrds=1i      CapInc=1i      DisWheRotPer=5120000i      MinEntCap=10i      MinEntCapPerVP=5i      MinMem=1024i      MinProcs=1i      partition_max_entitled_capacity=1200i      system_potential_processors=16i      DesEntCap=200i      DesMem=8192i      DesProcs=4i      DesVarCapWt=128i      DedDonMode=0i      CapiLicensed=0i      ServicePartition=0i      NumLpars=23i      partition_entitled_capacity=200i      group=32771i      system_active_processors=16i      pool=0i      pool_capacity=1600i      pool_idle_time=75770186117111238i      pool_idle_cpu=13.763      pool_num_procs=16i      unallocated_capacity_weight=0i      capacity_weight=128i      capped=0i      unallocated_capacity=0i      physical_procs_allocated_to_virtualization=16i      max_proc_capacity_available=1600i      entitled_proc_capacity_available=1600i      entitled_memory=8589934592i      entitled_memory_group_number=32771i      entitled_memory_pool_number=65535i      entitled_memory_weight=0i      unallocated_entitled_memory_weight=0i      unallocated_io_mapping_entitlement=0i      entitled_memory_loan_request=0i      backing_memory=8589934592i      cmo_enabled=0i      dispatches=7107834111i      dispatch_dispersions=146821999i      purr=1080934848642868i      physical_consumed=1.650      partition_active_processors=4i      partition_potential_processors=12i      shared_processor_mode=1i      slb_size=32i      power_mode_data=1000000010001i  . . .  
Measure: proc_meminfo      MemTotal=7793728i      MemFree=325632i      MemAvailable=5350464i      Buffers=1216i      Cached=4352768i      SwapCached=35072i      Active=1137792i      Inactive=5386816i      Active_anon=560960i      Inactive_anon=518592i      Active_file=576832i      Inactive_file=4868224i      Unevictable=110976i      Mlocked=110976i      SwapTotal=4194240i      SwapFree=3781248i      Dirty=384i      Writeback=0i      AnonPages=2269888i      Mapped=172800i      Shmem=375808i      Slab=711360i      SReclaimable=150720i      SUnreclaim=560640i      KernelStack=11168i      PageTables=9536i      NFS_Unstable=0i      Bounce=0i      WritebackTmp=0i      CommitLimit=8091072i      Committed_AS=9743552i      VmallocTotal=60129542144i      VmallocUsed=0i      VmallocChunk=0i      HardwareCorrupted=0i      AnonHugePages=0i      ShmemHugePages=0i      ShmemPmdMapped=0i      CmaTotal=0i      CmaFree=0i      HugePages_Total=0i      HugePages_Free=0i      HugePages_Rsvd=0i      HugePages_Surp=0i      Hugepagesize=16384i  Measure: proc_version      version="Linux version 4.14.0-49.el7a.ppc64le (mockbuild+ppc-059.build.eng.bos.redhat.com) (gcc version 4.8.5 20150623 (Red Hat 4.8.5-28) (GCC)) #1 SMP Wed Mar 14 13:58:40 UTC 2018"  . . .  
Measure: proc_vmstat      nr_free_pages=5088i      nr_zone_inactive_anon=8103i      nr_zone_active_anon=8765i      nr_zone_inactive_file=76066i      nr_zone_active_file=9013i      nr_zone_unevictable=1734i      nr_zone_write_pending=6i      nr_mlock=1734i      nr_page_table_pages=149i      nr_kernel_stack=11168i      nr_bounce=0i      nr_zspages=0i      nr_free_cma=0i      numa_hit=417176075i      numa_miss=0i      numa_foreign=0i      numa_interleave=3365i      numa_local=417176075i      numa_other=0i      nr_inactive_anon=8103i      nr_active_anon=8765i      nr_inactive_file=76066i      nr_active_file=9013i      nr_unevictable=1734i      nr_slab_reclaimable=2355i      nr_slab_unreclaimable=8760i      nr_isolated_anon=0i      nr_isolated_file=0i      workingset_refault=102432860i      workingset_activate=1289007i      workingset_nodereclaim=1132629i      nr_anon_pages=35467i      nr_mapped=2700i      nr_file_pages=68579i      nr_dirty=6i      nr_writeback=0i      nr_writeback_temp=0i      nr_shmem=5872i      nr_shmem_hugepages=0i      nr_shmem_pmdmapped=0i      nr_anon_transparent_hugepages=0i      nr_unstable=0i      nr_vmscan_write=1194568i      nr_vmscan_immediate_reclaim=388053i      nr_dirtied=147935192i      nr_written=149056193i      nr_dirty_threshold=25782i      nr_dirty_background_threshold=8593i      pgpgin=10027222125i      pgpgout=9528289041i      pswpin=1137604i      pswpout=1192574i      pgalloc_dma=417209772i      pgalloc_dma32=0i      pgalloc_normal=0i      pgalloc_movable=0i      allocstall_dma=0i      allocstall_dma32=0i      allocstall_normal=16i      allocstall_movable=5568i      pgskip_dma=0i      pgskip_dma32=0i      pgskip_normal=0i      pgskip_movable=0i      pgfree=417214992i      pgactivate=5501382i      pgdeactivate=6426479i      pglazyfree=486833i      pgfault=358948266i      pgmajfault=1475284i      pglazyfreed=396734i      pgrefill=8708506i      pgsteal_kswapd=279747237i      pgsteal_direct=319763i      pgscan_kswapd=434298298i      
pgscan_direct=1172276i      pgscan_direct_throttle=0i      zone_reclaim_failed=0i      pginodesteal=31i      slabs_scanned=9359819i      kswapd_inodesteal=150833i      kswapd_low_wmark_hit_quickly=6731i      kswapd_high_wmark_hit_quickly=243531i      pageoutrun=437390i      pgrotated=1791507i      drop_pagecache=0i      drop_slab=0i      oom_kill=0i      numa_pte_updates=0i      numa_huge_pte_updates=0i      numa_hint_faults=0i      numa_hint_faults_local=0i      numa_pages_migrated=0i      pgmigrate_success=0i      pgmigrate_fail=0i      compact_migrate_scanned=0i      compact_free_scanned=0i      compact_isolated=0i      compact_stall=0i      compact_fail=0i      compact_success=0i      compact_daemon_wake=0i      compact_daemon_migrate_scanned=0i      compact_daemon_free_scanned=0i      htlb_buddy_alloc_success=0i      htlb_buddy_alloc_fail=0i      unevictable_pgs_culled=1392i      unevictable_pgs_scanned=0i      unevictable_pgs_rescued=3037i      unevictable_pgs_mlocked=4771i      unevictable_pgs_munlocked=2446i      unevictable_pgs_cleared=591i      unevictable_pgs_stranded=0i      thp_fault_alloc=0i      thp_fault_fallback=0i      thp_collapse_alloc=0i      thp_collapse_alloc_failed=0i      thp_file_alloc=0i      thp_file_mapped=0i      thp_split_page=0i      thp_split_page_failed=0i      thp_deferred_split_page=0i      thp_split_pmd=0i      thp_zero_page_alloc=0i      thp_zero_page_alloc_failed=0i      thp_swpout=0i      thp_swpout_fallback=0i      balloon_inflate=0i      balloon_deflate=0i      balloon_migrate=0i      swap_ra=143524i      swap_ra_hit=102728i  . . .  Measure: stat_counters      ctxt=33420.377      btime=1593473114i      processes_forks=4.918      procs_running=1i      procs_blocked=0i  . . .  Measure: timestamp      datetime="2020-07-10T18:13:17"      UTC="2020-07-10T17:13:17"      snapshot_seconds=1i      snapshot_maxloops=2i      snapshot_loop=1i  . . .  Measure: uptime      days=10i      hours=17i      minutes=48i      users=4i    
What about the VIOS 3.1 Measures and Stats? 
If you run njmon or nimon specifically for VIOS 3.1 with the extra command line options -v, -u and -U:
  • -v      : VIOS data on virtual disks, virtual FC and virtual networks
  • -u      : VIOS SSP data like pool, pv and LU
  • -U      : VIOS SSP data like -u plus VIOS cluster data. Warning: this can add 2 seconds per VIOS in the SSP cluster
     
Then you get the following extra measures and statistics - only one example of each is shown.
    Virtual Networking  Measure: network_bridged network_bridged_name=ent0 network_adapter_type=Physical  Measure: network_bridged network_bridged_name=ent4 network_adapter_type=Virtual    Shared Storage Pool (SSP)  Measure: ssp_global  Measure: ssp_lu ssp_lu_name=SSPVolume_1  Measure: ssp_lu ssp_lu_name=blueroot  Measure: ssp_node ssp_node_name=redvios1.aixncc.uk.ibm.com  Measure: ssp_pv ssp_pv_name=hdisk4    Virtual Disks  Measure: vios_disk_target vios_disk_target_name=vopt192  Measure: vios_disk_target vios_disk_target_name=vtscsi14  Measure: vios_vhost vios_vhost_name=vhost10  Measure: vios_virtual_fcadapter vios_virtual_fcadapter_name=vfchost0 client_part_name=none  
VIOS Network bridged stats
  Measure: network_bridged network_bridged_name=ent4 network_adapter_type=Virtual      adapter_type="Virtual"      tx_packets=2036.900      tx_bytes=960788.882      tx_interrupts=0.000      tx_errors=0.000      tx_packets_dropped=0.000      tx_queue_size=0.000      tx_queue_len=0.000      tx_queue_overflow=0.000      tx_broadcast_packets=0.000      tx_multicast_packets=0.000      tx_carrier_sense=0.000      tx_DMA_underrun=0.000      tx_lost_CTS_errors=0.000      tx_max_collision_errors=0.000      tx_late_collision_errors=0.000      tx_deferred=0.000      tx_timeout_errors=0.000      tx_single_collision_count=0.000      tx_multiple_collision_count=0.000      rx_packets=1397.565      rx_bytes=662193.786      rx_interrupts=1147.065      rx_errors=0.000      rx_packets_dropped=0.000      rx_bad_packets=0.000      rx_multicast_packets=0.000      rx_broadcast_packets=0.000      rx_CRC_errors=0.000      rx_DMA_overrun=0.000      rx_alignment_errors=0.000      rx_noresource_errors=0.000      rx_collision_errors=0.000      rx_packet_tooshort_errors=0.000      rx_packet_toolong_errors=0.000      rx_packets_discardedbyadapter=0.000  
VIOS SSP Stats
  Measure: ssp_global
    ClusterName="orbit"  PoolName="orbit"
    TotalSpace_MB=6289408i  TotalUsedSpace_MB=3049957i
  Measure: ssp_lu ssp_lu_name=SSPVolume_1
    type="THIN_LU"  size_MB=32768i  free_MB=2901i  usage_MB=29868i  client_LPAR_id=17i
    MTM="9009-22A067804940"  VTDname="vtscsi10"  DRCname="U9009.22A.7804940-V3-C8"
    udid="f4583ef95b5c90fe6f7d4dfc31db213a"
  Measure: ssp_lu ssp_lu_name=blueroot
    type="THIN_LU"  size_MB=131072i  free_MB=106171i  usage_MB=9550i  client_LPAR_id=27i
    MTM="9009-42A067804930"  VTDname="vtscsi9"  DRCname="U8408.E8E.21D494V-V1-C10"
    udid="ecb6aad83bf1377ba2aaa8b8f6c54537"
  Measure: ssp_node ssp_node_name=ambervios2.aixncc.uk.ibm.com
    ipaddress="9.137.62.102"  MTMS="9009-22A067804940"  lparid=2i  ioslevel="3.1.1.25"
    status="OK"  poolstatus="OK"
  Measure: ssp_pv ssp_pv_name=hdisk10
    capacity_MB=393216i  free_MB=393152i  tiername="SYSTEM"  failure_group="v7000_tan"
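A useful Grafana panel for the ssp_global measure is pool utilisation as a percentage. The helper below is a hypothetical sketch (not part of njmon/nimon) using the example values; the trailing "i" on the fields marks InfluxDB integer values and must be stripped if you handle the raw Line Protocol yourself:

```python
# SSP pool utilisation from the ssp_global example fields (sketch).
def pct_used(total_mb, used_mb):
    return 100.0 * used_mb / total_mb

# The "i" suffix is Line Protocol notation for integer fields.
total = int("6289408i".rstrip("i"))   # TotalSpace_MB
used  = int("3049957i".rstrip("i"))   # TotalUsedSpace_MB
pct   = pct_used(total, used)         # roughly 48.5% used
```

InfluxDB itself stores these as plain integers, so in a Grafana query you would simply divide the two fields.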
VIOS Virtual optical and disks
  Measure: vios_disk_target vios_disk_target_name=vopt192
    blocksize=2048i  size_mb=0i  free_mb=0i
    xrate_read=0.000  xfers=0.000  read_blks=0.000  write_blks=0.000
    read_mbps=0.000  write_mbps=0.000  busy=0.000  qdepth=0i
    rserv_min=0.000  rserv_max=0.000  rserv_avg=0.000  rtimeout=0i  rfailed=0i
    wserv_min=0.000  wserv_max=0.000  wserv_avg=0.000  wtimeout=0i  wfailed=0i
    wqueue_time_min=0.000  wqueue_time_max=0.000  wqueue_time_avg=0.000
    avgWQsz=0.000  avgSQsz=0.000  SQfull=0i  wq_depth=0i
  Measure: vios_disk_target vios_disk_target_name=vtscsi2
    blocksize=512i  size_mb=0i  free_mb=0i
    xrate_read=0.748  xfers=29.163  read_blks=23.928  write_blks=2454.899
    read_mbps=11.964  write_mbps=1227.449  busy=0.000  qdepth=0i
    rserv_min=3935093.000  rserv_max=3935093.000  rserv_avg=7.686  rtimeout=0i  rfailed=0i
    wserv_min=159241.000  wserv_max=761654.000  wserv_avg=0.639  wtimeout=0i  wfailed=0i
    wqueue_time_min=0.000  wqueue_time_max=0.000  wqueue_time_avg=0.000
    avgWQsz=0.000  avgSQsz=0.004  SQfull=0i  wq_depth=0i
  Measure: vios_vhost vios_vhost_name=vhost12
    adapter_type="Virtual SCSI/SAS Adapter"
    devices=0i  size_mb=0i  free_mb=0i  capable_rate_kbps=0i  bsize=0i
    transfers=6.730  rtransfers=0.000  wtransfers=6.730
    read_kb=0.000  write_kb=0.000  read_time=0.000  write_time=0.000  time=0.000
  Measure: vios_virtual_fcadapter vios_virtual_fcadapter_name=vfchost0 client_part_name=none
    state="unknown"
    InputRequests=0.000  OutputRequests=0.000  InputBytes=0.000  OutputBytes=0.000
    EffMaxTransfer=0i  NoDMAResourceCnt=0i  NoCmdResourceCnt=0i
    AttentionType="Link down"  SecondsSinceLastReset=0i
    TxFrames=0.000  TxWords=0.000  RxFrames=0.000  RxWords=0.000
    PortSpeed=0i  PortSupportedSpeed=0i  PortFcId=0i  PortWWN="0xc050760a69da0002"
    adapter_type="Virtual Fibre Channel"  physical_name="fcs1"
- - - The End - - -

Additional Information


Find more content from Nigel Griffiths (IBM, retired) here:

Document Location

Worldwide

Applies to: AIX (All Versions) and Power PowerLinux on Linux (All Versions)

Document Information

Modified date:
20 December 2023

UID

ibm11165432