
Using ndisk64 to test new disk set-ups like Flash Systems

How To


Summary

So you have some fancy new disks - how can you give them a good hammering to show their excellent performance? Read on.

Objective


Steps

This is a typical request from IBMers in services, or directly from customers, wanting to confirm a good disk setup or that the disks deliver as promised. This time: "... a test of SAN disk performance where LUNs are carved from an IBM Flash System with SVC. The primary objective is to show the IBM Flash System can do hundreds of thousands of IOPS with response times under 1 ms and good throughput. This is for an Oracle RDBMS using ASM."

Wow! Those new Flash Systems really are the next generation of disks. Disk testing is not trivial, and you should assign a good number of days - although Flash drives are much simpler to test than those ancient, historic, brown spinning platters!!!  Remember, 2013 is the year the Winchester disk started to be phased out.  In 2010, roughly 900 Power Systems users at Technical Universities voted Solid State Drives the technology most likely to be "business as usual" in five years' time - that time is NOW - because prices are down and capacities are up enough for the change to Flash Systems. They even reduce the size of the machine needed to drive the I/O, so you can trade a little server cost against the Flash cost.

Back to ndisk64 - my response was:
I assume you have typed ./ndisk64 (with no arguments) and carefully read the output and worked examples.

To get high I/O rates, you need to run many threads (-M, starting at 32 or 64) against many files, one per thread, so set up lots of them before you start (-F filelist, where filelist is a file containing the file names, one per line). If you write data to the files, ndisk64 trashes the file content. Do not use a real database file!
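
For example, a minimal setup sketch - /testfs is a hypothetical JFS2 mount point on the LUNs under test, the file count and sizes are just starting points, and your binary may carry a version suffix (for example ndisk64_75):

        # Create 32 scratch files of 2 GB each and list their names in "filelist"
        i=1
        while [ $i -le 32 ]
        do
            dd if=/dev/zero of=/testfs/nfile$i bs=1m count=2048
            echo /testfs/nfile$i >> filelist
            i=$((i + 1))
        done
        # One thread per file, random I/O, 4 KB blocks, for 5 minutes
        ./ndisk64 -F filelist -M 32 -R -b 4k -t 300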

Smaller block sizes mean higher transaction rates (IOPS), but never, ever go below 4 KB (-b 4k).
Larger block sizes mean higher bandwidth (-b 128k or larger).
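
For example, two quick one-minute runs against the same file show the trade-off (bigfile is assumed to already exist, as in the dd example in the help output below):

        ./ndisk64 -f bigfile -R -b 4k -t 60      # small blocks = high transaction rate (IOPS)
        ./ndisk64 -f bigfile -S -b 128k -t 60    # large blocks = high bandwidth (MB/s)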

Most databases generate random I/O (-R) these days - even a table scan generates random I/O.

Make sure you run the workload for a while to get a stable rate - at least 5 minutes (-t 300).
Note: use 1 minute for quick tests while you get the options right.

Read and write - the ratio is up to you. I would do pure-read, pure-write, and mixed with a read:write ratio of 80:20, which is a typical ratio for databases.
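
For example, the three runs I would start with - pure read, pure write, then the typical 80:20 mix - reusing the filelist set up earlier:

        ./ndisk64 -F filelist -M 32 -R -r 100 -b 4k -t 300    # pure read
        ./ndisk64 -F filelist -M 32 -R -r 0 -b 4k -t 300      # pure write
        ./ndisk64 -F filelist -M 32 -R -r 80 -b 4k -t 300     # 80% read, 20% write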

Avoid the AIX file system cache, as it is cheating (or, to put it another way, the I/O is so fast that everyone knows you can't be doing real disk I/O) and confusing, so use logical volumes - I am not sure what ASM presents to you!
For logical volumes, you have to specify the size, for example: -s 2G
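
For example, a run against raw logical volumes - the /dev/rlv* names are hypothetical, and the logical volumes must not contain any data you care about, as the writes will destroy it:

        ./ndisk64 -f /dev/rlv1:/dev/rlv2:/dev/rlv3:/dev/rlv4 -s 2G -M 4 -R -r 80 -b 4k -t 300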

With previous Solid State Drives, I hit the maximum I/O rates on the third attempt.

Best of luck.

Then I added, as with any benchmark:

  • Make sure you have plenty of CPU time available - monitor it with nmon. I would start with 16 CPUs just to be safe.
  • Don't forget to make sure you have up-to-date firmware and an up-to-date AIX Technology Level plus service packs.
  • Don't forget the disk queue depth setting - see the example after this list.
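
For example, checking and raising the queue depth on one hdisk - hdisk4 and the value 64 are just illustrations, so check your storage vendor's recommended setting first:

        lsattr -El hdisk4 -a queue_depth        # show the current queue depth
        chdev -l hdisk4 -a queue_depth=64 -P    # set it; -P stores the change in the ODM

Note: chdev cannot change a disk that is in use; the -P flag defers the change so that it takes effect at the next reboot.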

You can download the ndisk64 binary - it is part of the nstress tools.  Download from this article: 

https://www.ibm.com/support/pages/stress-test-your-aix-or-linux-server-nstress


Here is the help output, so you don't need to install and run the tool to see it:

Usage: ./ndisk64_75 version 7.5
Complex Disk tests - sequential or random read and write mixture
./ndisk64_75 -S          Sequential Disk I/O test (file or raw device)
        -R          Random    Disk I/O test (file or raw device)

        -t <secs>   Timed duration of the test in seconds (default 5)

        -f <file>   use "File" for disk I/O (can be a file or raw device)
        -f <list>   list of filenames to use (max 16) [separators :,+]
                        example: -f f1,f2,f3  or -f /dev/rlv1:/dev/rlv2
        -F <file>   <file> contains list of filenames, one per line (upto 2047 files)
        -M <num>    Multiple processes used to generate I/O
        -s <size>   file Size, use with K, M or G (mandatory for raw device)
                        examples: -s 1024K   or   -s 256M   or   -s 4G
                        The default is 32MB
        -r <read%> Read percent min=0,max=100 (default 80 =80%read+20%write)
                        example -r 50 (-r 0 = write only, -r 100 = read only)
        -b <size>   Block size, use with K, M or G (default 4KB)
        -O <size>   first byte offset use with K, M or G (times by proc#)
        -b <list>   or use a colon separated list of block sizes (31 max sizes)
                        example -b 512:1k:2K:8k:1M:2m
        -q          flush file to disk after each write (fsync())
        -Q          flush file to disk via open() O_SYNC flag
        -i <MB>     Use shared memory for I/O MB is the size(max=256 MB)
        -v          Verbose mode = gives extra stats but slower
        -l          Logging disk I/O mode = see *.log but slower still
        -o "cmd"  Other command - pretend to be this other cmd when running
                        Must be the last option on the line
        -K num      Shared memory key (default 0xdeadbeef) allows multiple programs
                    Note: if you kill ndisk,  you may have a shared memory
                    segment left over. Use ipcs and then ipcrm to remove it.
        -p          Pure = each Sequential thread does read or write not both
        -P file     Pure with separate file for writers
        -C          Open files in Concurrent I/O mode O_CIO
        -D          Open files in Direct I/O mode O_DIRECT
        -z percent  Snooze percent - time spent sleeping (default 0)
                         Note: ignored for Async mode
To make a file use dd, for 8 GB: dd if=/dev/zero of=myfile bs=1M count=8192

Asynchronous I/O tests (AIO)
        -A         switch on Async I/O use: -S/-R, -f/-F and -r, -M, -s, -b, -C, -D to determine I/O types
                (JFS file or raw device)
        -x <min>   minimum outstanding Async I/Os (default=1, min=1 and min<max)
        -X <max>   maximum outstanding Async I/Os (default=8, max=1024)
        see above -f <file>  -s <size>   -R <read%>  -b <size>

For example:
        dd if=/dev/zero of=bigfile bs=1m count=1024
        ./ndisk64_75 -f bigfile -S -r100 -b 4096:8k:64k:1m -t 600
        ./ndisk64_75 -f bigfile -R -r75 -b 4096:8k:64k:1m -q
        ./ndisk64_75 -F filelist -R -r75 -b 4096:8k:64k:1m -M 16
        ./ndisk64_75 -F filelist -R -r75 -b 4096:8k:64k:1m -M 16 -l -v

For example:
        ./ndisk64_75 for Asynch compiled in version
        ./ndisk64_75 -A -F filelist -R -r50 -b 4096:8k:64k:1m -M 16 -x 8 -X 64

Additional Information


Other places to find content from Nigel Griffiths IBM (retired)

Document Location

Worldwide

[{"Business Unit":{"code":"BU058","label":"IBM Infrastructure w\/TPS"},"Product":{"code":"SWG10","label":"AIX"},"Component":"","Platform":[{"code":"PF002","label":"AIX"}],"Version":"All Versions","Edition":"","Line of Business":{"code":"LOB08","label":"Cognitive Systems"}}]

Document Information

Modified date:
14 June 2023

UID

ibm11119003