Using ndisk64 to test new disk set-ups like Flash Systems
nagger
This is a typical request from IBMers in services, or directly from customers, wanting to confirm a good disk setup or confirm that the disks deliver as promised. This time: "... a test of SAN disk performance where LUNs are carved from an IBM Flash System 820 with SVC. Primary objective is to show the IBM Flash System 820 can do hundreds of thousands of IOPS with response time <1ms with good throughput. This is for an Oracle RDBMS using ASM"
Wow!! Those new Flash Systems really are the next generation of disks. Disk testing is not trivial, though, and you should allow a good number of days - although Flash drives are much simpler to test than those ancient-history brown spinning platters!!! Remember, 2013 is the year the Winchester disk started to be phased out. In 2010, roughly 900 Power Systems users at Technical Universities voted Solid State Drives the technology most likely to be "business as usual" in five years' time - that time is NOW - because prices are down and sizes are up enough for the change to Flash Systems. They even reduce the size of the machine needed to drive the I/O, so you can trade a little server cost against the Flash cost.
Back to ndisk64 - My response was:
With previous SSDs, I hit the maximum I/O rates on the third attempt.
Then I added, as with any benchmark:
Make sure you have plenty of CPU time available - monitor it with nmon. I would start with 16 CPUs just to be safe.
I hope this helps, cheers Nigel Griffiths.
Here is the latest ndisk64 binary version 7.5: I will put it on the AIX wiki nstress page.
Here is the help output, so you don't need to install and run it just to see the options:
Usage: ./ndisk64_75 version 7.5
Complex Disk tests - sequential or random read and write mixture
./ndisk64_75
 -S          Sequential Disk I/O test (file or raw device)
 -R          Random Disk I/O test (file or raw device)
 -t <secs>   Timed duration of the test in seconds (default 5)
 -f <file>   use "File" for disk I/O (can be a file or raw device)
 -f <list>   list of filenames to use (max 16) [separators :,+]
             example: -f f1,f2,f3 or -f /dev/rlv1:/dev/rlv2
 -F <file>   <file> contains list of filenames, one per line (upto 2047 files)
 -M <num>    Multiple processes used to generate I/O
 -s <size>   file Size, use with K, M or G (mandatory for raw device)
             examples: -s 1024K or -s 256M or -s 4G  The default is 32MB
 -r <read%>  Read percent min=0,max=100 (default 80 =80%read+20%write)
             example -r 50 (-r 0 = write only, -r 100 = read only)
 -b <size>   Block size, use with K, M or G (default 4KB)
 -O <size>   first byte offset use with K, M or G (times by proc#)
 -b <list>   or use a colon separated list of block sizes (31 max sizes)
             example -b 512:1k:2K:8k:1M:2m
 -q          flush file to disk after each write (fsync())
 -Q          flush file to disk via open() O_SYNC flag
 -i <MB>     Use shared memory for I/O, MB is the size (max=256 MB)
 -v          Verbose mode = gives extra stats but slower
 -l          Logging disk I/O mode = see *.log but slower still
 -o "cmd"    Other command - pretend to be this other cmd when running
             Must be the last option on the line
 -K num      Shared memory key (default 0xdeadbeef) allows multiple programs
             Note: if you kill ndisk, you may have a shared memory segment
             left over. Use ipcs and then ipcrm to remove it.
 -p          Pure = each Sequential thread does read or write, not both
 -P file     Pure with separate file for writers
 -C          Open files in Concurrent I/O mode O_CIO
 -D          Open files in Direct I/O mode O_DIRECT
 -z percent  Snooze percent - time spent sleeping (default 0)
             Note: ignored for Async mode
To make a file use dd, for 8 GB: dd if=/dev/zero of=myfile bs=1M count=8196
Asynchronous I/O tests (AIO)
 -A          switch on Async I/O
             use: -S/-R, -f/-F and -r, -M, -s, -b, -C, -D to determine
             I/O types (JFS file or raw device)
 -x <min>    minimum outstanding Async I/Os (default=1, min=1 and min<max)
 -X <max>    maximum outstanding Async I/Os (default=8, max=1024) see above
             -f <file> -s <size> -R <read%> -b <size>
For example:
 dd if=/dev/zero of=bigfile bs=1m count=1024
 ./ndisk64_75 -f bigfile -S -r100 -b 4096:8k:64k:1m -t 600
 ./ndisk64_75 -f bigfile -R -r75 -b 4096:8k:64k:1m -q
 ./ndisk64_75 -F filelist -R -r75 -b 4096:8k:64k:1m -M 16
 ./ndisk64_75 -F filelist -R -r75 -b 4096:8k:64k:1m -M 16 -l -v
For example: ./ndisk64_75 for Asynch compiled in version
 ./ndisk64_75 -A -F filelist -R -r50 -b 4096:8k:64k:1m -M 16 -x 8 -X 64
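To pull the pieces together, here is a hedged sketch of a first test run, using only flags documented in the help output above. The file name, file size, duration, and process count are illustrative choices, not recommendations; on real hardware, make the test file far larger than any read cache in the I/O path so you measure the disks and not the cache.

```shell
#!/bin/sh
# Illustrative first run: random I/O, 75% read / 25% write, mixed block
# sizes, 16 processes, 60 seconds. Names and sizes here are examples only.
TESTFILE=ndisk_testfile

# 64 MB keeps this sketch quick; use 8 GB or more (see the dd line in the
# help output) for a real test so caching does not inflate the numbers.
dd if=/dev/zero of="$TESTFILE" bs=1M count=64 2>/dev/null

if command -v ./ndisk64_75 >/dev/null 2>&1; then
    ./ndisk64_75 -f "$TESTFILE" -R -r 75 -b 4096:8k:64k:1m -M 16 -t 60
else
    echo "ndisk64_75 not found here - download it from the nstress page first"
fi
```

Once the short run works, raise -t and the file size, repeat with -S for the sequential case, and watch the CPU with nmon throughout.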