The IBM HTTP Server (IHS) ships with a cool little command-line utility called Apache Bench (ab) because IHS is based on httpd. At its simplest, you pass the number of requests you want to send (-n), the concurrency (-c), and the URL to benchmark. ab will return various statistics on the responses (mean, median, max, standard deviation, etc.). This is really useful when you want to "spot check" backend server performance or compare two different environments, because you do not need to install complex load-testing software, and since IHS usually has direct access to WAS, you do not have to worry about firewalls, etc.
Below is an example execution.
$ cd $IHS/bin/
$ ./ab -n 1000 -c 10 http://ibm.com/
This is ApacheBench, Version 2.0.40-dev <$Revision: 16238 $> apache-2.0
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Copyright 2006 The Apache Software Foundation, http://www.apache.org/

Benchmarking ibm.com (be patient)
Completed 100 requests
Completed 200 requests
Completed 300 requests
Completed 400 requests
Completed 500 requests
Completed 600 requests
Completed 700 requests
Completed 800 requests
Completed 900 requests
Finished 1000 requests

Server Software:        IBM_HTTP_Server
Server Hostname:        ibm.com
Server Port:            80

Document Path:          /
Document Length:        227 bytes

Concurrency Level:      10
Time taken for tests:   22.452996 seconds
Complete requests:      1000
Failed requests:        0
Write errors:           0
Non-2xx responses:      1000
Total transferred:      455661 bytes
HTML transferred:       227000 bytes
Requests per second:    44.54 [#/sec] (mean)
Time per request:       224.530 [ms] (mean)
Time per request:       22.453 [ms] (mean, across all concurrent requests)
Transfer rate:          19.77 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:      101  107   4.1    106    136
Processing:   107  115   6.0    114    186
Waiting:      106  115   5.9    114    185
Total:        208  223   7.3    221    292

Percentage of the requests served within a certain time (ms)
  50%    221
  66%    224
  75%    226
  80%    228
  90%    232
  95%    237
  98%    245
  99%    247
 100%    292 (longest request)
The key things to look at are:
- Time taken for tests: This is how long it took for all requests to finish. When comparing two environments, if your mean and median are similar but the total time is worse in one case, this may suggest queueing effects.
- Failed requests, write errors, and Non-2xx responses: These may indicate a problem. See below for a caveat on "Failed requests."
- Requests per second: Throughput.
- Total: Look at the min, mean, median, max, and sd (standard deviation). The mean is usually the place to start.
- Percentage of the requests served within a certain time: Response times on a percentile basis. Many customers look at the 95th percentile, but the choice is arbitrary and usually based on what percentage of requests are tolerated to fail or behave unexpectedly.
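To make the relationship between these statistics concrete, here is a small Python sketch that derives throughput, mean, and percentile figures the same way ab summarizes them. The latency samples are invented for illustration, and the elapsed-time estimate is a rough model (it assumes the concurrency level was fully utilized the whole time), not how ab itself measures wall-clock time:

```python
# Illustrative only: compute ab-style summary statistics from a list of
# per-request total times (in ms). The sample data below is invented.
from statistics import mean, median, stdev

def percentile(samples, pct):
    """Largest time needed to serve pct% of the requests."""
    ordered = sorted(samples)
    # Index of the last request inside the percentile bucket.
    idx = max(0, int(len(ordered) * pct / 100) - 1)
    return ordered[idx]

total_ms = [208, 215, 218, 221, 221, 223, 226, 232, 245, 292]  # invented

concurrency = 10
# Rough model of wall-clock time: total request time divided by concurrency.
elapsed_s = sum(total_ms) / 1000 / concurrency
throughput = len(total_ms) / elapsed_s       # ab's "Requests per second"

print(f"mean={mean(total_ms):.1f} ms, median={median(total_ms)} ms, "
      f"sd={stdev(total_ms):.1f}")
print(f"95th percentile: {percentile(total_ms, 95)} ms")   # 245
print(f"throughput: {throughput:.2f} req/s")
```

The percentile table ab prints is just this calculation applied at 50%, 66%, 75%, and so on, with 100% being the single longest request.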
Some important notes:
- ab has odd behavior in that it counts requests with varying Content-Length headers as "Failed requests" due to "Length"; for example:
Complete requests:      200
Failed requests:        199
   (Connect: 0, Length: 199, Exceptions: 0)
It is common for dynamic pages to return different content lengths, so this can usually be disregarded (but only if the "Length" number accounts for all of the "failed" requests). There is a patch for this, but it has never made it into the core code: https://issues.apache.org/bugzilla/show_bug.cgi?id=27888.
- Non-2xx responses may or may not be okay. A 304 (Not Modified), for example, is usually fine; 4xx or 5xx codes usually are not. To get details on the response code, use "-v 2", which prints a warning for each non-2xx response and lists what the code was.
- ab is not a browser, so when you request a page, ab will not fetch any resources within the page such as images, scripts, iframes, etc.
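The "Length" accounting caveat above can be sketched as follows. This is a simplified Python model of the behavior, not ab's actual C source: the body length of the first response becomes the baseline, and every later response with a different length is counted as a failed request, even if it completed successfully:

```python
# Simplified model (assumption: not ab's real implementation) of how ab
# classifies "Length" failures: the first response's body length is the
# baseline, and any later response with a different length is "failed".
def count_length_failures(body_lengths):
    if not body_lengths:
        return 0
    baseline = body_lengths[0]
    return sum(1 for length in body_lengths[1:] if length != baseline)

# A dynamic page whose size varies per request: all but the first
# response differ from the baseline, so 199 of 200 are "failed".
lengths = [227] + [230] * 199
print(count_length_failures(lengths))   # 199, matching the example above
```

This is why a "Failed requests" count that is entirely attributed to "Length" is usually harmless for dynamic content, while any non-zero "Connect" or "Exceptions" count deserves investigation.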