Specifying the latency ranges for I/O

The I/O histogram latency ranges are used to categorize the I/O according to the latency time, in milliseconds, of the I/O operation.

A full set of latency ranges is produced for each size range. The latency ranges are the same for each size range.

The latency ranges are changed using a string of positive decimal numbers separated by semicolons (;). No white space is allowed within the latency range operand. Each number represents the upper bound of the I/O latency time (in milliseconds) for that range. The numbers must be monotonically increasing. If decimal places are present, they are truncated to tenths.
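
The following sketch (Python; the function name parse_latency_operand is invented for illustration and is not part of GPFS or mmpmon) shows one way the parsing rules above can be expressed. Whether monotonicity is checked before or after truncation is an assumption of this sketch.

import math

def parse_latency_operand(operand):
    """Parse a latency range operand such as "1.3;4.59;10" into a list of
    upper bounds in milliseconds, applying the rules described above."""
    if any(ch.isspace() for ch in operand):
        raise ValueError("white space is not allowed in the operand")
    bounds = []
    for token in operand.split(";"):
        if float(token) <= 0:                        # must be a positive decimal number
            raise ValueError("numbers must be positive")
        value = math.floor(float(token) * 10) / 10   # truncate to tenths
        if bounds and value <= bounds[-1]:           # increasing (checked after truncation here)
            raise ValueError("numbers must be monotonically increasing")
        bounds.append(value)
    return bounds

# parse_latency_operand("1.3;4.59;10") returns [1.3, 4.5, 10.0]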

For example, the latency range operand:
1.3;4.59;10
represents these four latency ranges:
 0.0   to   1.3   milliseconds
 1.4   to   4.5   milliseconds
 4.6   to  10.0   milliseconds
10.1  and greater milliseconds
In this example, a read that completes in 0.85 milliseconds falls into the first latency range. A write that completes in 4.56 milliseconds falls into the third latency range: because 4.59 is truncated to 4.5, the write exceeds the upper bound of the second range.
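
A sketch of the bucketing implied by this example (illustrative Python, assuming an I/O falls into the first range whose upper bound its latency does not exceed):

def latency_range_index(latency_ms, bounds):
    """Return the 0-based index of the range that a latency falls into,
    where bounds are the truncated upper bounds, e.g. [1.3, 4.5, 10.0]."""
    for i, upper in enumerate(bounds):
        if latency_ms <= upper:
            return i
    return len(bounds)                 # the final, open-ended range

bounds = [1.3, 4.5, 10.0]              # from the operand 1.3;4.59;10
latency_range_index(0.85, bounds)      # 0: first range, 0.0 to 1.3 milliseconds
latency_range_index(4.56, bounds)      # 2: third range, 4.6 to 10.0 milliseconds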

A latency range operand of = (equal sign) indicates that the current latency range is not to be changed. A latency range operand of * (asterisk) indicates that the current latency range is to be changed to the default latency range. If the latency range operand is missing, * (asterisk) is assumed. A maximum of 15 numbers may be specified, which produces 16 total latency ranges.
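
One way to picture this handling (an illustrative Python sketch; resolve_latency_bounds and its parameters are invented names, not mmpmon internals):

def resolve_latency_bounds(operand, current_bounds, default_bounds, parse):
    """operand is the latency range operand (None if missing), current_bounds
    are the bounds in effect, default_bounds are the defaults listed below,
    and parse is a parser such as the parse_latency_operand sketch above."""
    if operand is None or operand == "*":      # missing or *: use the default ranges
        return list(default_bounds)
    if operand == "=":                         # =: keep the current ranges
        return list(current_bounds)
    bounds = parse(operand)
    if len(bounds) > 15:                       # at most 15 numbers, 16 ranges
        raise ValueError("a maximum of 15 numbers may be specified")
    return bounds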

The latency times are in milliseconds. The default latency ranges are:
0.0     to    1.0   milliseconds
1.1     to   10.0   milliseconds
10.1    to   30.0   milliseconds
30.1    to  100.0   milliseconds
100.1   to  200.0   milliseconds
200.1   to  400.0   milliseconds
400.1   to  800.0   milliseconds
800.1   to 1000.0   milliseconds
1000.1 and greater  milliseconds
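
These defaults correspond to the upper bounds 1, 10, 30, 100, 200, 400, 800, and 1000 milliseconds. The sketch below (illustrative Python; the names are invented) renders a list of bounds in the same form as the table above:

DEFAULT_LATENCY_BOUNDS = [1.0, 10.0, 30.0, 100.0, 200.0, 400.0, 800.0, 1000.0]

def format_ranges(bounds):
    """Render upper bounds (milliseconds, tenths granularity) as range lines."""
    lines = []
    lower = 0.0
    for upper in bounds:
        lines.append(f"{lower:.1f} to {upper:.1f} milliseconds")
        lower = upper + 0.1
    lines.append(f"{lower:.1f} and greater milliseconds")
    return lines

# format_ranges(DEFAULT_LATENCY_BOUNDS) reproduces the nine default ranges listed above.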

The last latency range collects all latencies greater than or equal to 1000.1 milliseconds. The latency ranges can be changed by using the rhist nr request.

For more information, see Processing of rhist nr.