Asynchronous I/O Subsystem

Synchronous input/output (I/O) occurs while you wait. Application processing cannot continue until the I/O operation is complete. In contrast, asynchronous I/O (AIO) operations run in the background and do not block user applications. This improves performance because I/O operations and application processing can run simultaneously.

Many applications, such as databases and file servers, take advantage of the ability to overlap processing and I/O. These AIO operations use various kinds of devices and files. Additionally, multiple AIO operations can run at the same time on one or more devices or files. Using AIO usually improves I/O throughput for these types of applications. The actual performance, however, depends partly on the number of concurrent I/O requests that the application can issue at one time. When the AIO fast path is not used, performance also depends on the number of AIO server processes running to handle the I/O requests. For more information about the fast path, see Identifying the number of AIO servers used currently.

Each AIO request has a corresponding control block in the application's address space. When an AIO request is made, a handle is established in the control block. This handle is used to retrieve the status and the return values of the request.

Applications use the aio_read and aio_write subroutines to perform the I/O. Control returns to the application from the subroutine as soon as the request has been queued. The application can then continue processing while the disk operation is being performed.

A kernel process (kproc), called an AIO server (AIOS), is in charge of each request from the time it is taken off the queue until it completes. The number of servers limits the number of disk I/O operations that can be in progress in the system simultaneously.

The default value of the minservers tunable is 3, and that of the maxservers tunable is 30. In systems that seldom run applications that use AIO, these defaults are usually adequate. For environments with many disk drives and key applications that use AIO, the defaults might be too low. A deficiency of servers makes disk I/O seem much slower than it should be. Not only do requests spend inordinate lengths of time in the queue, but the low ratio of servers to disk drives means that the seek-optimization algorithms have too few requests to work with for each drive.

Note: AIO does not work if the control block or buffer is created using mmap (mapping segments).

There are two AIO subsystems. The original AIX® AIO, now called LEGACY AIO, has the same function names as the Portable Operating System Interface (POSIX)-compliant POSIX AIO. The major difference between the two is how parameters are passed. Both subsystems are defined in the /usr/include/sys/aio.h file. The _AIO_AIX_SOURCE macro is used to distinguish between the two versions.

Note: The _AIO_AIX_SOURCE macro used in the /usr/include/sys/aio.h file must be defined when using this file to compile an AIO application with the LEGACY AIO function definitions. The default compilation using the aio.h file is for an application with the new POSIX AIO definitions. To use the LEGACY AIO function definitions, do the following in the source file:

#define _AIO_AIX_SOURCE 
#include <sys/aio.h>
or when compiling on the command line, type the following:

xlc ... -D_AIO_AIX_SOURCE ... classic_aio_program.c

For each AIO function there is a legacy and a POSIX definition. LEGACY AIO has an additional aio_nwait function which, although not part of the POSIX definitions, has been included in POSIX AIO to help those who want to port from LEGACY to POSIX definitions. POSIX AIO has an additional aio_fsync function, which is not included in LEGACY AIO. For a list of these functions, see Asynchronous I/O Subroutines.