Configuring the server
This section highlights recent server configuration enhancements that affect performance and are not covered later in broader topics.
Because Informix is scalable, it can be tuned to accommodate large instances. The main parameter for tuning virtual processors (VPs) is VPCLASS, which replaced several earlier VP-related configuration parameters.
Listing 1 shows the syntax of the VPCLASS parameter.
Listing 1. VPCLASS syntax
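A minimal sketch of the syntax, assuming the comma-separated option fields described below (the exact option names available can vary by version; num, max, aff, noage, and noyield are shown as representative fields):

```
VPCLASS classname[,num=N][,max=M][,aff=processor_range][,noage][,noyield]
```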
The classname field in the VPCLASS parameter provides the name of the virtual-processor class that you are configuring. The name is not case sensitive.
You can define new virtual-processor classes for user-defined routines or DataBlade® modules, or you can set values for a predefined virtual-processor class. The class names in Table 1 are predefined.
Table 1. Predefined VPCLASS class names
The classname variable is required. Unlike most configuration parameters, VPCLASS has several option fields that can appear in any order, separated by commas. You cannot use any white space in the fields. VPCLASS also accepts several optional secondary parameters.
Each virtual processor is instantiated as an operating system process. You can therefore use VPCLASS to dedicate a class of activities to a single oninit process. For more information, see the Informix product documentation.
As mentioned, Informix is scalable and can be configured to take advantage of multiple-CPU machines. The number of CPU VPs can be configured, as shown in Listing 2.
Listing 2. Configuring for 3 CPU VPs
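A sketch of the onconfig entry, matching the listing title's three CPU VPs:

```
VPCLASS cpu,num=3
```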
The jvp option of the VPCLASS parameter sets the number of Java virtual processors. This parameter is required when you use the IBM Informix JDBC Driver. On UNIX, you must define multiple Java virtual processors to execute Java user-defined routines in parallel.
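For example, on UNIX you might define two Java virtual processors with an entry such as the following (the count of 2 is illustrative):

```
VPCLASS jvp,num=2
```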
For the VP classes shm, tli, and soc, you must set the NETTYPE configuration parameter's VP_class field to NET. For example, you might set the VPCLASS parameter as shown in Listing 3.
Listing 3. VPCLASS parameters to use NET
VPCLASS shm,num=1
VPCLASS tli,num=1
The NETTYPE parameter should be set as shown in Listing 4.
Listing 4. NETTYPE parameters with NET
NETTYPE ipcshm,1,100,NET
NETTYPE tlitcp,1,100,NET
Shared memory allocations to the Informix database server depend on several configuration parameters, as shown in Table 2.
Table 2. Shared memory parameters
| Parameter | Description |
| --- | --- |
| EXTSHMADD | Specifies the size of an added extension segment |
| SHMADD | Specifies the increment of memory that is added when the database server requests more memory |
| SHMBASE | Specifies the shared-memory base address and is computer dependent. The value depends on the platform and whether the processor is 32 bit or 64 bit. For information on which SHMBASE value to use, see the computer notes. |
| SHMTOTAL | Specifies the maximum amount of memory the database server is allowed to use |
| SHMVIRTSIZE | Specifies the size of the first piece of memory that the database server attaches |
| BUFFERPOOL | Configures the shared-memory page cache: the number of buffers per page-size pool, the page size, and LRU parameters |
Carefully consider the shared memory configuration parameters. The major performance considerations are a maximal page cache (BUFFERPOOL) and adequate virtual memory allocation (SHMVIRTSIZE). If the initial virtual memory is inadequate for long-term processing, dynamic addition of virtual segments (SHMADD) can create excessive numbers of virtual segments during processing, which can adversely affect performance.
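As an illustration only (the values below are placeholders, not recommendations, and must be sized for your workload and platform), a shared-memory configuration in the onconfig file might look like:

```
BUFFERPOOL size=2k,buffers=50000,lrus=8,lru_min_dirty=50,lru_max_dirty=60
SHMVIRTSIZE 500000   # size of the first virtual segment, in KB
SHMADD 64000         # increment for dynamically added virtual segments, in KB
SHMTOTAL 0           # 0 = no limit on total shared memory
```

Sizing SHMVIRTSIZE generously up front reduces the chance that the server must add virtual segments dynamically.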
The DIRECT_IO parameter enables direct I/O and concurrent I/O for performance enhancements with chunks defined on cooked files. I/O using cooked files is generally slower than I/O using raw devices because of the extra I/O layer and buffering used for reads and writes on cooked files.
Informix has no control over this operating system subsystem. Certain
operating system platforms,
however, support direct I/O, which bypasses the I/O layer and
buffering for cooked file chunks. Performance for cooked files can
approach the performance of raw devices used for dbspace chunks.
Direct I/O can be used only for regular dbspace chunks. It is not used for temporary dbspaces. The file system and operating system must support direct I/O for the page size. Direct I/O is not supported with raw devices. Kernel asynchronous I/O (KAIO) is the preferred method of I/O for chunks that are placed on raw devices.
Concurrent I/O, currently supported on AIX®, adds the concurrent feature on top of direct I/O. Concurrent I/O allows multiple concurrent reads and writes to a file. The performance enhancement is most noticeable with I/O to single chunks striped across multiple disks.
To determine whether Informix is using direct or concurrent I/O for a chunk, monitor the fifth position of the flags field in the onstat -d output, as shown in Table 3.
Table 3. DIRECT_IO configuration
| DIRECT_IO setting | Effect | onstat -d flag |
| --- | --- | --- |
| 0 | Direct I/O off | - |
| 1 | Direct I/O on | D |
| 2 | Direct and concurrent I/O on | C |
On some operating systems that support direct I/O, the implementation uses KAIO. If direct I/O is enabled, the database server tries to do the work with KAIO, and the number of AIO virtual processors may be reduced. This assumes that KAIO is turned on, that is, that KAIOOFF is not set in the environment.
Windows does not support the DIRECT_IO parameter, because direct I/O is turned on by default on the Windows platform.
You can configure the SQLTRACE parameter in a variety of ways to collect performance data for individual queries. Use SQLTRACE to define the scope of the tracing. The default settings in the configuration file are shown in Listing 5.
Listing 5. Default settings
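A typical default entry resembles the following; verify the exact values against the onconfig.std file shipped with your version, as they can differ:

```
SQLTRACE level=low,ntraces=1000,size=2,mode=global
```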
The trace data is stored in sysmaster tables and is visible with the onstat -g his command. For example, you can view query plan cost estimates, the number of rows returned, and profile data. You can also enable and disable SQL tracing using admin() functions from the sysadmin database.
SQL tracing is particularly useful for studying individual queries that execute within applications that run many queries.
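For example, tracing can be switched on from the sysadmin database with its administration functions; the argument values here (number of traces, size, level, mode) are illustrative:

```
DATABASE sysadmin;
EXECUTE FUNCTION admin("set sql tracing on", "1000", "2", "low", "global");
```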
Informix sometimes performs light scans on large data tables, reading many data pages at once and bypassing the buffer pool. You can turn on light scans for compressed tables, tables with rows that are larger than a page, and tables with any data, including VARCHAR, LVARCHAR, and NVARCHAR types, by enabling BATCHEDREAD_TABLE in the configuration file. This parameter is enabled by default.
The light scans bypass the buffer pool and provide a performance
improvement for some queries. Monitor light scan activity using the
onstat -g scn command.
Enable BATCHEDREAD_INDEX to direct the server to fetch a set of keys from an index buffer when appropriate, thereby decreasing the number of buffer reads.
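A sketch of the corresponding onconfig entries, assuming the common convention that 1 enables and 0 disables each feature:

```
BATCHEDREAD_TABLE 1
BATCHEDREAD_INDEX 1
```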