IBM was an early supporter of Linux and continues to demonstrate technology and market leadership on this platform. Enhancing Linux environments and exploiting the Linux kernel has been an important part of the IBM Informix Dynamic Server (IDS) Version 10.0.
IDS Version 10.0 contains numerous enhancements that can deliver benefits in Linux environments. These include:
- Support for new Linux platforms
- Transparent optimization and exploitation of Linux environments
- Exploitation of Linux 2.6 kernel features
- Utilization of asynchronous I/O and direct I/O for enhancing I/O performance
- Processor affinity to achieve improved scalability and parallelism
- Performance optimization using configurable page sizes
- Additional installation methods on Linux systems
- Simple Network Management Protocol (SNMP) support
- Interprocess communication with stream pipes
- Scalability on the 2.6 kernel
This article describes the new features of IBM Informix Dynamic Server Version 10.0 that are unique to Linux platforms. Many other IDS Version 10.0 features are available across many platforms, including Linux. For more information on features that are not Linux-specific, see New features in IBM Informix Dynamic Server, Version 10.0.
New Linux platforms
IBM offers the flexibility and choice to deploy IBM Informix Dynamic Server Version 10.0 on a wide variety of hardware platforms and operating systems. The availability of 64-bit computing platforms presents new possibilities for increased performance of database servers, as well as database applications. 32-bit platforms have an inherent address-space limitation of 4 gigabytes (GB). Removal of this 4 GB limit on the address space of database servers allows for the creation of larger buffer pools, sort heaps, package caches, and other resources that can consume large amounts of memory. A 64-bit environment and the ability to address more than 4 GB of memory can greatly enhance the scalability and performance of databases. With IDS Version 10.0, IBM has optimized native 64-bit editions for all major Linux platforms.
The Linux platforms supported by the new IBM Informix Dynamic Server Version 10.0 are:
- x86 (32-bit edition for Intel Pentium-, Xeon-, and AMD Athlon-based systems)
- POWER® (IBM eServer™, iSeries™, and pSeries® systems)
- zSeries® (IBM eServer zSeries® systems)
- Intel EM64T (X86-64), which is available with the first IBM Informix Dynamic Server Version 10.0 fixpack (Q2 2005)
- IA64 (64-bit edition for Intel Itanium-based systems)
- AMD64 (64-bit edition for AMD Opteron- and Athlon64-based systems), which is available with the first IBM Informix Dynamic Server Version 10.0 fixpack
Support for the Linux 2.6 kernel
The Linux 2.6 kernel boasts many improvements for running Linux in enterprise environments and data centers. The Informix research and development team has worked with 2.6 kernels for some time, enhancing IDS to exploit new features in these kernels.
Kernel feature availability
At the time this article was written, enterprise-class distributions such as the following shipped with the 2.6 kernel:
- Red Hat Enterprise Linux AS release 4.0
- SUSE LINUX Enterprise Server 9
Kernel Asynchronous I/O
Kernel Asynchronous I/O (KAIO) allows applications to overlap processing with I/O operations for improved CPU and device utilization. With KAIO, processes do not need to wait for I/O requests to complete; the processes can continue while the I/O operations are completed.
Kernel Asynchronous I/O has been supported by the official Linux kernel since version 2.6.x.
IBM Informix Dynamic Server Version 10.0 supports Kernel Asynchronous I/O on character devices (also known as raw devices) and on block devices. Kernel Asynchronous I/O is enabled by default, and can be disabled by setting the environment variable KAIOOFF to 1 in the environment of the process that starts the database server.
Operating system requirements
Native Kernel AIO support is included in the mainline 2.6 kernel.
Important: The libaio.so library is required, regardless of the distribution or kernel level. At the time this article was written, IBM Informix Dynamic Server Version 10.0 required at least libaio version 0.3.96-3.
If the version of libaio that is installed on your computer does not meet these minimum requirements, you should download the latest RPM Package Manager (RPM) from Red Hat or SUSE.
Kernel Asynchronous I/O configuration
Kernel Asynchronous I/O (KAIO) is enabled by default. You can disable it by setting KAIOOFF=1 in the environment of the process that starts Dynamic Server.
When KAIO is enabled, IBM Informix Dynamic Server Version 10.0 attempts to dynamically load the libaio shared library. If the library cannot be loaded, KAIO is disabled and a message is logged to the database server message log file.
When using KAIO, it is recommended that you run poll threads on separate virtual processors. Do this by specifying NET as the virtual processor (VP) class in the NETTYPE configuration parameter.
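A NETTYPE entry of this form runs the poll threads in NET-class virtual processors (the poll-thread and connection counts shown are illustrative, not recommendations):

```
# $INFORMIXDIR/etc/$ONCONFIG (sketch)
# protocol, poll threads, connections per thread, VP class
NETTYPE soctcp,2,200,NET
```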
On Linux, there is a system-wide limit on the maximum number of parallel KAIO requests; the /proc/sys/fs/aio-max-nr file contains this value. The Linux system administrator can increase the value, for example, by using this command:
# echo new_value > /proc/sys/fs/aio-max-nr
The current number of allocated requests of all operating system processes is visible in the /proc/sys/fs/aio-nr file.
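To make a larger limit survive a reboot, the same setting can be placed in /etc/sysctl.conf on distributions that read that file at boot (the value shown is illustrative):

```
# /etc/sysctl.conf (sketch): raise the system-wide limit on
# outstanding KAIO requests
fs.aio-max-nr = 1048576
```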
By default, IBM Informix Dynamic Server Version 10.0 allocates half of the maximum number of requests and divides them equally among the configured CPU virtual processors. You can use the environment variable KAIOON to control the number of requests allocated per CPU virtual processor; set KAIOON to the required value before starting the database server. The minimum value for KAIOON is 100. If Linux is about to run out of KAIO resources, for example when many CPU virtual processors are added dynamically, warnings are printed in the online.log file. If this happens, the Linux system administrator should add KAIO resources as described above.
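As a minimal sketch, the per-VP request allocation can be raised in the environment of the process that starts the server (200 is an illustrative value; 100 is the minimum the server accepts):

```shell
# Allocate 200 KAIO requests per CPU virtual processor.
# This must be exported before the database server is started.
export KAIOON=200
echo "KAIOON set to $KAIOON"
```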
If your operating system supports KAIO, the CPU virtual processors (VPs) make I/O requests directly to the file instead of to the operating-system buffers. In this case, you should configure only one AIO VP, plus two additional AIO VPs for every buffered file chunk.
If a system uses all of the physically available memory for applications (most prominently, for IDS), Linux has very little memory left for file system caching, and file system I/O can be quite slow. This is where KAIO provides its greatest, and usually most noticeable, benefit.
Direct I/O (DIO) is an alternative caching policy that reduces CPU utilization for reads and writes by eliminating the copy from the file cache to the user buffer. A read or write against a file opened with the O_DIRECT flag causes data to be transferred directly between the user buffer and the disk. Direct I/O is turned on when KAIO is enabled and Linux kernel version 2.6.x is detected.
When using the file system caching policy, which is the default policy for cooked files, I/O operations are performed in buffered mode. While this caching policy is extremely effective when the cache hit ratio is high, the caching policy has an overhead of making an extra copy of the buffer from the disk to the file cache (in the case of read) or from file cache to the disk (in the case of write). Since the buffer is already cached in the IDS buffer pool layer, this dual level of caching proves to be unnecessary in situations in which the file system cache hit ratio is low and many I/O operations are performed.
Processor affinity
Processor affinity refers to binding a process, or a set of processes, to a specific CPU or set of CPUs. The advantage of doing this is to override the system's built-in scheduler and force a process to run only on the specified CPUs. This can provide some performance gain in Symmetric Multiprocessor (SMP) and Non-Uniform Memory Access (NUMA) environments, because it is much more likely that the processor's cache will contain data for the process bound to that processor.
The NUMA architecture was designed to surpass the scalability limits of the SMP architecture. With SMP, all memory access is posted to the same shared memory bus. This works well for a relatively small number of CPUs, but a problem with the shared bus can occur if you have dozens, and even hundreds, of CPUs competing for access to the shared memory bus. NUMA alleviates these bottlenecks by limiting the number of CPUs on any one memory bus, and by connecting the various nodes through a high-speed interconnection.
When a process is scheduled onto different processors, there is little chance of cache hits, and you can experience performance degradation because the cache must be refilled after each migration.
You must be cautious when using this feature, since incorrect use can have a negative performance impact. Overriding what the kernel selects as best for the process can be tricky. Obtaining a significant performance improvement using processor affinity involves some experimentation because every workload is different and the kernel and I/O scheduler operate differently with every workload.
To bind CPU virtual processors to specific processors, set the aff option of the VPCLASS configuration parameter to the numbers of those CPUs in your $INFORMIXDIR/etc/$ONCONFIG file.
Note: Binding a CPU virtual processor to a processor does not prevent other processes from running on that processor. Application processes or other processes that you do not bind to a CPU are free to run on any available processor.
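For example, an ONCONFIG entry of this form binds four CPU virtual processors to processors 0 through 3 (the VP count and processor numbers are illustrative):

```
# $INFORMIXDIR/etc/$ONCONFIG (sketch)
VPCLASS cpu,num=4,aff=0-3
```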
Operating system requirements
CPU affinity is a 2.6 kernel feature that has also been back-ported to Red Hat Enterprise Linux 3.
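The same kernel facility is also exposed to administrators through the taskset utility from util-linux; this sketch pins an arbitrary command (a placeholder echo here) to CPU 0:

```shell
# taskset sets the kernel CPU-affinity mask before exec'ing
# the command, so the process runs only on CPU 0.
taskset -c 0 echo "running on CPU 0"
```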
Installing the server on Linux without using Java
You can directly invoke the RPM Package Manager (RPM) and use a script-based installation to install Dynamic Server on Linux without using Java™. This procedure is similar to the server-installation procedure used before IBM Informix Dynamic Server Version 10.0. You can also use the InstallShield Multi-Platform (ISMP) installer, a Java-based installer that utilizes RPM, to install IBM Informix Dynamic Server on Linux.
To install IBM Informix Dynamic Server on a Linux system by directly invoking RPM:
- As user informix, create the IBM Informix product directory ($INFORMIXDIR).
- Set the $INFORMIXDIR environment variable to this directory.
( bash or /bin/sh )
export INFORMIXDIR=/usr7/informix
OR (for csh)
setenv INFORMIXDIR /usr7/informix
- Insert the media CD and mount the drive that contains the IBM Informix CD. The mount point is called $MEDIADIR.
- Read the license, which is located in $MEDIADIR/doc/license, and set the $ACCEPTLICENSE environment variable to yes.
( bash or /bin/sh )
export ACCEPTLICENSE=yes
OR (for csh)
setenv ACCEPTLICENSE yes
- Copy install_rpm and all *.rpm files from the media CD to $INFORMIXDIR.
- As root, run install_rpm from the $INFORMIXDIR directory, as follows:
./install_rpm
For more information, see the IBM Informix Dynamic Server Installation Guide for UNIX and Linux.
Simple Network Management Protocol (SNMP) support
IBM Informix Dynamic Server Version 10.0 supports the Simple Network Management Protocol (SNMP) agent on Linux. The SNMP agent is based on the PEER Network's SubAgent Development Kit.
The distribution includes the following files installed under the $INFORMIXDIR directory:
- bin/onsnmp - This is the OnSNMP subagent, packaged as a separate process.
- bin/onsrvapd - This is the daemon which spawns a subagent for each server it discovers.
- snmp/peer/snmpdp - This is the PEER Network master agent.
- snmp/peer/CONFIG - This is the PEER Network configuration file.
- snmp/*V1.mib - MIB files define the instrumentation provided with OnSNMP. Files that contain "V1" in the filename conform to SNMP version 1 and share the widest acceptance; files that contain "V2" conform to SNMP version 2; files with the "my" extension are transitional files that might be used as a last resort.
The runsnmp.ksh script mentioned in the OnSNMP manual is not available on this platform and is not present in the $INFORMIXDIR directory.
OnSNMP is supported on a host that is running one and only one server. If multiple servers are running on the same host, the subagents for subsequent servers might fail to start.
Example of how to start IDS SNMP service on Linux
$INFORMIXDIR/snmp/peer/snmpdp $INFORMIXDIR/snmp/peer/CONFIG NOV
$INFORMIXDIR/bin/onsrvapd -l /tmp/onsrv -g 64
For more information, see the IBM Informix SNMP Subagent Guide.
Interprocess communication with stream pipes
Starting with the first fixpack for Version 10.0, IBM Informix Dynamic Server supports interprocess communication (IPC) using stream pipes on Linux. This form of IPC is implemented using UNIX® domain sockets.
To activate this protocol, the NETTYPE configuration parameter in the $INFORMIXDIR/etc/$ONCONFIG file and the nettype field of the corresponding entry in the $INFORMIXDIR/etc/sqlhosts file must contain "onipcstr".
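A matching pair of entries might look like this sketch (the server name ids_ipc and host name myhost are hypothetical; for stream-pipe connections the host and service fields are placeholders):

```
# $INFORMIXDIR/etc/$ONCONFIG (sketch)
NETTYPE onipcstr,1,50,CPU

# $INFORMIXDIR/etc/sqlhosts (sketch)
# dbservername   nettype    hostname   servicename
ids_ipc          onipcstr   myhost     ids_ipc
```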
Local 32-bit applications and tools can connect to the 64-bit server using the IPC stream pipe protocols.
For more information, see the IBM Informix Administrator's Reference.
Scalability of IDS on Linux kernel 2.6
The frequently heard statement "Linux does not scale" is not true; it stems from performance observations of the Linux 2.2 kernel. The Linux 2.4 kernel already showed excellent scaling behavior, and kernel 2.6 improves CPU and process scaling even further.
A scaling test was made with an Informix 64-bit database on SLES8 (z900t) and SLES9 (z990). The purpose was not to run a typical database benchmark with huge memory and storage server requirements, but rather to test CPU and process scalability. For a workload, the test used transactions simulating a complete environment in which a population of terminal operators performs transactions against the database. The benchmark is centered around the main activities (transactions) of an order-entry environment.
The SLES8 tests show scaling factors of 3.7 (4 CPUs), 6.9 (8 CPUs), 9.5 (12 CPUs), and 11.5 (16 CPUs). SLES9 on z990 shows similar scaling, and throughput was 50 percent higher than with SLES8 on z900t whenever more than one CPU was used in the test.
Figure 1. Scaling test results
Resources
- Download the Informix Dynamic Server 10.0 90-day trial version.
- Visit the Linux Tech Support forum to share your questions and views on this article with the author and other readers.
- Read "New features in IBM Informix Dynamic Server, Version 10.0", developerWorks March 2005, for information on other new features in IBM Informix Dynamic Server Version 10.0.
- Visit the developerWorks Linux zone to find more resources for Linux developers.
- Visit the Speed-start your Linux app site for the latest no-charge trial downloads for Linux (WebSphere Studio Application Developer, WebSphere Application Server, DB2 Universal Database, Tivoli Access Manager, and Tivoli Directory Server), as well as how-to articles and tech support.
- See Speed-start Web services to access Web services knowledge, tools, and skills. You'll also find the latest Java-based software development tools and middleware from IBM (trial editions), plus online tutorials and articles, and an online technical forum.
- Get involved in the developerWorks community by participating in developerWorks blogs.