Informix Experts

For a complete list of current SQL reserved words, see the IBM Informix Guide to SQL: Syntax.

Sanjit Chakraborty[Read More]


cheetahblog 3,512 Views


HTTPS protects the OAT webserver from eavesdropping, tampering, and message forgery. It protects the OAT webserver from hackers who try to secretly listen in on or interfere with the OAT network. Enabling HTTPS encrypts OAT clients’ messages before they are sent to the OAT webserver, which prevents hackers from listening on the line and stealing sensitive information. Sometimes hackers will set up a fake OAT webserver and deceive OAT clients about which host is the real OAT host. Enabling HTTPS also allows OAT clients to authenticate the OAT host, so that hackers cannot deceive OAT clients with a fake OAT webserver.

Please note that HTTPS only encrypts communication between the OAT webserver and OAT clients. It does not encrypt communication between the IDS server and the OAT webserver.

IBM® Informix® Dynamic Server, Version 9.4 and later, can encrypt data between the IDS server and the OAT webserver using an encryption communication support module. You can find more information at the following link:

Enabling HTTPS in OAT will involve the following steps:

  • 1. Replacing OAT’s Apache webserver with another mod_ssl-enabled Apache webserver.
  • 2. Creating an encryption key and a certificate for your new OAT webserver, so that OAT clients can authenticate your webserver based on your certificate.
  • 3. Configuring httpd.conf (the Apache configuration file) to enable HTTPS.

Replacing OAT’s Apache Webserver with another mod_ssl enabled Apache Webserver

In order to use OAT with HTTPS, you will need an Apache webserver, PHP with some extensions, and the Apache mod_ssl module. HTTPS functionality is provided by the Apache mod_ssl module. The OAT automated installer installs an Apache webserver and PHP with all the required PHP extensions; however, the Apache webserver bundled with the installer is NOT shipped with the mod_ssl module.

Therefore you have to install another Apache webserver compiled with the mod_ssl module to replace OAT’s Apache webserver, and then dynamically load OAT’s PHP Apache handler (on Windows, a file named php5apache2_2.dll; this is the Apache and PHP ‘glue’) into your new Apache webserver.

Dynamically loading the mod_ssl module into OAT’s original Apache webserver is not possible because OAT’s original Apache webserver is version 2.2.4. The mod_ssl dynamic module only supports the older Apache 1.3.x; it does not support OAT’s Apache webserver.

On Linux and Mac OS X,

In order to compile Apache with mod_ssl support, you need to have OpenSSL installed. OpenSSL might already be installed on your Linux distribution, in which case you only need to find the installation directory. Note that the OpenSSL binaries have to be 32-bit, because the binaries from the OAT automated installer are 32-bit. We suggest doing this only on a 32-bit machine so that there is no architectural mismatch among the binaries.

If OpenSSL has not been installed already, you can download the latest source code release from here:

Then you have to compile the OpenSSL source code. Note that you have to compile 32-bit OpenSSL binaries because the binaries from the OAT automated installer are 32-bit. This can be done by setting CFLAGS to -m32. Use the following commands:

cd /path/to/openssl/source/code
export CFLAGS=-m32
./config --prefix=/openssl/installation/directory/ --openssldir=/openssl/installation/directory/
make install

Stop OAT’s Apache webserver by running the /oat/installation/directory/StopApache script. Rename OAT’s Apache_2.2.4 directory to Apache_2.2.4_noSSL. This will serve as a backup copy of the Apache binaries. You will also need some configuration files from this backup Apache directory in later steps, so do make a copy.

Then you have to compile the latest Apache with mod_ssl support. Download the latest Apache source code and compile it with the following commands. Note that you have to compile 32-bit Apache binaries because the binaries from the OAT automated installer are 32-bit. This can be done by setting CFLAGS to -m32. The compilation prefix should match the old OAT Apache directory (/oat/installation/directory/Apache_2.2.4/).

cd /path/to/apache/source/code
export CFLAGS=-m32
./configure --prefix=/oat/installation/directory/Apache_2.2.4/ --enable-so --enable-ssl --with-ssl=/openssl/installation/directory/
make install

Right now Apache should be installed with mod_ssl support. Use the following commands to check if mod_ssl is enabled in Apache.

cd /oat/installation/directory/Apache_2.2.4/bin/
./httpd -M
(Here you will see a list of Apache modules. See if the SSL module is on the list)

Then you have to change the Apache configuration file to load OAT’s PHP Apache handler (the PHP and Apache ‘glue’). Edit the Apache configuration file (/oat/installation/directory/Apache_2.2.4/conf/httpd.conf) and add the following lines. Uncomment them if these lines exist but are commented out.

LoadModule php5_module /oat/installation/directory/PHP_5.2.4/
AddType application/x-httpd-php .php
PhpIniDir '/oat/installation/directory/PHP_5.2.4/lib'
SetEnv INFORMIXDIR '/oat/installation/directory/Connect_3.50/'

Search for the line “Listen 80” in the httpd.conf file. This indicates the port number that the OAT webserver should run on. You should use the same port number as OAT’s original Apache webserver that comes with OAT installer.

Search for the line “ServerName” in the httpd.conf file. This indicates the name and the port that the server uses to identify itself. You should use the same ServerName as the original non-SSL Apache webserver that comes with OAT installer.

Search for the line “DirectoryIndex index.html” in the httpd.conf file. This sets the files that Apache will serve if a directory is requested. Change this line to “DirectoryIndex index.html index.php”

Copy the file /oat/installation/directory/Apache_2.2.4_noSSL/bin/envvars to /oat/installation/directory/Apache_2.2.4/bin/envvars. Apache will read this file to setup the Apache environment variables for OAT to run.

Copy the entire directory /oat/installation/directory/Apache_2.2.4_noSSL/htdocs/openadmin/ to /oat/installation/directory/Apache_2.2.4/htdocs/openadmin/. All the OAT source code resides in this directory.

Run the following commands to make sure that the PHP Apache handler is properly loaded.

cd /oat/installation/directory/Apache_2.2.4/bin/
./httpd -M
(Here you will see a list of Apache modules. See if the php5 module is on the list)

Right now your new webserver should be properly set up for OAT. You may start your webserver by running /oat/installation/directory/StartApache and visit OAT using your web browser.

Note that this server has mod_ssl enabled but HTTPS is not switched on yet. A few more steps are needed to enable HTTPS in the following sections.

On Windows,

First we have to download and install OpenSSL from the following website:

Then we need to set up Apache with mod_ssl support. Download the latest Win32 binary including OpenSSL 0.9.8g (MSI Installer). This should be available here:

Stop the OAT Apache webserver. There should be an OpenAdmin shortcut in the Start menu from which you can stop the OAT Apache webserver. Make sure that the Apache Monitor is not running in your system tray.

Rename OAT’s Apache_2.2.4 directory to Apache_2.2.4_noSSL. This will serve as a backup copy of the Apache binaries. You will also need some configuration files from this Apache directory in later steps, so do make a copy.

Run the Apache MSI installer. Do a typical install and choose the installation directory to be /oat/installation/directory/Apache_2.2.4

Edit the Apache configuration file (/oat/installation/directory/Apache_2.2.4/conf/httpd.conf). Add or uncomment the following lines in the httpd.conf file:

LoadModule php5_module "c:\oat\installation\dir\PHP_5.2.4\php5apache2_2.dll"
LoadModule ssl_module modules/
AddType application/x-httpd-php .php
PhpIniDir 'c:\oat\installation\dir\PHP_5.2.4'

Search for the line “Listen 80” (or “Listen 8080”) in the httpd.conf file. This indicates the port number that the OAT webserver should run on. You should use the same port number as OAT’s original Apache webserver that comes with OAT installer.

Search for the line “ServerName” in the httpd.conf file. This indicates the name and the port that the server uses to identify itself. You should use the same ServerName as OAT’s original Apache webserver that comes with OAT installer.

Search for the line “DirectoryIndex index.html” in the httpd.conf file. This sets the files that Apache will serve if a directory is requested. Change this line to “DirectoryIndex index.html index.php”

Search for the line “setenv INFORMIXDIR” in the OAT’s original Apache’s configuration file (c:\oat\installation\dir\Apache_2.2.4_noSSL\conf\httpd.conf). This line sets the INFORMIXDIR variable in the Apache environment for OAT. This has NOT been set in your new Apache webserver yet. Copy this line to your new Apache webserver’s httpd.conf file. You can put this at the end of the httpd.conf file.

Copy the entire directory c:\oat\installation\dir\Apache_2.2.4_noSSL\htdocs\openadmin\ to c:\oat\installation\dir\Apache_2.2.4\htdocs\openadmin\. All the OAT source code resides in this directory.

Run the following commands in a command prompt to make sure that the PHP Apache handler and the mod_ssl modules are properly loaded.

cd c:\oat\installation\dir\Apache_2.2.4\bin\
httpd.exe -M
(Here you will see a list of Apache modules. See if the php5 module and the ssl_module are on the list)

Right now your new webserver should be properly set up for OAT. You may now click on ‘Start Apache Service for OpenAdmin’ in the OpenAdmin menu under the Start Menu. You should be able to access OAT using your web browser.

Note that this server has mod_ssl enabled but HTTPS is not switched on yet. A few more steps are needed to enable HTTPS in the following sections.

Creating an Encryption Key and a Certificate for your OAT Webserver

Keys are used in encryption and decryption. They usually come in pairs, the public key and private key. Public keys are used to encrypt messages and private keys are used to decrypt messages. The message encrypted by a public key can only be decrypted by its associated private key.

A HTTPS-enabled webserver has its own pair of public and private keys. The webserver makes its public key available to all clients, but nobody except the webserver knows its private key. Thus all clients can encrypt messages, but only the webserver can decrypt them. If a client wants to send encrypted messages to the webserver, it encrypts the message with the public key provided by the webserver and then sends it. The webserver uses its secret private key to decrypt the client’s message. Hackers who try to listen on the network and steal the client’s message cannot decrypt it, because they do not have the private key.

A certificate is a document authenticating that a party is who it claims to be. A HTTPS-enabled webserver has a certificate, signed by a trusted certificate authority, to authenticate itself as the real webserver that the clients are talking to. Before a client talks to the webserver, it is prompted to view and accept the webserver’s certificate. Thus the client can make sure that the webserver’s certificate is signed by a trusted certificate authority before proceeding with the communication. This prevents hackers from setting up fake webservers to deceive clients.

Once a webserver is HTTPS enabled, clients get to choose whether they want to make a normal connection or a secure connection to the webserver. For a normal connection, they type “http://webserver_url” in their web browser; for a secure connection, they type “https://webserver_url”. In a secure connection, the webserver sends the client its certificate and its public key. The client is prompted to view and accept the webserver’s certificate before the webpage is loaded, so that the client can be sure that the webserver is not a fake webserver created by hackers. Once the client accepts the certificate, the client’s web browser uses the public key that it received from the webserver to encrypt communication. Only the webserver has the associated private key, thus only the webserver can decrypt the client’s encrypted communication. Hackers cannot secretly listen in and decrypt the communication.

To generate private/public key pairs and the certificate, we use the openssl executable. You should be able to find it under the bin directory of your openssl installation.

To generate a private key, use the following command:

openssl genrsa -des3 -out privkey.pem 2048

The private key will be stored in the privkey.pem file. Store this file in a secure location because this is the webserver’s secret decryption key. This file will also be used to generate the associated public key: when we generate the certificate, openssl reads this file, derives the associated public key, and includes the public key in the certificate.

To obtain a certificate, we need to create a certificate signing request and send it to a trusted certificate authority (such as VeriSign). The authority will then issue you a certificate. Use the following command to generate a certificate request. For more information about the process of signing certificate requests, contact your certificate authority.

openssl req -new -key privkey.pem -out cert.csr

If you don’t want to deal with another certificate authority and you just want to create a certificate for yourself, you can create a self-signed certificate. Note that this is not the recommended way of creating a certificate.

openssl req -new -x509 -key privkey.pem -out cacert.pem -days 1095

Openssl will prompt you to enter your personal information. After that the certificate will be stored in the cacert.pem file. This file will be displayed to web clients to verify your identity. It also includes the public key, so web clients can start encrypting communication.

Openssl will also prompt you to enter a pass phrase. This is an extra layer of security. You will need to enter your pass phrase when you start your webserver.
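If you want to double-check what was generated before wiring it into Apache, you can inspect the key and the certificate with the same openssl executable. This is only an optional sanity check; the file names match the ones created above.

openssl rsa -in privkey.pem -check -noout
openssl x509 -in cacert.pem -noout -text
(The first command verifies the private key and prompts for your pass phrase; the second prints the certificate contents, including the subject, validity dates, and public key.)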

For more information about encryption keys, please consult the OpenSSL documentation:

For more information about certificates, please consult the OpenSSL documentation:

Configure httpd.conf (the Apache configuration file) to enable HTTPS

Edit the Apache configuration file. The file should be /oat/installation/directory/Apache_2.2.4/conf/httpd.conf

Search for the following line:

#Include conf/extra/httpd-ssl.conf

This line is commented out by default. Uncomment it so that httpd.conf will include the apache ssl configuration file.

Then edit the Apache SSL configuration file. The file should be /oat/installation/directory/Apache_2.2.4/conf/extra/httpd-ssl.conf

HTTPS requires one more port, the SSL port. By default, the SSL port number is 443. Make sure this port is available. If you wish to change this port, edit the Listen directive and the VirtualHost section of the httpd-ssl.conf file.

Search for the ‘SSLCertificateKeyFile’ directive and the ‘SSLCertificateFile’ directive in the httpd-ssl.conf file. These two directives indicate the location of your private key file and the certificate file. Make sure that they point to the privkey.pem and the cacert.pem created in the previous section.
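As a reference point, the relevant portion of httpd-ssl.conf typically ends up looking like the sketch below. The paths and server name are illustrative only; adjust them to wherever you stored the key and certificate files:

Listen 443
<VirtualHost _default_:443>
    DocumentRoot "/oat/installation/directory/Apache_2.2.4/htdocs"
    ServerName your.oat.host:443
    SSLEngine on
    SSLCertificateFile "/oat/installation/directory/Apache_2.2.4/conf/cacert.pem"
    SSLCertificateKeyFile "/oat/installation/directory/Apache_2.2.4/conf/privkey.pem"
</VirtualHost>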

Search for the ‘SSLCipherSuite’ directive. This directive indicates the ciphers for your HTTPS webserver. By default the HTTPS webserver accepts all encryption ciphers. If you wish to accept only the strongest ciphers, edit the directive as follows; otherwise you can keep the default configuration:

SSLProtocol all

For more information about HTTPS configurations, please refer to the following website:

Final Testing

Your OAT webserver should be secured with HTTPS right now. Restart your webserver and launch OAT with your web browser. Instead of using http://hostname:portnumber/openadmin, try using https://hostname:ssl_portnumber/openadmin. You will be prompted to view and accept OAT webserver’s certificate before the OAT login page is launched.

On Linux, you can restart your webserver by running the StopApache script and then the StartApache script in the OAT installation directory. You will be prompted to enter the pass phrase before you can start your webserver.

On Windows, you have to restart your webserver from the command prompt. Starting the webserver with ApacheMonitor.exe or with the Start menu shortcuts does not work because they do not support pass phrases. Use the following command to restart your webserver.

httpd -k restart

If you wish to use ApacheMonitor.exe or the Start menu shortcuts to control your webserver, you have to disable pass phrases. Note that this reduces the level of security.

Run the following command:

cd c:/openssl/installation/directory/bin/
openssl rsa -in privkey.pem -out privkey_nopassphrase.pem

Now an unencrypted copy of your private key will be stored in the privkey_nopassphrase.pem file. Edit the SSLCertificateKeyFile directive in your httpd-ssl.conf file to use the privkey_nopassphrase.pem file instead of the privkey.pem file. Then you will not be prompted to enter a pass phrase when starting your webserver. ApacheMonitor.exe and the Start menu shortcuts should work now.

Leo Chan

[Read More]

OAT Version 2.23 Available Now!

cheetahblog Tags:  openadmin oat 3,484 Views

The newest version of OpenAdmin Tool for IDS, Version 2.23, has just been released! Download the new version today to get the enhanced SQL Explorer and version 2.0 of the ER Plugin.

New feature highlights:

The SQL Explorer has been newly redesigned. New features include

  • Filters and search fields for viewing SQL tracing data.
  • Support for different SQL trace levels - global or user - so you can manage what kinds of SQL information are traced for IDS V11.50.xC1 servers.
  • Support for suspending and resuming history tracing, without releasing resources, on IDS V11.50.xC1 servers.

ER Plug-in Version 2.0: Version 2.0 of OAT’s ER plug-in includes a new Replicate Explorer and becomes the first step in supporting ER administration graphically through OAT.

  • The Replicate Explorer now lets you monitor your replicates and replicate sets.

  • The Node Details -> Configuration page now supports the editing and updating of ER configuration parameters

Download OAT Version 2.23 now at

For additional information on OpenAdmin Tool for IDS, including feature details, screenshots and demos, go to

Erika Von Bargen

[Read More]

Tune logical and physical log automatically

cheetahblog Tags:  extendable auto_tune_server_size dynamic_logs auto_llog 3,467 Views

Starting with version 12.10.xC3, Informix can automatically tune the logical and physical logs as needed.

You can set the AUTO_LLOG configuration parameter to enable Informix to dynamically add logical logs when a lack of logical logs causes frequent checkpoints, blocking checkpoints, or long checkpoints. In addition, with the AUTO_LLOG configuration parameter you can specify a dbspace in which to create new logical log files and the total size of all logical log files at which the server stops adding logs for performance.
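As a sketch, the corresponding onconfig entry might look like the following; the dbspace name and the maximum size (in KB) are illustrative, so check the 12.10 documentation for the exact value format:

AUTO_LLOG 1,llogdbs,2048000
(1 enables the feature, llogdbs is the dbspace for new logical logs, and 2048000 KB is the total log size at which the server stops adding logs.)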

You are probably wondering about the DYNAMIC_LOGS configuration parameter, which has existed since Informix version 10. The AUTO_LLOG and DYNAMIC_LOGS configuration parameters work in different situations and do not interact with each other. When the AUTO_LLOG configuration parameter is enabled, logical logs are added to improve checkpoint performance. When the DYNAMIC_LOGS configuration parameter is enabled, logical logs are added under more urgent conditions, such as when a long transaction threatens to block the server. For example, the maximum size that is specified in the AUTO_LLOG configuration parameter does not affect the amount of log space that can be added by the DYNAMIC_LOGS configuration parameter. Similarly, the value of the AUTO_LLOG configuration parameter does not affect the amount of log space that you can add manually.

Now, Informix also allows you to expand the size of the physical log as needed to improve performance by creating an extendable chunk for the physical log. An extendable chunk can be automatically extended when additional storage space is required.

Typically, the physical log is created in the root dbspace, unless you create an Informix instance during installation. However, for optimal performance it is better to create a separate dedicated dbspace for the physical log on a different disk from the root dbspace and move the physical log out of the root dbspace. If the instance is created during installation, the physical log is created in a separate dbspace called the plogspace, with a default size that depends on the value of the AUTO_TUNE_SERVER_SIZE configuration parameter.

By default, the chunk that you assign to the plogspace is extendable, therefore, the initial size of the chunk can be small. The database server automatically expands the chunk when the physical log requires more space.
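For example, assuming the 12.10 onspaces -c -P option, a dedicated plogspace with a small extendable chunk could be created like this (the path, offset, and initial size are illustrative):

onspaces -c -P plogspace -p /dev/informix/plog_chunk1 -o 0 -s 60000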

-Sanjit Chakraborty

Dynamically Changing Configuration parameters for Enterprise Replication

cheetahblog Tags:  dynamic er configuration | 1 Comment 3,454 Views
In IDS 11 you can dynamically change, add, or remove the Enterprise Replication configuration parameters. The changes take effect immediately, while replication is running. Following are the three commands to use, which are explained in detail along with examples.

cdr change config

Use the cdr change config command to replace the existing value of an Enterprise Replication configuration parameter with a new value while replication is active. All Enterprise Replication configuration parameters and environment variables can be changed with this command. The value persists while replication is active; the ONCONFIG file is not updated. If you want to make the change permanent, it is better to update the values in the ONCONFIG file instead. The configuration parameters and environment variables that this command can be used with are listed in Table 1.1.

For example, the CDR_SUPPRESS_ATSRISWARN configuration parameter is set to suppress the generation of error and warning messages 1, 2, and 10, so that it appears in the ONCONFIG file as: CDR_SUPPRESS_ATSRISWARN 1,2,10. The following command changes the suppressed error and warning messages to 2, 3, 4, 5, and 7.

Before changing the values you can confirm the existing values by doing:

$onstat -g cdr config | grep SUP
   CDR_SUPPRESS_ATSRISWARN configuration setting:1-2,10   

Now let's change the values to 2, 3, 4, 5 and 7:

$cdr change config "CDR_SUPPRESS_ATSRISWARN 2-5,7"
 WARNING:The value specifed updated in-memory only.

The warning clearly indicates that the changes are in-memory only, which means that once replication is stopped and restarted, the original values from ONCONFIG will take effect. Now let's confirm the new values:

$onstat -g cdr config | grep SUP
   CDR_SUPPRESS_ATSRISWARN configuration setting:2-5,7   

cdr add config

Use the cdr add config command to add one or more values to an Enterprise Replication configuration parameter while replication is active. The value persists while replication is active; the ONCONFIG file is not updated. This option is available only for configuration parameters and environment variables that allow multiple values. The allowed configuration parameters and environment variables are listed in Table 1.1.

Following the "change config" example, if you want to add another error and warning message number for suppression to the existing list of 2, 3, 4, 5 and 7, say number 10, you can do that by using the following command.

$cdr add config "CDR_SUPPRESS_ATSRISWARN 10"
 WARNING:The value specifed updated in-memory only.

We can confirm the new values from "onstat -g cdr config" as follows.

$onstat -g cdr config | grep SUP
   CDR_SUPPRESS_ATSRISWARN configuration setting:2-5,7,10   

cdr remove config

Use the cdr remove config command to remove the existing value of an Enterprise Replication configuration parameter while replication is active. The value persists while replication is active; the ONCONFIG file is not updated. The allowed configuration parameters and environment variables are listed in Table 1.1.

Again, following the same example from "add config", let's now try to remove suppression for error and warning messages 2, 3, 4 and 5 and keep 7 and 10 by using the "cdr remove config" command as follows.

$cdr remove config "CDR_SUPPRESS_ATSRISWARN 2-5"
 WARNING:The value specifed updated in-memory only.

Confirm your changes:

$onstat -g cdr config | grep SUP
   CDR_SUPPRESS_ATSRISWARN configuration setting: 7,10 

NOTE: The commands change configuration parameters only in memory; they do not update the ONCONFIG file. To update environment variables in the ONCONFIG file, use the CDR_ENV configuration parameter.

The following table shows which kinds of changes are valid for each Enterprise Replication configuration parameter.

Table 1.1
Configuration Parameter    cdr add config    cdr change config    cdr remove config

Vijay Lolabattu
[Read More]

New onstat option to display sysdbprocedure cache

cheetahblog Tags:  onstat sysdbopen cache sysdbclose procedures 3,413 Views
In IDS 11.10, you can configure session properties using sysdbopen() and sysdbclose() procedures; see Configuring Session Properties. These procedures are stored in the sysprocedures system catalog. A memory cache is maintained for these procedures to facilitate faster searching of procedure identifiers. A new onstat option is available to print statistical information about this memory cache. From this information you can determine how many users are accessing a database and which sysdb procedures are defined for public and for other users. A sample output is shown:
    $ onstat -g cac sproc

    IBM Informix Dynamic Server Version 11.10.FC1 -- On-Line -- Up 4 days 20:30:27 -- 38912 Kbytes

    Sysdbproc Cache: Number of lists : 31

    list#  refcnt  dbname     Public Procids   Username   User Procids   UsrList#
                              (open,close)                (open,close)
    -------------------------------------------------------------------------------
    7      0       sysmaster  (-1, 0)
    16     1       testdb     (345, 346)       suma       (347, 348)     14
                                               informix   (343, 344)     122
    18     0       sysutils   (-1, 0)
    21     0       sysuser    (-1, 0)

    list#           List number to which the database name hashes
    refcnt          Number of sessions accessing this database
    dbname          Database name
    Public Procids  Procedure ids of public.sysdbopen() and public.sysdbclose(). The value will be (-1,0) if there are no public or user-defined sysdb procedures for that database
    Username        User name of any users who have their own sysdbopen() and sysdbclose() procedures for that database
    User Procids    Procedure ids of the sysdb procedures for a particular user in a database
    UsrList#        The user hash list to which the user belongs

The number of lists is determined by the onconfig parameter DS_HASHSIZE. The maximum number of database entries for a list is determined by DS_POOLSIZE. This is a soft limit, as the limit is checked only when you close a database. When you close a database, if the number of database entries for that list is greater than DS_POOLSIZE, IDS deletes all excess database entries in that list that have a reference count of 0.
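Both are ordinary onconfig parameters; the values below are only illustrative (31 matches the number of lists in the sample output above):

DS_HASHSIZE 31
DS_POOLSIZE 127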

Suma Vinod[Read More]

Optimizer Directives in ANSI Join Queries

cheetahblog Tags:  directives optimizer joins ansi 3,393 Views
In IDS 11.10, all optimizer directives, including external directives, can be used in queries using ANSI joins. Using optimizer directives might enable you to force a favorable query execution path, thus improving performance.

Here is a description of the different optimizer directives and how they work with ANSI join queries.

Optimization Goal Directives (FIRST_ROWS, ALL_ROWS): Directives enabled for ANSI join queries

Access Method Directives (FULL, INDEX, AVOID_INDEX, AVOID_FULL, INDEX_SJ, AVOID_INDEX_SJ): Directives enabled for ANSI join queries

Explain-Mode Directives (EXPLAIN, AVOID_EXECUTE): Directives enabled for ANSI join queries

External Directives: Directives enabled for ANSI join queries

Join Method Directives (USE_HASH, USE_NL, AVOID_HASH, AVOID_NL, /BUILD, /PROBE): Directives enabled for ANSI Inner Joins; not enabled for ANSI Inner Joins in views; enabled for ANSI Outer Joins that can be transformed into ANSI Inner Joins; disabled for all other ANSI Outer Joins

Join Order Directives (ORDERED): Directives enabled for ANSI Inner Joins; enabled for ANSI Left Outer Joins if it doesn't conflict with the ordering requirement for Outer Joins; disabled for all other ANSI Outer Joins
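For illustration, here is a sketch of directives in an ANSI join query against the stores demo tables; the directives shown come from the enabled categories above, and the table and column names are just examples:

SELECT {+ ORDERED, AVOID_FULL(o) }
       c.customer_num, o.order_num
  FROM customer c INNER JOIN orders o
       ON c.customer_num = o.customer_num;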

Suma Vinod[Read More]

IBM OpenAdmin Tool for IDS -- Version 2.20 – Available Now!!

cheetahblog Tags:  idsadmin openadmin oat 3,390 Views

OpenAdmin Tool for IDS has been greatly enhanced in version 2.20 with a completely redesigned user interface, a new automated installer, and lots of new IDS admin functionality. OpenAdmin Tool for IDS (OAT) is a PHP-based Web browser administration tool for IDS 11 and IDS 11.5 that provides the ability to administer multiple database server instances from a single location. OAT makes administration easy by allowing you to drill down on resource usage and events, view query performance statistics, and much more. And since the tool is written in PHP and available as open source, you can customize it with your own business logic and installation requirements.

New feature highlights of OpenAdmin Tool for IDS version 2.20 include:

  • Automated Installer: For users who want an easy, hassle-free install, there is a new automated GUI installer that will install and configure Apache, PHP, I-Connect, and OAT. For users who want to use or install their own Apache/PHP configurations, OAT is also released as a stand-alone package.

  • All-New Look: OAT version 2.20 has a completely new and improved UI

  • Automated Update Statistics: Configure rules and define a schedule to have IDS automatically maintain your database server's statistics

  • Onconfig Recommendations: Get recommendations for your onconfig settings that are tuned to your specific database server instance; modify dynamic onconfig parameters directly through OAT

  • Enhanced Mach11 Support: Including a Connection Manager interface, an SDS Setup Wizard, and the ability to start and stop Mach11 servers remotely

  • Historical Performance Graphs: View and monitor historical trends of key performance measurements

  • Task Scheduler Wizard: Use OAT’s new wizard to define new tasks in the scheduler

  • Read-Only Access: Define groups in OAT that can monitor IDS database servers but not perform administrative tasks

  • And the list goes on…. Virtual Processor Administration, System Integrity Checks, a Privileges Manager, environment variable support on OAT connections, the ability to kill database sessions remotely through OAT, and more!

Download OAT version 2.20 now at, or at in the Members Area.

Erika Von Bargen[Read More]

Non-blocking Checkpoints - Part 1

cheetahblog Tags:  non-blocking interval checkpoint 3,315 Views
Prior to IDS Version 9, IDS supported a blocking checkpoint algorithm that causes the system to block all transactional updates while the bufferpool is flushed to disk. From IDS Version 9 onwards, an additional checkpoint algorithm, called Fuzzy Checkpoint, was introduced. Fuzzy Checkpoints limit the number of updates required to be flushed to disk during checkpoint processing.

With the new non-blocking checkpoint algorithm in Cheetah, IDS ensures that transactional updates are virtually never blocked while the entire bufferpool is flushed to disk. Transactional updates are blocked only for the very small duration of time required to flush the partition partitions (tablespace tablespaces) to the bufferpool and to grab the restart point. (The restart point is where fast recovery starts. We log the checkpoint and use that LSN as the restart point for the checkpoint.)

As part of this feature, the new "onstat -g ckp" command helps DBAs view and analyze checkpoint information and performance advisories. The "onstat -g ckp" command is explained in more detail in this post.

Scott Lashley's whitepaper on developerWorks explains the internals of non-blocking checkpoints in detail.

In Part 2, we shall learn how to tune checkpoints and modify the relevant onconfig parameters (RTO_SERVER_RESTART, AUTO_CKPTS,AUTO_LRU_TUNING, etc.)
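For reference, those are all regular onconfig parameters; a sketch of the settings, with purely illustrative values, looks like this:

RTO_SERVER_RESTART 60
AUTO_CKPTS 1
AUTO_LRU_TUNING 1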

Mirav Kapadia[Read More]

New Onstat Commands

cheetahblog Tags:  onstat 3,300 Views
Beginning with IDS version 11.10, Informix introduced several new onstat commands to enhance server maintenance and monitoring. Database administrators can take advantage of these new onstat commands to perform their regular activities.

Following is a list of the new onstat commands:

     Onstat Command        Description

     onstat -g his         Prints SQL statement tracing information gathered using the SQLTRACE configuration parameter. For more information on SQL tracing, see the following articles: Tracing SQL in Cheetah Part 1, Tracing SQL in Cheetah Part 2

     onstat -g ckp         Prints checkpoint information and checkpoint-related advisories. For more information on the checkpoint advisory, see the following article: Checkpoint Performance Advisory

     onstat -g pqs         Allows you to dynamically track a session's query plan. For more information see the following article: onstat -g pqs

     onstat -g smx         Prints Server Multiplexer group information for servers using SMX. Server Multiplexer Group (SMX) is a communications interface that supports encrypted multiplexed network connections between servers in high availability environments. SMX provides a reliable, secure, high-performance communication mechanism between database server instances.

     onstat -g rss         Prints Remote Standalone Secondary (RSS) server information. The output of this command differs slightly depending on whether the command is run on the primary server or on the RS secondary server.

     onstat -g sds         Prints Shared Disk Secondary (SDS) server information

     onstat -g ipl         Shows index page logging status information

     onstat -g cat         Prints information from the Enterprise Replication global catalog. The global catalog contains a summary of information about the defined servers, replicates, and replicate sets on each of the servers within the enterprise.

     onstat -g cdr config  Prints the settings of Enterprise Replication configuration parameters and environment variables that can be set with the CDR_ENV configuration parameter.

For a complete list of onstat commands, see the IBM Informix Administrator's Reference Guide.

Sanjit Chakraborty
[Read More]

OpenAdmin Tool for IDS XAMPP tutorial updated

cheetahblog Tags:  idsadmin oat 3,262 Views
In March an article on installing IDSAdmin in a XAMPP environment was published on this blog. This article has just been updated by Inge Halilovic with instructions and screen shots for the latest available version of OpenAdmin Tool downloadable from the Free Product Download Site. The new version of OAT has improved graphics, simpler installation, new features such as support for Mach 11 topology, and several bug fixes.

Guy Bowerman[Read More]

New Shared Memory Segment

cheetahblog 3,259 Views

A new shared memory segment has been introduced in Informix version 12.10. From 12.10.xC2 onwards the bufferpool has been moved out of the resident segment, and a new bufferpool segment has been added, which is shown as class 'B' in the 'onstat -g seg' output. The BUFFERPOOL configuration parameter determines the number of buffers allocated to this new segment when the database server is started.


IBM Informix Dynamic Server Version 12.10.UC2DE -- On-Line -- Up 00:20:35 -- 85899522520 Kbytes

Segment Summary:
id         key        addr     size       ovhd     class blkused  blkfree
32768      52564801   44000000 3424256    243124   R     836      0       
65537      52564802   44344000 67108864   394328   V     6776     9608    
98306      52564803   48344000 110305280  1        B     26930    0       
Total:     -          -        180838400  -        -     34542    9608    
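For reference, the size of this segment is driven by the BUFFERPOOL configuration parameter; a sketch of such an entry, with illustrative values, is shown below:

BUFFERPOOL size=2k,buffers=50000,lrus=8,lru_min_dirty=50.00,lru_max_dirty=60.00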

-Rashmi Chawak


Improved Statistics Maintenance

cheetahblog Tags:  explain sampling statistics 3,194 Views
  • Automatic statistics collection.

IDS now automatically collects index statistics, equivalent to the statistics gathered by UPDATE STATISTICS in LOW mode, when you create a B-tree index on a UDT column of an existing table or if you create a functional or VII index on a column of an existing table. Statistics that are collected automatically by this feature are available to the query optimizer, removing the necessity to manually run UPDATE STATISTICS. When B-tree indexes are created, column statistics are collected on the first index column, equivalent to what UPDATE STATISTICS generates in MEDIUM mode, with a resolution of 1% for tables of fewer than a million rows, and 0.5% for larger tables.

  • Query statistics in EXPLAIN output.

You can now view statistics about completed queries in the new Query Statistics section in SET EXPLAIN output. The query statistics section of the output gives detailed information on each scan, join, and sort, such as the estimated cost, the number of rows scanned, the estimated number of rows, and the number of rows produced at each level. The EXPLAIN_STAT configuration parameter enables or disables the inclusion of a Query Statistics section in the explain output file. You can generate the output file by using either the SET EXPLAIN statement of SQL or the onmode -Y sessionid command. Following is an example.
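As a minimal sketch of how the output is produced (the session id 27 is hypothetical), you can either turn explain on inside the session or target a running session from the command line:

SET EXPLAIN ON;        -- from within the session
onmode -Y 27 1         -- from the command line, for session id 27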


In explicit UPDATE STATISTICS operations in MEDIUM mode, a new SAMPLING SIZE option in the Resolution clause can specify the minimum number of rows to sample for column distributions.

If the Resolution clause omits the RESOLUTION keyword and specifies no confidence level and no percent value, then the number of rows that Dynamic Server samples will be the larger of the following two values:

    * The min value that you specify immediately after the SAMPLING SIZE keywords
    * The sampling size that is required for the default percent of rows in each bin (2.5%) and for the minimum confidence level (0.80)

If a sampling size is specified in a Resolution clause that includes explicit values for both the average percent of sampled rows per bin and for the confidence level, then the number of sampled rows will be the larger of these two values:

    * The min value that you specify immediately after the SAMPLING SIZE keywords
    * The sampling size that is required for the specified percent of rows and for the specified confidence level

If a sampling size is specified in a Resolution clause that includes an average percentage value but sets no confidence level, then the minimum confidence value of 0.80 is used to calculate the actual sampling size for Dynamic Server to use if the specified size is smaller.

For example, the following statement calculates statistics for two columns of the customer table, without updating index information. At least 200 rows will be sampled, but the actual size of the sample might be larger than 200 if more rows are required to provide the default 0.80 confidence level for a sample distribution that uses 50 equivalence categories, with an average percentage of 2% of the sampled values in each bin.
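A sketch of such a statement follows; the two column names are illustrative, while the SAMPLING SIZE, RESOLUTION, and DISTRIBUTIONS ONLY clauses correspond to the description above (resolution 2 yields 50 bins, and DISTRIBUTIONS ONLY skips the index information):

UPDATE STATISTICS MEDIUM FOR TABLE customer (fname, lname)
    SAMPLING SIZE 200 RESOLUTION 2
    DISTRIBUTIONS ONLY;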


Whether or not you include an explicit SAMPLING SIZE specification in the Resolution clause, Dynamic Server records in the system catalog the actual sampling size (as a percentage of the total number of rows in the table) at the time of MEDIUM mode UPDATE STATISTICS creation.

  • New FILE TO option to redirect SET EXPLAIN output to a file that you desire.

When you execute a SET EXPLAIN FILE TO statement, explain output is implicitly turned on. The default filename for the output is sqexplain.out until changed by a SET EXPLAIN FILE TO statement. Once changed, the filename remains set until the end of the session or until it is changed by another SET EXPLAIN FILE TO statement.

The filename can be any valid combination of optional path and filename. If no path component is specified, the file is placed in your current directory. The file is owned by the current user. The output file that you specify in the SET EXPLAIN statement can be a new file or an existing file. In Dynamic Server, if the FILE TO clause specifies an existing file, the new output is appended to that file. Following is an example where the explain output is sent to the file "explain.vijay" under the "/tmp" directory.

SET EXPLAIN FILE TO '/tmp/explain.vijay'

These features reduce the need for DBAs to perform UPDATE STATISTICS on indexes, enabling better query plans and ease of use.

Vijay Lolabattu
[Read More]

C-Style Comment Indicator with DB-Access

cheetahblog Tags:  dbaccess comment 3,182 Views

Earlier versions of Informix Dynamic Server (IDS) had a restriction on C-style comment indicators ( /* ... */ ) in SQL statements within DB-Access operations.

Without the C-style comment indicator, DB-Access was not compliant with the SQL-99 standard, which caused problems when migrating from other database servers.

Starting with IDS Version 11.10, the restrictions on C-style comments are no longer in effect. Any of the following styles of SQL comment indicators can be used in all DB-Access statements.

  • Double hyphen ( -- ) complies with the ANSI/ISO standard for SQL
  • Braces ( { } ) are an Informix extension to the ANSI/ISO standard
  • C-style slash-and-asterisk ( /* . . . */ ) comply with the SQL-99 standard

The comment indicator can be chosen depending on ANSI/ISO compliance requirements.

A comment indicator can be used on the same line as the SQL statement. A comment can span a single line or multiple lines, but cannot be nested.

DB-Access filters out any comments before sending the SQL statement to the database server.

Examples:

-- Double hyphen comment
SELECT * FROM orders;     -- Selects all columns and rows
                          -- from the orders table

{ Comment using braces } SELECT * FROM orders; { Selects all columns and rows from the orders table }

/* C-style comment */ SELECT * FROM orders; /* Selects all columns and rows from the orders table */

Sanjit Chakraborty
[Read More]

Report from MacWorld

cheetahblog Tags:  macos 3,162 Views
MacWorld was an amazing event this past week in San Francisco at the Moscone Center. Although this is generally a consumer oriented event there were many companies with an enterprise focus showing their solutions. IBM was in the smallest size pedestal in the Enterprise Solutions Area -- People would constantly walk by and say - "IBM ---- the biggest company with the smallest pedestal" - We smiled and told them we make the biggest impact!!!!

As you know we just announced our beta for Mac OS X this week and were at the show to start to get the word out and meet customers and business partners who are interested in an "Industrial Strength" database for their applications. There was a ton of traffic in the ped -- people were at first surprised and then pleased to see us there, as they have been looking for a dependable database that is secure and scalable to build their applications on. Many developers who have a long history using Informix were thrilled to see the announcement and anxious to try the beta.

  • MAC users are very happy to see an enterprise class database on the LATEST MAC OS.
  • Looking for something better than what is currently available on MAC which just hasn't been reliable and has not scaled to meet their needs.
  • Some really like the idea of OAT being open source, which allows users to customize it, especially the free part.
  • One mid sized system integrator commented " We are glad to see IBM supporting the MAC platform as we are building applications to take into Government and Financial Markets - We need a data base that our customers can depend on. IDS is exactly what we are looking for."

Terri Gerber[Read More]

Sunsetting Older Informix Information Centers

cheetahblog Tags:  centers information center info 3,151 Views

The following Informix information centers will be sunset in the near future:


  • Informix 11.10 information center
  • Informix 10 information center
  • Informix products information center

After these information centers are sunset, links to them will be redirected automatically to the resources listed below. You can download final versions of the product documentation, which are being provided as-is, from these resources:

Information centers for current Informix releases will be replaced by the new IBM Knowledge Center, which is currently in Beta. Links from the current information centers will be redirected automatically to the new IBM Knowledge Center.

Consider updating your bookmarks now, and explore the IBM Knowledge Center, especially the Informix collections:

To learn more about the IBM Knowledge Center strategy, follow the IBM Technical Content blog at

-Randy House

Installing multiple copies of IDS on Windows - 11.50.xC2

cheetahblog Tags:  install installation 3,118 Views

Introduction: Multiple installations of IBM Informix Dynamic Server are now enabled in IDS 11.50.xC2. With this, users are able to install multiple copies of IDS on a single machine, with the ability to create instance(s) per installation. For example, a user can have 11.50.xC1 and 11.50.xC2 co-existing on the same machine with instances related to each installation running on that machine.

With this new capability, a user who has an installation of IDS 11.50 can invoke setup.exe and will be presented with the existing installations of IDS on the machine. The user can then choose between installing a new copy of IDS and maintaining one of the existing copies. If there is an existing installation of IDS 11.50 on a machine, a user who launches setup.exe in graphical mode will be presented with a panel similar to the one below:

Multiple Installation Screenshot

However, with an existing installation of IDS 11.50, launching setup.exe in silent mode requires a maintenance response file. Without a maintenance response file, or with an installation response file, the installer will abort. To install a subsequent copy of IDS 11.50 in silent mode, the user should pass the ‘-multiple’ command line option to setup.exe.

In addition to the "-multiple" command line option, there are two command line options for maintenance, namely "-path" and "-instnum". These two new command line options are used to launch maintenance mode for a specified installation. Since they perform the same function, they should be used interchangeably and not in conjunction with one another.

Usages of the newly implemented options:

  • -multiple usage

  • setup.exe -multiple

    This option can be used in conjunction with other installation command line options. E.g.

    setup.exe -multiple -s -f1"C:\TEMP\server.ini" -f2"C:\TEMP\server.log"

  • -path usage

  • setup.exe -path [installation path]

    The user invokes setup.exe with the ‘-path’ option passing the installation path for which maintenance is required. This option can be used in conjunction with other maintenance command line options. E.g.

    setup.exe -path [installation path] -s -f1"C:\uninst.ini"

  • -instnum usage

  • setup -instnum [installation number]

    The user invokes setup.exe with the ‘-instnum’ option passing installation number for which maintenance is required. Note that each installation is tagged with an installation number in shortcut menu (except for the first installation which doesn’t have a tag). This option can be used in conjunction with other maintenance command line options. E.g.

    setup.exe -instnum [installation number] -s -f1"C:\uninst.ini"

    For more information on installing IDS, visit

    Tosin Ajayi

    [Read More]

    IDS "Cheetah 2" Beta on Mac OS X (Leopard)

    cheetahblog Tags:  mac 3,099 Views
    Very aptly termed as... "The Big Cats Unite!".

    This was officially announced today at MacWorld'08 - IDS "Cheetah 2" first beta version on MacOS 10.5 (Leopard) is now available for download here.

    Here's the press announcement.

    Mirav Kapadia[Read More]

    ON-Bar whole system parallelize backup/restore

    cheetahblog 1 Comment 3,039 Views

Prior to Cheetah, a whole-system backup (onbar -b -w) was a serial backup of all storage spaces and logical logs based on a single checkpoint. That time is stored with the backup information. The advantage of using a whole-system backup was that you can restore the storage spaces with or without the logical logs. Because the data in all storage spaces is consistent in a whole-system backup, you do not need to restore the logical logs to make the data consistent. Level 0, 1, or 2 backups are supported.

One of the new features of Cheetah is the ability to back up and restore dbspaces in parallel with the whole-system functionality. With a parallel backup, multiple processes run simultaneously, each process backing up a different dbspace. In most cases, parallel backups complete faster than serial backups, which use only one process.

The ONCONFIG parameter "BAR_MAX_BACKUP" is used for whole-system backups/restores as well. The BAR_MAX_BACKUP parameter specifies the maximum number of parallel processes that are allowed for each onbar command. Both UNIX and Windows support parallel backups. Although the database server default value for BAR_MAX_BACKUP is 4, the onconfig.std value is 0.

To specify parallel backups and restores, including parallel whole-system backups and restores, set BAR_MAX_BACKUP to a value higher than 1. For example, if you set BAR_MAX_BACKUP to 3 and execute an ON-Bar command, ON-Bar will spawn at most 3 processes concurrently.
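A sketch of the commands (the value 3 and the level are illustrative, and this assumes BAR_MAX_BACKUP can be tuned with onmode -wf on your version; otherwise edit the ONCONFIG file directly):

onmode -wf BAR_MAX_BACKUP=3        (set the limit to 3 parallel processes)
onbar -b -w -L 0                   (whole-system level-0 backup)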

If BAR_MAX_BACKUP is set to 0, the system creates as many ON-Bar processes as needed. The number of ON-Bar processes is limited only by the number of storage spaces or the amount of memory available to the database server, whichever is less.

To perform a serial backup or restore, including a serial whole-system backup or restore, set BAR_MAX_BACKUP to 1.

Also in Cheetah, storage manager efficiency is improved by improving dbspace ordering. The "biggest" dbspaces are backed up first and the "smallest" last. The dbspace with the most pages used is the "biggest" and will be backed up first. This ensures better parallelism, no matter how BAR_MAX_BACKUP is set or how many pages are to be backed up in different dbspaces. The new order of backup/restore is also effective for non-whole-system backups/restores. A restore is done in the same order as the backup to ease the storage manager's access to stored objects.

To implement the new ordering of dbspaces for backup and restore, changes are made to the bar_instance table in the sysutils database:
A new column "ins_backup_order" (integer default 0 not null) is added to the bar_instance table.
Changes are also made to the ixbar file:
A new field containing the backup sequence integer (the last field in the line) is added to the ixbar file.
The value of the new field is always "0" (zero) for log backup objects.

Even if BAR_MAX_BACKUP is set to 1 for a serial backup, the dbspaces will be backed up using the new dbspace order.

Backup Operation
Only one checkpoint for all dbspaces - taken just before the backup of the rootdbs.
The rootdbs is backed up first, without parallelism (no change here).
Before-image processor threads ("arcbackup2") are started at this time, one for each dbspace (more threads running in parallel).
As each dbspace backup completes, the respective "arcbackup2" thread exits (fewer "arcbackup2" threads as the backup progresses).
New order of backup: the order is based on the used-pages count at the start of the backup. The dbspace with the most pages used is the "biggest" and is backed up first.

Restore Operation
Not much has changed, except that it is now parallel.
The rootdbs is restored first, without parallelism (no change here).
Dbspaces are restored in the same order as they were backed up. The value of "ins_backup_order" is used to determine the correct order. If BAR_MAX_BACKUP is changed between backup and restore, the objects will still be restored in the same order, however the timing relative to each other may differ significantly. This could have negative effects on SM performance.

    Rashmi Chawak[Read More]

    Primary can not send log that requested by RSS

    cheetahblog 2,947 Views

The other day I was setting up a high availability cluster environment and ran into an interesting problem. I followed all the necessary instructions for setting up an RSS server. However, the RSS server got stuck in the recovery process, and the message log on the primary server reported an error that it could not send a log. For example, the problem appeared after executing the following command on the RSS server to set the data replication type:

          onmode -d RSS <primary server name>


The RSS server was stuck in recovery mode, and the message log on the primary server showed the following messages:

            RSS Server <RSS server name> - state is now connected             
            Can not send log <log number>     


The log number mentioned in the error message was not close to the current log on the primary or RSS server. For example, the current log on the primary was 7438 and on the RSS 7436, but the message log stated ‘Can not send log 825241904’. So where was the server getting an out-of-sequence log number from?


Initially I thought it was some kind of corruption. However, after some investigation I figured out that I was using delayed application (DELAY_APPLY) on the RSS server and the directory specified with the LOG_STAGING_DIR configuration parameter was holding some unwanted file(s), for example the file 'ifmxUniqueLog_825241904' in LOG_STAGING_DIR. So, during recovery the RSS server requested log number 825241904 from the primary, but that log did not exist on the primary server.


Once I removed all files from the LOG_STAGING_DIR directory on the RSS server, I was able to successfully set up the high availability cluster environment. Conclusion: the next time you try to set up an RSS server with DELAY_APPLY, make sure nothing is in the LOG_STAGING_DIR on the RSS server.
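A sketch of that check, assuming LOG_STAGING_DIR is exported in your shell on the RSS server:

ls $LOG_STAGING_DIR                        (look for leftover ifmxUniqueLog_* files)
rm $LOG_STAGING_DIR/ifmxUniqueLog_*        (remove them before starting recovery)
onmode -d RSS <primary server name>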


    -Sanjit Chakraborty

    How to change multiple configuration parameters at once?

    cheetahblog 2,923 Views

At present most of the Informix configuration parameters can be modified dynamically. Typically, users run the 'onmode -wf' or 'onmode -wm' command to dynamically change a specific configuration parameter. But there may be a situation when you need to modify multiple parameters at once. For example, you have ten Informix instances and want to tune some of the SQL statement cache configuration parameters on all of them.


Now, you can import a configuration file using the onmode command to change multiple configuration parameters at once. Importing a configuration file is often faster and more convenient than changing one configuration parameter at a time. You can use the 'onmode -wi' command to import a file that contains new values for multiple configuration parameters. If the parameters are dynamically tunable, the database server applies the new values. The import operation ignores configuration parameters in the file that are not dynamically tunable or whose new values are the same as the existing values.


However, you need to note here that an import operation changes the values of configuration parameters only in memory. It does not modify the values in the ONCONFIG file on disk.


Let's take the same example as before and modify some of the SQL statement cache configuration parameters using an import operation. Currently, you have the following settings:

    STMT_CACHE         0
    STMT_CACHE_SIZE    512

You create a configuration file under the '/tmp' directory with the following values to modify the configuration parameters:

    STMT_CACHE         1
    STMT_CACHE_SIZE    512


Next, you run the following command to import the configuration values from the file in the /tmp directory:

    $ onmode -wi /tmp/
       3 parameters successfully modified. See online log for more info.

The above command generates appropriate messages on the screen and in the message log. Following is an excerpt from the message log:

    14:50:42  Importing configuration from '/tmp/':
    14:50:42  Value of STMT_CACHE has been changed to 1.
    14:50:42  Value of STMT_CACHE_NOLIMIT has been changed to 1.
    14:50:42  Value of STMT_CACHE_HITS has been changed to 1.
    14:50:42  Config Import Complete. 3 tunable parameters processed.
              3 modified, 0 unchanged, 0 could not be tuned.


    Notice that only three out of the five parameters in the file have been changed, because the others either are not dynamically tunable or already have the same value.


    Just as the 'onmode -wi' command imports a configuration file, you can also export the configuration parameters currently in memory to a file using the 'onmode -we' command.
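    For example, you could export the tuned configuration of one instance and use the resulting file as the import source on your other instances. A minimal sketch, using a hypothetical file name of /tmp/stmt_cache.cfg:

        $ onmode -we /tmp/stmt_cache.cfg     # export the current in-memory configuration to a file
        $ onmode -wi /tmp/stmt_cache.cfg     # import that file on another instance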


    -Sanjit Chakraborty


    Time types implemented as BIGINT User-Defined Types

    cheetahblog 2,920 Views

    BIGINT Time


    I wasn't happy with the overall efficiency of the built-in time types; so, I created a couple of user-defined types which represent time using distinct types of BIGINT. In these UDTs, a TIMEPOINT is the number of microseconds since the UNIX epoch, January 1, 1970 00:00:00 GMT, and a MICROSECOND is simply a duration of time. The internal implementation of a type affects three factors of database operations: storage space, performance, and application development. This is an implementation of time types that has great potential to improve upon Informix's internal implementation in all three aspects. However, the potential does not always live up to theory in practice. Let's look at when it does, when it does not, and why.
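    The type definitions themselves are small. A minimal sketch of how such types could be declared, following the article's description of them as distinct types of BIGINT (the full definitions, casts, and support routines are in the attached files):

        -- a point in time: microseconds since the UNIX epoch (GMT)
        CREATE DISTINCT TYPE timepoint AS BIGINT;
        -- a duration of time, also in microseconds
        CREATE DISTINCT TYPE microsecond AS BIGINT;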


    Storage Space

    The storage space requirements for DATETIME and INTERVAL types can be calculated using this formula, in bytes:


    (total number of digits for all fields)/2 + 1

    For example, a DATETIME YEAR TO FRACTION(5) requires (4+2+2+2+2+2+5)/2 + 1 = 11 bytes; you have to round up. A 64-bit encoding always requires 8 bytes, never more and never less. This matters more or less according to your table definition. Examples:


    The table cust_calls2 is derived from the stores7 database table cust_calls.


    create table cust_calls2
    (
      customer_num integer,
      call_dtime datetime year to minute,
      call_interval interval minute(4) to second,
      user_id char(18) default user,
      call_code char(1),
      call_descr varchar(240),
      res_dtime datetime year to minute,
      res_descr varchar(240)
    );



    The table cust_calls3 replaces the server native types with their UDT counterparts.


    create table cust_calls3
    (
      customer_num integer,
      call_tp timepoint,
      call_mu microsecond,
      user_id char(18) default user,
      call_code char(1),
      call_descr varchar(240),
      res_dtime datetime year to minute,
      res_descr varchar(240)
    );



    So, if you load these tables with a lot of rows, how much space do you save?


    Let's run the following:





    select tabname, rowsize,
           nrows::bigint as nRows,
           pagesize,
           npused::bigint as nPages,
           (npused * pagesize / (1024*1024))::decimal(5) as megabytes
    from systables where tabname like 'cust_calls%';




    tabname cust_calls2

    rowsize 523

    nrows 3670016

    pagesize 2048

    npages 1223339

    megabytes 2389.3


    tabname cust_calls3

    rowsize 528

    nrows 3670016

    pagesize 2048

    npages 1223339

    megabytes 2389.3


    Under these conditions, there is no space saving at all. Why is this?


    The page size is 2048 bytes and the row sizes are 523 and 528 bytes; since 2048/523 is roughly 3.9, you can only fit 3 rows per page regardless of which table definition you use. Also, the 64-bit implementation has a slightly larger row size here because these DATETIME and INTERVAL definitions do not use the full range of their type capabilities. You will not see a savings in space unless you can increase the rows-per-page ratio.


    So, what happens when you have smaller rows and higher precision DATETIME/INTERVAL definitions? Let's take an extreme case. In this test, the table definitions only have time types in them.


    create table cust_calls2a
    (
      call_dtime datetime year to fraction(5),
      call_interval interval day(9) to fraction(5)
    );



    create table cust_calls3a
    (
      call_tp timepoint,
      call_mu microsecond
    );



    This time the table statistics are:


    tabname cust_calls2a

    rowsize 23

    nrows 3670016

    pagesize 2048

    npages 49595

    megabytes 96.865




    tabname cust_calls3a

    rowsize 16

    nrows 3670016

    pagesize 2048

    npages 36337

    megabytes 70.971


    In this condition, we are looking at a space savings over 26%. Your database probably has a mix of conditions; so, your overall space savings probably lies somewhere between 0 and 26 percent.



    Performance

    Using the same tables as above, let's exercise the server with some queries. In each case the maximum value for the time column is selected.


    select max(call_tp) -- or call_dtime

    from cust_calls3;

    Since there are no indices, this forces the server to perform N-1 comparisons, where N is the number of rows. Just to be clear, these tests were run on an older laptop and the server is in no way optimized. The relative performance across conditions should be reasonably robust, but your mileage may vary. In particular, if you are using a solid-state drive, the I/O time will be much lower and the computational time will be proportionately larger; that could dramatically change the relative performance percentages reported here.

    The time to find the maximum value for call_dtime was approximately 9 percent faster than doing the same for call_tp. In practice, the operations performed by the server are faster for this structure,


    typedef struct dtime {
       short dt_qual;
       dec_t dt_dec;
    } dtime_t;

    than they are for an integer base type. How could this be? Informix math libraries convert everything to a decimal prior to doing the math; so, the TIMEPOINT (BIGINT) types might be being converted to decimal prior to doing the comparison. The conversion time might be adding to the overall processing time. I played with some queries, and found that there was enough difference between

    select count(i::decimal(32))

    from x1m


    select count(i)

    from x1m

    to make this hypothesis plausible. It is not confirmed, but it is plausible. In this test, i is a column in a view over sysmaster:sysdual with one million rows.

    It is time to take a look at the abridged versions of the tables (cust_calls2a and cust_calls3a). Here we have a complete reversal; the query on max(call_tp) was, on average, 24 percent faster than the query on max(call_dtime). I suspect this is because the information density on disk is greater, 26 percent denser, for my UDTs, so the I/O time is reduced, and that more than makes up for the increased CPU processing overhead.

    The summary is that for tables with a low proportion of time values, using TIMEPOINT and MICROSECOND does not gain you much in disk space savings, and can actually cost you a little in CPU cycles. However, for tables with a high proportion of high-precision time values, using these UDTs can save you in the neighborhood of 26 percent on disk space and around 24 percent in overall query execution time.


    Application Development

    I wrote up the concepts for why types like these would be nice to have for application development before implementing them. That material is covered in the write-up, “Timestamps with Time Zone Independence Implemented as 64-bit Integers”. Since these are not built-in types, you cannot use API methods to directly bind them to application time value types. For example, you cannot call ResultSet.getTimestamp() on a TIMEPOINT column. However, using them is pretty simple. You just pass the values through the API as BIGINT (64-bit) values, converting to or from the application language native types. Below are some snippets of Java code that do this.

    Inserting TIMEPOINT and MICROSECOND values using Java types

    // insert Instant based on GMT and Duration

    PreparedStatement pst = conn.prepareStatement("insert into tab1(time, duration) values (?, ?)");

    Instant timepoint =; // current date and time with millisecond precision

    long msec_since_epoch = timepoint.toEpochMilli(); // milliseconds after epoch GMT

    pst.setLong(1, msec_since_epoch * 1000);

    // This bypasses the JDBC driver converting Timestamp to

    // the timezone of the driver's JRE; there are 1000 micros

    // in a milli.


    Duration dur = Duration.ofNanos(123456000); // 0.123456 seconds

    long musec = dur.toNanos() / 1000;

    pst.setLong(2, musec);


    // What will happen here is the driver will pass the long values to the server,

    // and the server will leverage the implicit casts to convert the long values

    // into a TIMEPOINT and a MICROSECOND. Long(BIGINT) to the UDT types should be

    // a no-op cast.



    Inserting TIMEPOINT and MICROSECOND using integer literals

    PreparedStatement pst = conn.prepareStatement("insert into tab1(time, duration) values (" +

    // microseconds since the epoch: whole seconds * 1,000,000 plus the sub-second nanos / 1,000

    String.valueOf(timepoint.getEpochSecond()*1000000 + timepoint.getNano()/1000) +

    ", " + "12345654321)"); // 12345.654321 seconds


    // The server will convert the literals in string format to BIGINT, and then no-op cast the

    // BIGINTs to the UDT types.



    Inserting TIMEPOINT and MICROSECOND using DATETIME and INTERVAL literals

    PreparedStatement pst = conn.prepareStatement("insert into tab1(time, duration) values (" +

    "datetime(1970-01-02 00:00:00.54321) year to fraction(5), " +

    "INTERVAL (654321.54321) SECOND(6) TO FRACTION(5))");


    // Here the literal values will be passed to the server embedded in the SQL statement,

    // the server will convert the literal strings to DATETIME and INTERVAL types, and then

    // it will apply the casts from these types to the UDT types using the UDR conversion

    // routines defined for the casts.




    ResultSet r = stmt.executeQuery("select time, duration from tab1");

    long time, muSec;

    Duration dur;

    System.out.println("Selecting records: ");



    while ( {

        time = r.getLong(1);

    muSec = r.getLong(2);


    // Create an Instant using the number of whole seconds and the remaining nanos.

    timepoint = Instant.ofEpochSecond(time/1000000, (time % 1000000) * 1000);

    dur = Duration.ofNanos(muSec * 1000);


    System.out.println("Unformatted time: " + timepoint.toString() +

    "\tDuration: " + String.valueOf(dur.getSeconds()*1000000000 + dur.getNano()) +

    " nanoseconds");


    // What time zone do we want? Let's pick New Zealand.

    ZoneId zone = ZoneId.of("Pacific/Auckland");

    ZonedDateTime localZoneTime = ZonedDateTime.ofInstant(timepoint, zone);

    DateTimeFormatter format = DateTimeFormatter.ofPattern("MMM d yyyy hh:mm a");


        System.out.printf("Local time: %s %s\n", localZoneTime.format(format), zone);
    }



    Unformatted time: 2014-10-02T23:02:27.780Z Duration: 123456000 nanoseconds

    Local time: Oct 3 2014 12:02 PM Pacific/Auckland

    Unformatted time: 2014-10-02T23:02:27.780Z Duration: 12345654321000 nanoseconds

    Local time: Oct 3 2014 12:02 PM Pacific/Auckland

    Unformatted time: 1970-01-02T00:00:00.543210Z Duration: 654321543210000 nanoseconds

    Local time: Jan 2 1970 12:00 PM Pacific/Auckland



    In terms of storage space, if the time values are a small portion of your table rows, you will not get much, if any, space savings, but if time values are a large portion of your table rows, you can see up to, in round numbers, 25 percent savings in space requirements. The server-side operations on these types are not more efficient than the native types because the server converts integer types to decimal prior to performing any math operations, and the DATETIME and INTERVAL values are already in DECIMAL format. However, you can get around 25 percent better overall query performance because more information can be obtained per disk I/O operation. Converting between database types and application types is more efficient in terms of CPU cycles because the conversion only involves integer operations. More significantly, the database values are time zone independent and you do not have to code your applications for awareness of the time zone in which the application was running when the values were stored.


    The attached files include the SQL routines to define the MICROSECOND and TIMEPOINT types, routines to convert them to and from the native time types, and simple math operations for addition, subtraction, multiplication, and division. Included is a write-up containing the theoretical reasoning for implementing these types. There are also a variety of SQL test scripts and a Java program demonstrating read and write operations on these types in an application.


    The download code is more of a proof of concept than a fully-fledged, for-production-use implementation. The conversion routines that exist could be better optimized, and operations beyond simple math could be implemented. Comments and recommendations for improvement are welcome.


    -Chris Golledge


    New Beta Drop available

    cheetahblog Tags:  cheetah 2,885 Views
    The latest Beta drop of IDS Cheetah is now available, with some important new features integrated into the code line.

    New features available in Beta drop 5 include:

    • AIX 64-bit platform build available for download
    • Web Feature Service for Spatial Data - Delivers spatial information as a web service for your Service Oriented Architecture
    • Label Based Access Control (LBAC) - Control data access at a row and column level using security labels
    • Continuous Availability with multiple HDR remote secondary servers

    The following Infocenter page has a complete list of new features for each Beta drop:

    Guy Bowerman[Read More]

    RENAME support in Enterprise Replication

    cheetahblog Tags:  er rename replication 2,852 Views
    From IDS 10.0x onwards, Enterprise Replication (ER) supports alter operations on a replicated table while replication is active; however, RENAME support was not one of them, and that changes in Cheetah. IDS 11.x will start supporting RENAME on ER columns, tables, and databases. This feature simplifies DBA tasks and increases data availability. Without this feature, the DBA would have to plan a time at which ER could be removed from the object so that the rename could be performed. This would require advance planning and scheduling during non-peak hours for performing the rename.

    When a RENAME operation is performed, affected replicate definitions are updated to reflect the impact of the RENAME, and a control message is sent to the other servers informing them of the rename. The RENAME operation is allowed only on a mastered replicate. It does not propagate the RENAME command itself; there are plans to implement that in the future. The user simply issues the rename DDL statement on each of the servers that are affected by the RENAME. If the rename is of a column or a table, then the replicate will be cycled (cdr stop/start replicate) in much the same way that occurs with a table ALTER. If a database rename occurs, however, ER will be cycled (cdr stop/start). This cycling will only occur if the rename affects something on the local node. In all cases, the rename causes a control message to be sent to the other servers within the replication domain so that the syscdr database is correctly updated.
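    For example (an illustrative sketch; the table and column names here are made up), the DBA simply runs the same rename DDL on each server that participates in the replicate, and ER updates the replicate definitions and cycles the replicate itself:

        -- run the same statement on each affected replication server
        RENAME TABLE customer TO customer_archive;
        -- or, for a column rename
        RENAME COLUMN customer_archive.fname TO first_name;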

    Vijay Lolabattu[Read More]

    Inplace Upgrade of IDS on Windows

    cheetahblog 2,712 Views
    Depending on your existing Database Server setup, you might be able to upgrade directly to the
    current version by basically installing the product in a new directory, copying a few configuration
    files, and starting the new server to convert your database data. You can upgrade directly from any
    of the following products: Dynamic Server Version 11.10, 10.00, 9.40, 9.30, 9.21, or 7.31.

    Upgrading is an in-place migration method that uses your existing test and production hardware. The
    operating system on those machines must be supported by Dynamic Server Version 11.50. Also, you must
    have enough space for the system database data conversion.

    Upgrading consists of these steps:
      1. Prepare your system. That includes removing outstanding in-place alters, closing all transactions,
      verifying the integrity of the data with oncheck, and performing a level-0 backup. If you are
      using Enterprise Replication or High Availability Data Replication, stop replication.

      2. Install the new product on the machine.
      Important: Do not install it over the existing product.

      3. Copy the ONCONFIG file to the target and set parameters that are new for the current release.

      4. Start the Dynamic Server Version 11.50 instance. The database data is automatically converted.

  5. Run UPDATE STATISTICS MEDIUM for non-leading index columns, then run UPDATE STATISTICS FOR PROCEDURE (example commands appear below).
    This type of migration minimizes the risk of introducing errors. You can always revert from the new
    server to the old one. In the event of a problem during reversion, you can restore the level-0 backup.
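    For reference, the commands behind steps 1 and 5 might look like the following (a sketch; the database name is a placeholder, and the exact oncheck and UPDATE STATISTICS options you need depend on your environment):

      # step 1: verify data integrity and take a level-0 backup before installing the new version
      oncheck -cD stores_demo      # check data pages of a database
      oncheck -cI stores_demo      # check index pages
      ontape -s -L 0               # level-0 archive

      -- step 5: refresh optimizer statistics after the conversion (run from dbaccess)
      UPDATE STATISTICS MEDIUM;
      UPDATE STATISTICS FOR PROCEDURE;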

    This method works perfectly on Windows, though it is not documented well.
      1. When installing IDS 11.50 on Windows, choose the option "Install to a default/different directory".


      2. Make sure that you do not initialize the server when installing.

      Initialize Screen

      3. Copy the ONCONFIG file to the target and set parameters that are new for the current release.

      4. Bring the server up using Control Panel->Services or any other method without initializing.

      5. Monitor the online.log for the Conversion Successful message.

    Once the upgrade has completed successfully, you can remove the old instance. When you run the
    uninstaller make sure that you select "Retains all databases, but removes server binaries".


    If for some reason you would like to revert to the previous instance, restore it from the level-0 backup.

    The only time this will not work on Windows is if you are upgrading from one 11.10 version to another 11.10 version, for example from 11.10.UC1 to 11.10.UC2.

    Suma Vinod[Read More]

    Identifying locks from Updatable Secondary Servers

    cheetahblog Tags:  onstat; lock proxy; 2,610 Views



    Introduction to the Binary Datablade

    cheetahblog Tags:  datablade bdt 2,482 Views
    So now that we've covered the BTS blade, let's look at another new datablade. Although it was first introduced in 10.00.FC6, the Binary DataBlade (BDT for short) still needs some introduction.

    The BDT blade is designed to allow for the storage and manipulation of binary-encoded strings. It supports two new datatypes and allows logical operations like AND, OR, XOR, and NOT to be used against these datatypes.

    Like any official datablade, it is registered/installed by using the blademgr utility.

    Registering the BDT

    After registering it in your database, you have two additional UDTs:

    • binaryvar - a UDT which stores a binary string of up to 255 bytes
    • binary18 - a UDT which stores a fixed 18-byte binary string. If the value inserted into this UDT is less than 18 bytes, the field is right-padded with zeroes.
    Binary data is entered into the columns as ASCII hex values. You can choose to preface the value with "0x" or not. The data must also be entered in pairs of hex digits (since a byte of information runs from 00 to FF in hex). That means an insert with an odd number of hex digits will generate an error, while an insert with an even number of digits will not; the sketch below illustrates both cases.
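    A minimal sketch (the table and the literal values are made up for illustration):

        -- assume a hypothetical table: create table bintab (b binary18);
        INSERT INTO bintab VALUES ('0xFFF');      -- fails: odd number of hex digits
        INSERT INTO bintab VALUES ('0xFF00');     -- succeeds: whole byte pairs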

    Currently the above datatypes can be manipulated by the following functions:

    • bit_and() - performs a logical AND on two BDT datatypes or one BDT datatype and a string constant (see the sketch after this list).
    • bit_complement() - performs a logical NOT on a single BDT datatype.
    • bit_or() - performs a logical OR on two BDT datatypes or one BDT datatype and a string constant.
    • bit_xor() - performs a logical XOR on two BDT datatypes or one BDT datatype and a string constant.
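    For example (a sketch; the column comes from the bdt_example table created later in this post, and the hex constant is made up), bit_and() can combine a BDT column with a string constant:

        -- mask each stored value with an 18-byte constant (36 hex digits)
        SELECT bit_and(bin_val, '0xFF00FF00FF00FF00FF00FF00FF00FF00FF00') FROM bdt_example;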

    Also the following aggregates support the BDT datatypes:
    • DISTINCT()
    • MAX()
    • MIN() 
    Please note that the following is not supported:
    • LIKE

    The BDT blade supports B-tree indices for both single-column and composite indices.

    So now that we have the details, let's create an example. We will create a table in the stores_demo database called bdt_example and insert some rows.

    CREATE TABLE bdt_example (bin_id serial, bin_val binary18) ;
    INSERT into bdt_example values (0, '102039847137');
    INSERT into bdt_example values (0, '0xFE');
    INSERT into bdt_example values(0, '49424d434f52504f524154494f4e32303036');

    So let's run:

    SELECT DISTINCT (bin_val) FROM bdt_example;

    BDT Select Distinct

    As you can see, the binary18 values that were inserted with fewer than 18 bytes had zeroes padded on the end.

    So let's see what the complement of the data would be:

    SELECT bit_complement(bin_val) from bdt_example;

    select with bit complement

    Again you can see where the logical NOT was run against these three values.

    The BDT datablade also has some additional diagnostic-type functions. They are:

    • BDTRelease() - This function tells you what version of the BDT datablade you are using.
    • BDTTrace(filename) - This function specifies the location of the debug/trace file for the BDT datablade.
    • LENGTH(column) - This function returns the length, in bytes, of the specified BDT column.
    • OCTET_LENGTH(column) - This function returns the length, in bytes, of the specified BDT column.
    LENGTH and OCTET_LENGTH return the same values for BDT datatypes.

    One additional note: the BDT is fully supported for use in Enterprise Replication (ER).

    Additional information on the Binary DataBlade can be found in The Built-In DataBlade Modules User's Guide.

    Mark Jamison[Read More]

    Tracking object state changes within Enterprise Replication (ER)

    cheetahblog Tags:  cdr_features track logcommands 2,407 Views

    Have you ever run into a situation where certain ER administrative commands were executed and caused ER to fail? Without knowing the exact sequence of activities, it can be very difficult to identify the actual commands that caused the problem. Unfortunately, not all ER commands generate alerts or update the message log. If you did not use a script to run the ER administrative commands, there would be no easy way to track the sequence of activity that caused the failure.


    An undocumented mechanism can be used to track ER object state changes. Once you enable this tracking mechanism, the Informix server records ER administrative operations, with their full command-line arguments, in the server message log file, which can help in analyzing ER issues. It is disabled by default. To enable the tracking, either:

    • Add the following entry to the ONCONFIG configuration file and restart the Informix server.

                     CDR_FEATURES   LOGCOMMANDS


    • Dynamically change the configuration settings with following command:

                cdr change onconfig "CDR_FEATURES LOGCOMMANDS"

    For example, if you run 'cdr define server --ats=/test/data2 --ris=/test/data2 --connect cdr2 --init g_cdr2 --sync g_cdr1' to define a replication server, it would generate the following information in the message log:

    10:23:12 CDR remote administrative operation 'define serv -A /test/data2 -R /test/data2 -S g_cdr1 -I g_cdr2' execution succeeded.


    -Sanjit Chakraborty


    cheetahblog 2,245 Views
    Welcome to the IDS Experts Blog, an international team blog authored by IBM software engineers. These experts from the Informix Technical Support and R&D labs will be posting about a variety of topics related to Informix and IDS. A popular topic over the next few months is likely to be the 11.10 release of Informix Dynamic Server currently in Open Beta.

    If you have suggestions on articles you'd like to see from support or development engineers please add a comment to the blog. Your feedback to this blog goes directly to engineers working on Informix products.[Read More]

    IBM Informix on Cloud

    Erika Von Bargen Tags:  informix bluemix cloud 2,111 Views

    I wanted to highlight some exciting news that was announced this past October: IBM Informix on Cloud.

    IBM Informix on Cloud is a hosted cloud offering where you get a pre-configured Advanced Enterprise Edition Informix server on a SoftLayer machine. After the instance is deployed and configured, you get complete control over the instance and the system. This hosted offering provides you the full features of an on-premise Informix instance without the cost, complexity, and risk of managing your own infrastructure.

    Some potential uses of IBM Informix on Cloud:

    • Quickly scale out your Informix deployment for additional capacity and increased data availability
    • Rapid deployment of development and test instances with no up front capital expense
    • Easily test out the new functionality of the latest 12.10 release of the Informix database server in the cloud without impacting your existing systems

    To find out more information about IBM Informix on Cloud:

    Executing transactions in the wire listener

    Erika Von Bargen Tags:  rest transactions nosql informix listener 1,967 Views

    Today I'd like to highlight one of the lesser known features, a hidden gem if you will, in the Informix wire listener: support for transactions.


    Transactions in Mongo and REST?

    If you are coming from a traditional relational database background, transactions are a very familiar and truly essential feature. But take a moment to consider two of the application paradigms that the listener enables: MongoDB and REST.

    Let's start with the Mongo listener type which enables applications written using any of the vast array of MongoDB drivers to run against Informix. MongoDB, and many other NoSQL databases, do not have any notion of transactions. They do not have the ACID (Atomicity, Consistency, Isolation, Durability) properties of relational databases; instead they use an "eventually consistent" model. But there are many application scenarios where transactionality is critical. In the Mongo world, if you need to use transactions, you as the application developer would have to implement them yourself by implementing a two-phased commit inside of your application. By contrast, in the Informix wire listener, transaction support is built in and is readily available to all Mongo clients.

    Second, let's consider the REST listener type. REST by definition is stateless. Therefore the notion of bundling up multiple requests into a single transaction does not traditionally fit into this model either. But the Informix wire listener does enable ways to make this happen even for REST clients should your application require it.

    Ok, now that we've covered why this transaction support is a unique value add, let's dive into the semantics.


    Listener support for transactions

    The listener offers two different ways to execute a transaction: transaction mode and batched transactions. Both of these methods of running transactions are available to both Mongo and REST clients. We'll explore each of these in turn.


    Option 1: Transaction mode

    In this method, you first issue a command to put the listener in transaction mode. Then execute as many inserts, updates, or deletes as you would like using the normal syntax. None of these statements will be committed until you issue a commit command. There is also a rollback command should you need to roll back the transaction. You remain in transaction mode until you explicitly exit it. Therefore any inserts, updates, or delete run after the first commit command will be part of the next transaction and will remain uncommitted until you issue another commit. When you are done using transactions and want the listener to go back to the default behavior of auto committing each statement, you issue a final command to exit transaction mode.


    Transaction mode for Mongo clients

    Here's an example using the Mongo shell syntax. We'll take the classic transaction example of transferring funds from one account to another. Here we'll transfer $1000 from a certain customer's checking account to their savings account.

    > db.runCommand({transaction:"enable"})
    { "ok" : 1 }
    > db.accounts.update({customer_num:191187, type:"checking"}, {$inc:{balance:-1000}})
    WriteResult({ "nMatched" : 1, "nUpserted" : 0, "nModified" : 1 })
    > db.accounts.update({customer_num:191187, type:"saving"}, {$inc:{balance:1000}})
    WriteResult({ "nMatched" : 1, "nUpserted" : 0, "nModified" : 1 })
    > db.runCommand({transaction:"commit"})
    { "ok" : 1 }

    > db.runCommand({transaction:"disable"})
    { "ok" : 1 }

    The first command, db.runCommand({transaction:"enable"}) , puts your session into transaction mode. The following two statements update the accounts table. Keep in mind that accounts could be a regular relational table or a JSON collection. It doesn't really matter; the syntax from the application's perspective is the same for both. In the Mongo update statement, we pass two documents. The first is the condition that determines which rows or documents get updated; the second contains the update operation itself. So, for example, db.accounts.update({customer_num:191187, type:"checking"}, {$inc:{balance:-1000}}) means update the account of type "checking" for customer number 191187 by incrementing the "balance" field by -1000 (so really a decrement operation). After the updates, we commit the transaction with db.runCommand({transaction:"commit"}). Finally, we disable (exit) transaction mode with db.runCommand({transaction:"disable"}).

    It is not required to disable transaction mode right away. If you have other transactions to run, you can continue right along after the commit, and the next statements will be part of a second transaction. The db.runCommand({transaction:"disable"}) is critical though, because until you run that, all statements remain uncommitted until an explicit db.runCommand({transaction:"commit"}). So don't forget the last part if you want to go back to the default auto commit behavior!
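    If your application uses the MongoDB Java driver rather than the shell, the same flow looks roughly like this. This is a sketch assuming the 3.x driver and a Mongo-type listener on localhost:27017; the host, port, database name, and customer number are placeholders:

        import com.mongodb.MongoClient;
        import com.mongodb.client.MongoCollection;
        import com.mongodb.client.MongoDatabase;
        import com.mongodb.client.model.Filters;
        import com.mongodb.client.model.Updates;
        import org.bson.Document;

        // connect to the wire listener as if it were a MongoDB server (placeholder host/port)
        MongoClient client = new MongoClient("localhost", 27017);
        MongoDatabase db = client.getDatabase("mydb");
        MongoCollection<Document> accounts = db.getCollection("accounts");

        // enter transaction mode on this session
        db.runCommand(new Document("transaction", "enable"));

        // move $1000 from checking to savings for customer 191187
        accounts.updateOne(
            Filters.and(Filters.eq("customer_num", 191187), Filters.eq("type", "checking")),
  "balance", -1000));
        accounts.updateOne(
            Filters.and(Filters.eq("customer_num", 191187), Filters.eq("type", "saving")),
  "balance", 1000));

        // commit the transaction, then leave transaction mode
        db.runCommand(new Document("transaction", "commit"));
        db.runCommand(new Document("transaction", "disable"));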

    There are two more options in the transaction command that are worth noting.

    To rollback a transaction, run

    > db.runCommand({transaction:"rollback"})
    { "ok" : 1 }

    To check whether you are currently in transaction mode, you can use the "status" option. The "enabled" field in the response shows whether transactions have been enabled on the session (i.e. whether you are in transaction mode), and the "supported" field indicates whether the current database supports transactions.

    > db.runCommand({transaction:"status"})
    { "enabled" : true, "supported" : true, "ok" : 1 }

    Transaction mode in REST

    All of the commands just described in the Mongo section can be run through REST. Below I will present the exact same transfer of funds example, but a couple of things first:

    • For transactions to work with REST, you have to be using cookies! When you authenticate to the listener for the first time, the listener will include a Set-Cookie directive in the response which will include an identifier for your session. This needs to be set in the cookie of every subsequent REST request that is part of the transaction. Cookies are always a good practice when using the REST listener to avoid getting a new session each time, but they are absolutely required if you want to use transaction mode in REST.
    • A few reminders on REST syntax.
      • A GET request runs a query. A GET request on the $cmd pseudo collection runs a command, which is what you use to run the transaction commands.
      • PUT on a table or collection does an update. The update condition is passed in the query parameter in the URL; the update operation is passed in the data portion of your request.
      • If you need more of a refresher on REST syntax, check out this page of the IBM Informix Knowledge Center

    Here's the same example as above, just using REST syntax:

    GET /mydb/$cmd?query={transaction:"enable"}

    PUT /mydb/accounts?query={customer_num:191187, type:"checking"}

         Data: {$inc:{balance:-1000}}


    PUT /mydb/accounts?query={customer_num:191187, type:"saving"}

         Data: {$inc:{balance:1000}}

    GET /mydb/$cmd?query={transaction:"commit"}


    GET /mydb/$cmd?query={transaction:"disable"}


    Option 2: Batched transaction


    The second way of doing transactions in the listener is to send a set of statements as a single command and instruct the listener to run them as a single transaction. This is done using the "execute" option of the listener's transaction command. When using this method, the listener will handle doing the start transaction and commit (or rollback) statements for you under the covers. No need for you to run transaction enable, commit, or disable.

    This support was added starting in 12.10.xC7.


    Batched transaction with Mongo clients

    The syntax for running a batch of commands as a single transaction is as follows.

    > db.runCommand({"transaction" : "execute", 
     "commands" : [
    	{"update":"accounts", "updates" : [ 
                 { "q" : { "customer_num" : 191187, type: "checking"}, 
                   "u": {$inc:{balance:-1000}} } 
             ] },
    	{"update":"accounts", "updates" : [ 
                 { "q" : { "customer_num" : 191187, type: "savings"}, 
                   "u": {$inc:{balance:1000}} } 
             ] }
     ]})

    You just specify the transaction command with the execute option ({"transaction" : "execute"}) and then in the "commands" field pass any array of documents that represent your insert, update, or delete statements. The syntax for these individual commands is just as if you were to run insert, update, or delete as a command in the Mongo syntax.

    If all statements in your batched transaction succeed, the listener will automatically commit it. The result returned will include an array that indicates the number of documents updated (or inserted or deleted).

        "ok" : 1,
        "result" : [
                "ok" : 1,
                "n" : 1,
                "nModified" : 1
                "ok" : 1,
                "n" : 1,
                "nModified" : 1

    If there is a failure in one of the statements, the listener will roll back your transaction and return you a response that indicates which command failed and the failure message.


    Batched transaction in REST

    Here's the same example in a REST request.

    GET /mydb/$cmd?query={"transaction":"execute", "commands":[{"update":"accounts", "updates":[{"q":{ "customer_num":191187, type:"checking"}, "u":{$inc:{balance:-1000}}}]}, {"update":"accounts", "updates":[{"q":{"customer_num":191187, type:"savings"}, "u":{$inc:{balance:1000}}}]}]}

    That's just the exact same document as above, passed in a GET request on the $cmd pseudo collection.


    Batched transaction with a finally clause

    There's one more thing! We added to the listener's batched transaction command the ability to add a finally clause. The finally clause is just a set of statements that will always be run at the end of the transaction no matter if the transaction gets committed or rolled back.

    This actually allows you to do things outside the traditional notion of transactions. One of the things about the listener is that it tries to efficiently use resources across sessions, by using a connection pool for example. This means that in normal auto commit mode, you are not guaranteed to run your statements on the exact same JDBC connection each time, which is perfectly acceptable when each statement is auto committed independently. Transaction mode, on the other hand, will bind a single connection to your session until you exit transaction mode. This, combined with the SQL passthrough feature, allows you to use transactions to do any sequence of statements that need to be run on the same JDBC connection.

    For example, suppose you need to set an environment variable before running a query (and you don't want to set this environment variable globally for all connections created by the listener). The best way to do this is to execute it through the listener's batched transaction command to ensure that your query is run on the same JDBC connection that your environment variable was set on. In this use case, the finally block is key, because you want to make sure the environment variable gets unset before that connection goes back to the connection pool, and you need to make sure this unsetting of the environment variable happens whether or not the rest of your transaction succeeds.

    > db.runCommand({"transaction" : "execute", 
     "commands" : [
        {"find" : "system.sql", "filter" : {"$sql" : 
                  "SET ENVIRONMENT USE_DWA 'ACCELERATE ON'" } },
        {"find" : "system.sql", "filter" : {"$sql" : 
                  "SELECT SUM(s.amount) as sum FROM sales AS s 
                  WHERE s.prid = 100 GROUP BY" } }
    "finally" : [{"find":"system.sql", "filter" : {"$sql" : 

    This example also shows how you can run queries (the find command) within the batched transaction syntax. This particular example is running SQL passthrough queries, but regular queries on tables and collections work as well. The query results are returned as part of the response to {"transaction" : "execute"} command.



    So that's it! Transaction support as you've come to know it in relational databases such as IBM Informix is now available to applications using REST or Mongo client drivers!

    To find the documentation on the listener transaction command check out this page in the IBM Informix Knowledge Center.


    Informix JDBC Driver on Maven!

    Brian Hughes Tags:  informix maven jdbc 1,874 Views

    I'm happy to announce that the Informix JDBC driver is out on Maven Central!

    Using a distribution platform such as Maven, it's now easier than ever to download, upgrade, and utilize the Informix JDBC driver in your applications.  You can bypass the traditional installer and download site and get the driver you need quickly and efficiently.


    Here is a link to the Maven page for the new driver. The page has examples for many build systems (Gradle, Maven, SBT, and more) showing how to include our driver. You can also directly download the jar file from the site.


    The group ID we use for Maven is ''. This is where we will push out relevant technologies (like the JDBC driver) that make sense to have a home on Maven. Wondering where the ifxjdbc.jar and ifxjdbcx.jar files went? We combined the functionality into a single jar file. Now you can just download and manage one file.

    One Listener, Three Ways to Connect your Applications to Informix

    Erika Von Bargen Tags:  nosql rest informix listener mqtt 1,838 Views

    Hopefully, you've already heard about the Informix wire listener. But did you know that there are actually three different types of listeners? This means that the listener provides application developers with three completely different paradigms for connecting their applications to the Informix database server.


    Suppose you are rapidly developing a new application and crave the flexibility and power that the schemaless "NoSQL" data model will provide you to adapt to use cases going forward that you cannot necessarily foresee at the current moment. Or maybe you want a quick and easy way to get data into and out of Informix for a web application developed with the newest and hottest web framework, and you want to do so in a way that doesn't make you think or worry about whether the underlying data is in a relational, time series, or JSON schema. Or maybe you are even developing an Internet of Things application for a wide array of sensors that will be gathering data that needs to be pushed to your database in a way that is simple, lightweight, and asynchronous. The Informix wire listener can help you build such solutions, and many more! And it allows you to build such applications all the while harnessing the performance, scalability, and reliability in the enterprise-class, embeddable database that is Informix.


    So what are these three listener types?

    • The Mongo listener

    The Mongo listener was the first and original listener type and allowed applications developed for MongoDB to run against Informix without modification. This enabled an entirely different type of application -- flexible, schemaless, and in line with a rapid application development paradigm -- to be run against the Informix database server. It also enabled a whole new ecosystem of modern and open source drivers, those developed for the MongoDB protocol, to be used to connect applications to Informix.

    But the original Mongo listener type was about more than just Mongo compatibility. It enabled a new "hybrid" application model -- one that combined traditional relational data with the new NoSQL, JSON-based data. The listener was developed to seamlessly work with any type of data that Informix supports, be it relational, JSON, or time series.

    Want to learn more about using Mongo drivers to connect to Informix? Check out a great tutorial written by Martin Fuerderer and hosted on the Informix and NoSQL blog. This tutorial takes you step by step through understanding the listener, NoSQL data, and how to use Mongo syntax and tools to connect to Informix.

    • The REST listener

    REST. So ubiquitous in the world of web applications. This was the next evolution of the Informix wire listener. The listener's REST interface allows any application to get data into and out of Informix using HTTP. A simple interface that opened up Informix connectivity to just about every modern programming language without the need for an Informix-specific driver.

    Want to learn about using REST to communicate with Informix? Head on over to Martin Fuerderer's article First Steps with the REST API and then check out the REST API syntax page in the Informix Knowledge Center to get more details and examples of the REST syntax for accessing and querying data.

    • The MQTT listener

    New to 12.10.xC7, the MQTT listener type enables you to use the MQTT publish model to insert data into the Informix database server. This is perfect for Internet of Things scenarios where data collected on the "edge" needs to be published and stored in a central data store in a lightweight and asynchronous way.

    Want to learn more about using MQTT to publish data to Informix? Check out  Working with the new 12.10.xC7 MQTT Listener over on Shawn Moe's The Lenexa View blog for an in-depth discussion of the MQTT listener, including some sample Java code.


    And last but not least, it is worth reiterating that all three listener types work seamlessly on relational, JSON, timeseries data, or even a combination. No matter the underlying way in which the data is stored, the Informix wire listener will manage those details automatically for you, allowing you as the application developer to focus on your application instead of the nuances of data access.

    Want to get started, but need more info on how to start different or multiple listener types? You can find that here in the IBM Informix Knowledge Center.


    Simply Powerful and Flexible – IBM Informix on Cloud

    Pradeep Natarajan 1,720 Views

    In this latest article published today in the IBM Data Management blog, Pradeep Muthalpuredathe, Senior Manager for Informix Engineering & Support shares his thoughts about the Informix on Cloud offering. Get started with this powerful offering today.

    Also check out the newly redesigned landing page for IBM Informix! All the IBM Informix resources you need are in one place - product offerings, select customer references, videos, demos, white papers, code samples, and downloads.

    Dormant, no more!

    Pradeep Natarajan 1,575 Views



    First, apologies to those who have been following this blog regularly in the past! Unfortunately, this blog had remained dormant for far too long. Make no mistake, it was just the blog that had been dormant, and neither the product nor the experts behind it. Our team has been busy enhancing our beloved IBM Informix product as well as producing content for the user community. As evidence, you will see a series of posts erupting here that point to various blogs, presentations, sample applications, and tutorials available in the public domain.

    It is time to revive this blog and provide an avenue for our experts to blanket the user community with Informix wisdom!


    On-Disk Encryption available in 12.10.xC8

    Pradeep Natarajan 1,369 Views

    Hear Ye! Hear Ye! Have you heard? IBM Informix 12.10.xC8 is publicly available for download! The most significant update to the product in this release is on-disk encryption, also known as Encryption At Rest (EAR). The EAR feature allows you to encrypt the data at the lowest level, on the disk. Now you can rest easy, knowing that your data is encrypted and secure in the event of a physical disk loss or theft. In a world where low-cost computers are physically accessible and increasingly used for temporary storage, aggregation, and analytics of data, it is vital to keep the data secure. You can start encrypting your storage spaces by upgrading to 12.10.xC8.

    Here is a nice blog written by our architect Shawn Moe to help you get started. For complete information on this feature, please refer to our documentation page