Informix Experts

For a complete list of current SQL reserved words, see the IBM Informix Guide to SQL: Syntax.

Sanjit Chakraborty[Read More]

IBM OpenAdmin Tool for IDS -- Version 2.20 -- Available Now!!

cheetahblog Tags:  idsadmin openadmin oat 2,894 Views

OpenAdmin Tool for IDS has been greatly enhanced in version 2.20 with a completely redesigned user interface, a new automated installer, and lots of new IDS admin functionality. OpenAdmin Tool for IDS (OAT) is a PHP-based Web browser administration tool for IDS 11 and IDS 11.5 that provides the ability to administer multiple database server instances from a single location. OAT makes administration easy by allowing you to drill down on resource usage and events, view query performance statistics, and much more. And since the tool is written in PHP and available as open source, you can customize it with your own business logic and installation requirements.

New feature highlights of OpenAdmin Tool for IDS version 2.20 include:

  • Automated Installer: For users who want an easy, hassle-free install, there is a new automated GUI installer that will install and configure Apache, PHP, I-Connect, and OAT. For users who want to use or install their own Apache/PHP configurations, OAT is also released as a stand-alone package.

  • All-New Look: OAT version 2.20 has a completely new and improved UI

  • Automated Update Statistics: Configure rules and define a schedule to have IDS automatically maintain your database server's statistics

  • Onconfig Recommendations: Get recommendations for your onconfig settings that are tuned to your specific database server instance; modify dynamic onconfig parameters directly through OAT

  • Enhanced Mach11 Support: Including a Connection Manager interface, an SDS Setup Wizard, and the ability to start and stop Mach11 servers remotely

  • Historical Performance Graphs: View and monitor historical trends of key performance measurements

  • Task Scheduler Wizard: Use OAT’s new wizard to define new tasks in the scheduler

  • Read-Only Access: Define groups in OAT that can monitor IDS database servers but not perform administrative tasks

  • And the list goes on…. Virtual Processor Administration, System Integrity Checks, a Privileges Manager, environment variable support on OAT connections, the ability to kill database sessions remotely through OAT, and more!

Download OAT version 2.20 now, or get it in the Members Area.

Erika Von Bargen[Read More]

Monitor Resource Contention

cheetahblog Tags:  bth 2,810 Views

Two new onstat commands were introduced in 12.10.xC2 to view the dependencies between blocking and waiting threads. Running threads take ownership of various objects and resources, for example buffers, locks, mutexes, and decision support memory. Contention for these resources among hundreds or thousands of threads can result in chains of dependencies.

Now, you can use the 'onstat -g bth' command to display the dependencies between blocking and waiting threads. Next, use the 'onstat -g BTH' command to display session and stack information for the blocking threads.

Here is a scenario where these onstat commands can be helpful. A thread that is blocked waiting to enter a critical section might own a row lock for which another thread is waiting. The second thread might be blocking a third thread that is waiting in the MGM query queue. Usually, the duration of such contention is short and users never notice any problem. However, if a thread is blocked long enough, you might need to identify the source of the contention. The 'onstat -g bth' command discovers the chains of dependency and displays blocker threads followed by waiting threads, in order.

The following is a sample output of the 'onstat -g bth' command where multiple threads are waiting on resources.


Highest level blocker(s)
 tid      name                 session
 48       sqlexec              26

Threads waiting on resources
 tid      name                 blocking resource              blocker
 49       sqlexec              MGM                            48
 13       readahead_0          Condition (ReadAhead)           -
 50       sqlexec              Lock (0x4411e578)              49
 51       sqlexec              Lock (0x4411e578)              49
 52       sqlexec              Lock (0x4411e578)              49
 53       sqlexec              Lock (0x4411e578)              49
 57       bf_priosweep()       Condition (bp_cond)             -
 58       scan_1.0             Condition (await_MC1)           -
 59       scan_1.0             Condition (await_MC1)           -

In the above example, four threads are waiting for a lock that is owned by thread 49. However, that is not the actual problem: thread 49 is itself waiting for MGM resources that are owned by thread 48. So the problem originally started with thread 48. Next, run the 'onstat -g BTH' command to see the session and stack information for thread 48. Following is an example of 'onstat -g BTH' output:

Stack for thread: 48 sqlexec
 base: 0x00000000461a3000
  len:   69632
   pc: 0x00000000017b32c3
  tos: 0x00000000461b2e30
state: ready
   vp: 1

0x00000000017b32c3 (oninit) yield_processor_svp
0x00000000017bca6c (oninit) mt_wait
0x00000000019d4e5c (oninit) net_buf_get
0x00000000019585bf (oninit) recvsocket
0x00000000019d1759 (oninit) tlRecv
0x00000000019ce62d (oninit) slSQIrecv
0x00000000019c43ed (oninit) pfRecv
0x00000000019b2580 (oninit) asfRecv
0x000000000193db2a (oninit) ASF_Call
0x0000000000c855dd (oninit) asf_recv
0x0000000000c8573c (oninit) _iread
0x0000000000c835cc (oninit) _igetint
0x0000000000c72a9e (oninit) sqmain
0x000000000194bb38 (oninit) listen_verify
0x000000000194ab8a (oninit) spawn_thread
0x0000000001817de3 (oninit) th_init_initgls
0x00000000017d3135 (oninit) startup

Highest level blocker(s)
 tid      name                 session
 48       sqlexec              26      

session          effective                      #RSAM    total   used   dynamic
id      user     user      tty   pid   hostname threads  memory  memory explain
26      informix -         45    31041 darwin   2        212992  186568 off

Program :

tid      name     rstcb            flags    curstk   status
48       sqlexec  448bc5e8         ---P---  4560     ready-
58       scan_1.0 448bb478         Y------  896      cond wait  await_MC1 -

Memory pools    count 2
name         class addr           totalsize  freesize   #allocfrag #freefrag
26           V     45fcc040       208896     25616      189        16        
26*O0        V     462ad040       4096       808        1          1         

name           free       used           name           free       used      
overhead       0          6576           mtmisc         0          72        
resident       0          72             scb            0          240       
opentable      0          7608           filetable      0          1376      
log            0          33072          temprec        0          17744     
blob           0          856            keys           0          176       
ralloc         0          55344          gentcb         0          2240      
ostcb          0          2992           sqscb          0          21280     
sql            0          11880          xchg_desc      0          1528      
xchg_port      0          1144           xchg_packet    0          440       
xchg_group     0          104            xchg_priv      0          336       
hashfiletab    0          1144           osenv          0          2520      
sqtcb          0          15872          fragman        0          1024      
shmblklist     0          416            sqlj           0          72        
rsam_seqscan   0          368            

sqscb info
scb              sqscb          optofc   pdqpriority optcompind  directives
4499c1c0         461c1028       0        100         2           1         

Sess       SQL            Current       Iso Lock       SQL  ISAM F.E.
Id         Stmt type      Database      Lvl Mode       ERR  ERR  Vers  Explain    
26         SELECT         stores        CR  Not Wait   0    0    9.24  Off        

Current statement name : unlcur

Current SQL statement (5) :
  select * from systables,syscolumns,sysfragments

Last parsed SQL statement :
  select * from systables,syscolumns,sysfragments


-Sanjit Chakraborty

Modified on by cheetahblog

New onstat option to display sysdbprocedure cache

cheetahblog Tags:  onstat sysdbopen cache sysdbclose procedures 2,788 Views
In IDS 11.10, you can configure session properties using sysdbopen() and sysdbclose() procedures; see Configuring Session Properties. These procedures are stored in the sysprocedures system catalog. A memory cache is maintained for these procedures to facilitate faster searching of procedure identifiers. A new onstat option is available to print statistical information about this memory cache. From this information you can determine how many users are accessing a database and which sysdb procedures are defined for public and for other users. A sample output is shown:
    $ onstat -g cac sproc

    IBM Informix Dynamic Server Version 11.10.FC1 -- On-Line -- Up 4 days 20:30:27 -- 38912 Kbytes

    Sysdbproc Cache: Number of lists : 31

    list# refcnt dbname     Public Procids  Username  User Procids  UsrList#
                            (open,close)              (open,close)
    -------------------------------------------------------------------------
    7     0      sysmaster  (-1, 0)
    16    1      testdb     (345, 346)      suma      (347, 348)    14
                                            informix  (343, 344)    122
    18    0      sysutils   (-1, 0)
    21    0      sysuser    (-1, 0)

    list#           List number to which the database name hashes
    refcnt          Number of sessions accessing this database
    dbname          Database name
    Public Procids  Procedure ids of public.sysdbopen() and public.sysdbclose(). The value is (-1,0) if there are no public or user-defined sysdb procedures for that database
    Username        User name of any user who has their own sysdbopen() and sysdbclose() procedures for that database
    User Procids    Procedure ids of the sysdb procedures for a particular user in a database
    UsrList#        The user hash list to which the user belongs

The number of lists is determined by the onconfig parameter DS_HASHSIZE. The maximum number of database entries for a list is determined by DS_POOLSIZE. This is a soft limit, because the limit is checked only when you close a database. When you close a database, if the number of database entries for that list is greater than DS_POOLSIZE, IDS deletes all excess database entries in that list that have a reference count of 0.
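As a quick illustration, a public sysdbopen() procedure might look like the following minimal sketch (the session property chosen here is only an example, and creating these procedures requires DBA privileges):

    CREATE PROCEDURE public.sysdbopen()
        -- example session property applied automatically when any user opens the database
        SET LOCK MODE TO WAIT 20;
    END PROCEDURE;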

Suma Vinod[Read More]

Which version of CSDK should be used on Windows 7 platform?

cheetahblog Tags:  odbc csdk windows7 2,781 Views

Both CSDK version 3.50 and CSDK 3.70 are supported on the Windows 7 operating system, which makes it difficult to decide which one to use. A useful way to decide is to
consider which version of Informix Server you are using. By design, the CSDK version is the "latest 3.x for 11.x", where "x" is the same minor version number (".50" or ".70") as the database engine; for example, Informix 11.50 pairs with CSDK 3.50, and Informix 11.70 with CSDK 3.70.
With regard to bit level, 64-bit Windows supports either the 32-bit ('.TC') or 64-bit ('.FC') CSDK. Although the bit level is not significant for determining which one to use, it is important to note that the default ODBC manager on Windows 7 is 64-bit. If you use a 64-bit CSDK, you will have no issues; a 32-bit CSDK, however, requires the 32-bit ODBC Administrator.
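For reference, on 64-bit Windows the two ODBC Administrator executables live in the standard locations below; DSNs for a 32-bit CSDK driver have to be configured through the 32-bit one:

    C:\Windows\System32\odbcad32.exe    (64-bit ODBC Administrator)
    C:\Windows\SysWOW64\odbcad32.exe    (32-bit ODBC Administrator)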


-David Jay                      

Dynamically Changing Configuration parameters for Enterprise Replication

cheetahblog Tags:  dynamic er configuration | 1 Comment 2,739 Views
In IDS 11 you can dynamically change, add, or remove Enterprise Replication configuration parameters. The changes take effect immediately, while replication is running. Following are the three commands to use, explained in detail along with examples.

cdr change config

Use the cdr change config command to replace the existing value of an Enterprise Replication configuration parameter with a new value while replication is active. All Enterprise Replication configuration parameters and environment variables listed in Table 1.1 can be changed with this command. The value persists while replication is active; the ONCONFIG file is not updated, so if you want to make the change permanent you should update the value in the ONCONFIG file as well.

For example, suppose the CDR_SUPPRESS_ATSRISWARN configuration parameter is set to suppress the generation of error and warning messages 1, 2, and 10, so that it appears in the ONCONFIG file as: CDR_SUPPRESS_ATSRISWARN 1,2,10. The following command changes the suppressed error and warning messages to 2, 3, 4, 5, and 7.

Before changing the values you can confirm the existing values by doing:

$onstat -g cdr config | grep SUP
   CDR_SUPPRESS_ATSRISWARN configuration setting:1-2,10   

Now lets change the values to 2,3,4,5 and 7:

$cdr change config "CDR_SUPPRESS_ATSRISWARN 2-5,7"
 WARNING:The value specifed updated in-memory only.

The warning clearly indicates that the changes are in-memory only, which means that once replication is stopped and restarted, the original values from the ONCONFIG file take effect again. Now let's confirm the new values:

$onstat -g cdr config | grep SUP
   CDR_SUPPRESS_ATSRISWARN configuration setting:2-5,7   

cdr add config

Use the cdr add config command to add one or more values to an Enterprise Replication configuration parameter while replication is active. The value persists while replication is active; the ONCONFIG file is not updated. This option is available only for configuration parameters and environment variables that allow multiple values. The allowed configuration parameters and environment variables are listed in Table 1.1.

Followingthe "change config" example, if you want to add another error andwarning message number for suppression to the existing list of 2,3,4,5and 7, say number 10, you can do that by using the following command.

$cdr add config "CDR_SUPPRESS_ATSRISWARN 10"
 WARNING:The value specifed updated in-memory only.

We can confirm the new values from "onstat -g cdr config" as follows.

$onstat -g cdr config | grep SUP
   CDR_SUPPRESS_ATSRISWARN configuration setting:2-5,7,10   

cdr remove config

Use the cdr remove config command to remove an existing value of an Enterprise Replication configuration parameter while replication is active. The value persists while replication is active; the ONCONFIG file is not updated. The allowed configuration parameters and environment variables are listed in Table 1.1.

Again, following the same example from "add config", let's now remove suppression for error and warning messages 2, 3, 4, and 5 and keep 7 and 10, using the "cdr remove config" command as follows.

$cdr remove config "CDR_SUPPRESS_ATSRISWARN 2-5"
 WARNING:The value specifed updated in-memory only.

Confirm your changes:

$onstat -g cdr config | grep SUP
   CDR_SUPPRESS_ATSRISWARN configuration setting: 7,10 

NOTE: These commands change configuration parameters only in memory; they do not update the ONCONFIG file. To update environment variables in the ONCONFIG file, use the CDR_ENV configuration parameter.

The following table shows which kinds of changes are valid for each Enterprise Replication configuration parameter.

Table 1.1
Configuration Parameter    cdr add config    cdr change config    cdr remove config

Vijay Lolabattu
[Read More]

Encrypted Communications for HDR

cheetahblog Tags:  hdr encryption 2,729 Views

In the IDS Cheetah release, you can encrypt communication between an HDR pair to secure the transmission of data over unsecured networks, including the internet. After you enable encryption, the HDR primary encrypts the data before sending it to the HDR secondary server, and the HDR secondary server decrypts it. Use the new ONCONFIG parameter ENCRYPT_HDR to enable encryption between the HDR pair. You can also customize encryption using the following parameters.

Configuration Parameter   Default value    Comments/Description

ENCRYPT_HDR               0                0 - disable, 1 - enable HDR encryption

ENCRYPT_CIPHERS           allbut:<ecb>     Defines ciphers and modes that can be used by the current database session.
                                           The following ciphers are currently supported:
  • des (64-bit key), des3 (Triple DES), desx (Extended DES, 128-bit key)
  • aes/aes128 (128-bit key), aes192 (192-bit key), aes256 (256-bit key)
  • bf-1 Blowfish (64-bit key), bf-2 (128-bit key), bf-3 (192-bit key)

ENCRYPT_MAC               medium           Controls the level of message authentication code (MAC) generation.
  • off - does not use MAC generation.
  • low - uses XOR folding on all messages.
  • medium - uses SHA1 MAC generation for all messages greater than 20 bytes long
    and XOR folding on smaller messages.
  • high - uses SHA1 MAC generation on all messages.

ENCRYPT_MACFILE           builtin          A list of the full path names of MAC key files.

ENCRYPT_SWITCH            60,60            Defines the frequency (in minutes) at which ciphers and secret keys are
                                           renegotiated (cipher_switch_time, key_switch_time).
  • To use your own MAC key file
    • Execute the following command to generate a MAC key file.
      $INFORMIXDIR/bin/GenMacKey -o /usr/informix/etc/MacKey1.dat
    • Copy MacKey1.dat over to the paired server

    • Update the ENCRYPT_MACFILE configuration parameter on both servers as shown below
      ENCRYPT_MACFILE    /usr/informix/etc/MacKey1.dat,builtin

  • NOTE - HDR and Enterprise Replication (ER) share the same encryption configuration parameters: ENCRYPT_CIPHERS, ENCRYPT_MAC, ENCRYPT_MACFILE and ENCRYPT_SWITCH.
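Putting the pieces together, the encryption-related entries in the ONCONFIG file of both servers in the HDR pair might look like this (a sketch that simply combines the defaults and the MAC key file example above):

  ENCRYPT_HDR      1
  ENCRYPT_CIPHERS  allbut:<ecb>
  ENCRYPT_MAC      medium
  ENCRYPT_MACFILE  /usr/informix/etc/MacKey1.dat,builtin
  ENCRYPT_SWITCH   60,60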

Nilesh Ozarkar[Read More]

How to find the application that is running a particular SQL?

cheetahblog Tags:  onstat 2,721 Views

It is not uncommon for an inefficient SQL statement to cause an overall performance problem. In that scenario, you first need to identify the problematic SQL, then find out the user and application that are executing it. Here is a glimpse of how to address the second part of the problem.


There are different possible ways to find out the user and application running a particular SQL. The 'onstat -g ses <session id>' command is the simplest way to figure out the application that is running a SQL statement. Do you notice anything new in the following 'onstat -g ses <session id>' output?


IBM Informix Dynamic Server Version 11.70.FC5 -- On-Line -- Up 16:41:52 -- 1048576 Kbytes

session           effective                 #RSAM    total  used   dynamic
id       user     user   tty pid   hostname threads  memory memory explain
34       informix -      1   19387 garion   1        278528 217656 off

Program :

tid      name     rstcb            flags    curstk   status
176      sqlexec  145629268        B--PX--  13103    yield bufwait-

Memory pools    count 2
name         class addr              totalsize  freesize   #allocfrag #freefrag
34           V     145b99040        274432     60064      376        27
34*O0        V     145b0d040        4096       808        1          1

name           free       used           name           free       used
overhead       0          6576           scb            0          144
opentable      0          13880          filetable      0          4312
ru             0          600            log            0          16536

sqscb info
scb              sqscb            optofc   pdqpriority optcompind  directives
142e211c0        145b53028        0        0           0           1

Sess       SQL            Current            Iso Lock       SQL  ISAM F.E.
Id         Stmt type      Database     Lvl Mode       ERR  ERR  Vers  Explain
34         INSERT         stores       NL  Not Wait   0    0    9.24  Off

Current statement name : inscur

Current SQL statement (997) :
  insert into customer


You are right. There is a new 'Program' section in the above onstat output. You can now use the 'onstat -g ses <session id>' command to display the full path of the client program used in a session, and use that information to monitor or control the application's access to the database. One thing you should know is that the program information is sent by the client itself, so the reported name depends entirely on the client.


-Sanjit Chakraborty


Queries running slow after an upgrade

cheetahblog Tags:  opt_seek_factor slow upgrade 2,712 Views

You have upgraded to Informix 11.70.xC3 or a higher version, and after the upgrade some of your queries are running slow. The common observation is that a query was using an index path prior to the upgrade and running faster, but after the upgrade the same query is doing a sequential scan. With an index directive the query uses the index and runs much faster than the sequential scan; however, the index path has a higher estimated cost associated with it than the sequential scan.

Also, after an upgrade it can happen that the optimizer chooses a different index in the newer version of Informix than it did in the prior version, causing the query to run slower.

New costing functionality was added in 11.70.FC3 and higher in the 11.70 family. This functionality was added to account for the seek time required to traverse an index (especially large indexes). There were situations where queries using an index would take longer to return than a sequential scan because of all the I/O involved.

You can control the new costing functionality using the OPT_SEEK_FACTOR configuration parameter. This parameter lets you set the "weight" of an I/O seek cost. The range is 0 to 25 and the default is 6. Making it lower reduces the seek cost, which lowers the estimated cost of using an index path.

You can revert to the old costing method by setting OPT_SEEK_FACTOR to 0 in the ONCONFIG file.
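For example, the ONCONFIG entry for reverting to the old costing is simply:

  OPT_SEEK_FACTOR 0

Once the change is in effect, re-check the query plan in the explain output to confirm that the index path is chosen again.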

- Grant Flander

C-Style Comment Indicator with DB-Access

cheetahblog Tags:  dbaccess comment 2,695 Views

Earlier versions of Informix Dynamic Server (IDS) had a restriction on C-style comment indicators ( /* ... */ ) in SQL statements within DB-Access operations.

Without the C-style comment indicator, DB-Access was not compliant with the SQL-99 standard, which caused problems when migrating from other database servers.

Starting with IDS version 11.10, the restrictions on C-style comments are no longer in effect. Any of the following styles of SQL comment indicators can be used in all DB-Access statements.

  • Double hyphen ( -- ) complies with the ANSI/ISO standard for SQL
  • Braces ( { } ) are an Informix extension to the ANSI/ISO standard
  • C-style slash-and-asterisk ( /* . . . */ ) comply with the SQL-99 standard

The comment indicator can be chosen depending on your ANSI/ISO compliance requirements.

A comment indicator can be used on the same line as the SQL statement. A comment can span a single line or multiple lines, but comments cannot be nested.

DB-Access filters out any comments before sending the SQL statement to the database server.

Examples:

-- Double hyphen comment
SELECT * FROM orders;     -- Selects all columns and rows
                          -- from the orders table

{ Comment using braces } SELECT * FROM orders; { Selects all columns and rows from the orders table }

/* C-style comment */ SELECT * FROM orders; /* Selects all columns and rows from the orders table */

Sanjit Chakraborty
[Read More]

Restoring critical administrative files with onbar utility

cheetahblog Tags:  file -cf onbar critical 2,677 Views

It is already known to us that onbar backs up some critical administrative files during storage space backup. These critical files are as follows:

- The onconfig file

- UNIX: The sqlhosts file

- The ON-Bar emergency boot file: ixbar.servernum

- The server boot file: oncfg_servername.servernum


There may be situations when you need to restore one of these files, for example when you replace disks or restore to a second computer system (an imported restore). However, previously there was no easy way to restore these files from an ON-Bar backup; you had to depend on Informix Technical Support to perform this operation.


Starting with Informix version 12.10.xC2, you can restore the critical files that you backed up with the onbar utility. An ON-Bar cold restore now has an option to restore these files, or you can restore only these files without performing the storage space restore.


Use 'onbar -r -cf yes' to restore the critical files during a cold restore. Alternatively, you can use 'onbar -r -cf only' to extract only the critical files while the Informix server is offline.


-Sanjit Chakraborty

New Onstat Commands

cheetahblog Tags:  onstat 2,670 Views
Beginning with IDS version 11.10, Informix introduced several new onstat commands to enhance server maintenance and monitoring. Database administrators can take advantage of these new onstat commands in their regular activities.

Following is a list of the new onstat commands:

onstat -g his Prints SQL statement tracing information gathered using the SQLTRACE configuration parameter. For more information on SQL tracing, see the following articles: Tracing SQL in Cheetah Part 1 and Tracing SQL in Cheetah Part 2

onstat -g ckp Prints checkpoint information and checkpoint-related advisories. For more information on the checkpoint advisory, see the following article: Checkpoint Performance Advisory

onstat -g pqs Allows you to dynamically track a session's query plan. For more information, see the following article: onstat -g pqs

onstat -g smx Prints Server Multiplexer group information for servers using SMX. Server Multiplexer Group (SMX) is a communications interface that supports encrypted multiplexed network connections between servers in high availability environments. SMX provides a reliable, secure, high-performance communication mechanism between database server instances.

onstat -g rss Prints Remote Standalone Secondary (RSS) server information. The output of this command differs slightly depending on whether the command is run on the primary server or on the RS secondary server.

onstat -g sds Prints Shared Disk Secondary (SDS) server information

onstat -g ipl Shows index page logging status information

onstat -g cat Prints information from the Enterprise Replication global catalog. The global catalog contains a summary of information about the defined servers, replicates, and replicate sets on each of the servers within the enterprise.

onstat -g cdr config Prints the settings of Enterprise Replication configuration parameters and environment variables that can be set with the CDR_ENV configuration parameter.

For a complete list of onstat commands, see the IBM Informix Administrator's Reference Guide.

Sanjit Chakraborty
[Read More]

OpenAdmin Tool for IDS XAMPP tutorial updated

cheetahblog Tags:  idsadmin oat 2,664 Views
In March an article on installing IDSAdmin in a XAMPP environment was published on this blog. This article has just been updated by Inge Halilovic with instructions and screen shots for the latest available version of OpenAdmin Tool downloadable from the Free Product Download Site. The new version of OAT has improved graphics, simpler installation, new features such as support for Mach 11 topology, and several bug fixes.

Guy Bowerman[Read More]

Optimizer Directives in ANSI Join Queries

cheetahblog Tags:  directives optimizer joins ansi 2,618 Views
In IDS 11.10, all optimizer directives, including external directives, can be used in queries that use ANSI joins. Using optimizer directives might enable you to force a favorable query execution path, thus improving performance.

Here is a description of the different optimizer directives and how they work with ANSI join queries.

Optimization Goal Directives (FIRST_ROWS, ALL_ROWS)
    Directives enabled for ANSI join queries
Access Method Directives (FULL, INDEX, AVOID_INDEX, AVOID_FULL, INDEX_SJ, AVOID_INDEX_SJ)
    Directives enabled for ANSI join queries
Explain-Mode Directives (EXPLAIN, AVOID_EXECUTE)
    Directives enabled for ANSI join queries
External Directives
    Directives enabled for ANSI join queries
Join Method Directives (USE_HASH, USE_NL, AVOID_HASH, AVOID_NL, /BUILD, /PROBE)
    Directives enabled for ANSI Inner Joins
    Directives not enabled for ANSI Inner Joins in views
    Directives enabled for ANSI Outer Joins that can be transformed into ANSI Inner Joins
    Directives disabled for all other ANSI Outer Joins
Join Order Directives (ORDERED)
    Directives enabled for ANSI Inner Joins
    Directives enabled for ANSI Left Outer Joins if it doesn't conflict with the ordering requirement for Outer Joins
    Directives disabled for all other ANSI Outer Joins
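For instance, a join-order directive in an ANSI inner join might look like the following sketch (the stores demo tables are used here purely for illustration):

SELECT --+ ORDERED
       c.customer_num, o.order_num
FROM customer c INNER JOIN orders o
     ON c.customer_num = o.customer_num;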

Suma Vinod[Read More]

Improved Statistics Maintenance

cheetahblog Tags:  explain sampling statistics 2,604 Views
  • Automatic statistics collection.

IDS now automatically collects index statistics, equivalent to the statistics gathered by UPDATE STATISTICS in LOW mode, when you create a B-tree index on a UDT column of an existing table or when you create a functional or VII index on a column of an existing table. Statistics that are collected automatically by this feature are available to the query optimizer, removing the need to manually run UPDATE STATISTICS. When B-tree indexes are created, column statistics are collected on the first index column, equivalent to what UPDATE STATISTICS generates in MEDIUM mode, with a resolution of 1% for tables of fewer than a million rows and 0.5% for larger tables.

  • Query statistics in EXPLAIN output.

You can now view statistics about completed queries in the new Query Statistics section of SET EXPLAIN output. The query statistics section gives detailed information on each scan, join, and sort, such as the estimated cost, the number of rows scanned, the estimated number of rows, and the number of rows produced at each level. The EXPLAIN_STAT configuration parameter enables or disables the inclusion of the Query Statistics section in the explain output file. You can generate the output file by using either the SET EXPLAIN statement of SQL or the onmode -Y sessionid command. Following is an example.
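For instance, you can switch on explain output for the current session and then run the query you are interested in; the Query Statistics section then appears in that session's explain file (a simple sketch using the stores demo tables, assuming EXPLAIN_STAT is enabled):

SET EXPLAIN ON;
SELECT c.customer_num, o.order_num
FROM customer c, orders o
WHERE c.customer_num = o.customer_num;
-- the Query Statistics section is written to this session's explain file (sqexplain.out by default)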


In explicit UPDATE STATISTICS operations in MEDIUM mode, a new SAMPLING SIZE option in the Resolution clause can specify the minimum number of rows to sample for column distributions.

If the Resolution clause omits the RESOLUTION keyword and specifies no confidence level and no percent value, then the number of rows that Dynamic Server samples will be the larger of the following two values:

    * The min value that you specify immediately after the SAMPLING SIZE keywords
    * The sampling size that is required for the default percent of rows in each bin (2.5%) and for the minimum confidence level (0.80).

If a sampling size is specified in a Resolution clause that includes explicit values for both the average percent of sampled rows per bin and the confidence level, then the number of sampled rows will be the larger of these two values:

    * The min value that you specify immediately after the SAMPLING SIZE keywords
    * The sampling size that is required for the specified percent of rows and for the specified confidence level.

If a sampling size is specified in a Resolution clause that includes an average percentage value but sets no confidence level, then the minimum confidence value of 0.80 is used to calculate the actual sampling size for Dynamic Server to use if the specified size is smaller.

For example, the following statement calculates statistics for two columns of the customer table, without updating index information. At least 200 rows will be sampled, but the actual size of the sample might be larger than 200 if more rows are required to provide the default 0.80 confidence level for a sample distribution that uses 50 equivalence categories, with an average percentage of 2% of the sampled values in each bin.
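A sketch of what such a statement might look like (the two column names are illustrative; RESOLUTION 2 gives the 50 bins of 2% each, and DISTRIBUTIONS ONLY skips the index information):

UPDATE STATISTICS MEDIUM FOR TABLE customer (customer_num, fname)
    SAMPLING SIZE 200 RESOLUTION 2 DISTRIBUTIONS ONLY;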


Whether or not you include an explicit SAMPLING SIZE specification in the Resolution clause, Dynamic Server records in the system catalog the actual sampling size (as a percentage of the total number of rows in the table) at the time of MEDIUM mode UPDATE STATISTICS creation.

  • New FILE TO option to redirect SET EXPLAIN output to a file that you choose.

When you execute a SET EXPLAIN FILE TO statement, explain output is implicitly turned on. The default filename for the output is sqexplain.out until changed by a SET EXPLAIN FILE TO statement. Once changed, the filename remains set until the end of the session or until it is changed by another SET EXPLAIN FILE TO statement.

The filename can be any valid combination of an optional path and a filename. If no path component is specified, the file is placed in your current directory. The file is owned by the current user. The output file that you specify in the SET EXPLAIN statement can be a new file or an existing file. In Dynamic Server, if the FILE TO clause specifies an existing file, the new output is appended to that file. Following is an example where the explain output is sent to the file "explain.vijay" under the "/tmp" directory.

SET EXPLAIN FILE TO '/tmp/explain.vijay';

These features reduce the need for DBAs to perform UPDATE STATISTICS on indexes, enabling better query plans and ease of use.

Vijay Lolabattu
[Read More]

Installing multiple copies of IDS on Windows - 11.50.xC2

cheetahblog Tags:  install installation 2,582 Views

Introduction: Multiple installations of IBM Informix Dynamic Server are now enabled in IDS 11.50.xC2. With this, users are able to install multiple copies of IDS on a single machine, with the ability to create instance(s) per installation. For example, a user can have 11.50.xC1 and 11.50.xC2 co-existing on the same machine, with instances related to each installation running on that machine.

With this new capability, a user that has an installation of IDS 11.50 can invoke setup.exe and will be presented with the existing installations of IDS on the machine. The user can then choose between installing a new copy of IDS and maintaining one of the existing copies. If there is an existing installation of IDS 11.50 on a machine, a user that launches setup.exe in graphical mode will be presented with a panel similar to the one below:

Multiple Installation Screenshot

However, with an existing installation of IDS 11.50, launching setup.exe in silent mode will require a maintenance response file. Without a maintenance response file or with an installation response file, the installer will abort. To install a subsequent copy of IDS 11.50 in silent mode, the user should pass the ‘-multiple’ command line option to setup.exe.

In addition to the "-multiple" command line option, there are 2 command line options for maintenance namely "-path" and "-instnum". These 2 new command line options are used to launch maintenance mode for a specified installation. Since they perform the same functionality, they should be used interchangeably and not in conjunction with one another.

Usages of the newly implemented options:

  • -multiple usage

  • setup.exe -multiple

    This option can be used in conjunction with other installation command line options. E.g.

    setup.exe -multiple -s -f1"C:\TEMP\server.ini" -f2"C:\TEMP\server.log"

  • -path usage

  • setup.exe -path [installation path]

    The user invokes setup.exe with the ‘-path’ option passing the installation path for which maintenance is required. This option can be used in conjunction with other maintenance command line options. E.g.

    setup.exe -path [installation path] -s -f1"C:\uninst.ini"

  • -instnum usage

  • setup -instnum [installation number]

    The user invokes setup.exe with the ‘-instnum’ option passing the installation number for which maintenance is required. Note that each installation is tagged with an installation number in the shortcut menu (except for the first installation, which doesn't have a tag). This option can be used in conjunction with other maintenance command line options. E.g.

    setup.exe -instnum [installation number] -s -f1"C:\uninst.ini"

    For more information on installing IDS, visit

    Tosin Ajayi

    [Read More]

    Tune logical and physical log automatically

    cheetahblog Tags:  extendable auto_tune_server_size dynamic_logs auto_llog 2,560 Views

    Starting with version 12.10.xC3, Informix can automatically tune the logical and physical logs as needed.

    You can set the AUTO_LLOG configuration parameter to enable Informix to dynamically add logical logs when a lack of logical logs causes frequent checkpoints, blocking checkpoints, or long checkpoints. In addition, with the AUTO_LLOG configuration parameter you can specify the dbspace in which to create new logical log files and the total size of all logical log files at which the server stops adding logs for performance.
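    As an illustration, an AUTO_LLOG entry might look like the sketch below (the dbspace name llogdbs and the assumed field layout of enable flag, dbspace for new logs, and maximum total log size in KB are illustrative; check the parameter reference for your release):

        AUTO_LLOG 1,llogdbs,2048000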

    You are probably wondering about the DYNAMIC_LOGS configuration parameter that has existed since Informix version 10. The AUTO_LLOG and DYNAMIC_LOGS configuration parameters work in different situations and do not interact with each other. When the AUTO_LLOG configuration parameter is enabled, logical logs are added to improve checkpoint performance. When the DYNAMIC_LOGS configuration parameter is enabled, logical logs are added under more urgent conditions, such as when a long transaction threatens to block the server. For example, the maximum size that is specified in the AUTO_LLOG configuration parameter does not affect the amount of log space that can be added by the DYNAMIC_LOGS configuration parameter. Similarly, the value of the AUTO_LLOG configuration parameter does not affect the amount of log space that you can add manually.

    Informix now also allows you to expand the size of the physical log as needed to improve performance, by creating an extendable chunk for the physical log. An extendable chunk can be extended automatically when additional storage space is required.

    Typically, the physical log is created in the root dbspace, unless you create an Informix instance during installation. However, for optimal performance it is better to create a separate, dedicated dbspace for the physical log on a different disk from the root dbspace and move the physical log out of the root dbspace. If an instance is created during installation, the physical log is created in a separate dbspace called plogspace, with a default size that depends on the value of the AUTO_TUNE_SERVER_SIZE configuration parameter.
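    For example, moving and resizing the physical log into a dedicated dbspace can be done with onparams (a sketch, assuming a dbspace named physdbs already exists; the size is in KB):

        onparams -p -s 1000000 -d physdbs -y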

    By default, the chunk that you assign to the plogspace is extendable; therefore, the initial size of the chunk can be small. The database server automatically expands the chunk when the physical log requires more space.

    -Sanjit Chakraborty

    New Shared Memory Segment

    cheetahblog 2,512 Views

    A new shared memory segment has been introduced in Informix version 12.10. From 12.10.xC2 onwards, the bufferpool has been moved out of the resident segment and a new bufferpool segment has been added, which is shown as class 'B' in the 'onstat -g seg' output. The BUFFERPOOL configuration parameter determines the number of buffers allocated to this new segment when the database server is started.


    IBM Informix Dynamic Server Version 12.10.UC2DE -- On-Line -- Up 00:20:35 -- 85899522520 Kbytes

    Segment Summary:
    id         key        addr     size       ovhd     class blkused  blkfree
    32768      52564801   44000000 3424256    243124   R     836      0       
    65537      52564802   44344000 67108864   394328   V     6776     9608    
    98306      52564803   48344000 110305280  1        B     26930    0       
    Total:     -          -        180838400  -        -     34542    9608    
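    The BUFFERPOOL parameter mentioned above is an ONCONFIG entry; a minimal sketch of the classic buffers-based format looks like this (the values are illustrative only):

        BUFFERPOOL size=2k,buffers=50000,lrus=8,lru_min_dirty=50,lru_max_dirty=60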

    -Rashmi Chawak

    Modified on by cheetahblog

    ON-Bar whole system parallelize backup/restore

    cheetahblog 1 Comment 2,507 Views

    Prior to Cheetah, a whole-system backup (onbar -b -w) was a serial backup of all storage spaces and logical logs based on a single checkpoint. That checkpoint time is stored with the backup information. The advantage of using a whole-system backup was that you can restore the storage spaces with or without the logical logs. Because the data in all storage spaces is consistent in a whole-system backup, you do not need to restore the logical logs to make the data consistent. Level 0, 1, or 2 backups are supported.

    One of the new features of Cheetah is the ability to back up and restore dbspaces in parallel with the whole-system functionality. With parallel backup, multiple processes run simultaneously, each process backing up a different dbspace. In most cases, parallel backups complete faster than serial backups, which use only one process.

    The ONCONFIG parameter BAR_MAX_BACKUP is used for whole-system backups and restores as well. The BAR_MAX_BACKUP parameter specifies the maximum number of parallel processes that are allowed for each onbar command. Both UNIX and Windows support parallel backups. Although the database server default value for BAR_MAX_BACKUP is 4, the onconfig.std value is 0.

    To specify parallel backups and restores, including parallel whole-system backups and restores, set BAR_MAX_BACKUP to a value higher than 1. For example, if you set BAR_MAX_BACKUP to 3 and execute an ON-Bar command, ON-Bar will spawn a maximum of 3 processes concurrently.

    If BAR_MAX_BACKUP is set to 0, the system creates as many ON-Bar processes as needed. The number of ON-Bar processes is limited only by the number of storage spaces or the amount of memory available to the database server, whichever is less.

    To perform a serial backup or restore, including a serial whole-system backup or restore, set BAR_MAX_BACKUP to 1.
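    For example, a level-0 whole-system backup is still started the same way; how many processes ON-Bar spawns for it is governed entirely by BAR_MAX_BACKUP:

        onbar -b -w -L 0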

    Also in Cheetah, storage manager efficiency is improved by improving dbspace ordering. The "biggest" dbspaces are backed up first and the "smallest" last; the dbspace with the most pages used is the "biggest" and is backed up first. This ensures better parallelism, no matter how BAR_MAX_BACKUP is set or how many pages are to be backed up in different dbspaces. The new backup/restore order is also effective for non-whole-system backups and restores. Restores are done in the same order as the backup, to ease the storage manager's access to stored objects.

    To implement the new ordering of dbspaces for backup and restore, changes were made to the bar_instance table in the sysutils database:
    A new column "ins_backup_order" (integer default 0 not null) was added to the bar_instance table.
    Changes were also made to the ixbar file:
    A new field containing the backup sequence integer (the last field in the line) was added to the ixbar file.
    The value of the new field is always "0" (zero) for log backup objects.

    Even if BAR_MAX_BACKUP is set to 1 for a serial backup, the dbspaces will be backed up using the new dbspace order.

    Backup Operation
    Only one checkpoint is taken for all dbspaces - just before the backup of the rootdbs.
    The rootdbs is backed up first, without parallelism (no change here).
    Before-image processor threads "arcbackup2" are started at this time, one for each dbspace (more threads running in parallel).
    As each dbspace backup completes, the respective "arcbackup2" thread exits (fewer "arcbackup2" threads as the backup progresses).
    New order of backup: the order is based on the used-pages count at the start of the backup. The dbspace with the most pages used is the "biggest" and is backed up first.

    Restore Operation
    Not much has changed, except that the restore is now parallel.
    The rootdbs is restored first, without parallelism (no change here).
    Dbspaces are restored in the same order as they were backed up; the value of "ins_backup_order" is used to determine the correct order. If BAR_MAX_BACKUP is changed between the backup and the restore, the objects are still restored in the same order, although the timing relative to each other may differ significantly. This could have negative effects on storage manager performance.

    Rashmi Chawak[Read More]

    Report from MacWorld

    cheetahblog Tags:  macos 2,500 Views
    MacWorld was an amazing event this past week in San Francisco at the Moscone Center. Although this is generally a consumer oriented event there were many companies with an enterprise focus showing their solutions. IBM was in the smallest size pedestal in the Enterprise Solutions Area -- People would constantly walk by and say - "IBM ---- the biggest company with the smallest pedestal" - We smiled and told them we make the biggest impact!!!!

    As you know, we just announced our beta for Mac OS X this week and were at the show to start to get the word out and meet customers and business partners who are interested in an "Industrial Strength" database for their applications. There was a ton of traffic in the ped -- people were at first surprised and then pleased to see us there, as they have been looking for a dependable database that is secure and scalable to build their applications on. Many developers who have a long history using Informix were thrilled to see the announcement and anxious to try the beta.

    • Mac users are very happy to see an enterprise-class database on the latest Mac OS.
    • They are looking for something better than what is currently available on the Mac, which just hasn't been reliable and has not scaled to meet their needs.
    • Some really like the idea of OAT being open source, allowing users to customize it, and especially the free part.
    • One mid-sized system integrator commented, "We are glad to see IBM supporting the Mac platform as we are building applications to take into Government and Financial Markets - we need a database that our customers can depend on. IDS is exactly what we are looking for."

    Terri Gerber[Read More]

    Non-blocking Checkpoints - Part 1

    cheetahblog Tags:  non-blocking interval checkpoint 2,489 Views
    Prior to IDS Version 9, IDS supported a blocking checkpoint algorithm that causes the system to block all transactional updates while the bufferpool is flushed to disk. From IDS Version 9 onwards, an additional checkpoint algorithm, called Fuzzy Checkpoint, was introduced. Fuzzy checkpoints limit the number of updates required to be flushed to disk during checkpoint processing.

    With the new non-blocking checkpoint algorithm in Cheetah, IDS ensures that transactional updates are virtually never blocked while the entire bufferpool is flushed to disk. Transactional updates are blocked only for the very small duration of time required to flush the partition partitions (tablespace tablespaces) to the bufferpool and to grab the restart point. (The restart point is where fast recovery starts. We log the checkpoint and use that LSN as the restart point for the checkpoint.)

    As part of this feature, the new "onstat -g ckp" command helps DBAs view and analyze checkpoint information and performance advisories. The "onstat -g ckp" command is explained in more detail in this post.

    Scott Lashley's whitepaper on developerWorks explains the internals of non-blocking checkpoints in detail.

    In Part 2, we shall learn how to tune checkpoints and modify the relevant onconfig parameters (RTO_SERVER_RESTART, AUTO_CKPTS, AUTO_LRU_TUNING, etc.).

    Mirav Kapadia[Read More]

    Primary can not send log requested by RSS

    cheetahblog 2,484 Views

    The other day I was setting up a high availability cluster environment and ran into an interesting problem. I followed all the necessary instructions for setting up an RSS server. However, the RSS server got stuck in the recovery process, and the message log on the primary server reported an error that it could not send a log. For example, when the following command was executed on the RSS server to set the data replication type:

          onmode -d RSS <primary server name>


     The RSS server was stuck in recovery mode, and the message log on the primary server showed the following messages:

            RSS Server <RSS server name> - state is now connected             
            Can not send log <log number>     


    The log number mentioned in the error message was not close to the current log on the primary or the RSS server. For example, the current log on the primary was 7438 and on the RSS 7436, but the message log stated 'Can not send log 825241904'. So where was the server getting an out-of-sequence log number?


    Initially I thought it was some kind of corruption. However, after some investigation I figured out that I was using delayed application (DELAY_APPLY) on the RSS server, and the directory specified with the LOG_STAGING_DIR configuration parameter was holding some unwanted file(s); for example, the file 'ifmxUniqueLog_825241904' in LOG_STAGING_DIR. So, during recovery the RSS server requested log number 825241904 from the primary, but that log did not exist on the primary server.


    Once I removed all files from the LOG_STAGING_DIR directory on the RSS server, I was able to successfully set up the high availability cluster environment. Conclusion: the next time you set up an RSS server with DELAY_APPLY, make sure nothing is left in the LOG_STAGING_DIR on the RSS server.
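    In short, the recovery sequence on the RSS server looked roughly like this (a sketch; the staging path /ifmx/staging is hypothetical):

          ls /ifmx/staging                      # stale file ifmxUniqueLog_825241904 found here
          rm /ifmx/staging/ifmxUniqueLog_*      # remove the leftover staged log files
          onmode -d RSS <primary server name>   # then redo the RSS setup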


    -Sanjit Chakraborty

    IDS "Cheetah 2" Beta on Mac OS X (Leopard)

    cheetahblog Tags:  mac 2,447 Views
    Very aptly termed as... "The Big Cats Unite!".

    This was officially announced today at MacWorld'08 - IDS "Cheetah 2" first beta version on MacOS 10.5 (Leopard) is now available for download here.

    Here's the press announcement.

    Mirav Kapadia[Read More]

    New Beta Drop available

    cheetahblog Tags:  cheetah 2,446 Views
    The latest Beta drop of IDS Cheetah is now available, with some important new features integrated into the code line.

    New features available in Beta drop 5 include:

    • AIX 64-bit platform build available for download
    • Web Feature Service for Spatial Data - Delivers spatial information as a web service for your Service Oriented Architecture
    • Label Based Access Control (LBAC) - Control data access at a row and column level using security labels
    • Continuous Availability with multiple HDR remote secondary servers

    The following Infocenter page has a complete list of new features for each Beta drop:

    Guy Bowerman[Read More]

    RENAME support in Enterprise Replication

    cheetahblog Tags:  er replication rename 2,288 Views
    From IDS 10.0x onwards, Enterprise Replication (ER) supports alter operations on a replicated table while replication is active; however, RENAME support was not one of them. That changes in Cheetah: IDS 11.x starts supporting RENAME on ER columns, tables, and databases. This feature simplifies DBA tasks and increases data availability. Without this feature, the DBA would have to plan a time at which ER could be removed from the object so that the rename could be performed. This would require advance planning and scheduling during non-peak hours for performing the rename.

    When a RENAME operation is performed, affected replicate definitions are updated to reflect the impact of the RENAME, and a control message is sent to the other servers informing them of the rename. A RENAME operation is allowed only on a mastered replicate. It does not propagate the RENAME command itself; there are plans to implement that in the future. The user simply issues the rename DDL statement on each of the servers that are affected by the RENAME. If the rename is of a column or a table, then the replicate will be cycled (cdr stop/start replicate) in much the same way that occurs with a table ALTER. If a database rename occurs, however, ER will be cycled (cdr stop/start). This cycling only occurs if the rename affects something on the local node. In all cases, the rename causes a control message to be sent to the other servers within the replication domain so that the syscdr database is correctly updated.
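    For example, the rename statements issued on each affected server are just ordinary DDL (the object names here are illustrative):

        RENAME COLUMN customer.fname TO first_name;
        RENAME TABLE customer TO customer_master;
        RENAME DATABASE stores TO stores_new;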

    Vijay Lolabattu[Read More]

    Inplace Upgrade of IDS on Windows

    cheetahblog 2,251 Views
    Depending on your existing Database Server setup, you might be able to upgrade directly to the
    current version by basically installing the product in a new directory, copying a few configuration
    files, and starting the new server to convert your database data. You can upgrade directly from any
    of the following products: Dynamic Server Version 11.10, 10.00, 9.40, 9.30, 9.21, or 7.31.

    Upgrading is an in-place migration method that uses your existing test and production hardware. The
    operating system on those machines must be supported by Dynamic Server Version 11.50. Also, you must
    have enough space for the system database data conversion.

    Upgrading consists of these steps:
      1. Prepare your system. That includes removing outstanding in-place alters, closing all transactions,
      verifying the integrity of the data with oncheck, and performing a level-0 backup. If you are
      using Enterprise Replication or High Availability Data Replication, stop replication.

      2. Install the new product on the machine.
      Important: Do not install it over the existing product.

      3. Copy the ONCONFIG file to the target and set parameters that are new for the current release.

      4. Start the Dynamic Server Version 11.50 instance. The database data is automatically converted.

      5. Run UPDATE STATISTICS MEDIUM for non-leading index columns, then run UPDATE STATISTICS FOR PROCEDURE.
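      Step 5 amounts to statements along these lines, run in each migrated database (a simplified sketch; see the migration documentation for the exact recommended scope):

        UPDATE STATISTICS MEDIUM;
        UPDATE STATISTICS FOR PROCEDURE;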
    This type of migration minimizes the risk of introducing errors. You can always revert from the new
    server to the old one. In the event of a problem during reversion, you can restore the level-0 backup.

    This method works perfectly on Windows, though it is not documented well.
      1. When installing IDS 11.50 on Windows, choose the option "Install to a default/different directory".


      2. Make sure that you do not initialize the server when installing.

      Initialize Screen

      3. Copy the ONCONFIG file to the target and set parameters that are new for the current release.

      4. Bring the server up using Control Panel->Services or any other method without initializing.

      5. Monitor the online.log for the Conversion Successful message.

    Once the upgrade has completed successfully, you can remove the old instance. When you run the
    uninstaller make sure that you select "Retains all databases, but removes server binaries".


    If for some reason you would like to revert to the previous instance, restore it from the level-0 backup.

    The only time this will not work on Windows is if you are upgrading from an 11.10.UC1 version to
    another 11.10 version, say 11.10.UC2.

    Suma Vinod[Read More]

    Sunsetting Older Informix Information Centers

    cheetahblog Tags:  centers information center info 2,214 Views

    The following Informix information centers will be sunset in the near future:


    • Informix 11.10 information center
    • Informix 10 information center
    • Informix products information center

    After these information centers are sunset, links to them will be redirected automatically to the resources listed below. You can download final versions of the product documentation, which are being provided as-is, from these resources:

    Information centers for current Informix releases will be replaced by the new IBM Knowledge Center, which is currently in Beta. Links from the current information centers will be redirected automatically to the new IBM Knowledge Center.

    Consider updating your bookmarks now, and explore the IBM Knowledge Center, especially the Informix collections:

    To learn more about the IBM Knowledge Center strategy, follow the IBM Technical Content blog at

    -Randy House

    Time types implemented as BIGINT User-Defined Types

    cheetahblog 2,147 Views

    BIGINT Time


    I wasn't happy with the overall efficiency of the built-in time types; so, I created a couple of user-defined types which represent time using distinct types of BIGINT. In these UDTs, a TIMEPOINT is the number of microseconds since the UNIX epoch, January 1, 1970 00:00:00 GMT, and a MICROSECOND is simply a duration of time. The internal implementation of a type affects three factors of database operations: storage space, performance, and application development. This is an implementation of time types that has great potential to improve upon Informix's internal implementation in all three aspects. However, the potential does not always live up to theory in practice. Let's look at when it does, when it does not, and why.
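    A sketch of how such types can be created (this is an assumption about the exact DDL used, but CREATE DISTINCT TYPE is the mechanism being described):

    CREATE DISTINCT TYPE timepoint AS BIGINT;    -- microseconds since the UNIX epoch
    CREATE DISTINCT TYPE microsecond AS BIGINT;  -- a duration in microseconds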


    Storage Space

    The storage space requirements for DATETIME and INTERVAL types can be calculated using this formula, in bytes:


    (total number of digits for all fields)/2 + 1

    For example, a DATETIME YEAR TO FRACTION(5) requires (4+2+2+2+2+2+5)/2 + 1 = 11 bytes; you have to round up. A 64-bit encoding always requires 8 bytes, never more and never less. This matters more or less according to your table definition. Examples:


    The table cust_calls2 is derived from the stores7 database table cust_calls.


    create table cust_calls2
    (
        customer_num integer,
        call_dtime datetime year to minute,
        call_interval interval minute(4) to second,
        user_id char(18) default user,
        call_code char(1),
        call_descr varchar(240),
        res_dtime datetime year to minute,
        res_descr varchar(240)
    );



    The table cust_calls3 replaces the server native types with their UDT counterparts.


create table cust_calls3
(
    customer_num   integer,
    call_tp        timepoint,
    call_mu        microsecond,
    user_id        char(18) default user,
    call_code      char(1),
    call_descr     varchar(240),
    res_dtime      datetime year to minute,
    res_descr      varchar(240)
);



    So, if you load these tables with a lot of rows, how much space do you save?


    Let's run the following:





select tabname, rowsize,
       nrows::bigint as nRows,
       pagesize,
       npused::bigint as nPages,
       (npused * pagesize / (1024*1024))::decimal(5) as megabytes
from systables where tabname like 'cust_calls%';




tabname     cust_calls2
rowsize     523
nrows       3670016
pagesize    2048
npages      1223339
megabytes   2389.3

tabname     cust_calls3
rowsize     528
nrows       3670016
pagesize    2048
npages      1223339
megabytes   2389.3


    Under these conditions, there is no space saving at all. Why is this?


The page size is 2048 and the row sizes are 523 and 528. That means you can only get 3 rows per page regardless of which table definition you use. Also, the 64-bit implementation has a slightly larger row size because the DATETIME and INTERVAL definitions do not cover the full range of their type capabilities. You will not see any space savings unless you can increase the rows-to-page ratio, as the quick check below illustrates.
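As a quick sanity check on the rows-to-page ratio, here is a hedged sketch of a catalog query; it ignores the per-page header and slot-table overhead, so the real number of rows per page can be slightly lower than this estimate.

    select tabname, rowsize, pagesize,
           trunc(pagesize / rowsize) as approx_rows_per_page
    from systables
    where tabname like 'cust_calls%';

For both tables this works out to roughly 3 rows per page, which is why the page counts above are identical.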


    So, what happens when you have smaller rows and higher precision DATETIME/INTERVAL definitions? Let's take an extreme case. In this test, the table definitions only have time types in them.


create table cust_calls2a
(
    call_dtime     datetime year to fraction(5),
    call_interval  interval day(9) to fraction(5)
);

create table cust_calls3a
(
    call_tp  timepoint,
    call_mu  microsecond
);



    This time the table statistics are:


tabname     cust_calls2a
rowsize     23
nrows       3670016
pagesize    2048
npages      49595
megabytes   96.865

tabname     cust_calls3a
rowsize     16
nrows       3670016
pagesize    2048
npages      36337
megabytes   70.971


In this condition, we are looking at a space savings of over 26%. Your database probably has a mix of conditions, so your overall space savings probably lies somewhere between 0 and 26 percent.



Performance

Using the same tables as above, let's exercise the server with some queries. In each case the maximum value for the time column is selected.


    select max(call_tp) -- or call_dtime

    from cust_calls3;

Since there are no indices, this forces the server to perform N-1 comparisons, where N is the number of rows. Just to be clear, these tests were run on an older laptop and the server is in no way optimized. The relative performance across conditions should be reasonably robust, but your mileage may vary. In particular, if you are using a solid-state drive, the IO time will be much lower and the computational time proportionately higher, which would significantly change the relative performance percentages reported here.

Finding the maximum value for call_dtime was approximately 9 percent faster than doing the same for call_tp. In practice, the operations performed by the server are faster for this structure,


    typedef struct dtime {
       short dt_qual;
       dec_t dt_dec;
    } dtime_t;

than they are for an integer base type. How could this be? The Informix math libraries convert everything to decimal prior to doing the math, so the TIMEPOINT (BIGINT) values may be converted to decimal before the comparison is done, and that conversion time may be adding to the overall processing time. I played with some queries and found that there was enough difference between

select count(i::decimal(32)) from x1m

and

select count(i) from x1m

to make this hypothesis plausible. It is not confirmed, but it is plausible. In this test, i is a column in a view based on sysmaster:sysdual that returns one million rows.

It is time to take a look at the abridged versions of the tables (cust_calls2a and cust_calls3a). Here we have a complete reversal: the query on max(call_tp) was, on average, 24 percent faster than the query on max(call_dtime). I suspect this is because the information density on disk is about 26 percent greater for my UDTs, so the IO time is reduced, and that more than makes up for the increased CPU processing overhead.

The summary is that for tables with a low proportion of time values, using TIMEPOINT and MICROSECOND does not gain you much in disk space savings, and can actually cost you a little in CPU cycles. However, for tables with a high proportion of high-precision time values, these UDTs can save you in the neighborhood of 26 percent on disk space and around 24 percent in overall query execution time.


    Application Development

    I wrote up the concepts for why types like these would be nice to have for application development before implementing them. That material is covered in the write-up, “Timestamps with Time Zone Independence Implemented as 64-bit Integers”. Since these are not built-in types, you cannot use API methods to directly bind them to application time value types. For example, you cannot call ResultSet.getTimestamp() on a TIMEPOINT column. However, using them is pretty simple. You just pass the values through the API as BIGINT (64-bit) values, converting to or from the application language native types. Below are some snippets of Java code that do this.
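On the SQL side, if you just want a human-readable value in a tool such as dbaccess, you can cast the UDT columns back to the native time types. The following is a hedged sketch that assumes the conversion casts from the attached SQL files have been registered:

    -- Display TIMEPOINT and MICROSECOND values as native time types
    -- (requires the to/from conversion routines shipped with this post).
    select call_tp::datetime year to fraction(5)   as call_dtime,
           call_mu::interval day(9) to fraction(5) as call_interval
    from cust_calls3;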

    Inserting TIMEPOINT and MICROSECOND values using Java types

// insert Instant based on GMT and Duration
PreparedStatement pst = conn.prepareStatement("insert into tab1(time, duration) values (?, ?)");

Instant timepoint = Instant.now(); // current date and time with millisecond precision
long msec_since_epoch = timepoint.toEpochMilli(); // milliseconds after epoch GMT
pst.setLong(1, msec_since_epoch * 1000);
// This bypasses the JDBC driver converting Timestamp to
// the timezone of the driver's JRE; there are 1000 micros
// in a milli.

Duration dur = Duration.ofNanos(123456000); // 0.123456 seconds
long musec = dur.toNanos() / 1000;
pst.setLong(2, musec);

// What will happen here is the driver will pass the long values to the server,
// and the server will leverage the implicit casts to convert the long values
// into a TIMEPOINT and a MICROSECOND. Long (BIGINT) to the UDT types should be
// a no-op cast.



    Inserting TIMEPOINT and MICROSECOND using integer literals

PreparedStatement pst = conn.prepareStatement("insert into tab1(time, duration) values (" +
    // seconds since epoch + nanosecond offset
    String.valueOf(timepoint.getEpochSecond()*1000000 + timepoint.getNano()/1000) +
    ", " + "12345654321)"); // 12345.654321 seconds

// The server will convert the literals in string format to BIGINT, and then no-op cast the
// BIGINTs to the UDT types.



    Inserting TIMEPOINT and MICROSECOND using DATETIME and INTERVAL literals

PreparedStatement pst = conn.prepareStatement("insert into tab1(time, duration) values (" +
    "datetime(1970-01-02 00:00:00.54321) year to fraction(5), " +
    "INTERVAL (654321.54321) SECOND(6) TO FRACTION(5))");

// Here the literal values will be passed to the server embedded in the SQL statement,
// the server will convert the literal strings to DATETIME and INTERVAL types, and then
// it will apply the casts from these types to the UDT types using the UDR conversion
// routines defined for the casts.




Selecting TIMEPOINT and MICROSECOND values back into Java types

ResultSet r = stmt.executeQuery("select time, duration from tab1");
long time, muSec;
Duration dur;
System.out.println("Selecting records: ");

while (r.next()) {
    time = r.getLong(1);
    muSec = r.getLong(2);

    // Create an Instant using the number of whole seconds and the remaining nanos.
    timepoint = Instant.ofEpochSecond(time/1000000, (time % 1000000) * 1000);
    dur = Duration.ofNanos(muSec * 1000);

    System.out.println("Unformatted time: " + timepoint.toString() +
        "\tDuration: " + String.valueOf(dur.getSeconds()*1000000000 + dur.getNano()) +
        " nanoseconds");

    // What time zone do we want? Let's pick New Zealand.
    ZoneId zone = ZoneId.of("Pacific/Auckland");
    ZonedDateTime localZoneTime = ZonedDateTime.ofInstant(timepoint, zone);
    DateTimeFormatter format = DateTimeFormatter.ofPattern("MMM d yyyy hh:mm a");

    System.out.printf("Local time: %s %s\n", localZoneTime.format(format), zone);
}



Sample output:

Unformatted time: 2014-10-02T23:02:27.780Z    Duration: 123456000 nanoseconds
Local time: Oct 3 2014 12:02 PM Pacific/Auckland
Unformatted time: 2014-10-02T23:02:27.780Z    Duration: 12345654321000 nanoseconds
Local time: Oct 3 2014 12:02 PM Pacific/Auckland
Unformatted time: 1970-01-02T00:00:00.543210Z    Duration: 654321543210000 nanoseconds
Local time: Jan 2 1970 12:00 PM Pacific/Auckland



In terms of storage space, if time values are a small portion of your table rows, you will not get much, if any, space savings; but if time values are a large portion of your table rows, you can see up to, in round numbers, 25 percent savings in space requirements. The server-side operations on these types are not more efficient than on the native types, because the server converts integer types to decimal prior to performing any math operations, and the DATETIME and INTERVAL values are already in DECIMAL format. However, you can get around 25 percent better overall query performance because more information is obtained per disk IO operation. Converting between database types and application types is more efficient in terms of CPU cycles because the conversion involves only integer operations. More significantly, the database values are time zone independent, and you do not have to code your applications to be aware of the time zone in which the application was running when the values were stored.


    The attached files include the SQL routines to define the MICROSECOND and TIMEPOINT types, routines to convert them to and from the native time types, and simple math operations for addition, subtraction, multiplication, and division. Included is a write-up containing the theoretical reasoning for implementing these types. There are also a variety of SQL test scripts and a Java program demonstrating read and write operations on these types in an application.


    The download code is more of a proof of concept than a fully-fledged, for-production-use implementation. The conversion routines that exist could be better optimized, and operations beyond simple math could be implemented. Comments and recommendations for improvement are welcome.


    -Chris Golledge

    Modified on by cheetahblog

    How to change multiple configuration parameters at once?

    cheetahblog 2,138 Views

At present, most of the Informix configuration parameters can be modified dynamically. Typically, users run the 'onmode -wf' or 'onmode -wm' command to dynamically change a specific configuration parameter. But there may be situations when you need to modify multiple parameters at once. For example, you may have ten Informix instances and want to tune some of the SQL statement cache configuration parameters on all of them.


Now, you can import a configuration file using the onmode command to change multiple configuration parameters at once. Importing a configuration file is often faster and more convenient than changing one configuration parameter at a time. You can use the 'onmode -wi' command to import a file that contains new values for multiple configuration parameters. If the parameters are dynamically tunable, the database server applies the new values. The import operation ignores configuration parameters in the file that are not dynamically tunable, as well as parameters whose new values are the same as the existing values.


However, keep in mind that an import operation changes the values of configuration parameters only in memory. It does not modify the values in the ONCONFIG file on disk.


Let's take the same example as before and modify some of the SQL statement cache configuration parameters using an import operation. Currently, you have the following settings:

    STMT_CACHE         0
    STMT_CACHE_SIZE    512

You create a '' file in the /tmp directory with the following values to modify the configuration parameters:

    STMT_CACHE         1
    STMT_CACHE_SIZE    512


Next, you run the following command to import the configuration values from the file named '' in the /tmp directory:

    $ onmode -wi /tmp/
       3 parameters successfully modified. See online log for more info.

The above command generates appropriate messages on screen and in the message log. The following is an excerpt from the message log:

    14:50:42  Importing configuration from '/tmp/':
    14:50:42  Value of STMT_CACHE has been changed to 1.
    14:50:42  Value of STMT_CACHE_NOLIMIT has been changed to 1.
    14:50:42  Value of STMT_CACHE_HITS has been changed to 1.
    14:50:42  Config Import Complete. 3 tunable parameters processed.
              3 modified, 0 unchanged, 0 could not be tuned.


Notice that only three out of the five parameters in the file have been changed; the others were skipped either because they are not dynamically tunable or because they already use the same value.


Just as the 'onmode -wi' command imports a configuration file, you can also export the current configuration parameters from memory to a file using the 'onmode -we' command.


    -Sanjit Chakraborty


    Introduction to the Binary Datablade

    cheetahblog Tags:  datablade bdt 1,931 Views
So now that we've covered the BTS blade, let's look at another new datablade. Although first introduced in 10.00.FC6, the Binary DataBlade (BDT for short) still needs some introduction.

The BDT blade is designed to allow for the storage and manipulation of binary encoded strings. It supports two new datatypes, and allows logical operations like AND, OR, XOR and NOT to be used against these datatypes.

Like any official datablade, it is registered/installed by using the blademgr utility.

    Registering the BDT

After registering it to your database you now have two additional UDTs:

• binaryvar - a UDT which stores a binary string of up to 255 characters
• binary18 - a UDT which stores a fixed 18-byte binary string. If the value inserted into this UDT is less than 18 bytes, the field is right-padded with zeroes.
Binary data is entered into the columns by use of ASCII hex values. You can choose to preface the value with "0x" or not. The data must also be entered in pairs (since a byte of information ranges from 00 to FF in hex). That means inserting an odd number of hex digits will generate an error, while inserting complete byte pairs will not, as illustrated below.
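The original example statements did not survive here, so the following is a hedged sketch with assumed values (the table and data are illustrative only):

    -- Assumed table purely for illustration
    create table bdt_demo (val binaryvar);

    -- Fails: 'ABC' is an odd number of hex digits, not whole byte pairs
    insert into bdt_demo values ('0xABC');

    -- Works: 'ABCD' is two complete bytes
    insert into bdt_demo values ('0xABCD');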

Currently the above datatypes can be manipulated by the following functions (a small example follows the list):

• bit_and() - performs a logical AND on two BDT datatypes or one BDT datatype and a string constant.
• bit_complement() - performs a logical NOT on a single BDT datatype.
• bit_or() - performs a logical OR on two BDT datatypes or one BDT datatype and a string constant.
• bit_xor() - performs a logical XOR on two BDT datatypes or one BDT datatype and a string constant.
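As a quick illustration, here is a sketch with assumed one-byte values (not taken from the original post; output formatting may differ):

    -- Assumed demo table holding two one-byte values
    create table bits_demo (a binaryvar, b binaryvar);
    insert into bits_demo values ('0xF0', '0x3C');

    select bit_and(a, b),        -- 0x30
           bit_or(a, b),         -- 0xFC
           bit_xor(a, b),        -- 0xCC
           bit_complement(a)     -- 0x0F
    from bits_demo;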

    Also the following aggregates support the BDT datatypes:
    • DISTINCT()
    • MAX()
    • MIN() 
Please note the following are not supported:
    • LIKE

The BDT blade supports b-tree indices for both single-column and composite indices.

So now that we have the details, let's create an example. We will create a table in the stores_demo database called bdt_example, and insert some rows.

    CREATE TABLE bdt_example (bin_id serial, bin_val binary18) ;
    INSERT into bdt_example values (0, '102039847137');
    INSERT into bdt_example values (0, '0xFE');
    INSERT into bdt_example values(0, '49424d434f52504f524154494f4e32303036');

So let's run:

SELECT DISTINCT (bin_val) FROM bdt_example;

    BDT Select Distinct

As you can see, the binary18 values that were inserted with fewer than 18 bytes have been right-padded with zeroes.

    So let's see what the complement of the data would be:

    SELECT bit_complement(bin_val) from bdt_example;

    select with bit complement

Again you can see where the logical NOT was run against these three values.

The BDT datablade also has some additional diagnostic functions. They are:

• BDTRelease() - This function tells you what version of the BDT datablade you are using.
• BDTTrace(filename) - This function specifies the location of the debug/trace file for the BDT datablade.
• LENGTH(column) - This function returns the length, in bytes, of the BDT column specified.
• OCTET_LENGTH(column) - This function returns the length, in bytes, of the BDT column specified.
LENGTH and OCTET_LENGTH return the same values for BDT datatypes.
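For example, run against the bdt_example table created above (a hedged sketch; exact output formatting may vary):

    -- Both functions report the byte length of each value;
    -- binary18 values are always 18 bytes after padding.
    select bin_id, length(bin_val), octet_length(bin_val)
    from bdt_example;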

One additional note: the BDT datatypes are fully supported for use in Enterprise Replication (ER).

Additional information on the Binary Datablade can be found in the Built-In DataBlade Modules User's Guide.

    Mark Jamison[Read More]

    Tracking object state changes within Enterprise Replication (ER)

    cheetahblog Tags:  cdr_features track logcommands 1,862 Views

Have you ever run into a situation where certain ER administrative commands were executed and caused ER to fail? Without knowing the exact sequence of activities, it can be very difficult to identify the actual commands that caused the problem. Unfortunately, not all ER commands generate alerts or update the message log. If you did not use a script to run the ER administrative commands, there is no easy way to track the sequence of activity that caused the failure.


An undocumented mechanism can be used to track ER object state changes. Once you enable this tracking mechanism, the Informix server records ER administrative operations, with their full command-line arguments, in the server message log file, which can help when analyzing ER issues. The tracking is disabled by default. To enable it, do one of the following:

    • Add the following entry to the ONCONFIG configuration file and restart the Informix server.

                     CDR_FEATURES   LOGCOMMANDS


• Dynamically change the configuration setting with the following command:

                cdr change onconfig "CDR_FEATURES LOGCOMMANDS"

For example, running 'cdr define server --ats=/test/data2 --ris=/test/data2 --connect cdr2 --init g_cdr2 --sync g_cdr1' to define a replication server generates the following information in the message log:

    10:23:12 CDR remote administrative operation 'define serv -A /test/data2 -R /test/data2 -S g_cdr1 -I g_cdr2' execution succeeded.


    -Sanjit Chakraborty

    Identifying locks from Updatable Secondary Servers

    cheetahblog Tags:  onstat; lock proxy; 1,861 Views




    cheetahblog 1,776 Views
    Welcome to the IDS Experts Blog, an international team blog authored by IBM software engineers. These experts from the Informix Technical Support and R&D labs will be posting about a variety of topics related to Informix and IDS. A popular topic over the next few months is likely to be the 11.10 release of Informix Dynamic Server currently in Open Beta.

    If you have suggestions on articles you'd like to see from support or development engineers please add a comment to the blog. Your feedback to this blog goes directly to engineers working on Informix products.[Read More]

    Dormant, no more!

    Pradeep Natarajan 507 Views


    [Image source:]

First, apologies to those who have been following this blog regularly in the past! Unfortunately, this blog had remained dormant for far too long. Make no mistake: only the blog was dormant, not the product or the experts behind it. Our team has been busy enhancing our beloved IBM Informix product as well as producing content for the user community. As evidence, you will see a series of posts erupting here that point to various blogs, presentations, sample applications, and tutorials available in the public domain.

    It is time to revive this blog and provide an avenue for our experts to blanket the user community with Informix wisdom!

    Modified on by Pradeep Natarajan

    One Listener, Three Ways to Connect your Applications to Informix

    Erika Von Bargen Tags:  nosql rest informix listener mqtt 416 Views

    Hopefully, you've already heard about the Informix wire listener. But did you know that there are actually three different types of listeners? This means that the listener provides application developers with three completely different paradigms for connecting their applications to the Informix database server.


Suppose you are rapidly developing a new application and crave the flexibility and power that the schemaless "NoSQL" data model will provide you to adapt to use cases going forward that you cannot necessarily foresee at the current moment. Or maybe you want a quick and easy way to get data into and out of Informix for a web application developed with the newest and hottest web framework, and you want to do so in a way that doesn't make you think or worry about whether the underlying data is in a relational, time series, or JSON schema. Or maybe you are developing an Internet of Things application for a wide array of sensors that will be gathering data that needs to be pushed to your database in a way that is simple, lightweight, and asynchronous. The Informix wire listener can help you build such solutions, and many more! And it allows you to build such applications all the while harnessing the performance, scalability, and reliability in the enterprise-class, embeddable database that is Informix.


    So what are these three listener types?

    • The Mongo listener

    The Mongo listener was the first and original listener type and allowed applications developed for MongoDB to run against Informix without modification. This enabled an entirely different type of application -- flexible, schemaless, and in line with a rapid application development paradigm -- to be run against the Informix database server. It also enabled a whole new ecosystem of modern and open source drivers, those developed for the MongoDB protocol, to be used to connect applications to Informix.

    But the original Mongo listener type was about more than just Mongo compatibility. It enabled a new "hybrid" application model -- one that combined traditional relational data with the new NoSQL, JSON-based data. The listener was developed to seamlessly work with any type of data that Informix supports, be it relational, JSON, or time series.

    Want to learn more about using Mongo drivers to connect to Informix? Check out a great tutorial written by Martin Fuerderer and hosted on the Informix and NoSQL blog. This tutorial takes you step by step through understanding the listener, NoSQL data, and how to use Mongo syntax and tools to connect to Informix.

    • The REST listener

    REST. So ubiquitous in the world of web applications. This was the next evolution of the Informix wire listener. The listener's REST interface allows any application to get data into and out of Informix using HTTP. A simple interface that opened up Informix connectivity to just about every modern programming language without the need for an Informix-specific driver.

    Want to learn about using REST to communicate with Informix? Head on over to Martin Fuerderer's article First Steps with the REST API and then check out the REST API syntax page in the Informix Knowledge Center to get more details and examples of the REST syntax for accessing and querying data.

    • The MQTT listener

New to 12.10.xC7, the MQTT listener type enables you to use the MQTT publish model to insert data into the Informix database server. This is perfect for Internet of Things scenarios where data collected on the "edge" needs to be published and stored in a central data store in a lightweight and asynchronous way.

    Want to learn more about using MQTT to publish data to Informix? Check out  Working with the new 12.10.xC7 MQTT Listener over on Shawn Moe's The Lenexa View blog for an in-depth discussion of the MQTT listener, including some sample Java code.


And last but not least, it is worth reiterating that all three listener types work seamlessly on relational, JSON, or time series data, or even a combination. No matter the underlying way in which the data is stored, the Informix wire listener will manage those details automatically for you, allowing you as the application developer to focus on your application instead of the nuances of data access.

    Want to get started, but need more info on how to start different or multiple listener types? You can find that here in the IBM Informix Knowledge Center.

    Modified on by Erika Von Bargen

    Executing transactions in the wire listener

    Erika Von Bargen Tags:  rest transactions nosql informix listener 317 Views

    Today I'd like to highlight one of the lesser known features, a hidden gem if you will, in the Informix wire listener: support for transactions.


    Transactions in Mongo and REST?

    If you are coming from a traditional relational database background, transactions are a very familiar and truly essential feature. But take a moment to consider two of the application paradigms that the listener enables: MongoDB and REST.

Let's start with the Mongo listener type, which enables applications written using any of the vast array of MongoDB drivers to run against Informix. MongoDB, and many other NoSQL databases, do not have any notion of transactions. They do not have the ACID (Atomicity, Consistency, Isolation, Durability) properties of relational databases; instead they use an "eventually consistent" model. But there are many application scenarios where transactionality is critical. In the Mongo world, if you need transactions, you as the application developer have to implement them yourself with a two-phase commit inside your application. By contrast, in the Informix wire listener, transaction support is built in and is readily available to all Mongo clients.

    Second, let's consider the REST listener type. REST by definition is stateless. Therefore the notion of bundling up multiple requests into a single transaction does not traditionally fit into this model either. But the Informix wire listener does enable ways to make this happen even for REST clients should your application require it.

    Ok, now that we've covered why this transaction support is a unique value add, let's dive into the semantics.


    Listener support for transactions

    The listener offers two different ways to execute a transaction: transaction mode and batched transactions. Both of these methods of running transactions are available to both Mongo and REST clients. We'll explore each of these in turn.


    Option 1: Transaction mode

In this method, you first issue a command to put the listener in transaction mode. Then execute as many inserts, updates, or deletes as you would like using the normal syntax. None of these statements will be committed until you issue a commit command. There is also a rollback command should you need to roll back the transaction. You remain in transaction mode until you explicitly exit it. Therefore any inserts, updates, or deletes run after the first commit command will be part of the next transaction and will remain uncommitted until you issue another commit. When you are done using transactions and want the listener to go back to the default behavior of auto-committing each statement, you issue a final command to exit transaction mode.


    Transaction mode for Mongo clients

Here's an example using the Mongo shell syntax. We'll take the classic transaction example of transferring funds from one account to another. Here we'll transfer $1000 from a certain customer's checking account to their savings account.

    > db.runCommand({transaction:"enable"})
    { "ok" : 1 }
    > db.accounts.update({customer_num:191187, type:"checking"}, {$inc:{balance:-1000}})
    WriteResult({ "nMatched" : 1, "nUpserted" : 0, "nModified" : 1 })
    > db.accounts.update({customer_num:191187, type:"saving"}, {$inc:{balance:1000}})
    WriteResult({ "nMatched" : 1, "nUpserted" : 0, "nModified" : 1 })
    > db.runCommand({transaction:"commit"})
    { "ok" : 1 }

    > db.runCommand({transaction:"disable"})
    { "ok" : 1 }

    The first command, db.runCommand({transaction:"enable"}) , puts your session into transaction mode. The following two statements update the accounts table. Keep in mind that accounts could be a regular relational table or a JSON collection. It doesn't really matter; the syntax from the application's perspective is the same for both. In the Mongo update statement, we pass two documents. The first is the condition that determines which rows or documents get updated; the second contains the update operation itself. So, for example, db.accounts.update({customer_num:191187, type:"checking"}, {$inc:{balance:-1000}}) means update the account of type "checking" for customer number 191187 by incrementing the "balance" field by -1000 (so really a decrement operation). After the updates, we commit the transaction with db.runCommand({transaction:"commit"}). Finally, we disable (exit) transaction mode with db.runCommand({transaction:"disable"}).
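For intuition, here is roughly the equivalent SQL, assuming accounts is a plain relational table; this is only a sketch of what the listener effectively does on your behalf, not its literal generated SQL:

    begin work;

    update accounts set balance = balance - 1000
     where customer_num = 191187 and type = 'checking';

    update accounts set balance = balance + 1000
     where customer_num = 191187 and type = 'saving';

    commit work;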

It is not required to disable transaction mode right away. If you have other transactions to run, you can continue right along after the commit and the next statements will be part of a second transaction. The db.runCommand({transaction:"disable"}) call is critical, though, because until you run it, every statement remains uncommitted until an explicit db.runCommand({transaction:"commit"}). So don't forget this last step if you want to go back to the default auto-commit behavior!

There are two more options in the transaction command that are worth noting.

    To rollback a transaction, run

    > db.runCommand({transaction:"rollback"})
    { "ok" : 1 }

To check whether you are currently in transaction mode, you can use the "status" option. The "enabled" field in the response shows whether transactions have been enabled on the session (i.e. whether you are in transaction mode) and the "supported" field indicates whether the current database supports transactions.

    > db.runCommand({transaction:"status"})
    { "enabled" : true, "supported" : true, "ok" : 1 }

    Transaction mode in REST

    All of the commands just described in the Mongo section can be run through REST. Below I will present the exact same transfer of funds example, but a couple of things first:

    • For transactions to work with REST, you have to be using cookies! When you authenticate to the listener for the first time, the listener will include a Set-Cookie directive in the response which will include an identifier for your session. This needs to be set in the cookie of every subsequent REST request that is part of the transaction. Cookies are always a good practice when using the REST listener to avoid getting a new session each time, but they are absolutely required if you want to use transaction mode in REST.
    • A few reminders on REST syntax.
• A GET request runs a query. A GET request on the $cmd pseudo collection runs a command, which is what you use to run the transaction commands.
      • PUT on a table or collection does an update. The update condition is passed in the query parameter in the URL; the update operation is passed in the data portion of your request.
      • If you need more of a refresher on REST syntax, check out this page of the IBM Informix Knowledge Center

    Here's the same example as above, just using REST syntax:

    GET /mydb/$cmd?query={transaction:"enable"}

    PUT /mydb/accounts?query={customer_num:191187, type:"checking"}

         Data: {$inc:{balance:-1000}}


    PUT /mydb/accounts?query={customer_num:191187, type:"saving"}

     Data: {$inc:{balance:1000}}

    GET /mydb/$cmd?query={transaction:"commit"}


    GET /mydb/$cmd?query={transaction:"disable"}


    Option 2: Batched transaction


    The second way of doing transactions in the listener is to send a set of statements as a single command and instruct the listener to run them as a single transaction. This is done using the "execute" option of the listener's transaction command. When using this method, the listener will handle doing the start transaction and commit (or rollback) statements for you under the covers. No need for you to run transaction enable, commit, or disable.

    This support was added starting in 12.10.xC7.


    Batched transaction with Mongo clients

    The syntax for running a batch of commands as a single transaction is as follows.

    > db.runCommand({"transaction" : "execute", 
     "commands" : [
    	{"update":"accounts", "updates" : [ 
                 { "q" : { "customer_num" : 191187, type: "checking"}, 
                   "u": {$inc:{balance:-1000}} } 
             ] },
    	{"update":"accounts", "updates" : [ 
                 { "q" : { "customer_num" : 191187, type: "savings"}, 
                   "u": {$inc:{balance:1000}} } 
             ] }

You just specify the transaction command with the execute option ({"transaction" : "execute"}) and then, in the "commands" field, pass an array of documents that represent your insert, update, or delete statements. The syntax for these individual commands is just as if you were running insert, update, or delete as a command in the Mongo syntax.

If all statements in your batched transaction succeed, the listener will automatically commit it. The result returned will include an array that indicates the number of documents updated (or inserted or deleted).

        "ok" : 1,
        "result" : [
                "ok" : 1,
                "n" : 1,
                "nModified" : 1
                "ok" : 1,
                "n" : 1,
                "nModified" : 1

    If there is a failure in one of the statements, the listener will roll back your transaction and return you a response that indicates which command failed and the failure message.


    Batched transaction in REST

    Here's the same example in a REST request.

    GET /mydb/$cmd?query={"transaction":"execute", "commands":[{"update":"accounts", "updates":[{"q":{ "customer_num":191187, type:"checking"}, "u":{$inc:{balance:-1000}}}]}, {"update":"accounts", "updates":[{"q":{"customer_num":191187, type:"savings"}, "u":{$inc:{balance:1000}}}]}]}

That's just the exact same document as above passed in a GET request on the $cmd pseudo collection.


    Batched transaction with a finally clause

There's one more thing! We added to the listener's batched transaction command the ability to include a finally clause. The finally clause is a set of statements that will always be run at the end of the transaction, whether the transaction gets committed or rolled back.

    This actually allows you to do things outside the traditional notion of transactions. One of the things about the listener is that it tries to efficiently use resources across sessions, by using a connection pool for example. This means that in normal auto commit mode, you are not guaranteed to run your statements on the exact same JDBC connection each time, which is perfectly acceptable when each statement is auto committed independently. Transaction mode, on the other hand, will bind a single connection to your session until you exit transaction mode. This, combined with the SQL passthrough feature, allows you to use transactions to do any sequence of statements that need to be run on the same JDBC connection.

For example, suppose you need to set an environment variable before running a query (and you don't want to set this environment variable globally for all connections created by the listener). The best way to do this is to execute both statements through the listener's batched transaction command to ensure that your query is run on the same JDBC connection that your environment variable was set on. In this use case, the finally block is key because you want to make sure the environment variable gets unset before that connection goes back to the connection pool, and you need to make sure this unsetting happens whether or not the rest of your transaction succeeds.

    > db.runCommand({"transaction" : "execute", 
     "commands" : [
        {"find" : "system.sql", "filter" : {"$sql" : 
                  "SET ENVIRONMENT USE_DWA 'ACCELERATE ON'" } },
        {"find" : "system.sql", "filter" : {"$sql" : 
                  "SELECT SUM(s.amount) as sum FROM sales AS s 
                  WHERE s.prid = 100 GROUP BY" } }
    "finally" : [{"find":"system.sql", "filter" : {"$sql" : 

This example also shows how you can run queries (the find command) within the batched transaction syntax. This particular example runs SQL passthrough queries, but regular queries on tables and collections work as well. The query results are returned as part of the response to the {"transaction" : "execute"} command.



So that's it! Transaction support as you've come to know it in relational databases such as IBM Informix is now available to applications using REST or Mongo client drivers!

    To find the documentation on the listener transaction command check out this page in the IBM Informix Knowledge Center.

    Modified on by Erika Von Bargen