IDS Experts

For a complete list of current SQL reserved words, see the IBM Informix Guide to SQL: Syntax.

Sanjit Chakraborty

IDS 11.50 Installation Enhancements

cheetahblog Tags: installation enhancements
IDS 11.50 significantly improved the installation process. Starting with IDS 11.50, Informix introduced a new, versatile installation wizard (installer) that can automatically create a customized database server configuration file (ONCONFIG) suited to your system environment and create a demonstration database server during the installation process.

As part of the installation process, the installer captures input from the user, evaluates the input values to ensure the settings are valid, and calculates values for several other configuration parameters based on hardware settings and the needs of the database instance. You can use the Instance Configuration Wizard in GUI or console installation mode to take advantage of this new feature. However, make sure you choose the option “Creating Demonstration Database Server Instance” during the installation process to have the customized configuration file created automatically.

The installation wizard differs slightly between UNIX and Windows.

On UNIX:

  • Check 'Yes' to create an IDS demonstration database server instance
  • Choose the option to customize the default configuration file

On Windows:

  • The Instance Configuration Wizard is only available with a custom setup in GUI mode.
  • You must check 'Demos' under the features you want to install
  • 'Enable a custom configuration file' must also be checked

Following are some restrictions to remember during installation:

  • The installation directory must be empty
  • The root chunk file must be empty or must not exist; the installer will create the file.
  • Parameters affected by the Instance Configuration Wizard are not available for silent installation.

During installation, the Instance Configuration Wizard captures the following input from the user:

  • Installation Directory
  • Database Server Name
  • Database Server Number
  • Rootpath
  • Rootsize
  • No. of central processing units (CPUs)
  • Memory: System RAM dedicated to the IDS server (in MB)
  • No. of online transaction clients
  • No. of decision support clients

Based on the above information, the installer sets the related configuration parameters.

If the Instance Configuration Wizard encounters a problem while validating or calculating configuration parameters for the customized configuration file, the configuration file is created with default, workable configuration parameters and an appropriate message is displayed.

The customized configuration file is saved as 'onconfig.<DBSERVERNAME>' in the etc directory under <INFORMIXDIR>.

The "Creating Demonstration Database Server Instance" option in the installation process creates a demo database instance. Once the installation process completes, the installer generates two script files, profile_settings (Korn shell) and profile_settings.csh (C shell), for setting the environment variables for the demo server instance, in the $INFORMIXDIR/demo/server directory (%INFORMIXDIR%\demo\server on Windows).

Following is a screenshot of the Instance Configuration Wizard on the UNIX platform (console installation mode):
Console Installation Mode on UNIX platform

In console installation mode, you must choose '1 - Yes' to create an IDS demonstration database server instance and '3 - Customize the default configuration file to suit your needs and hardware'.

Following are some screenshots of the Instance Configuration Wizard on the Windows platform (GUI installation mode):
Custom  Installation in GUI Mode
You must select 'Custom' in GUI mode

List of Product Components
Make sure 'Demos' is checked to create an IDS demonstration database instance

Server Configuration Setup
Make sure 'enable a custom configuration file to suit your needs and hardware' is also checked

Server Configuration Setup
An example of the server configuration setup. You need to enter values to customize the database server.

Sanjit Chakraborty

New onstat option to display sysdbprocedure cache

cheetahblog Tags: onstat sysdbopen cache sysdbclose procedures
In IDS 11.10, you can configure session properties using the sysdbopen() and sysdbclose() procedures; see Configuring Session Properties. These procedures are stored in the sysprocedures system catalog. A memory cache is maintained for these procedures to facilitate faster searching of procedure identifiers. A new onstat option is available to print statistical information about this memory cache. From this information you can determine how many users are accessing a database and the sysdb procedures defined for public and other users. A sample output is shown:
    $ onstat -g cac sproc

    IBM Informix Dynamic Server Version 11.10.FC1 -- On-Line -- Up 4 days 20:30:27 -- 38912 Kbytes

    Sysdbproc Cache: Number of lists : 31

    list# refcnt dbname    Public Procids Username User Procids UsrList#
                           (open,close)            (open,close)
    -------------------------------------------------------------------------------
    7     0      sysmaster (-1, 0)
    16    1      testdb    (345, 346)     suma     (347, 348)   14
                                          informix (343, 344)   122
    18    0      sysutils  (-1, 0)
    21    0      sysuser   (-1, 0)

    list#           List number to which the database name hashes
    refcnt          Number of sessions accessing this database
    dbname          Database name
    Public Procids  Procedure ids of public.sysdbopen() and public.sysdbclose().
                    The value will be (-1,0) if there are no public or user-defined
                    sysdb procedures for that database
    Username        User name of any users who have their own sysdbopen() and
                    sysdbclose() procedures for that database
    User Procids    Procedure ids of sysdb procedures for a particular user in a
                    database
    UsrList#        A user hash list to which the user belongs

The number of lists is determined by the onconfig parameter DS_HASHSIZE. The maximum number of database entries for a list is determined by DS_POOLSIZE. This is a soft limit, as the limit is checked only when you close a database. When you close a database, if the number of database entries for that list is greater than DS_POOLSIZE, IDS deletes the excess database entries in that list that have a reference count of 0.
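The eviction rule just described can be sketched in a few lines (an illustration of the mechanism only, not IDS source; all class and method names here are invented):

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.LinkedList;
import java.util.List;

public class SysdbprocCacheSketch {
    static class Entry {
        final String dbname;
        int refcnt;                       // sessions with the database open
        Entry(String dbname) { this.dbname = dbname; }
    }

    final int hashSize;                   // plays the role of DS_HASHSIZE
    final int poolSize;                   // plays the role of DS_POOLSIZE
    final List<List<Entry>> lists = new ArrayList<>();

    SysdbprocCacheSketch(int hashSize, int poolSize) {
        this.hashSize = hashSize;
        this.poolSize = poolSize;
        for (int i = 0; i < hashSize; i++) lists.add(new LinkedList<>());
    }

    private List<Entry> listFor(String dbname) {
        // The database name hashes to one of the lists.
        return lists.get(Math.floorMod(dbname.hashCode(), hashSize));
    }

    void open(String dbname) {
        List<Entry> list = listFor(dbname);
        for (Entry e : list) if (e.dbname.equals(dbname)) { e.refcnt++; return; }
        Entry e = new Entry(dbname);
        e.refcnt = 1;
        list.add(e);
    }

    void close(String dbname) {
        List<Entry> list = listFor(dbname);
        for (Entry e : list) if (e.dbname.equals(dbname)) e.refcnt--;
        // Soft limit: checked only on close; only excess entries with a
        // reference count of 0 are evicted.
        Iterator<Entry> it = list.iterator();
        while (it.hasNext() && list.size() > poolSize) {
            if (it.next().refcnt == 0) it.remove();
        }
    }
}
```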

Suma Vinod

Which version of CSDK should be used on Windows 7 platform?

cheetahblog Tags: odbc csdk windows7

Both CSDK version 3.50 and CSDK 3.70 are supported on the Windows 7 operating system, which makes it difficult to decide which one to use. A useful way to decide is to consider which version of Informix Server you are using. By design, the CSDK version will be the "latest 3.x for 11.x", where "x" is the same minor version number, ".50" or ".70", as the database engine; e.g., Informix 11.50 pairs with CSDK 3.50, and Informix 11.70 with CSDK 3.70.

In regard to the bit level, 64-bit Windows will support either a 32-bit ('.TC') or a 64-bit ('.FC') CSDK. Although the bit level is not significant in determining which one to use, it is important to note that the ODBC manager on Windows 7 is 64-bit. If you use a 64-bit CSDK, you will have no issues. However, a 32-bit CSDK will require an older ODBC manager.
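The "latest 3.x for 11.x" pairing can be written down as a tiny helper (purely illustrative; the class and method names are ours, not part of any IBM tool):

```java
// Mnemonic for the pairing rule: CSDK 3.x goes with Informix 11.x,
// matching on the minor version number.
public class CsdkMatch {
    static String csdkFor(String serverVersion) {
        // "11.50" -> "3.50", "11.70" -> "3.70"
        int dot = serverVersion.indexOf('.');
        return "3" + serverVersion.substring(dot);
    }
}
```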


-David Jay                      

Achieving Consistent Meanings for DATETIME Values with JDBC

cheetahblog Tags: timestamp time zone timezone jdbc datetime calendar

The best-fit Java class for an Informix DATETIME value is java.sql.Timestamp. java.sql.Timestamp technically inherits from java.util.Date, although there are some semantic differences that are not typical of inheritance relationships. All the non-deprecated constructors, getters, and setters for Date and Timestamp use values based on Greenwich Mean Time (GMT): they are offsets in milliseconds from the epoch, January 1, 1970 00:00:00 GMT. So it would be logically consistent for Timestamps to convert to DATETIMEs using the GMT reference frame by default. However, the JDBC Tutorial and Reference specifies that, when no Calendar is provided, Timestamp values are converted to the time zone of the JVM in which the application is running. The difficulties surrounding Timestamps and time zones are not limited to the Informix JDBC driver; they are generic to all of JDBC because that is the specification.

This model creates problems in a variety of ways. For distributed applications, the Date or Timestamp values might be streaming into the database from JDBC drivers located in different time zones. With multi-tier applications, you might find yourself contending with a user interface in time zone A, the JDBC driver in time zone B, and the database server in time zone C. The driver is the library where external types are converted to internal types; so, where it is running is where the local time zone comes into play. It can be difficult for either the front end or the database to know in what time zone the driver is operating. Dealing with the conversion from Timestamp in GMT to DATETIME in the JDBC driver locale has to be done in the application layer directly interfacing with the JDBC driver.

The conversion done by the driver applies in both directions, and this can lead to complications if anything other than the JDBC Timestamp class is used in conjunction with DATETIME values. If strings representing DATETIMEs or DATETIME literals are used anywhere in your application SQL (including CURRENT), the values will not be converted on the way in, but will be converted on the way out. Similarly, different APIs will not convert in either direction. If you insert new Timestamp(86400001) (January 2, 1970 00:00:00.001 GMT) through JDBC, and select it back from within the same JVM, you will get that same value back. So, you might expect that selecting the same row-column from, say, DB-Access, would also give you the same value, but it will not, because the value was converted to the JVM time zone on the way in through JDBC but not converted on the way out through ESQL/C. If you are in the U.S., you will get some time on January 1st, not even the same day. The reverse also applies: you may find yourself with values inserted through other APIs, which perform no conversion, that are converted to some other time zone when selected through JDBC.
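The zone dependence behind this round-trip surprise is easy to demonstrate without a database: the same epoch offset renders as different wall-clock DATETIME text depending on the zone used to interpret it.

```java
import java.sql.Timestamp;
import java.text.SimpleDateFormat;
import java.util.TimeZone;

// Standalone demonstration (no database needed): one millisecond offset
// from the epoch, two different wall-clock renderings.
public class ZoneDependence {
    static String render(long millis, String zoneId) {
        SimpleDateFormat fmt = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss.SSS");
        fmt.setTimeZone(TimeZone.getTimeZone(zoneId));
        return fmt.format(new Timestamp(millis));
    }

    public static void main(String[] args) {
        long t = 86400001L;  // January 2, 1970 00:00:00.001 GMT
        System.out.println(render(t, "GMT"));              // 1970-01-02 00:00:00.001
        System.out.println(render(t, "America/Chicago"));  // 1970-01-01 18:00:00.001
    }
}
```

A driver converting on one leg of the trip but not the other is what turns this formatting difference into different stored or displayed values.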

DATETIMEs, of course, do not contain time zone information. There are basically two ways to deal with this: record the time zone information in another column along with the DATETIME, or establish a convention that all the DATETIMEs in the database are in the same time zone. Establishing a convention is the simpler approach, both from an application implementation perspective and in terms of being able to easily compare values in different rows, columns, or tables. If you know that every Java application that will ever access the data will operate in its own JVM, you can code your applications to set the default time zone of the JVM, but this is a problematic solution when more than one independent application runs in the same JVM, such as servlets in a web server. It is unclear what would happen with different threads of execution changing the time zone for all other threads, but if they were actually able to do that, it would not be pretty.

In JDK 1.1 two things happened: all the getters and setters for java.util.Date whose meaning was ambiguous with respect to time zone were deprecated, and the java.util.Calendar class was introduced. Along with the Calendar class itself, methods accepting a Calendar object were added to the JDBC interface, for example PreparedStatement.setTimestamp(int parameterIndex, Timestamp x, Calendar cal) and ResultSet.getTimestamp(int columnIndex, Calendar cal). In these methods, the Calendar object establishes the frame of reference used to convert between the GMT offset in the Timestamp object and a DATETIME value. If you create a constant Calendar object in your applications and use it every time you read or write a Timestamp object to a DATETIME, the DATETIME values will remain consistent with each other, and the meaning of a value will not change depending on the time zone of the JVM.

Using a GMT Calendar is efficient because it requires fewer operations to convert between the Java classes and the value in the database. Also, the value will not change if other APIs, like ODBC or ESQL/C, are used. Conceptually, what is stored in the database is January 1, 1970 00:00:00.000 GMT + tstamp.getTime() milliseconds.

Calendar cal = Calendar.getInstance(TimeZone.getTimeZone("GMT"));

Coordinated Universal Time (UTC) is a close relative to GMT and is just as viable.

In this model, all date and time values in the database are on the same time line, and conversion to other time lines or calendar systems is handled at the user interface level. Using UTC or GMT for all DATETIME values provides a simple model to implement in applications. It also removes all the problems related to knowing the time zone of the JDBC driver runtime, or whether applications written in anything other than Java will ever be used to access the data.
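A sketch of that convention in JDBC code (the table and column names, 'orders' and 'order_time', are hypothetical):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Timestamp;
import java.util.Calendar;
import java.util.TimeZone;

public class GmtDatetimeDao {
    // A single, constant frame of reference for all DATETIME conversions.
    static final Calendar GMT_CAL =
            Calendar.getInstance(TimeZone.getTimeZone("GMT"));

    static void insertOrderTime(Connection conn, int id, Timestamp ts)
            throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement(
                "INSERT INTO orders(order_num, order_time) VALUES (?, ?)")) {
            ps.setInt(1, id);
            ps.setTimestamp(2, ts, GMT_CAL);  // write using GMT, not the JVM zone
            ps.executeUpdate();
        }
    }

    static Timestamp readOrderTime(Connection conn, int id)
            throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement(
                "SELECT order_time FROM orders WHERE order_num = ?")) {
            ps.setInt(1, id);
            try (ResultSet rs = ps.executeQuery()) {
                rs.next();
                return rs.getTimestamp(1, GMT_CAL);  // read with the same Calendar
            }
        }
    }
}
```

As long as every read and write goes through the same constant Calendar, the stored DATETIMEs stay on one time line regardless of where the JVM runs.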


-Chris Golledge



C-Style Comment Indicator with DB-Access

cheetahblog Tags: dbaccess comment

Earlier versions of Informix Dynamic Server (IDS) had restrictions on C-style comment indicators ( /* ... */ ) in SQL statements within DB-Access operations.

Without the C-style comment indicator, DB-Access was not compliant with the SQL-99 standard, which caused problems when migrating from other database servers.

Starting with IDS version 11.10, the restrictions on C-style comments are no longer in effect. Any of the following SQL comment indicators can be used in all DB-Access statements.

  • Double hyphen ( -- ) complies with the ANSI/ISO standard for SQL
  • Braces ( { } ) are an Informix extension to the ANSI/ISO standard
  • C-style slash-and-asterisk ( /* . . . */ ) comply with the SQL-99 standard

The comment indicator can be chosen depending on ANSI/ISO compliance requirements.

A comment indicator can be used on the same line as the SQL statement. A comment can span a single line or multiple lines, but cannot be nested.

DB-Access filters out any comments before sending the SQL statement to the database server.

Examples:

-- Double hyphen comment
SELECT * FROM orders;     -- Selects all columns and rows
                          -- from the orders table

{ Comment using braces } SELECT * FROM orders; { Selects all columns and rows from the orders table }

/* C-style comment */ SELECT * FROM orders; /* Selects all columns and rows from the orders table */
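The filtering DB-Access performs can be sketched with a small regular expression (our own illustration, not DB-Access source code; it deliberately ignores the possibility of comment markers inside quoted string literals):

```java
import java.util.regex.Pattern;

// Strips double-hyphen, brace, and slash-star comments, none of which
// may nest, before a statement would be sent to the server.
public class CommentStrip {
    private static final Pattern COMMENTS = Pattern.compile(
            "--[^\\n]*"          // double hyphen to end of line
          + "|\\{[^}]*\\}"       // braces, non-nesting
          + "|/\\*.*?\\*/",      // slash-star, non-greedy, may span lines
            Pattern.DOTALL);

    static String strip(String sql) {
        return COMMENTS.matcher(sql).replaceAll(" ").trim();
    }
}
```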

Sanjit Chakraborty

New Onstat Commands

cheetahblog Tags: onstat
Beginning with IDS version 11.10, Informix introduced several new onstat commands to enhance server maintenance and monitoring. Database administrators can take advantage of these new onstat commands in their regular activities.

Following is a list of the new onstat commands:

onstat -g his Prints SQL statement tracing information using the SQLTRACE configuration parameter. For more information on SQL tracing, see the following articles: Tracing SQL in Cheetah Part 1 and Tracing SQL in Cheetah Part 2

onstat -g ckp Prints checkpoint information and checkpoint-related advisories. For more information on the checkpoint advisory, see the following article: Checkpoint Performance Advisory

onstat -g pqs Allows you to dynamically track a session's query plan. For more information see the following article: onstat -g pqs

onstat -g smx Prints Server Multiplexer group information for servers using SMX. Server Multiplexer Group (SMX) is a communications interface that supports encrypted multiplexed network connections between servers in high-availability environments. SMX provides a reliable, secure, high-performance communication mechanism between database server instances.

onstat -g rss Prints Remote Standalone Secondary (RSS) server information. The output of this command differs slightly depending on whether the command is run on the primary server or on the RS secondary server.

onstat -g sds Prints Shared Disk Secondary (SDS) server information

onstat -g ipl Shows index page logging status information

onstat -g cat Prints information from the Enterprise Replication global catalog. The global catalog contains a summary of information about the defined servers, replicates, and replicate sets on each of the servers within the enterprise.

onstat -g cdr config Prints the settings of Enterprise Replication configuration parameters and environment variables that can be set with the CDR_ENV configuration parameter.

For a complete list of onstat commands, see the IBM Informix Administrator's Reference Guide.

Sanjit Chakraborty

Dynamically Changing Configuration parameters for Enterprise Replication

cheetahblog Tags: er dynamic configuration
In IDS 11 you can dynamically change, add, or remove Enterprise Replication configuration parameters. The changes take effect immediately, while replication is running. Following are the three commands to use, explained in detail along with examples.

cdr change config

Use the cdr change config command to replace the existing value of an Enterprise Replication configuration parameter with a new value while replication is active. All Enterprise Replication configuration parameters and environment variables can be changed with this command. The value persists while replication is active; the ONCONFIG file is not updated. If you want to make the change permanent, update the values in the ONCONFIG file instead. The configuration parameters and environment variables this command can be used with are listed in Table 1.1.

For example, suppose the CDR_SUPPRESS_ATSRISWARN configuration parameter is set to suppress the generation of error and warning messages 1, 2, and 10, so that it appears in the ONCONFIG file as: CDR_SUPPRESS_ATSRISWARN 1,2,10. The following command changes the suppressed error and warning messages to 2, 3, 4, 5, and 7.

Before changing the values you can confirm the existing values by doing:

$onstat -g cdr config | grep SUP
   CDR_SUPPRESS_ATSRISWARN configuration setting:1-2,10   

Now let's change the values to 2, 3, 4, 5, and 7:

$cdr change config "CDR_SUPPRESS_ATSRISWARN 2-5,7"
 WARNING:The value specifed updated in-memory only.

The warning clearly indicates that the changes are in-memory only; that means once replication is stopped and restarted, the original values from the ONCONFIG file will take effect. Now let's confirm the new values:

$onstat -g cdr config | grep SUP
   CDR_SUPPRESS_ATSRISWARN configuration setting:2-5,7   

cdr add config

Use the cdr add config command to add one or more values to an Enterprise Replication configuration parameter while replication is active. The value persists while replication is active; the ONCONFIG file is not updated. This option is available only for configuration parameters and environment variables that allow multiple values. The allowed configuration parameters and environment variables are listed in Table 1.1.

Following the "change config" example, if you want to add another error and warning message number, say 10, to the existing suppression list of 2, 3, 4, 5, and 7, you can do that with the following command.

$cdr add config "CDR_SUPPRESS_ATSRISWARN 10"
 WARNING:The value specifed updated in-memory only.

We can confirm the new values with "onstat -g cdr config" as follows.

$onstat -g cdr config | grep SUP
   CDR_SUPPRESS_ATSRISWARN configuration setting:2-5,7,10   

cdr remove config

Use the cdr remove config command to remove an existing value of an Enterprise Replication configuration parameter while replication is active. The value persists while replication is active; the ONCONFIG file is not updated. The allowed configuration parameters and environment variables are listed in Table 1.1.

Again, following the same example from "add config", let's now remove suppression for error and warning messages 2, 3, 4, and 5 and keep 7 and 10, using the "cdr remove config" command as follows.

$cdr remove config "CDR_SUPPRESS_ATSRISWARN 2-5"
 WARNING:The value specifed updated in-memory only.

Confirm your changes:

$onstat -g cdr config | grep SUP
   CDR_SUPPRESS_ATSRISWARN configuration setting: 7,10 

NOTE: These commands change configuration parameters only in memory; they do not update the ONCONFIG file. To update environment variables in the ONCONFIG file, use the CDR_ENV configuration parameter.

The following table shows which kinds of changes are valid for each Enterprise Replication configuration parameter.

Table 1.1
Configuration Parameter    cdr add config    cdr change config    cdr remove config

Vijay Lolabattu

How to find the application that is running a particular SQL?

cheetahblog Tags: onstat

It is not uncommon for an inefficient SQL statement to cause an overall performance problem. In that scenario, you first need to identify the problematic SQL, then find the user and application executing that SQL. Here is a glimpse of how to address the second part of the problem.


There are different possible ways to find the user and application running a particular SQL statement. The 'onstat -g ses <session id>' command is the simplest way to figure out the application running a SQL statement. Do you notice anything new in the following 'onstat -g ses <session id>' output?


IBM Informix Dynamic Server Version 11.70.FC5 -- On-Line -- Up 16:41:52 -- 1048576 Kbytes


session           effective                 #RSAM    total  used   dynamic

id       user     user   tty pid   hostname threads  memory memory explain

34       informix -      1   19387 garion   1        278528 217656 off


Program :



tid      name     rstcb            flags    curstk   status

176      sqlexec  145629268        B--PX--  13103    yield bufwait-


Memory pools    count 2

name         class addr              totalsize  freesize   #allocfrag #freefrag

34           V     145b99040        274432     60064      376        27

34*O0        V     145b0d040        4096       808        1          1


name           free       used           name           free       used

overhead       0          6576           scb            0          144

opentable      0          13880          filetable      0          4312

ru             0          600            log            0          16536


sqscb info

scb              sqscb            optofc   pdqpriority optcompind  directives

142e211c0        145b53028        0        0           0           1


Sess       SQL            Current            Iso Lock       SQL  ISAM F.E.

Id         Stmt type      Database     Lvl Mode       ERR  ERR  Vers  Explain

34         INSERT         stores       NL  Not Wait   0    0    9.24  Off


Current statement name : inscur


Current SQL statement (997) :

  insert into customer


You are right. There is a new 'Program' section in the above onstat output. Now you can use the 'onstat -g ses <session id>' command to display the full path of the client program used in a session. Use the client program information to monitor or control the application's access to the database. One thing you should know: the program information is sent by the client itself, and the name of the program depends entirely on the client.


-Sanjit Chakraborty


Monitor Resource Contention

cheetahblog Tags: bth

Two new onstat commands were introduced in 12.10.xC2 to view the dependencies between blocking and waiting threads. Running threads take ownership of various objects and resources, for example buffers, locks, mutexes, and decision-support memory. Contention for these resources among hundreds or thousands of threads can result in chains of dependencies.

Now, you can use the 'onstat -g bth' command to display the dependencies between blocking and waiting threads. Next, use the 'onstat -g BTH' command to display session and stack information for the blocking threads.

Here is a scenario where these onstat commands can be helpful. It is possible that a thread blocked waiting to enter a critical section owns a row lock for which another thread is waiting. The second thread might be blocking a third thread that is waiting in the MGM query queue. Usually the duration of such contention is short and users never notice a problem. However, if a thread is blocked long enough, you might need to identify the source of the contention. The 'onstat -g bth' command discovers the chains of dependency and displays blocker threads followed by waiting threads, in order.
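Conceptually, this discovery is a walk over waiter-to-blocker edges until it reaches a thread that is not itself waiting on anything. A minimal sketch of that idea (our own illustration, not server code; the tids mirror the sample output in this post):

```java
import java.util.Map;

public class BlockerChain {
    // blockerOf maps a waiting thread id to the id of the thread that
    // directly blocks it. A thread absent from the map is not waiting.
    static int highestLevelBlocker(Map<Integer, Integer> blockerOf, int tid) {
        int cur = tid;
        while (blockerOf.containsKey(cur)) {
            cur = blockerOf.get(cur);   // follow waiter -> blocker edges
        }
        return cur;                     // a thread nobody reports as waiting
    }
}
```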

The following is a sample output of the 'onstat -g bth' command where multiple threads are waiting on resources.


Highest level blocker(s)
 tid      name                 session
 48       sqlexec              26

Threads waiting on resources
 tid      name                 blocking resource              blocker
 49       sqlexec              MGM                            48
 13       readahead_0          Condition (ReadAhead)           -
 50       sqlexec              Lock (0x4411e578)              49
 51       sqlexec              Lock (0x4411e578)              49
 52       sqlexec              Lock (0x4411e578)              49
 53       sqlexec              Lock (0x4411e578)              49
 57       bf_priosweep()       Condition (bp_cond)             -
 58       scan_1.0             Condition (await_MC1)           -
 59       scan_1.0             Condition (await_MC1)           -

In the above example, four threads are waiting for a lock that is owned by thread 49. However, that's not the actual problem: thread 49 is itself waiting for MGM resources that are owned by thread 48. So the problem originally started with thread 48. Next, run the 'onstat -g BTH' command to see the session and stack information of thread 48. Following is an example of 'onstat -g BTH' output:

Stack for thread: 48 sqlexec
 base: 0x00000000461a3000
  len:   69632
   pc: 0x00000000017b32c3
  tos: 0x00000000461b2e30
state: ready
   vp: 1

0x00000000017b32c3 (oninit) yield_processor_svp
0x00000000017bca6c (oninit) mt_wait
0x00000000019d4e5c (oninit) net_buf_get
0x00000000019585bf (oninit) recvsocket
0x00000000019d1759 (oninit) tlRecv
0x00000000019ce62d (oninit) slSQIrecv
0x00000000019c43ed (oninit) pfRecv
0x00000000019b2580 (oninit) asfRecv
0x000000000193db2a (oninit) ASF_Call
0x0000000000c855dd (oninit) asf_recv
0x0000000000c8573c (oninit) _iread
0x0000000000c835cc (oninit) _igetint
0x0000000000c72a9e (oninit) sqmain
0x000000000194bb38 (oninit) listen_verify
0x000000000194ab8a (oninit) spawn_thread
0x0000000001817de3 (oninit) th_init_initgls
0x00000000017d3135 (oninit) startup

Highest level blocker(s)
 tid      name                 session
 48       sqlexec              26      

session          effective                      #RSAM    total   used   dynamic
id      user     user      tty   pid   hostname threads  memory  memory explain
26      informix -         45    31041 darwin   2        212992  186568 off

Program :

tid      name     rstcb            flags    curstk   status
48       sqlexec  448bc5e8         ---P---  4560     ready-
58       scan_1.0 448bb478         Y------  896      cond wait  await_MC1 -

Memory pools    count 2
name         class addr           totalsize  freesize   #allocfrag #freefrag
26           V     45fcc040       208896     25616      189        16        
26*O0        V     462ad040       4096       808        1          1         

name           free       used           name           free       used      
overhead       0          6576           mtmisc         0          72        
resident       0          72             scb            0          240       
opentable      0          7608           filetable      0          1376      
log            0          33072          temprec        0          17744     
blob           0          856            keys           0          176       
ralloc         0          55344          gentcb         0          2240      
ostcb          0          2992           sqscb          0          21280     
sql            0          11880          xchg_desc      0          1528      
xchg_port      0          1144           xchg_packet    0          440       
xchg_group     0          104            xchg_priv      0          336       
hashfiletab    0          1144           osenv          0          2520      
sqtcb          0          15872          fragman        0          1024      
shmblklist     0          416            sqlj           0          72        
rsam_seqscan   0          368            

sqscb info
scb              sqscb          optofc   pdqpriority optcompind  directives
4499c1c0         461c1028       0        100         2           1         

Sess       SQL            Current       Iso Lock       SQL  ISAM F.E.
Id         Stmt type      Database      Lvl Mode       ERR  ERR  Vers  Explain    
26         SELECT         stores        CR  Not Wait   0    0    9.24  Off        

Current statement name : unlcur

Current SQL statement (5) :
  select * from systables,syscolumns,sysfragments

Last parsed SQL statement :
  select * from systables,syscolumns,sysfragments


-Sanjit Chakraborty


Encrypted Communications for HDR

cheetahblog Tags: hdr encryption

In the IDS Cheetah release, you can encrypt communication between an HDR pair to secure the transmission of data over unsecured networks, including the internet. After you enable encryption, the HDR primary server encrypts the data before sending it to the HDR secondary server. The HDR secondary server decrypts the data. Use the new ONCONFIG parameter ENCRYPT_HDR to enable encryption between the HDR pair. You can also customize encryption using the following parameters.

Configuration Parameter Default value Comments/Description
ENCRYPT_HDR 0 0 - disable, 1 - enable HDR encryption
ENCRYPT_CIPHERS allbut:<ecb> Defines ciphers and modes that can be used by the current database session.
The following ciphers are currently supported:
  • des (64-bit key), des3 (Triple DES), desx (Extended DES, 128-bit key)
  • aes/aes128 (128-bit key), aes192 (192-bit key), aes256 (256-bit key)

  • bf-1 Blowfish (64-bit key), bf-2 (128-bit key), bf-3 (192-bit key)
ENCRYPT_MAC medium Controls the level of message authentication code (MAC) generation.

  • off - does not use MAC generation.
  • low - uses XOR folding on all messages.
  • medium - uses SHA1 MAC generation for all messages greater than 20 bytes long
    and XOR folding on smaller messages.
  • high - uses SHA1 MAC generation on all messages.

ENCRYPT_MACFILE builtin A list of the full path names of MAC key files.
ENCRYPT_SWITCH 60,60 Defines the frequency (in minutes) at which ciphers and secret keys are renegotiated.
cipher_switch_time, key_switch_time
  • To use your own MAC key file
    • Execute the following command to generate MAC Key file.
      $INFORMIXDIR/bin/GenMacKey -o  /usr/informix/etc/MacKey1.dat
    • Copy MacKey1.dat over to the paired server

    • Update ENCRYPT_MACFILE configuration parameter on both the servers as shown below
      ENCRYPT_MACFILE    /usr/informix/etc/MacKey1.dat,builtin

  • NOTE - HDR and Enterprise Replication (ER) share the same encryption configuration parameters: ENCRYPT_CIPHERS, ENCRYPT_MAC, ENCRYPT_MACFILE and ENCRYPT_SWITCH.
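Putting these parameters together, a minimal ONCONFIG fragment for an encrypted HDR pair might look like the following sketch (all values are taken from the table and steps above; the MAC key path is the example path used there):

```
ENCRYPT_HDR      1
ENCRYPT_CIPHERS  allbut:<ecb>
ENCRYPT_MAC      medium
ENCRYPT_MACFILE  /usr/informix/etc/MacKey1.dat,builtin
ENCRYPT_SWITCH   60,60
```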

Nilesh Ozarkar

OpenAdmin Tool for IDS XAMPP tutorial updated

cheetahblog Tags: idsadmin oat
In March, an article on installing IDSAdmin in a XAMPP environment was published on this blog. That article has just been updated by Inge Halilovic, with instructions and screenshots for the latest available version of OpenAdmin Tool, downloadable from the Free Product Download Site. The new version of OAT has improved graphics, simpler installation, new features such as support for Mach 11 topology, and several bug fixes.

Guy Bowerman[Read More]

Installing multiple copies of IDS on Windows - 11.50.xC2

cheetahblog Tags:  install installation 2,400 Views

Introduction: Multiple installations of IBM Informix Dynamic Server are now enabled in IDS 11.50.xC2. With this, users can install multiple copies of IDS on a single machine, with the ability to create one or more instances per installation. For example, a user can have 11.50.xC1 and 11.50.xC2 co-existing on the same machine, with instances related to each installation running on that machine.

With this new capability, a user who has an installation of IDS 11.50 can invoke setup.exe and will be presented with the existing installations of IDS on the machine. The user can then choose between installing a new copy of IDS and maintaining one of the existing copies. If there is an existing installation of IDS 11.50 on a machine, a user who launches setup.exe in graphical mode will be presented a panel similar to the one below:

Multiple Installation Screenshot

However, with an existing installation of IDS 11.50, launching setup.exe in silent mode requires a maintenance response file. Without a maintenance response file, or with an installation response file, the installer aborts. To install a subsequent copy of IDS 11.50 in silent mode, pass the ‘-multiple’ command line option to setup.exe.

In addition to the "-multiple" command line option, there are two command line options for maintenance, namely "-path" and "-instnum". These two options launch maintenance mode for a specified installation. Since they perform the same function, they can be used interchangeably but must not be combined with one another.

Usages of the newly implemented options:

  • -multiple usage

  • setup.exe -multiple

    This option can be used in conjunction with other installation command line options. E.g.

    setup.exe -multiple -s -f1"C:\TEMP\server.ini" -f2"C:\TEMP\server.log"

  • -path usage

  • setup.exe -path [installation path]

    The user invokes setup.exe with the ‘-path’ option passing the installation path for which maintenance is required. This option can be used in conjunction with other maintenance command line options. E.g.

    setup.exe -path [installation path] -s -f1"C:\uninst.ini"

  • -instnum usage

  • setup.exe -instnum [installation number]

    The user invokes setup.exe with the ‘-instnum’ option, passing the installation number for which maintenance is required. Note that each installation is tagged with an installation number in the shortcut menu (except for the first installation, which doesn’t have a tag). This option can be used in conjunction with other maintenance command line options. E.g.

    setup.exe -instnum [installation number] -s -f1"C:\uninst.ini"

    For more information on installing IDS, visit

    Tosin Ajayi

    [Read More]

    Restoring critical administrative files with onbar utility

    cheetahblog Tags:  file -cf onbar critical 2,383 Views

    It is already known to us that onbar backs up some critical administrative files during storage space backup. These critical files are as follows:

    - The onconfig file

    - UNIX: The sqlhosts file

    - The ON-Bar emergency boot file: ixbar.servernum

    - The server boot file: oncfg_servername.servernum


    There may be situations when you need to restore one of these files, for example when you replace disks or restore to a second computer system (imported restore). However, previously there was no easy way to restore these files from an onbar backup; you had to depend on Informix Technical Support to perform this operation.


    Starting with Informix version 12.10.xC2, you can restore the critical files that you backed up with the onbar utility. An onbar cold restore now has an option to restore these files, or you can restore only these files without performing the storage space restore.


    Use 'onbar -r -cf yes' to restore the critical files during a cold restore. Alternatively, you can use 'onbar -r -cf only' to extract only the critical files while the Informix server is offline.


    -Sanjit Chakraborty

    Improved Statistics Maintenance

    cheetahblog Tags:  explain sampling statistics 2,344 Views
    • Automatic statistics collection.

    IDS now automatically collects index statistics, equivalent to the statistics gathered by UPDATE STATISTICS in LOW mode, when you create a B-tree index on a UDT column of an existing table or if you create a functional or VII index on a column of an existing table. Statistics that are collected automatically by this feature are available to the query optimizer, removing the necessity to manually run UPDATE STATISTICS. When B-tree indexes are created, column statistics are collected on the first index column, equivalent to what UPDATE STATISTICS generates in MEDIUM mode, with a resolution of 1% for tables of fewer than a million rows, and 0.5% for larger tables.

    • Query statistics in EXPLAIN output.

    You can now view statistics about completed queries in the new Query Statistics section in SET EXPLAIN. The query statistics section of the output gives detailed information on each scan, join, and sort, such as estimated cost, number of rows scanned, estimated number of rows, and number of rows produced at each level. The EXPLAIN_STAT configuration parameter enables or disables the inclusion of a Query Statistics section in the explain output file. You can generate the output file by using either the SET EXPLAIN statement of SQL or the onmode -Y sessionid command.


    In explicit UPDATE STATISTICS operations in MEDIUM mode, a new SAMPLING SIZE option in the Resolution clause can specify the minimum number of rows to sample for column distributions.

    If the Resolution clause omits the RESOLUTION keyword and specifies no confidence level and no percent value, then the number of rows that Dynamic Server samples will be the larger of the following two values:

        * The min value that you specify immediately after the SAMPLING SIZE keywords
        * The sampling size that is required for the default percent of rows in each bin (2.5%) and for the minimum confidence level (0.80)

    If a sampling size is specified in a Resolution clause that includes explicit values for both the average percent of sampled rows per bin and for the confidence level, then the number of sampled rows will be the larger of these two values:

        * The min value that you specify immediately after the SAMPLING SIZE keywords
        * The sampling size that is required for the specified percent of rows and for the specified confidence level

    If a sampling size is specified in a Resolution clause that includes an average percentage value but sets no confidence level, then the minimum confidence value of 0.80 is used to calculate the actual sampling size for Dynamic Server to use if the specified size is smaller.
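The "larger of the two values" rule above can be sketched in Python. This is an illustration only: the server's internal statistical formula for the required sample size is not given in this post, so `required_sample_size` below is a hypothetical stand-in.

```python
def effective_sampling_size(user_min, percent_per_bin=2.5, confidence=0.80):
    """Sketch of how MEDIUM-mode UPDATE STATISTICS picks the sample size:
    the larger of the user's SAMPLING SIZE value and the size that the
    given resolution/confidence combination requires."""

    def required_sample_size(percent_per_bin, confidence):
        # Hypothetical stand-in for the server's internal formula.
        # Illustrative only: smaller bins and higher confidence both
        # demand a larger sample.
        return int(1000 * confidence / percent_per_bin)

    return max(user_min, required_sample_size(percent_per_bin, confidence))
```

With the defaults (2.5% bins, 0.80 confidence) a small user minimum is overridden by the required size, while a large user minimum wins.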

    For example, the following statement calculates statistics for two columns of the customer table, without updating index information. At least 200 rows will be sampled, but the actual size of the sample might be larger than 200 if more rows are required to provide the default 0.80 confidence level for a sample distribution that uses 50 equivalence categories, with an average percentage of 2% of the sampled values in each bin.

    UPDATE STATISTICS MEDIUM FOR TABLE customer (city, state) SAMPLING SIZE 200 RESOLUTION 2;

    Whether or not you include an explicit SAMPLING SIZE specification in the Resolution clause, Dynamic Server records in the system catalog the actual sampling size (as a percentage of the total number of rows in the table) at the time of MEDIUM mode UPDATE STATISTICS creation.

    • New FILE TO option to redirect SET EXPLAIN output to a file that you desire.

    When you execute a SET EXPLAIN FILE TO statement, explain output is implicitly turned on. The default filename for the output is sqexplain.out until changed by a SET EXPLAIN FILE TO statement. Once changed, the filename remains set until the end of the session or until it is changed by another SET EXPLAIN FILE TO statement.

    The filename can be any valid combination of optional path and filename. If no path component is specified, the file is placed in your current directory. The file is owned by the current user. The output file that you specify in the SET EXPLAIN statement can be a new file or an existing file. In Dynamic Server, if the FILE TO clause specifies an existing file, the new output is appended to that file. Following is an example where the explain output is sent to the file "explain.vijay" under the "/tmp" directory.

    SET EXPLAIN FILE TO '/tmp/explain.vijay';

    These features reduce the need for DBAs to perform UPDATE STATISTICS on indexes, enabling better query plans and ease of use.

    Vijay Lolabattu
    [Read More]

    Primary cannot send log requested by RSS

    cheetahblog 2,323 Views

    The other day I was setting up a high availability cluster environment and ran into an interesting problem. I followed all the necessary instructions for setting up an RSS server. However, the RSS server got stuck in the recovery process, and the message log on the primary server reported an error that it could not send a log. For example, I executed the following command on the RSS server to set the data replication type:

          onmode -d RSS <primary server name>


     The RSS server was stuck in recovery mode, and the message log on the primary server showed the following messages:

            RSS Server <RSS server name> - state is now connected             
            Can not send log <log number>     


    The log number mentioned in the error message was not close to the current log on the primary or RSS server. For example, the current log on the primary was 7438 and on the RSS 7436, but the message log stated ‘Can not send log 825241904’. So where was the server getting an out-of-sequence log number?


    Initially I thought it was some kind of corruption. However, after some investigation I figured out that I was using delayed application (DELAY_APPLY) on the RSS server, and the directory specified by the LOG_STAGING_DIR configuration parameter was holding some unwanted file(s), for example the file 'ifmxUniqueLog_825241904'. So, during recovery, the RSS server asked the primary for log number 825241904, but that log did not exist on the primary server.


    Once I removed all files from the LOG_STAGING_DIR directory on the RSS server, I was able to successfully set up the high availability cluster environment. Conclusion: the next time you set up an RSS server with DELAY_APPLY, make sure nothing is in the LOG_STAGING_DIR on the RSS server.
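The cleanup step can be sketched as a small Python helper. This is a hedged illustration, not an Informix utility; point it at whatever directory your LOG_STAGING_DIR parameter names, and only while the RSS setup is not in progress.

```python
import os

def clear_stale_staging_files(staging_dir):
    """Remove leftover staged-log files (names like ifmxUniqueLog_<n>)
    from an RSS server's LOG_STAGING_DIR before re-running
    'onmode -d RSS <primary>'. Returns the names removed."""
    removed = []
    for name in os.listdir(staging_dir):
        if name.startswith("ifmxUniqueLog_"):
            os.remove(os.path.join(staging_dir, name))
            removed.append(name)
    return removed
```

In the scenario above, this would have removed 'ifmxUniqueLog_825241904' and spared the bogus log request to the primary.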


    -Sanjit Chakraborty

    Optimizer Directives in ANSI Join Queries

    cheetahblog Tags:  directives optimizer joins ansi 2,313 Views
    In IDS 11.10, all optimizer directives, including external directives, can be used in queries that use ANSI joins. Using optimizer directives might enable you to force a favorable query execution path, thus improving performance.

    Here is a description of the different optimizer directives and how they work with ANSI join queries.

    Optimization Goal Directives (FIRST_ROWS, ALL_ROWS):
        Directives enabled for ANSI join queries
    Access Method Directives (FULL, INDEX, AVOID_INDEX, AVOID_FULL, INDEX_SJ, AVOID_INDEX_SJ):
        Directives enabled for ANSI join queries
    Explain-Mode Directives (EXPLAIN, AVOID_EXECUTE):
        Directives enabled for ANSI join queries
    External Directives:
        Directives enabled for ANSI join queries
    Join Method Directives (USE_HASH, USE_NL, AVOID_HASH, AVOID_NL, /BUILD, /PROBE):
        Directives enabled for ANSI Inner Joins
        Directives not enabled for ANSI Inner Joins in views
        Directives enabled for ANSI Outer Joins that can be transformed into ANSI Inner Joins
        Directives disabled for all other ANSI Outer Joins
    Join Order Directives (ORDERED):
        Directives enabled for ANSI Inner Joins
        Directives enabled for ANSI Left Outer Joins if they don't conflict with the ordering requirement for Outer Joins
        Directives disabled for all other ANSI Outer Joins

    Suma Vinod[Read More]

    After an upgrade query running slow

    cheetahblog Tags:  opt_seek_factor slow upgrade 2,307 Views

    You have upgraded to Informix 11.70.xC3 or a higher version. After the upgrade you find that some of your queries run slowly. The common observation is that a query used an index path prior to the upgrade and ran faster; after the upgrade, the same query does a sequential scan. With an index directive the query uses the index and runs much faster than the sequential scan. However, the index path has a higher estimated cost associated with it than the sequential scan.

    Also, after an upgrade you might see that the optimizer chooses a different index in the newer version of Informix than it did in the prior version, causing the query to run slower.

    A new costing functionality was added in 11.70.xC3 and higher in the 11.70 family. This functionality accounts for the seek time required to traverse an index (especially large indexes). There were situations where queries using an index took longer to return than a sequential scan because of all the I/O involved.

    You can control the new costing functionality using the OPT_SEEK_FACTOR configuration parameter. This parameter lets you set the "weight" of an I/O seek cost. The range is 0 to 25 and the default is 6. Lowering it reduces the seek cost, which lowers the estimated cost of using an index path.

    You can revert to the old costing method by setting OPT_SEEK_FACTOR to 0 in the ONCONFIG file.
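As an ONCONFIG fragment, reverting to the old costing looks like this:

```
# Weight of an I/O seek cost; range 0-25, default 6.
# 0 reverts to the pre-11.70.xC3 costing method.
OPT_SEEK_FACTOR 0
```

Remember that OPT_SEEK_FACTOR is server-wide, so lowering it shifts cost estimates toward index paths for all queries, not just the slow one.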

    - Grant Flander

    Report from MacWorld

    cheetahblog Tags:  macos 2,293 Views
    MacWorld was an amazing event this past week in San Francisco at the Moscone Center. Although this is generally a consumer oriented event there were many companies with an enterprise focus showing their solutions. IBM was in the smallest size pedestal in the Enterprise Solutions Area -- People would constantly walk by and say - "IBM ---- the biggest company with the smallest pedestal" - We smiled and told them we make the biggest impact!!!!

    As you know we just announced our beta for the Mac OS X this week and were at the show to start to get the word out and meet customers and business partners who are interested in an "Industrial Strength" data base for their application. There was a ton of traffic in the ped - -People were at first surprised and then pleased to see us there as they have been looking for a dependable data base that is secure and scalable to build their applications on. Many developers who have a long history using Informix were thrilled to see the announcement and anxious to try the beta.

    • MAC users are very happy to see an enterprise class database on the LATEST MAC OS.
    • Looking for something better than what is currently available on MAC which just hasn't been reliable and has not scaled to meet their needs.
    • Some really like the idea of OAT being open source, which allows users to customize it - especially the free part.
    • One mid sized system integrator commented " We are glad to see IBM supporting the MAC platform as we are building applications to take into Government and Financial Markets - We need a data base that our customers can depend on. IDS is exactly what we are looking for."

    Terri Gerber[Read More]

    Non-blocking Checkpoints - Part 1

    cheetahblog Tags:  non-blocking interval checkpoint 2,261 Views
    Prior to IDS Version 9, IDS supported a blocking checkpoint algorithm that causes the system to block all transactional updates while the bufferpool is flushed to disk. From IDS Version 9 onwards, an additional checkpoint algorithm, called Fuzzy Checkpoint, was introduced. Fuzzy Checkpoints limit the number of updates required to be flushed to disk during checkpoint processing.

    With the new non-blocking checkpoint algorithm in Cheetah, IDS ensures that transactional updates are virtually never blocked while the entire bufferpool is flushed to disk. Transactional updates are blocked only for the very small duration of time required to flush the partitions (tablespaces) to the bufferpool and to grab the restart point. (The restart point is where fast recovery starts. We log the checkpoint and use that LSN as the restart point for the checkpoint.)

    As part of this feature, the new "onstat -g ckp" command helps DBAs view and analyze checkpoint information and performance advisories. The "onstat -g ckp" command is explained in more detail in this post.

    Scott Lashley's whitepaper on developerWorks explains the internals of non-blocking checkpoints in detail.

    In Part 2, we shall learn how to tune checkpoints and modify the relevant onconfig parameters (RTO_SERVER_RESTART, AUTO_CKPTS, AUTO_LRU_TUNING, etc.)

    Mirav Kapadia[Read More]

    ON-Bar whole system parallelize backup/restore

    cheetahblog 1 Comment 2,256 Views

    Prior to Cheetah, a whole-system backup (onbar -b -w) was a serial backup of all storage spaces and logical logs based on a single checkpoint. That time is stored with the backup information. The advantage of using a whole-system backup was that you could restore the storage spaces with or without the logical logs. Because the data in all storage spaces is consistent in a whole-system backup, you do not need to restore the logical logs to make the data consistent. Level 0, 1, or 2 backups are supported.

    One of the new features of Cheetah is backup/restore of dbspaces in parallel with whole-system functionality. With parallel backup, multiple processes run simultaneously, each process backing up a different dbspace. In most cases, parallel backups complete faster than serial backups, which use only one process.

    The ONCONFIG parameter "BAR_MAX_BACKUP" is used for whole-system backups/restores as well. The BAR_MAX_BACKUP parameter specifies the maximum number of parallel processes that are allowed for each onbar command. Both UNIX and Windows support parallel backups. Although the database server default value for BAR_MAX_BACKUP is 4, the onconfig.std value is 0.

    To specify parallel backups and restores, including parallel whole-system backups and restores, set BAR_MAX_BACKUP to a value higher than 1. For example, if you set BAR_MAX_BACKUP to 3 and execute an ON-Bar command, ON-Bar will spawn at most 3 processes concurrently.

    If BAR_MAX_BACKUP is set to 0, the system creates as many ON-Bar processes as needed. The number of ON-Bar processes is limited only by the number of storage spaces or the amount of memory available to the database server, whichever is less.

    To perform a serial backup or restore, including a serial whole-system backup or restore, set BAR_MAX_BACKUP to 1.

    Also in Cheetah, storage manager efficiency is improved by better dbspace ordering. The "biggest" dbspaces are backed up first and the "smallest" last. The dbspace with the most pages used is the "biggest" and will be backed up first. This ensures better parallelism, no matter how BAR_MAX_BACKUP is set or how many pages are to be backed up in different dbspaces. The new order of backup/restore is also effective for non-whole-system backup/restore. Restore is done in the same order as backup to ease the storage manager's access to stored objects.

    To implement the new ordering of dbspaces for backup and restore, changes are made to the bar_instance table in the sysutils database:
    A new column "ins_backup_order" (integer default 0 not null) is added to the table bar_instance.
    Changes are also made to the ixbar file:
    A new field with the backup sequence integer (the last field in the line) is added to the ixbar file.
    The value of the new field is always "0" (zero) for log backup objects.

    Even if BAR_MAX_BACKUP is set to 1 for a serial backup, the dbspaces will be backed up using the new dbspace order.

    Backup Operation
    Only one checkpoint for all dbspaces - just before backup of the rootdbs.
    Rootdbs is backed up first, without parallelism (no change here).
    Before-image processor threads "arcbackup2" are started at this time, one for each dbspace (more threads running in parallel).
    As each dbspace backup completes, the respective "arcbackup2" thread exits (fewer "arcbackup2" threads as backup progresses).
    New order of backup. The order is based on the used-pages count at the start time of backup. The dbspace with the most pages used is the "biggest" and will be backed up first.
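The "biggest first" ordering can be sketched in Python (illustrative names; the server keys on the used-pages count at the start of the backup, and the rootdbs is always handled first regardless of size):

```python
def backup_order(dbspaces):
    """Order dbspaces for ON-Bar backup: most used pages first, so the
    largest objects start streaming to the storage manager earliest.
    'dbspaces' is a list of (name, used_pages) tuples."""
    return sorted(dbspaces, key=lambda d: d[1], reverse=True)

# Example: the big data dbspace goes first, the small one last.
order = backup_order([("smalldbs", 800), ("datadbs", 120000), ("indexdbs", 45000)])
```

Restore then replays this order using the stored "ins_backup_order" values.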

    Restore Operation
    Not much changed, only it is now parallel.
    Rootdbs is restored first, without parallelism (no change here).
    Dbspaces are restored in the same order as they were backed up, using the value of "ins_backup_order" to determine the correct order. If BAR_MAX_BACKUP is changed between backup and restore, the objects will still be restored in the same order; however, the timing relative to each other may differ significantly. This could have negative effects on storage manager performance.

    Rashmi Chawak[Read More]

    Tune logical and physical log automatically

    cheetahblog Tags:  extendable auto_tune_server_size dynamic_logs auto_llog 2,235 Views

    Starting with version 12.10.xC3, Informix can automatically tune the logical and physical logs as needed.

    You can set the AUTO_LLOG configuration parameter to enable Informix to dynamically add logical logs when a lack of logical logs causes frequent checkpoints, blocking checkpoints, or long checkpoints. In addition, with the AUTO_LLOG configuration parameter you can specify a dbspace in which to create new logical log files and the total size of all logical log files at which the server stops adding logs for performance.

    You are probably wondering about the DYNAMIC_LOGS configuration parameter, which has existed since Informix version 10. The AUTO_LLOG and DYNAMIC_LOGS configuration parameters work in different situations and do not interact with each other. When the AUTO_LLOG configuration parameter is enabled, logical logs are added to improve checkpoint performance. When the DYNAMIC_LOGS configuration parameter is enabled, logical logs are added under more urgent conditions, such as when a long transaction threatens to block the server. For example, the maximum size that is specified in the AUTO_LLOG configuration parameter does not affect the amount of log space that can be added by the DYNAMIC_LOGS configuration parameter. Similarly, the value of the AUTO_LLOG configuration parameter does not affect the amount of log space that you can add manually.

    Informix now also allows you to expand the size of the physical log as needed to improve performance, by creating an extendable chunk for the physical log. An extendable chunk can be automatically extended when additional storage space is required.

    Typically, the physical log gets created in the root dbspace, unless you create an Informix instance during installation. However, for optimal performance it is better to create a separate dedicated dbspace for the physical log on a different disk from the root dbspace and move the physical log out of the root dbspace. If an instance is created during installation, the physical log is created in a separate dbspace called plogspace, with a default size that depends on the value of the AUTO_TUNE_SERVER_SIZE configuration parameter.

    By default, the chunk that you assign to the plogspace is extendable, therefore, the initial size of the chunk can be small. The database server automatically expands the chunk when the physical log requires more space.

    -Sanjit Chakraborty

    IDS "Cheetah 2" Beta on Mac OS X (Leopard)

    cheetahblog Tags:  mac 2,221 Views
    Very aptly termed as... "The Big Cats Unite!".

    This was officially announced today at MacWorld'08 - IDS "Cheetah 2" first beta version on MacOS 10.5 (Leopard) is now available for download here.

    Here's the press announcement.

    Mirav Kapadia[Read More]

    New Shared Memory Segment

    cheetahblog 2,217 Views

    A new shared memory segment has been introduced in Informix version 12.10. From 12.10.xC2 onwards the bufferpool has been moved out of the resident segment, and a new bufferpool segment has been added, which is shown as class 'B' in the 'onstat -g seg' output. The BUFFERPOOL configuration parameter determines the number of buffers allocated to this new segment when the database server is started.


    IBM Informix Dynamic Server Version 12.10.UC2DE -- On-Line -- Up 00:20:35 -- 85899522520 Kbytes

    Segment Summary:
    id         key        addr     size       ovhd     class blkused  blkfree
    32768      52564801   44000000 3424256    243124   R     836      0       
    65537      52564802   44344000 67108864   394328   V     6776     9608    
    98306      52564803   48344000 110305280  1        B     26930    0       
    Total:     -          -        180838400  -        -     34542    9608    

    -Rashmi Chawak

    Modified on by cheetahblog

    New Beta Drop available

    cheetahblog Tags:  cheetah 2,192 Views
    The latest Beta drop of IDS Cheetah is now available, with some important new features integrated into the code line.

    New features available in Beta drop 5 include:

    • AIX 64-bit platform build available for download
    • Web Feature Service for Spatial Data - Delivers spatial information as a web service for your Service Oriented Architecture
    • Label Based Access Control (LBAC) - Control data access at a row and column level using security labels
    • Continuous Availability with multiple HDR remote secondary servers

    The following Infocenter page has a complete list of new features for each Beta drop:

    Guy Bowerman[Read More]

    RENAME support in Enterprise Replication

    cheetahblog Tags:  er replication rename 2,083 Views
    From IDS 10.0x onwards, Enterprise Replication (ER) supports alter operations on a replicated table while replication is active; however, RENAME support was not one of them. That changes in Cheetah: IDS 11.x will start supporting RENAME on ER columns, tables, and databases. This feature simplifies DBA tasks and increases data availability. Without this feature, the DBA would have to plan a time at which ER could be removed from the object so that the rename could be performed. This would require advance planning and scheduling during non-peak hours for performing the rename.

    When a RENAME operation is performed, affected replicate definitions will be updated to reflect the impact of the RENAME, and a control message will be sent to the other servers informing them of the rename. The RENAME operation is allowed only on a mastered replicate. It does not propagate the RENAME command itself; there are plans to implement that in the future. The user simply issues a rename DDL statement on each of the servers that are affected by the RENAME. If the rename is of a column or a table, then the replicate will be cycled (cdr stop/start replicate) in much the same way that occurs with a table ALTER. If a database rename occurs, however, ER will be cycled (cdr stop/start). This cycling will only occur if the rename affects something on the local node. In all cases, the rename will cause a control message to be sent to the other servers within the replication domain so that the syscdr database is correctly updated.

    Vijay Lolabattu[Read More]

    Inplace Upgrade of IDS on Windows

    cheetahblog 1,999 Views
    Depending on your existing Database Server setup, you might be able to upgrade directly to the
    current version by basically installing the product in a new directory, copying a few configuration
    files, and starting the new server to convert your database data. You can upgrade directly from any
    of the following products: Dynamic Server Version 11.10, 10.00, 9.40, 9.30, 9.21, or 7.31.

    Upgrading is an in-place migration method that uses your existing test and production hardware. The
    operating system on those machines must be supported by Dynamic Server Version 11.50. Also, you must
    have enough space for the system database data conversion.

    Upgrading consists of these steps:
      1. Prepare your system. That includes removing outstanding in-place alters, closing all transactions,
      verifying the integrity of the data with oncheck, and performing a level-0 backup. If you are
      using Enterprise Replication or High Availability Data Replication, stop replication.

      2. Install the new product on the machine.
      Important: Do not install it over the existing product.

      3. Copy the ONCONFIG file to the target and set parameters that are new for the current release.

      4. Start the Dynamic Server Version 11.50 instance. The database data is automatically converted.

      5. Run UPDATE STATISTICS MEDIUM for non-leading index columns, then run UPDATE STATISTICS FOR PROCEDURE.
    This type of migration minimizes the risk of introducing errors. You can always revert from the new
    server to the old one. In the event of a problem during reversion, you can restore the level-0 backup.

    This method works perfectly on Windows, though it is not well documented.
      1. When installing IDS 11.50 on Windows, choose the option "Install to a default/different directory".


      2. Make sure that you do not initialize the server when installing.

      Initialize Screen

      3. Copy the ONCONFIG file to the target and set parameters that are new for the current release.

      4. Bring the server up using Control Panel->Services or any other method without initializing.

      5. Monitor the online.log for the Conversion Successful message.

    Once the upgrade has completed successfully, you can remove the old instance. When you run the
    uninstaller make sure that you select "Retains all databases, but removes server binaries".


    If for some reason you would like to revert to the previous instance, restore it from the level-0 backup.

    The only time this will not work on Windows is if you are upgrading from an 11.10.UC1 version to another 11.10 version, say 11.10.UC2.

    Suma Vinod[Read More]

    Time types implemented as BIGINT User-Defined Types

    cheetahblog 1,903 Views

    BIGINT Time


    I wasn't happy with the overall efficiency of the built-in time types; so, I created a couple of user-defined types which represent time using distinct types of BIGINT. In these UDTs, a TIMEPOINT is the number of microseconds since the UNIX epoch, January 1, 1970 00:00:00 GMT, and a MICROSECOND is simply a duration of time. The internal implementation of a type affects three factors of database operations: storage space, performance, and application development. This is an implementation of time types that has great potential to improve upon Informix's internal implementation in all three aspects. However, the potential does not always live up to theory in practice. Let's look at when it does, when it does not, and why.


    Storage Space

    The storage space requirements for DATETIME and INTERVAL types can be calculated using this formula, in bytes:


    (total number of digits for all fields)/2 + 1
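The formula is easy to check in Python; the digit counts per field come straight from the type definition, and the only subtlety is the round-up.

```python
import math

def time_type_storage_bytes(field_digits):
    """Storage for a DATETIME/INTERVAL value:
    (total number of digits for all fields) / 2 + 1, rounded up."""
    total = sum(field_digits)
    return math.ceil(total / 2 + 1)

# DATETIME YEAR TO FRACTION(5): year(4), month(2), day(2),
# hour(2), minute(2), second(2), fraction(5) -> 11 bytes
```

Compare this against the flat 8 bytes of a 64-bit encoding to see where the UDT wins or loses.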

    For example, a DATETIME YEAR TO FRACTION(5) requires (4+2+2+2+2+2+5)/2 + 1 = 11 bytes; you have to round up. A 64-bit encoding always requires 8 bytes, never more and never less. This matters more or less according to your table definition. Examples:


    The table cust_calls2 is derived from the stores7 database table cust_calls.


    create table cust_calls2 (
        customer_num integer,
        call_dtime datetime year to minute,
        call_interval interval minute(4) to second,
        user_id char(18) default user,
        call_code char(1),
        call_descr varchar(240),
        res_dtime datetime year to minute,
        res_descr varchar(240)
    );



    The table cust_calls3 replaces the server native types with their UDT counterparts.


    create table cust_calls3 (
        customer_num integer,
        call_tp timepoint,
        call_mu microsecond,
        user_id char(18) default user,
        call_code char(1),
        call_descr varchar(240),
        res_dtime datetime year to minute,
        res_descr varchar(240)
    );



    So, if you load these tables with a lot of rows, how much space do you save?


    Let's run the following:

    select tabname,
           rowsize,
           nrows::bigint as nRows,
           pagesize,
           npused::bigint as nPages,
           (npused * pagesize / (1024*1024))::decimal(5) as megabytes
    from systables where tabname like 'cust_calls%';




    tabname cust_calls2

    rowsize 523

    nrows 3670016

    pagesize 2048

    npages 1223339

    megabytes 2389.3


    tabname cust_calls3

    rowsize 528

    nrows 3670016

    pagesize 2048

    npages 1223339

    megabytes 2389.3


    Under these conditions, there is no space saving at all. Why is this?


    The page size is 2048 and the row sizes are 523 and 528. That means you get only 3 rows per page regardless of which table definition you use. Also, the 64-bit implementation has a slightly larger row size here because the DATETIME and INTERVAL definitions do not cover the full range of their type capabilities. You will not see any space savings unless you can increase the rows-per-page ratio.
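
    The arithmetic behind that conclusion can be sketched as follows (ignoring Informix page-header overhead, an assumption that does not change the outcome here):

    ```java
    public class RowsPerPage {
        // Integer division gives the number of whole rows that fit on a page.
        static int rowsPerPage(int pageSize, int rowSize) {
            return pageSize / rowSize;
        }

        public static void main(String[] args) {
            System.out.println(rowsPerPage(2048, 523)); // cust_calls2: prints 3
            System.out.println(rowsPerPage(2048, 528)); // cust_calls3: also prints 3
        }
    }
    ```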


    So, what happens when you have smaller rows and higher precision DATETIME/INTERVAL definitions? Let's take an extreme case. In this test, the table definitions only have time types in them.


    create table cust_calls2a
    (
        call_dtime     datetime year to fraction(5),
        call_interval  interval day(9) to fraction(5)
    );

    create table cust_calls3a
    (
        call_tp  timepoint,
        call_mu  microsecond
    );

    This time the table statistics are:


    tabname cust_calls2a

    rowsize 23

    nrows 3670016

    pagesize 2048

    npages 49595

    megabytes 96.865




    tabname cust_calls3a

    rowsize 16

    nrows 3670016

    pagesize 2048

    npages 36337

    megabytes 70.971


    Under these conditions, we are looking at a space savings of just over 26 percent. Your database probably has a mix of conditions, so your overall space savings probably lies somewhere between 0 and 26 percent.



    Performance

    Using the same tables as above, let's exercise the server with some queries. In each case, the maximum value for the time column is selected.


    select max(call_tp) -- or call_dtime

    from cust_calls3;

    Since there are no indices, this forces the server to perform N-1 comparisons, where N is the number of rows. To be clear, these tests were run on an older laptop, and the server was in no way optimized. The relative performance across conditions should be reasonably robust, but your mileage may vary. In particular, if you are using a solid-state drive, the IO time will be much lower and the computational time proportionately higher; that would dramatically change the relative performance percentages reported here.

    The time to find the maximum value for call_dtime was approximately 9 percent faster than doing the same for call_tp. In practice, the operations performed by the server are faster for this structure,


    typedef struct dtime {
       short dt_qual;
       dec_t dt_dec;
    } dtime_t;

    than they are for an integer base type. How could this be? Informix math libraries convert everything to a decimal prior to doing the math; so, the TIMEPOINT (BIGINT) types might be being converted to decimal prior to doing the comparison. The conversion time might be adding to the overall processing time. I played with some queries, and found that there was enough difference between

    select count(i::decimal(32))
    from x1m;

    and

    select count(i)
    from x1m;

    to make this hypothesis plausible. It is not confirmed, but it is plausible. In this test, i is a column in a view of sysmaster:sysdual with 1M rows.
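
    The conversion-cost hypothesis can be illustrated outside the server with a rough JVM analogue. This uses Java's BigDecimal, not Informix's dec_t library, so the absolute numbers mean nothing; it only shows that converting each value to a decimal before comparing adds measurable work:

    ```java
    import java.math.BigDecimal;

    public class DecimalCompareCost {
        // Scan for the maximum using native long comparisons.
        static long maxDirect(long[] vals) {
            long max = Long.MIN_VALUE;
            for (long v : vals) if (v > max) max = v;
            return max;
        }

        // Same scan, but converting each value to a decimal first,
        // mimicking the hypothesized server behavior.
        static long maxViaDecimal(long[] vals) {
            BigDecimal max = BigDecimal.valueOf(Long.MIN_VALUE);
            for (long v : vals) {
                BigDecimal d = BigDecimal.valueOf(v); // per-value conversion cost
                if (d.compareTo(max) > 0) max = d;
            }
            return max.longValue();
        }

        public static void main(String[] args) {
            long[] vals = new long[1_000_000];
            for (int i = 0; i < vals.length; i++) vals[i] = i * 997L;

            long t0 = System.nanoTime();
            long a = maxDirect(vals);
            long direct = System.nanoTime() - t0;

            t0 = System.nanoTime();
            long b = maxViaDecimal(vals);
            long viaDecimal = System.nanoTime() - t0;

            System.out.println("max " + a + " == " + b +
                "; direct ns: " + direct + ", via decimal ns: " + viaDecimal);
        }
    }
    ```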

    It is time to take a look at the abridged versions of the tables. Here we have a complete reversal; the query on max(call_tp) was 24 percent faster than the query on max(call_dtime), on average. I suspect this is because there is a greater information density on the disk, 26 percent denser, for my UDTs and so the IO time is reduced, and that more than makes up for the increased CPU processing overhead.

    The summary is that for tables with a low proportion of time values, using TIMEPOINT and MICROSECOND does not gain you much in disk space savings, and can actually cost you a little in CPU cycles. However, for tables with a high proportion of high-precision time values, these UDTs can save you in the neighborhood of 26 percent on disk space and around 24 percent in overall query execution time.


    Application Development

    I wrote up the concepts for why types like these would be nice to have for application development before implementing them. That material is covered in the write-up, “Timestamps with Time Zone Independence Implemented as 64-bit Integers”. Since these are not built-in types, you cannot use API methods to directly bind them to application time value types. For example, you cannot call ResultSet.getTimestamp() on a TIMEPOINT column. However, using them is pretty simple. You just pass the values through the API as BIGINT (64-bit) values, converting to or from the application language native types. Below are some snippets of Java code that do this.

    Inserting TIMEPOINT and MICROSECOND values using Java types

    // insert Instant based on GMT and Duration

    PreparedStatement pst = conn.prepareStatement("insert into tab1(time, duration) values (?, ?)");

    Instant timepoint = Instant.now(); // current date and time, truncated below to millisecond precision

    long msec_since_epoch = timepoint.toEpochMilli(); // milliseconds after epoch GMT

    pst.setLong(1, msec_since_epoch * 1000);

    // This bypasses the JDBC driver converting Timestamp to

    // the timezone of the driver's JRE; there are 1000 micros

    // in a milli.


    Duration dur = Duration.ofNanos(123456000); // 0.123456 seconds

    long musec = dur.toNanos() / 1000;

    pst.setLong(2, musec);


    // What will happen here is the driver will pass the long values to the server,

    // and the server will leverage the implicit casts to convert the long values

    // into a TIMEPOINT and a MICROSECOND. Long(BIGINT) to the UDT types should be

    // a no-op cast.



    Inserting TIMEPOINT and MICROSECOND using integer literals

    PreparedStatement pst = conn.prepareStatement("insert into tab1(time, duration) values (" +

    // seconds since epoch + nanosecond offset

    String.valueOf(timepoint.getEpochSecond()*1000000 + timepoint.getNano()/1000) +

    ", " + "12345654321)"); // 12345.654321 seconds


    // The server will convert the literals in string format to BIGINT, and then no-op cast the

    // BIGINTs to the UDT types.



    Inserting TIMEPOINT and MICROSECOND using DATETIME and INTERVAL literals

    PreparedStatement pst = conn.prepareStatement("insert into tab1(time, duration) values (" +

    "datetime(1970-01-02 00:00:00.54321) year to fraction(5), " +

    "INTERVAL (654321.54321) SECOND(6) TO FRACTION(5))");


    // Here the literal values will be passed to the server embedded in the SQL statement,

    // the server will convert the literal strings to DATETIME and INTERVAL types, and then

    // it will apply the casts from these types to the UDT types using the UDR conversion

    // routines defined for the casts.




    Selecting TIMEPOINT and MICROSECOND values as Java types

    ResultSet r = stmt.executeQuery("select time, duration from tab1");
    long time, muSec;
    Duration dur;
    System.out.println("Selecting records: ");

    while (r.next()) {
        time = r.getLong(1);
        muSec = r.getLong(2);

        // Create an Instant using the number of whole seconds and the remaining nanos.
        timepoint = Instant.ofEpochSecond(time/1000000, (time % 1000000) * 1000);
        dur = Duration.ofNanos(muSec * 1000);

        System.out.println("Unformatted time: " + timepoint.toString() +
            "\tDuration: " + String.valueOf(dur.getSeconds()*1000000000 + dur.getNano()) +
            " nanoseconds");

        // What time zone do we want? Let's pick New Zealand.
        ZoneId zone = ZoneId.of("Pacific/Auckland");
        ZonedDateTime localZoneTime = ZonedDateTime.ofInstant(timepoint, zone);
        DateTimeFormatter format = DateTimeFormatter.ofPattern("MMM d yyyy hh:mm a");

        System.out.printf("Local time: %s %s\n", localZoneTime.format(format), zone);
    }



    Unformatted time: 2014-10-02T23:02:27.780Z Duration: 123456000 nanoseconds

    Local time: Oct 3 2014 12:02 PM Pacific/Auckland

    Unformatted time: 2014-10-02T23:02:27.780Z Duration: 12345654321000 nanoseconds

    Local time: Oct 3 2014 12:02 PM Pacific/Auckland

    Unformatted time: 1970-01-02T00:00:00.543210Z Duration: 654321543210000 nanoseconds

    Local time: Jan 2 1970 12:00 PM Pacific/Auckland



    In terms of storage space, if time values are a small portion of your table rows, you will not get much, if any, space savings; but if time values are a large portion of your table rows, you can see up to, in round numbers, 25 percent savings in space requirements. The server-side operations on these types are not more efficient than the native types, because the server converts integer types to decimal before performing any math operations, while DATETIME and INTERVAL values are already in DECIMAL format. However, you can get around 25 percent better overall query performance because more information is obtained per disk IO operation. Converting between database types and application types is more efficient in terms of CPU cycles because the conversion involves only integer operations. More significantly, the database values are time zone independent, and you do not have to code your applications to be aware of the time zone in which the application was running when the values were stored.


    The attached files include the SQL routines to define the MICROSECOND and TIMEPOINT types, routines to convert them to and from the native time types, and simple math operations for addition, subtraction, multiplication, and division. Included is a write-up containing the theoretical reasoning for implementing these types. There are also a variety of SQL test scripts and a Java program demonstrating read and write operations on these types in an application.


    The download code is more of a proof of concept than a fully-fledged, for-production-use implementation. The conversion routines that exist could be better optimized, and operations beyond simple math could be implemented. Comments and recommendations for improvement are welcome.


    -Chris Golledge


    Sunsetting Older Informix Information Centers

    cheetahblog Tags:  centers information center info 1,871 Views

    The following Informix information centers will be sunset in the near future:


    • Informix 11.10 information center
    • Informix 10 information center
    • Informix products information center

    After these information centers are sunset, links to them will be redirected automatically to the resources listed below. You can download final versions of the product documentation, which are being provided as-is, from these resources:

    Information centers for current Informix releases will be replaced by the new IBM Knowledge Center, which is currently in Beta. Links from the current information centers will be redirected automatically to the new IBM Knowledge Center.

    Consider updating your bookmarks now, and explore the IBM Knowledge Center, especially the Informix collections:

    To learn more about the IBM Knowledge Center strategy, follow the IBM Technical Content blog at

    -Randy House

    How to change multiple configuration parameters at once?

    cheetahblog 1,866 Views

    At present, most Informix configuration parameters can be modified dynamically. Typically, users run the 'onmode -wf' or 'onmode -wm' command to dynamically change a specific configuration parameter. But there may be situations where you need to modify multiple parameters at once. For example, you have ten Informix instances and want to tune some of the SQL statement cache configuration parameters on all of them.


    Now you can import a configuration file using the onmode command to change multiple configuration parameters at once. Importing a configuration file is often faster and more convenient than changing one configuration parameter at a time. Use the 'onmode -wi' command to import a file that contains new values for multiple configuration parameters. If the parameters are dynamically tunable, the database server applies the new values. The import operation ignores configuration parameters in the file that are not dynamically tunable, as well as parameters whose new values are the same as the existing values.


    However, keep in mind that an import operation changes the values of configuration parameters only in memory. It does not modify the values in the ONCONFIG file on disk.


    Let's take the same example as before and modify some of the SQL statement cache configuration parameters using an import operation. Currently, you have the following settings:

    STMT_CACHE         0
    STMT_CACHE_SIZE    512

    You create a '' file under the '/tmp' directory with the following values to modify the configuration parameters:

    STMT_CACHE         1
    STMT_CACHE_SIZE    512


    Next, you run the following command to import configuration values from the file named '' in the /tmp directory:

    $ onmode -wi /tmp/
       3 parameters successfully modified. See online log for more info.

    The above command generates appropriate messages on screen and the message log. Following is an excerpt of the message log:

    14:50:42  Importing configuration from '/tmp/':
    14:50:42  Value of STMT_CACHE has been changed to 1.
    14:50:42  Value of STMT_CACHE_NOLIMIT has been changed to 1.
    14:50:42  Value of STMT_CACHE_HITS has been changed to 1.
    14:50:42  Config Import Complete. 3 tunable parameters processed.
              3 modified, 0 unchanged, 0 could not be tuned.


    Notice that only three of the five parameters have been changed: the others were either not dynamically tunable or already set to the same value.


    Just as the 'onmode -wi' command imports a configuration file, you can also export configuration parameters from memory to a file with the 'onmode -we' command.


    -Sanjit Chakraborty


    Introduction to the Binary Datablade

    cheetahblog Tags:  datablade bdt 1,728 Views
    So now that we've covered the BTS blade, let's look at another datablade that, while first introduced in 10.00.FC6, still needs some introduction: the Binary DataBlade (BDT for short).

    The BDT blade is designed to allow for the storage and manipulation of binary encoded strings. It supports two new datatypes and allows logical operations such as AND, OR, XOR, and NOT to be used against these datatypes.

    Like any official datablade, it is registered and installed by use of the blademgr utility.

    Registering the BDT

    After registering it in your database, you have two additional UDTs:

    • binaryvar - a UDT which stores a binary string of up to 255 characters
    • binary18 - a UDT which stores a fixed 18-byte binary string. If the value inserted into this UDT is less than 18 bytes, the field is right-padded with zeroes.
    Binary data is entered into the columns as ASCII hex values. You can choose to preface the value with "0x" or not. The data must also be entered in pairs, since a byte of information runs from 00 to FF in hex. That means inserting a value with an odd number of hex digits (for example, '123') will generate an error, while a value with an even number of digits (for example, '0123') will not.
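
    The pairing rule is easy to check on the client side before sending a statement. A minimal sketch in Java (the helper name isValid is hypothetical, not part of the datablade):

    ```java
    public class BdtLiteral {
        // A BDT literal is valid when, after an optional "0x" prefix,
        // it contains only hex digits and an even number of them
        // (i.e., a whole number of bytes).
        static boolean isValid(String s) {
            String hex = s.startsWith("0x") ? s.substring(2) : s;
            return hex.length() % 2 == 0 && hex.matches("[0-9A-Fa-f]*");
        }

        public static void main(String[] args) {
            System.out.println(isValid("0xFE"));  // prints true
            System.out.println(isValid("0xFFF")); // prints false: odd digit count
        }
    }
    ```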

    Currently the above datatypes can be manipulated by the following functions:

    • bit_and() - performs a logical AND on two BDT datatypes, or on one BDT datatype and a string constant.
    • bit_complement() - performs a logical NOT on a single BDT datatype.
    • bit_or() - performs a logical OR on two BDT datatypes, or on one BDT datatype and a string constant.
    • bit_xor() - performs a logical XOR on two BDT datatypes, or on one BDT datatype and a string constant.

    Also, the following aggregates support the BDT datatypes:
    • DISTINCT()
    • MAX()
    • MIN()
    Please note the following are not supported:
    • LIKE
    The BDT blade supports b-tree indices for both single-column and composite indices.

    So now that we have the details, let's create an example. We will create a table in the stores_demo database called bdt_example, and insert some rows.

    CREATE TABLE bdt_example (bin_id serial, bin_val binary18) ;
    INSERT into bdt_example values (0, '102039847137');
    INSERT into bdt_example values (0, '0xFE');
    INSERT into bdt_example values(0, '49424d434f52504f524154494f4e32303036');

    So let's run:

    SELECT DISTINCT (bin_val) FROM bdt_example;

    BDT Select Distinct

    As you can see, the binary18 values that were inserted with less than 18 bytes had zeroes padded on the end.

    So let's see what the complement of the data would be:

    SELECT bit_complement(bin_val) from bdt_example;

    select with bit complement

    Again, you can see where the logical NOT was run against these three values.

    The BDT datablade also has some additional diagnostic-type functions. They are:

    • BDTRelease() - tells you what version of the BDT datablade you are using.
    • BDTTrace(filename) - specifies the location of the debug/trace file for the BDT datablade.
    • LENGTH(column) - returns the length, in bytes, of the BDT column specified.
    • OCTET_LENGTH(column) - returns the length, in bytes, of the BDT column specified.
    LENGTH and OCTET_LENGTH return the same values for BDT datatypes.

    One additional note: the BDT is fully supported for use in ER.

    Additional information on the Binary DataBlade can be found in the Built-In DataBlade Modules User's Guide.

    Mark Jamison[Read More]

    Tracking object state changes within Enterprise Replication (ER)

    cheetahblog Tags:  cdr_features track logcommands 1,697 Views

    Have you ever run into a situation where certain ER administrative commands were executed and caused ER to fail? Without knowing the exact sequence of activities, it can be very difficult to identify the actual commands that caused the problem. Unfortunately, not all ER commands generate alerts or update the message log. If you did not use a script to run the ER administrative commands, there is no easy way to track the sequence of activity that caused the failure.


    An undocumented mechanism can be used to track ER object state changes. Once you enable this tracking mechanism, the Informix server records ER administrative operations in the server message log file with their full command-line arguments, which can help in analyzing ER issues. By default, tracking is disabled. To enable it, either:

    • Add the following entry to the ONCONFIG configuration file and restart the Informix server.

                     CDR_FEATURES   LOGCOMMANDS


    • Dynamically change the configuration settings with the following command:

                cdr change onconfig "CDR_FEATURES LOGCOMMANDS"

    For example, if you run 'cdr define server --ats=/test/data2 --ris=/test/data2 --connect cdr2 --init g_cdr2 --sync g_cdr1' to define a replication server, it generates the following information in the message log:

    10:23:12 CDR remote administrative operation 'define serv -A /test/data2 -R /test/data2 -S g_cdr1 -I g_cdr2' execution succeeded.


    -Sanjit Chakraborty


    cheetahblog 1,609 Views
    Welcome to the IDS Experts Blog, an international team blog authored by IBM software engineers. These experts from the Informix Technical Support and R&D labs will be posting about a variety of topics related to Informix and IDS. A popular topic over the next few months is likely to be the 11.10 release of Informix Dynamic Server currently in Open Beta.

    If you have suggestions on articles you'd like to see from support or development engineers please add a comment to the blog. Your feedback to this blog goes directly to engineers working on Informix products.[Read More]

    Identifying locks from Updatable Secondary Servers

    cheetahblog Tags:  onstat; lock proxy; 1,594 Views