Informix Experts

For a complete list of current SQL reserved words, see the IBM Informix Guide to SQL: Syntax.

Sanjit Chakraborty

Moving (Resetting) the sysadmin database to a new dbspace

cheetahblog Tags: database reset move sysadmin
"sysadmin" database is created in "root dbspace" at server initialization. This database is required for the Scheduler API and Remote Administration feature. Until B5 drop, there wasn't any way of moving the sysadmin database safely to any other dbspace. A new SQL Admin API command in B6 drop now simplifies this task by allowing "informix" user to drop and recreate this database to any other dbspace.

If it's determined that root dbspace does not have enough space for storing task properties and command history information, you could move the sysadmin database to a different dbspace by using the "reset sysadmin" SQL Administration API command. This command drops the sysadmin database from root dbspace and recreates it in the specified dbspace.

Here's an example to move the sysadmin database...

1. Make sure the following message has appeared in the online message log after server startup:

SCHAPI: Started 2 dbWorker threads.  

2. Create a dedicated dbspace, admindbs, to hold sysadmin. You can use the SQL Administration API:

database sysadmin;
execute function task("create dbspace", "admindbs", "/vol/dbspaces/admindbs", "1 GB", "0");

or, equivalently, the onspaces utility:

onspaces -c -d admindbs -p /vol/dbspaces/admindbs -s 1000000 -o 0

3. As user informix, run the following commands:

dbaccess sysadmin -
execute function task("reset sysadmin", "admindbs");

The command will return the following message:

SCHAPI: 'sysadmin' database will be moved to 'admindbs'. See online message log.

The internal thread, bld_sysadmin (seen via onstat -g ath), waits up to five minutes to obtain exclusive access to the sysadmin database. The progress of the bld_sysadmin thread is logged in the online message log.

4. Our own session still has the sysadmin database open, so terminate the dbaccess session with the "close database" statement.

On successful completion, the sysadmin database is dropped and recreated in the new dbspace. The dbScheduler and dbWorker threads are restarted automatically.
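To confirm where sysadmin now resides, one option is a quick sysmaster query (a sketch; it relies on the standard dbinfo() partnum-to-dbspace mapping):

    database sysmaster;
    select dbinfo("dbspace", partnum) dbspace
      from sysdatabases
     where name = "sysadmin";

After a successful move, this should report admindbs rather than the root dbspace.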

Mirav Kapadia

New Onstat Commands

cheetahblog Tags: onstat
Beginning with IDS version 11.10, Informix introduced several new onstat commands to enhance server maintenance and monitoring. Database administrators can take advantage of these new onstat commands in their regular activities.

The following is a list of the new onstat commands:

onstat -g his Prints SQL statement tracing information gathered via the SQLTRACE configuration parameter. For more information on SQL tracing, see the following articles: Tracing SQL in Cheetah Part 1 and Tracing SQL in Cheetah Part 2

onstat -g ckp Prints checkpoint information and checkpoint-related advisories. For more information on the checkpoint advisory, see the following article: Checkpoint Performance Advisory

onstat -g pqs Dynamically tracks a session's query plan. For more information, see the following article: onstat -g pqs

onstat -g smx Prints Server Multiplexer group information for servers using SMX. The Server Multiplexer Group (SMX) is a communications interface that supports encrypted multiplexed network connections between servers in high-availability environments. SMX provides a reliable, secure, high-performance communication mechanism between database server instances.

onstat -g rss Prints Remote Standalone Secondary (RSS) server information. The output of this command differs slightly depending on whether the command is run on the primary server or on the RS secondary server.

onstat -g sds Prints Shared Disk Secondary (SDS) server information

onstat -g ipl Shows index page logging status information

onstat -g cat Prints information from the Enterprise Replication global catalog. The global catalog contains a summary of information about the defined servers, replicates, and replicate sets on each of the servers within the enterprise.

onstat -g cdr config Prints the settings of Enterprise Replication configuration parameters and environment variables that can be set with the CDR_ENV configuration parameter.
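Each of these is invoked like any other onstat option; for example (output omitted, and onstat -g his assumes SQLTRACE has been configured):

    onstat -g ckp              # checkpoint history and advisories
    onstat -g his              # SQL statement trace buffer
    onstat -g cdr config       # current ER configuration settings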

For a complete list of onstat commands, see the IBM Informix Administrator's Reference Guide.

Sanjit Chakraborty

Monitoring Onbar Performance

cheetahblog Tags: performance onbar
Previously there was no easy way to monitor onbar archiving progress. It was always a question how long an onbar process would take to complete an archive, or how much time onbar would spend transferring data between the server and the storage manager and vice versa.

Informix introduces two new configuration parameters to help with onbar monitoring: BAR_PROGRESS_FREQ and BAR_PERFORMANCE.


The BAR_PROGRESS_FREQ configuration parameter specifies, in minutes, the frequency of the progress messages in the bar activity log for backup and restore operations.

For example, if BAR_PROGRESS_FREQ is set to 5, onbar reports the percentage of the object backed up or restored every five minutes. Following is an excerpt of the bar activity log showing the progress of a rootdbs dbspace backup:

  2007-05-09 16:12:58 13344  13342 /export/home/informix/bin/onbar_d -b -L 0
  2007-05-09 16:12:59 13344  13342 Archive started on rootdbs (Requested Level 0).
  2007-05-09 16:13:00 13344  13342 Begin level 0 backup rootdbs.
  2007-05-09 16:13:00 13344  13342 Successfully connected to Storage Manager.
  2007-05-09 16:18:00 13344  13342 1 percent of rootdbs has been backed up.
  2007-05-09 16:23:01 13344  13342 6 percent of rootdbs has been backed up.
  2007-05-09 16:28:02 13344  13342 11 percent of rootdbs has been backed up.
  2007-05-09 16:33:03 13344  13342 16 percent of rootdbs has been backed up.
  2007-05-09 16:38:04 13344  13342 21 percent of rootdbs has been backed up.
  2007-05-09 16:43:05 13344  13342 26 percent of rootdbs has been backed up.

The default value of BAR_PROGRESS_FREQ is 0. If the value is set to 0, onbar does not write any progress messages to the bar activity log.

For monitoring onbar progress, the BAR_PROGRESS_FREQ value cannot be less than five minutes.

If ON-Bar cannot determine the size of the backup or restore object, it reports the number of transfer buffers sent to the database server instead of the percentage of the object backed up or restored.


The BAR_PERFORMANCE configuration parameter specifies the type of performance statistics to report and write to the bar activity log for backup and restore operations.

For example, if BAR_PERFORMANCE is set to 3, onbar reports in the bar activity log the time spent transferring data between the Informix server and the storage manager.

The default value of BAR_PERFORMANCE is 0. If the value is set to 0, onbar does not report any performance statistics to the bar activity log.

Valid values of BAR_PERFORMANCE are 0, 1, 2, or 3:

  • 0 - turn performance monitoring off
  • 1 - display the time spent transferring data between the server and storage manager
  • 2 - display sub-second accuracy in the timestamps
  • 3 - display both timestamps and transfer statistics

Both the BAR_PROGRESS_FREQ and BAR_PERFORMANCE configuration parameters take effect when the onbar process starts.
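A sketch of the corresponding ONCONFIG entries (the values shown are illustrative, not recommendations):

    BAR_PROGRESS_FREQ  5    # progress message every 5 minutes
    BAR_PERFORMANCE    3    # timestamps plus transfer statistics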

Sanjit Chakraborty


Configuring session properties

cheetahblog Tags: sysdbopen configure sysdbclose close open connect database session
In IDS 11.10, a DBA or user informix can create sysdbopen() and sysdbclose() procedures to be executed when a database is opened and closed. These procedures can be used to change the properties of a session without changing the application that the session executes. Any statements that are valid in a UDR can be executed in these procedures to change the session behavior.

A DBA or user informix can create the following procedures in a database:
  • username.sysdbopen
  • public.sysdbopen
  • username.sysdbclose
  • public.sysdbclose

    username: Operating System User
Each time a user user1 opens a database using either a DATABASE or CONNECT TO statement, the database server executes user1.sysdbopen() if such a procedure is defined. If not, it executes public.sysdbopen(). Each time a user user1 closes a database using either a CLOSE DATABASE or DISCONNECT statement, the database server executes user1.sysdbclose() if such a procedure is defined. If not, it executes public.sysdbclose(). sysdbclose() will be executed even if the application exits without an explicit CLOSE DATABASE or DISCONNECT statement, because the server does an implicit close of the current database in such cases.

The owner name is not ignored when you create sysdbopen() and sysdbclose() procedures in non-ANSI databases, so you can create these procedures for specific users in non-ANSI databases.

The following procedure creates a table oltp_stat, sets the role to oltp, and sets the PDQ priority to 10 for user oltp_user in a database:

    create procedure oltp_user.sysdbopen()
    create table oltp_stat(userid int, connect_time datetime year to second);
    set role to oltp;
    set pdqpriority 10;
    end procedure;

The following procedure creates a table public_stat, sets the role to others, and sets the PDQ priority to 1 for the PUBLIC group in a database:

    create procedure public.sysdbopen()
    create table public_stat(userid int, connect_time datetime year to second);
    set role to others;
    set pdqpriority 1;
    end procedure

You can clean up any activities started by sysdbopen() in sysdbclose(). For example, if you had created some tables in sysdbopen(), you can drop those tables in sysdbclose().

The following procedures drop the tables created by the sysdbopen() procedures given above:

    create procedure oltp_user.sysdbclose()
    drop table oltp_stat;
    end procedure;

    create procedure public.sysdbclose()
    drop table public_stat;
    end procedure

A DBA or user informix can set the environment variable IFX_NODBPROC to any value, including 0, to prevent the execution of sysdbopen() and sysdbclose() procedures. When you set up sysdbopen() and sysdbclose() procedures, you can set the environment variable IFX_NODBPROC and execute the procedures manually to test whether they work as expected. Unset IFX_NODBPROC after testing.
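That testing flow might look like this in a shell session (a sketch; "mydb" is a hypothetical database name, and the procedure name is taken from the earlier example):

    export IFX_NODBPROC=1                       # suppress automatic execution
    dbaccess mydb -                             # open the database; sysdbopen() is skipped
    execute procedure oltp_user.sysdbopen();    -- run the procedure manually to verify it
    close database;
    unset IFX_NODBPROC                          # restore normal behavior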

For more information, see the sections on sysdbopen() and sysdbclose() in the IBM Informix Guide to SQL: Syntax.

Suma Vinod

List of Installed components : New manifest file

cheetahblog Tags: of list components manifest installed
The installation and uninstallation applications allow you to selectively add or remove components. To choose specific features during installation, you must choose the custom setup option.

You can remove specific Dynamic Server features after both typical and custom installations, provided that the features are not required for your system's integrity and that they have not been used in your implementation.

You can also add or reinstall specific features after completing setup of a Dynamic Server instance without reinstalling the base server. The installation and uninstallation applications detect your implementation's setup to prompt you about the features accordingly.

UNIX and Linux-platform installation
The manifest file manifest.inf ($INFORMIXDIR/etc/manifest.inf) and the installed-files list, IIFfiles.installed ($INFORMIXDIR/etc/IIFfiles.installed) on sites using J/Foundation or IDS2000files ($INFORMIXDIR/etc/IDS2000files) on sites not using J/Foundation, are dynamic files that log the installation activity of an IDS instance.
If any of the components is uninstalled, the entry for that component is removed from the manifest.inf file.
These "log files" can help you quickly see what features and component files are currently installed, as well as a history of such activity. Do not modify the content of these files.

WINDOWS-platform installation
The manifest file manifest.inf (%INFORMIXDIR%\etc\manifest.inf) is a dynamic file that logs installation activity of an IDS instance. This "log file" can help you quickly see what features and component files are currently installed, as well as a history of such activity.

Do not modify the content of these files. 

Below is a snippet from the manifest.inf file:

IBM Corporation IBM Informix Dynamic Server

Installed Features
Base Server
The core database server for basic DBA operations without optional extensions, libraries, or utilities
115.7 MB
Last installed:May 3, 2007

Informix Interface for Tivoli Storage Manager
For implementing XBSA functions that use Tivoli Storage Manager with ON-Bar
115 KB
Last installed:May 3, 2007

Informix Storage Manager
For managing external storage devices and media that contain backups
49.5 MB
Last installed:May 3, 2007

archecker Utility
For verifying backups and restoring portions of a database, a table, a portion of a table, or a set of tables
2.7 MB
Last installed:May 3, 2007

Rashmi Chawak


C-Style Comment Indicator with DB-Access

cheetahblog Tags: dbaccess comment

Earlier versions of Informix Dynamic Server (IDS) restricted the use of C-style comment indicators ( /* ... */ ) in SQL statements within DB-Access.

The lack of C-style comment support was not compliant with the SQL-99 standard and caused problems when migrating from other database servers.

Starting with IDS version 11.10, the restrictions on C-style comments are no longer in effect. Any of the following SQL comment indicators can be used in all DB-Access statements:

  • Double hyphen ( -- ) complies with the ANSI/ISO standard for SQL
  • Braces ( { } ) are an Informix extension to the ANSI/ISO standard
  • C-style slash-and-asterisk ( /* . . . */ ) comply with the SQL-99 standard

The choice of comment indicator depends on your ANSI/ISO compliance requirements.

A comment indicator can be used on the same line as the SQL statement. A comment can span a single line or multiple lines, but comments cannot be nested.

DB-Access filters out any comments before sending the SQL statement to the database server.

Examples:

-- Double hyphen comment
SELECT * FROM orders;     -- Selects all columns and rows
                          -- from the orders table

{ Comment using braces } SELECT * FROM orders; { Selects all columns and rows from the orders table }

/* C-style comment */ SELECT * FROM orders; /* Selects all columns and rows from the orders table */

Sanjit Chakraborty

New onstat option to display sysdbprocedure cache

cheetahblog Tags: onstat sysdbopen cache sysdbclose procedures
In IDS 11.10, you can configure session properties using sysdbopen() and sysdbclose() procedures; see Configuring Session Properties. These procedures are stored in the sysprocedures system catalog. A memory cache is maintained for these procedures to facilitate faster searching of procedure identifiers. A new onstat option is available to print statistical information about this memory cache. From this information you can determine how many users are accessing a database and the sysdb procedures defined for public and other users. A sample output is shown:
    $ onstat -g cac sproc

    IBM Informix Dynamic Server Version 11.10.FC1 -- On-Line -- Up 4 days 20:30:27 -- 38912 Kbytes

    Sysdbproc Cache: Number of lists : 31

    list# refcnt dbname    Public Procids Username User Procids UsrList#
                           (open,close)            (open,close)
    -------------------------------------------------------------------
    7     0      sysmaster (-1, 0)
    16    1      testdb    (345, 346)     suma     (347, 348)   14
                                          informix (343, 344)   122
    18    0      sysutils  (-1, 0)
    21    0      sysuser   (-1, 0)

    list#          List number to which the database name hashes
    refcnt         Number of sessions accessing this database
    dbname         Database name
    Public Procids Procedure ids of public.sysdbopen() and public.sysdbclose(). The value will be (-1,0) if there are no public or user-defined sysdb procedures for that database
    Username       User name of any users who have their own sysdbopen() and sysdbclose() procedures for that database
    User Procids   Procedure ids of the sysdb procedures for a particular user in a database
    UsrList#       The user hash list to which the user belongs

The number of lists is determined by the onconfig parameter DS_HASHSIZE. The maximum number of database entries for a list is determined by DS_POOLSIZE. This is a soft limit because it is checked only when you close a database. When you close a database, if the number of database entries for that list is greater than DS_POOLSIZE, IDS deletes all excess database entries in that list that have a reference count of 0.
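These two parameters live in the ONCONFIG file; for example (values are illustrative, with 31 matching the "Number of lists" in the sample output above):

    DS_HASHSIZE  31     # number of hash lists in the sysdbproc cache
    DS_POOLSIZE  127    # soft limit on database entries per list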

Suma Vinod

BTS Blade part two setup and installation

cheetahblog Tags: setup bts installation
So you want to get the BTS datablade running on a database? Well, here is a step-by-step quick reference to get your database ready to use the BTS datablade.

  • First you need to register the BTS blade in your database. This is done with the following method:

    Registering BTS

  • Next you need to create an external space for your BTS index. This is accomplished by the following method:

    Creating Ext space

  • Next you need to set up your instance so that the BTS virtual processor is created. This is done by adding the following to the $ONCONFIG file:


    Please note that at this point you have two options to start your bts VP: the first is to cycle the instance (i.e., onmode -ky followed by oninit); the second is via the onmode -p command (i.e., onmode -p +1 bts).

  • Finally you need to create an index using the bts access method. Below is an example of the code.

    Creating index
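In case the screenshots above do not render, here is a hedged sketch of the whole sequence; the space name, path, blade version string, and operator class are illustrative assumptions, not canonical values:

    -- 1. register the blade (from the blademgr prompt):
    --      register bts.1.00 stores_demo
    -- 2. create an external space for the index:
    --      onspaces -c -x bts_extspace -l /work1/informix/bts_idx
    -- 3. define the bts virtual processor in $ONCONFIG:
    --      VPCLASS bts,noyield,num=1
    -- 4. create the index with the bts access method:
    CREATE INDEX bts_idx ON customer (company bts_char_ops)
        USING bts IN bts_extspace;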

And now, just to validate that the BTS blade is up and running, let's run a quick test. We'll run the following query:

           SELECT company FROM customer WHERE bts_contains(company, '%all');

BTS Select

And so now you should have the BTS set up and ready to run.

In our final blog on the BTS we will go over some performance tuning and how to troubleshoot some issues.

Mark Jamison

BTS part Three : performance tuning and troubleshooting

cheetahblog Tags: performance tuning troubleshooting bts
BTS Part 2
BTS Part 1

So now that we have set up the Basic Text Search blade, or BTS for short, the next question is what performance tuning tools are available, and how do we troubleshoot problems?

BTS comes with four functions in addition to the bts_contains function. They are:

  • bts_tracefile: This function lets you designate the location and name of your trace file.
  • bts_tracelevel: This enables tracing at the session level for BTS calls.
  • bts_index_compact: Think of this function as the index cleaner for the bts blade.
  • bts_release: Allows you to skip using blademgr to find out what version of the blade you are using.

Performance Tuning:

The biggest performance tuning tool is bts_index_compact. From an administrative standpoint, we treat BTS indices the same way we treat btree indices, with one exception: there is no automated thread that goes out and cleans BTS indices (for btree indices we have btscanner threads). bts_index_compact is the BTS version of what the btscanner thread does. So from a performance standpoint, you should frequently run bts_index_compact in order to decrease the size, both in disk space and number of elements, and thus increase performance. Please note that you can automate this process by creating your index with delete='immediate' in your statement. This eliminates the need for compacting the index, but makes all deletes run longer. A better option would be to set up a function using the admin API to periodically run bts_index_compact on your indices, thus creating a poor man's btscanner for BTS, so to speak.
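The "poor man's btscanner" idea could be sketched as a Scheduler task in the sysadmin database; the column list follows ph_task, but the exact bts_index_compact argument form and the index name are assumptions for illustration:

    database sysadmin;
    insert into ph_task (tk_name, tk_description, tk_type, tk_group,
                         tk_execute, tk_start_time, tk_frequency)
    values ("compact_bts_idx", "Nightly compaction of a bts index",
            "TASK", "PERFORMANCE",
            "execute function bts_index_compact('bts_idx')",
            datetime(02:00:00) hour to second,   -- run at 2 AM
            interval(1) day to day);             -- once per day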

Another performance tuning tip is to make sure that you have created your extspaces on fast disk. If you create an extspace on an NFS mount, you are basically asking for horrible performance. A little bit about extspaces may be in order. External spaces, or extspaces, are exactly what they sound like: spaces external to the database proper. This means that you cannot use normal onstats to monitor an extspace.

Troubleshooting BTS Issues:

So now you have run into a problem. Maybe it is something as simple as not remembering where your extspace is actually located; maybe you are hitting bigger problems. In either case, this is when you need to enable BTS tracing. The process is fairly straightforward, as BTS provides two functions to handle it. First call bts_tracefile to set your trace file location, then call bts_tracelevel, which will dump information to the file you specified.

Let's say that you had just forgotten the syntax and location of your bts extspace. The quickest way to find it would be to run the following:

execute function bts_tracefile('/tmp/bts.log');
execute function bts_tracelevel(10);
select * from customer where bts_contains(company,'all~');

After that, just read the file bts.log.

Below is a snippet from the trace file generated from the above query:

BTS log

Taking a quick look at this you can see what your extspace info is.

database name = stores_demo
owner name = informix
ext space name = bts1

You also now know that your extspace is located in /work1/informix/11.10.FC1/bts_idx/stores_demo/informix

A quick word on extspaces. Extspaces are spaces that are external to the database; as such, there is no onstat that monitors them. If you forget the name of your extspace or its location, the above method is essentially your only option for finding them. One other important note about extspaces: if you want to back them up, there is only one supported method, which is as follows:

onmode -c block
External backup of external space
External backup of the instance
onmode -c unblock
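As a sketch, those four steps might look like this; the copy commands and paths are illustrative, and any OS-level copy tool will do:

    onmode -c block                                     # block the instance
    cp -r /work1/informix/11.10.FC1/bts_idx /backup/    # external copy of the extspace
    # ... external copy of the instance chunks ...      # external backup of the instance
    onmode -c unblock                                   # resume normal operation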

What the above means is that, rather than going through these steps to safely back up your BTS indices, you may simply plan on rebuilding your BTS indices in the case of a restore.

One other consideration when troubleshooting BTS issues: BTS does not allow multiple bts_contains calls in the predicate portion of an SQL statement.

For example

select * from customer where bts_contains(company, 'fred') or bts_contains(company, 'all')

would fail with the error:

(BTS22) - bts_error  bts_contains requires an index on the search column

This concludes my series on the bts blade.

Mark Jamison

Introduction to the Binary Datablade

cheetahblog Tags: datablade bdt
So now that we've covered the BTS blade, let's look at another datablade. Although first introduced in 10.00.FC6, the Binary Datablade (BDT for short) still needs some introduction.

The BDT blade is designed to allow for the storage and manipulation of binary-encoded strings. It supports two new datatypes and allows logical operations like AND, OR, XOR, and NOT to be used against these datatypes.

Like any official datablade, it is registered/installed by use of the blademgr utility.

Registering the BDT

After registering it in your database, you have two additional UDTs:

  • binaryvar - a UDT that stores a binary string of up to 255 bytes
  • binary18 - a UDT that stores a fixed 18-byte binary string. If the value inserted into this UDT is less than 18 bytes, the field is right-padded with zeroes.

Binary data is entered into the columns as ASCII hex values. You can choose to preface the value with "0x" or not. The data must also be entered in pairs, since a byte of information runs from 00 to FF in hex. This means that inserting a value with an odd number of hex digits will generate an error, while a value with an even number of digits will not.

Currently the above datatypes can be manipulated by the following functions:

  • bit_and() - performs a logical AND on two BDT datatypes, or one BDT datatype and a string constant.
  • bit_complement() - performs a logical NOT on a single BDT datatype.
  • bit_or() - performs a logical OR on two BDT datatypes, or one BDT datatype and a string constant.
  • bit_xor() - performs a logical XOR on two BDT datatypes, or one BDT datatype and a string constant.

Also the following aggregates support the BDT datatypes:
  • MAX()
  • MIN() 
Please note that the following is not supported:
  • LIKE

The BDT blade supports b-tree indices for both single-column and composite indices.

So now that we have the details, let's create an example. We will create a table in the stores_demo database called bdt_example and insert some rows.

CREATE TABLE bdt_example (bin_id serial, bin_val binary18) ;
INSERT into bdt_example values (0, '102039847137');
INSERT into bdt_example values (0, '0xFE');
INSERT into bdt_example values(0, '49424d434f52504f524154494f4e32303036');
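Building on the rows just inserted, here is a hedged sketch of one of the bitwise functions; the mask value is arbitrary:

    SELECT bit_and(bin_val, '0xFFFFFFFF0000000000000000000000000000')
      FROM bdt_example;

The mask keeps the first four bytes of each stored value and zeroes out the rest.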

So let's run:

SELECT DISTINCT (bin_val) FROM bdt_example;

BDT Select Distinct

As you can see, the binary18 values that were inserted with less than 18 bytes had zeroes padded on the end.

So let's see what the complement of the data would be:

SELECT bit_complement(bin_val) from bdt_example;

select with bit complement

Again, you can see where the logical NOT was run against these three values.

The BDT datablade also has some additional diagnostic-type functions. They are:

  • BDTRelease() - This function tells you what version of the BDT datablade you are using.
  • BDTTrace(filename) - This function specifies the location of the debug/trace file for the BDT datablade.
  • LENGTH(column) - This function returns the length, in bytes, of the BDT column specified.
  • OCTET_LENGTH(column) - This function returns the length, in bytes, of the BDT column specified.

LENGTH and OCTET_LENGTH return the same values for BDT datatypes.

One additional addendum: the BDT is fully supported for use in ER.

Additional information on the Binary Datablade can be found in the Built-In DataBlade Modules User's Guide.

Mark Jamison

Dynamically Changing Configuration parameters for Enterprise Replication

cheetahblog Tags: dynamic er configuration
In IDS 11 you can dynamically change, add, or remove Enterprise Replication configuration parameters. The changes take effect immediately, while replication is running. Following are the three commands to use, which are explained in detail along with examples.

cdr change config

Use the cdr change config command to replace the existing value of an Enterprise Replication configuration parameter with a new value while replication is active. All Enterprise Replication configuration parameters and environment variables can be changed with this command. The value persists while replication is active; the ONCONFIG file is not updated. If you want to make the change permanent, update the values in the ONCONFIG file instead. The list of configuration parameters and environment variables with which this command can be used is shown in Table 1.1.

For example, suppose the CDR_SUPPRESS_ATSRISWARN configuration parameter is set to suppress the generation of error and warning messages 1, 2, and 10, so that it appears in the ONCONFIG file as: CDR_SUPPRESS_ATSRISWARN 1,2,10. The following command changes the suppressed error and warning messages to 2, 3, 4, 5, and 7.

Before changing the values you can confirm the existing values by doing:

$onstat -g cdr config | grep SUP
   CDR_SUPPRESS_ATSRISWARN configuration setting:1-2,10   

Now let's change the values to 2, 3, 4, 5, and 7:

$cdr change config "CDR_SUPPRESS_ATSRISWARN 2-5,7"
 WARNING:The value specifed updated in-memory only.

The warning clearly indicates that the changes are in memory only; once replication is stopped and restarted, the original values from the ONCONFIG file take effect again. Now let's confirm the new values:

$onstat -g cdr config | grep SUP
   CDR_SUPPRESS_ATSRISWARN configuration setting:2-5,7   

cdr add config

Use the cdr add config command to add one or more values to an Enterprise Replication configuration parameter while replication is active. The value persists while replication is active; the ONCONFIG file is not updated. This option is available only for configuration parameters and environment variables that allow multiple values. The allowed configuration parameters and environment variables are listed in Table 1.1.

Following the "change config" example, if you want to add another error and warning message number for suppression to the existing list of 2, 3, 4, 5, and 7, say number 10, you can do so with the following command.

$cdr add config "CDR_SUPPRESS_ATSRISWARN 10"
 WARNING:The value specifed updated in-memory only.

We can confirm the new values from "onstat -g cdr config" as follows:

$onstat -g cdr config | grep SUP
   CDR_SUPPRESS_ATSRISWARN configuration setting:2-5,7,10   

cdr remove config

Use the cdr remove config command to remove an existing value of an Enterprise Replication configuration parameter while replication is active. The value persists while replication is active; the ONCONFIG file is not updated. The allowed configuration parameters and environment variables are listed in Table 1.1.

Again, following the same example from "add config", let's now try to remove suppression for error and warning messages 2, 3, 4, and 5 and keep 7 and 10, using the "cdr remove config" command as follows:

$cdr remove config "CDR_SUPPRESS_ATSRISWARN 2-5"
 WARNING:The value specifed updated in-memory only.

Confirm your changes:

$onstat -g cdr config | grep SUP
   CDR_SUPPRESS_ATSRISWARN configuration setting: 7,10 

NOTE: These commands change configuration parameters only in memory; they do not update the ONCONFIG file. To update environment variables in the ONCONFIG file, use the CDR_ENV configuration parameter.

The following table shows which kinds of changes are valid for each Enterprise Replication configuration parameter.

Table 1.1
Configuration Parameter | cdr add config | cdr change config | cdr remove config

Vijay Lolabattu

Onbar backup order on Cheetah

cheetahblog

1> Root dbspace, SBLOB space (sbspace), BLOB space, and normal dbspace onbar backup order.
The root dbspace is always first, then SBLOB spaces, then BLOB spaces; normal dbspaces are last.
For example:
The onstat -d output is as follows:

/home/informix/cheetah:onstat -d

IBM Informix Dynamic Server Version 11.10.UC1 -- On-Line -- Up 00:19:47 -- 44672 Kbytes

address number flags fchunk nchunks pgsize flags owner name
44cdf7f0 1 0x60001 1 1 2048 N B informix rootdbs
45ca6ea8 2 0x40001 2 1 2048 N B informix ddbs1
45c26ea8 3 0x40001 3 1 2048 N B informix ddbs2
45ca7a00 4 0x48001 4 1 2048 N SB informix sbdbs
45bbeeb0 5 0x60011 5 1 8192 N BB informix bdbs
5 active, 2047 maximum

Note: For BLOB chunks, the number of free pages shown is out of date.
Run 'onstat -d update' for current stats.

address chunk/dbs offset size free bpages flags pathname
44cdf950 1 1 0 50000 15421 PO-B /home/informix/cheetah/dsk/rootdbs
45c98e38 2 2 0 5000 4947 PO-B ./dsk/ddbs1
45ca73b0 3 3 0 5000 3449 PO-B ./dsk/ddbs2
45ca7b60 4 4 0 5000 4587 4587 POSB ./dsk/sbdbs
Metadata 360 268 360
45bb0e48 5 5 0 5000 ~1250 1250 POBB ./dsk/bdbs
5 active, 32766 maximum

After running onbar -b -w, the backup order can be seen in the bar_act.log as follows:

2007-06-15 04:41:10 5218 5216 /home/informix/cheetah/bin/onbar_d -b -w
2007-06-15 04:41:10 5218 5216 Archive started on rootdbs, sbdbs, bdbs, ddbs2, ddbs1 (Requested Level 0).
2007-06-15 04:41:11 5218 5216 Begin level 0 backup rootdbs.
2007-06-15 04:41:11 5218 5216 Successfully connected to Storage Manager.
2007-06-15 04:41:11 5218 5216 Completed level 0 backup rootdbs (Storage Manager copy ID: 751421824 0).
2007-06-15 04:41:12 5223 5218 Process 5223 5218 successfully forked.
2007-06-15 04:41:12 5223 5218 Begin level 0 backup sbdbs.
2007-06-15 04:41:12 5223 5218 Successfully connected to Storage Manager.
2007-06-15 04:41:12 5224 5218 Process 5224 5218 successfully forked.
2007-06-15 04:41:12 5224 5218 Begin level 0 backup bdbs.
2007-06-15 04:41:12 5224 5218 Successfully connected to Storage Manager.
2007-06-15 04:41:12 5225 5218 Process 5225 5218 successfully forked.
2007-06-15 04:41:12 5225 5218 Begin level 0 backup ddbs2.
2007-06-15 04:41:12 5226 5218 Process 5226 5218 successfully forked.
2007-06-15 04:41:12 5226 5218 Begin level 0 backup ddbs1.

2> Backup order of normal dbspaces
The onbar -b -w backup order for normal dbspaces does not depend on the order in which the dbspaces were created, but on how much space is used in each dbspace.
In the example above, ddbs2 has more used pages than ddbs1 (1551 versus 53), so onbar backs up ddbs2 first and then ddbs1, regardless of creation order.
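The size-versus-free arithmetic can be sketched with standard shell tools. This is an illustration, not something from the original post: it ranks chunks by used pages from a saved copy of the name, size, and free columns of the onstat -d chunk section.

```shell
# Rank dbspace chunks by used pages (size - free) to predict the onbar
# backup order of normal dbspaces. The two data lines are taken from the
# onstat -d example above (name, size, free).
cat > /tmp/chunks.txt <<'EOF'
ddbs1 5000 4947
ddbs2 5000 3449
EOF

# Print "name used_pages", most-used first -- the expected backup order.
awk '{ printf "%s %d\n", $1, $2 - $3 }' /tmp/chunks.txt | sort -k2,2nr
# Output:
#   ddbs2 1551
#   ddbs1 53
```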


Improved Statistics Maintenance

cheetahblog Tags:  explain sampling statistics
  • Automatic statistics collection.

IDS now automatically collects index statistics, equivalent to the statistics gathered by UPDATE STATISTICS in LOW mode, when you create a B-tree index on a UDT column of an existing table or when you create a functional or VII index on a column of an existing table. Statistics that are collected automatically by this feature are available to the query optimizer, removing the need to manually run UPDATE STATISTICS. When B-tree indexes are created, column statistics are collected on the first index column, equivalent to what UPDATE STATISTICS generates in MEDIUM mode, with a resolution of 1% for tables of fewer than a million rows, and 0.5% for larger tables.

  • Query statistics in EXPLAIN output.

You can now view statistics about completed queries in the new Query Statistics section of SET EXPLAIN output. This section gives detailed statistics for each scan, join, and sort, such as the estimated cost, the number of rows scanned, the estimated number of rows, and the number of rows produced at each level. The EXPLAIN_STAT configuration parameter enables or disables the inclusion of the Query Statistics section in the explain output file. You can generate the output file by using either the SET EXPLAIN statement of SQL or the onmode -Y sessionid command. Following is an example.
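A minimal way to generate such output is sketched below; the tables are from the standard stores demo database and are an assumption, not the original example.

```sql
-- Turn on explain output for this session; the next query's plan and
-- its Query Statistics section are written to sqexplain.out.
SET EXPLAIN ON;

SELECT c.customer_num, o.order_num
  FROM customer c, orders o
 WHERE c.customer_num = o.customer_num;
```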


In explicit UPDATE STATISTICS operations in MEDIUM mode, a new SAMPLING SIZE option in the Resolution clause can specify the minimum number of rows to sample for column distributions.

If the Resolution clause omits the RESOLUTION keyword and specifies no confidence level and no percent value, then the number of rows that Dynamic Server samples will be the larger of the following two values:

    * The minimum value that you specify immediately after the SAMPLING SIZE keywords
    * The sampling size that is required for the default percent of rows in each bin (2.5%) and for the minimum confidence level (0.80).

If a sampling size is specified in a Resolution clause that includes explicit values for both the average percent of sampled rows per bin and for the confidence level, then the number of sampled rows will be the larger of these two values:

    * The minimum value that you specify immediately after the SAMPLING SIZE keywords
    * The sampling size that is required for the specified percent of rows and for the specified confidence level.

If a sampling size is specified in a Resolution clause that includes an average percentage value but sets no confidence level, then the minimum confidence value of 0.80 is used to calculate the actual sampling size for Dynamic Server to use if the specified size is smaller.

For example, the following statement calculates statistics for two columns of the customer table, without updating index information. At least 200 rows will be sampled, but the actual size of the sample might be larger than 200 if more rows are required to provide the default 0.80 confidence level for a sample distribution that uses 50 equivalence categories, with an average percentage of 2% of the sampled values in each bin.
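A statement matching that description might look like the following sketch (the two column names are hypothetical):

```sql
-- MEDIUM-mode distributions only (no index statistics are updated);
-- sample at least 200 rows, with 2% of sampled values per bin (50 bins).
UPDATE STATISTICS MEDIUM FOR TABLE customer(customer_num, lname)
    SAMPLING SIZE 200 RESOLUTION 2.0
    DISTRIBUTIONS ONLY;
```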


Whether or not you include an explicit SAMPLING SIZE specification in the Resolution clause, Dynamic Server records in the system catalog the actual sampling size (as a percentage of the total number of rows in the table) at the time of MEDIUM mode UPDATE STATISTICS creation.

  • New FILE TO option to redirect SET EXPLAIN output to a file of your choice.

When you execute a SET EXPLAIN FILE TO statement, explain output is implicitly turned on. The default filename for the output is sqexplain.out until changed by a SET EXPLAIN FILE TO statement. Once changed, the filename remains set until the end of the session or until it is changed by another SET EXPLAIN FILE TO statement.

The filename can be any valid combination of an optional path and a filename. If no path component is specified, the file is placed in your current directory. The file is owned by the current user. The output file that you specify in the SET EXPLAIN statement can be a new file or an existing file. In Dynamic Server, if the FILE TO clause specifies an existing file, the new output is appended to that file. Following is an example where the explain output is sent to the file "explain.vijay" in the "/tmp" directory.

SET EXPLAIN FILE TO '/tmp/explain.vijay';

These features reduce the need for DBAs to perform UPDATE STATISTICS on indexes, enabling better query plans and ease of use.

Vijay Lolabattu

Summary of new SQL enhancements in IDS 11.

cheetahblog Tags:  enhancements sql trigger directive optimizer
Derived Tables in the FROM Clause of Queries

The SELECT statement can now include syntax that complies with ISO/IEC 9075:1992, the SQL-92 standard, to specify a full select subquery in the FROM clause as a data source for the query. These subqueries are called derived tables or table expressions. They can be simple, UNION, or joined subqueries, including OUTER joins, and can include the ORDER BY clause. In addition, AS correlation specifications in the FROM clause can declare temporary names for columns within the query. Informix-extension syntax, such as the FUNCTION keyword with iterator functions or the TABLE (MULTISET (SELECT ...)) keywords for collection-derived tables, can now be replaced in the FROM clause by SQL-92 syntax. This feature expands the capability of Informix Dynamic Server to run, without modification, queries that are interoperable with other database servers that support industry-standard SQL syntax.

You can find examples using derived tables in the demo directory:$INFORMIXDIR/demo/dbaccess/demo_ud/sel_sql99.sql.

Optimizer Directives in ANSI-Compliant Joined Queries

Earlier IDS versions supported optimizer directives in Informix-extension joined queries, but not in queries that used ANSI/ISO syntax to specify joins. For both inline directives and external directives, this release extends support in ANSI/ISO joined queries to the following classes of optimizer directives:
  • Explain-mode directives (EXPLAIN, AVOID_EXECUTE)
  • Optimization-goal directives (ALL_ROWS, FIRST_ROWS).
The join-order directive (ORDERED) is supported only in ANSI/ISO-compliant LEFT OUTER joins and INNER joins. Because of ordering requirements for OUTER joins, in ANSI-compliant joined queries that specify the RIGHT OUTER JOIN or FULL OUTER JOIN keywords, the ORDERED join-order directive is ignored, but it is listed under Directives Not Followed in the sqexplain.out file.
This feature does not support the join-method directives (USE_NL, AVOID_NL, USE_HASH, AVOID_HASH, /BUILD, and /PROBE) in ANSI/ISO joined queries, except in cases where the optimizer rewrites the query so that it no longer uses the ANSI/ISO syntax.

Trigger Enhancements

Several new features expand the syntax and the functionality of triggers on tables and on views:

  • You can now define multiple INSERT, DELETE, UPDATE, and SELECT triggers on a table, and multiple INSTEAD OF triggers on a view.
  • When a table, view, or column list has multiple triggers for a DML event type, Informix Dynamic Server executes all BEFORE triggered actions before the FOR EACH ROW actions, and executes all FOR EACH ROW actions before the AFTER actions.
  • You can create SPL procedures that refer to applicable OLD and NEW trigger correlated values. Within the procedure you can access applicable OLD and NEW values and modify the NEW values, for example by using LET statements. From a FOR EACH ROW trigger action, you can execute this SPL procedure [syntax: execute procedure foo() with trigger references].
  • New Boolean operators (DELETING, INSERTING, SELECTING, and UPDATING) can be used in procedures executed from trigger action statements. These test whether the currently executing triggered action was triggered by the specified type of DML event and return a Boolean value. The IF statement of SPL and the CASE expression of SQL can specify these operators as the condition in a trigger routine.

These features make it easier to incorporate IDS triggers on tables and on views within a heterogeneous information management system where multiple applications need to share the table or view.
You can find examples using multiple triggers in the demo directory: $INFORMIXDIR/demo/dbaccess/demo_ud/cr_trig.sql.
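Sketched concretely, the multiple-trigger pattern and the with trigger references syntax quoted above might be used as follows; the table, column, and procedure names are hypothetical.

```sql
-- SPL procedure that can see the OLD/NEW correlated values of the triggering row.
CREATE PROCEDURE audit_change()
    REFERENCING OLD AS o NEW AS n FOR accounts;
    IF (UPDATING) THEN
        -- Modify the NEW row before it is written.
        LET n.upd_count = o.upd_count + 1;
    END IF;
END PROCEDURE;

-- Two UPDATE triggers on the same table -- now allowed in IDS 11.
CREATE TRIGGER upd_trig1 UPDATE ON accounts
    FOR EACH ROW (EXECUTE PROCEDURE audit_change() WITH TRIGGER REFERENCES);

CREATE TRIGGER upd_trig2 UPDATE ON accounts
    FOR EACH ROW (INSERT INTO audit_log VALUES (CURRENT YEAR TO SECOND));
```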

Index Self-Join Query Plans

In earlier Informix Dynamic Server versions, queries of tables with composite indexes performed inefficiently if the ratio of duplicate values to the number of distinct values was much higher for the leading columns than for subsequent columns of the index. A new feature of the query optimizer supports a new type of index scan, called an index self-join path, that uses only subsets of the full range of a composite index. The table is logically joined to itself, and the more selective non-leading index keys are applied as index bound filters to each unique combination of the leading key values. By default, the optimizer considers this type of scan.

The optimizer also supports two new join-method directives, INDEX_SJ and AVOID_INDEX_SJ. The INDEX_SJ directive forces an index self-join path using the specified index, or choosing the least costly index in a list of indexes, even if data distribution statistics are not available for the leading index key columns. The AVOID_INDEX_SJ directive prevents a self-join path for the specified index or indexes. This feature can improve query performance on tables with composite indexes.

Enhanced Concurrency with Committed Read Isolation

In the Committed Read isolation level, exclusive row-level locks held by other sessions can cause SQL operations to fail when attempting to read data in the locked rows. This release introduces a new LAST COMMITTED keyword option to the SET ISOLATION COMMITTED READ statement to reduce the risk of locking conflicts when attempting to read a table. This new syntax instructs IDS to return the most recently committed version of the rows, even if another concurrent session holds an exclusive row-level lock. This behavior can be extended to the Dirty Read, Read Uncommitted, and Read Committed isolation levels by setting the new USELASTCOMMITTED configuration parameter or through new options to the SET ENVIRONMENT statement.
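In a session, the option is enabled with a single statement; a minimal sketch:

```sql
-- Read the last committed version of a row instead of waiting on
-- (or failing because of) another session's exclusive row lock.
SET ISOLATION COMMITTED READ LAST COMMITTED;
```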

This feature supports B-tree indexes and functional indexes, but not R-tree indexes. It does not support tables that are being accessed by DataBlade modules, tables with columns of collection data types, tables created using the Virtual Table Interface, tables with page-level locking, tables with exclusive table-level locks, unlogged tables, or tables in databases with no transaction logging.

Enhanced Data Types and UDR Support in Cross-Server Distributed Queries

Earlier releases of IDS restricted the remote execution of UDRs in databases of other IDS instances to SPL routines that the EXECUTE FUNCTION or EXECUTE PROCEDURE statement invokes explicitly, and to SPL routines that queries and other DML operations invoke implicitly.

This release extends support for UDRs in cross-database and cross-server distributed operations to most contexts where a UDR is valid in the local database. In addition, external routines written in the C or Java languages are now valid in any distributed operation where an SPL routine is valid. This release also extends the data types that are valid as parameters or return values of cross-server UDRs, which were formerly restricted to non-opaque built-in SQL data types, by supporting these additional data types:

– DISTINCT of built-in types that are not opaque
– DISTINCT of the DISTINCT types listed above.

These data types can be returned by SPL, C, or Java language UDRs that use these data types as parameters or as return values, if the UDRs are defined in all the participating databases. Any implicit or explicit casts defined over these data types must be duplicated across all the participating Dynamic Server instances. The DISTINCT data types must have exactly the same data type hierarchy defined in all databases that participate in the distributed query.

This feature does not relax existing restrictions on other opaque and DISTINCT types, or on large-object, serial, and collection data types, in locally or remotely executed SPL routines or external routines.
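As a sketch (the database, server, and routine names are hypothetical), a cross-server UDR call now looks like an ordinary remote-qualified call:

```sql
-- Call a UDR defined in database "stores" on remote server "remote_ids";
-- the routine and any casts over its types must also be defined in the
-- other participating databases.
EXECUTE FUNCTION stores@remote_ids:sales_total(2007);
```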

Vijay Lolabattu

What is MACH11?

cheetahblog Tags:  er rss hdr sds mach11
Multi-node Active Cluster for High Availability (MACH11) is the code name for the new feature called Continuous Availability, introduced in Informix Dynamic Server (IDS) 11. This new feature significantly expands the high availability options in IDS to provide increased failover capability, capacity, flexibility, and scalability.

Traditionally, IDS has provided multiple robust solutions for supporting high availability data replication. Previous releases of IDS have supported two replication technologies: Enterprise Replication (ER) and High Availability Data Replication (HDR). By using these technologies together, customers are able to achieve very high levels of data availability. Both replication technologies can be integrated with each other and can coexist with other availability solutions such as disk mirroring.

HDR and ER have been features of IDS for many years and have proven to be highly reliable, low-maintenance technologies. IDS now adds support for two new types of secondary servers:

  • Secondary servers that share the same physical disk. Shared Disk Secondary (SDS) servers provide increased availability by allowing one or more instances of the IDS server to attach to the same disk subsystem, providing redundancy for the server in addition to data redundancy solutions.
  • Additional remote secondary servers. Remote Standalone Secondary (RSS) servers extend HDR to provide multiple local or remote backup servers that also replicate the data.

Both SDS and RSS servers give customers a way to obtain increased capacity by distributing workload across multiple servers. Customers can choose any of these solutions on their own, but they become even more powerful when all three types of topologies are combined. By adding Enterprise Replication, a completely customized availability solution can be delivered to meet each unique availability requirement. These configurations are simple to set up and maintain, and are highly scalable.

The following image depicts a Continuous Availability solution with three-tier protection. For example, if the Primary in Building-A in New Orleans went down for some reason, the role of the primary can easily be switched to one of the SDS servers running on the blade server in Building-B. This would cause all other secondary servers to automatically connect to the new primary server. If both the servers in New Orleans died, Memphis becomes your Primary, Denver can be made the HDR secondary, and you might even add some SDS servers to the Memphis blade server for load balancing.

To learn more about this feature, please read the following white paper written by the architects of this feature:

Whitepaper on MACH11 technology.

Vijay Lolabattu

Optimizer Directives in ANSI Join Queries

cheetahblog Tags:  directives optimizer joins ansi
In IDS 11.10, all optimizer directives, including external directives, can be used in queries that use ANSI joins. Using optimizer directives might enable you to force a favorable query execution path, thus improving performance.

Here is a description of the different optimizer directives and how they work with ANSI join queries.

Optimization Goal Directives (FIRST_ROWS, ALL_ROWS):
    Enabled for ANSI join queries.

Access Method Directives (FULL, INDEX, AVOID_INDEX, AVOID_FULL, INDEX_SJ, AVOID_INDEX_SJ):
    Enabled for ANSI join queries.

Explain-Mode Directives (EXPLAIN, AVOID_EXECUTE):
    Enabled for ANSI join queries.

External Directives:
    Enabled for ANSI join queries.

Join Method Directives (USE_HASH, USE_NL, AVOID_HASH, AVOID_NL, /BUILD, /PROBE):
    Enabled for ANSI inner joins.
    Not enabled for ANSI inner joins in views.
    Enabled for ANSI outer joins that can be transformed into ANSI inner joins.
    Disabled for all other ANSI outer joins.

Join Order Directives (ORDERED):
    Enabled for ANSI inner joins.
    Enabled for ANSI left outer joins if the directive doesn't conflict with the ordering requirement for outer joins.
    Disabled for all other ANSI outer joins.
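As an illustration (a sketch using the standard stores demo tables, not taken from the original post), an inline directive in an ANSI join query might look like this:

```sql
-- Force a full scan of orders in an ANSI inner join; the {+ ... } comment
-- form carries the optimizer directive.
SELECT {+ FULL(o) }
       c.customer_num, o.order_num
  FROM customer c INNER JOIN orders o
    ON c.customer_num = o.customer_num;
```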

Suma Vinod

OpenAdmin Tool for IDS - new version

cheetahblog Tags:  oat openadmin idsadmin
A new version of OpenAdmin Tool for IDS (formerly known as IDSAdmin) is available to download from the IDS 11.10 Open Beta site (sign-in required). This new version has a simpler installation, new graphics, and many new features.

Please keep in mind that:
  • The product is still listed as IDSAdmin on the Beta download site.
  • The Readme file downloadable from the Beta site is (at the time of writing) the out of date one from March - do not use it, instead unzip the oatids package and refer to the Readme in there.

Here is a copy of the current Readme:

OpenAdmin Tool For IDS - v2.10

- July 2007

OpenAdmin Tool For IDS (OAT) is a PHP-based administration console which can be used to administer one or more IBM Informix Dynamic Server 11.10 instances.


OAT requires the following products to be installed:
  • A Webserver (Apache 2.2.3)
  • IBM I-Connect or CSDK (3.00)
  • PHP 5 compiled with PDO, PDO_SQLITE, GD and SOAP enabled. (5.2.2)
  • Informix PDO Module.

Note: the versions in brackets indicate the versions that OAT has been tested with.

Installation Instructions

1. Install and set up a working web server that has been configured toserve php pages.
(For more information see the NOTES section below.)

2. Update the php configuration file.

  • edit the php.ini
  • add two new lines to the 'extension' section of the configuration file if they are not present:


  • modify the parameter memory_limit to 256M.
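The two extension lines themselves did not survive in this copy of the Readme; for a typical Informix PDO setup on Linux they would resemble the following sketch (the module filenames are an assumption — check the names your PHP build actually produced):

```ini
extension=pdo.so
extension=pdo_informix.so
```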

3. Install IBM I-Connect or Client SDK.

4. Install the OpenAdmin Tool For IDS package.
  • Extract the OAT package into your web server document root directory.
  • Examples:
    /usr/local/apache2/htdocs/oat (UNIX/Linux)

    c:\xampp\htdocs\oat (Windows)
5. Change the ownership of the "<OAT>/install" directory to the user and group that runs the apache (httpd) server.
  • Find the user and group that runs the apache (httpd) server from the httpd.conf.
  • chown <user>:<group> <OAT>/install (Unix/Linux)

6. Start the webserver, making sure INFORMIXDIR points to the Client SDK or I-Connect install location in the webserver environment.

7. Launch the OAT installer by visiting the web page:

SERVERNAME = the name of your machine
LOCATION = where you extracted the tar file (e.g. oat )

Follow the instructions in the OAT installation screens, including obtaining an optional Google Maps API key for your domain if required.

Once installation is complete, point your browser at the OAT root URL. From there you can click on Admin and add a new IDS 11.10 Connection to the default group or create a new group. If using an I-Connect or Client SDK version prior to 3.0, then for each new connection you add there needs to be a corresponding SQLHOSTS entry for that connection on the webserver machine.

Once a connection is created, the "Get Servers" link from the OAT root page will retrieve the list of connections you have created.


Installing an Apache webserver with PHP is not a straightforward task; thankfully there are 3rd party solutions available. A popular choice is XAMPP.


XAMPP is an easy to install Apache distribution containing MySQL, PHP, and Perl. XAMPP can be obtained from:

The XAMPP version 1.6.2 for Windows contains everything you need to run OAT (with the exception of IBM I-Connect or CSDK). Download and installation information is available at:

XAMPP (also called LAMPP) is also available for Linux; however, you will need to download the 'Development Package' in order to compile the Informix PDO Driver. Download and installation information is available at:

Informix PDO Driver

The following developerWorks article contains useful information to assist with building an Informix PDO driver:

Guy Bowerman

Derived Table Support in IDS 11.10

cheetahblog Tags:  iterator funtion derived table multiset
As mentioned in this blog, IDS 11.10 allows SELECT statements to include syntax to specify a full select sub-query in the FROM clause. I would like to describe it with some examples here. In IDS 10, if you had to use sub-queries in the FROM clause, you had to cast the sub-query statement to a multiset and then to a table. Here are some example SELECT statements from prior IDS versions:
    SELECT * FROM table(multiset(select a, b, c from tab1, tab2 where tab1.a = tab2.c)) vt(va, vb, vc), emptab WHERE =;
    a, b, c: columns of any data type

    SELECT * from table((select coll_dat from sales_tab)) AS c_table(s_month, s_day);
    coll_dat: should be a collection data type column

    SELECT * from table(function f1());
    f1(): an iterator function

In IDS 11.10, the above statements can be written in the same way or you can re-write them as:
    SELECT * FROM (select a, b, c from tab1, tab2 where tab1.a = tab2.c) vt(va, vb, vc), emptab WHERE =;

    SELECT * from (select coll_dat from sales_tab) AS c_table(s_month, s_day);

    SELECT * from table(f1());
    f1(): f1() can be any function

These sub-queries in the FROM clause are called derived tables or table expressions. You can use simple, UNION, or joined sub-queries, including OUTER joins, and can include the ORDER BY clause. You can also write nested FROM clause queries and can combine old and new syntax in a SQL statement. Here is an example:
    select * from
        (select * from
            (select col1 from tab3 where col1 = 10 order by col2 desc)),
        table(multiset(select col1 from tab2 where col1 = 20))
    order by 1;

The derived table support in IDS 11.10 enhances its capability to execute queries that comply with the SQL-92 standard without any modifications.

Suma Vinod

OpenAdmin Tool for IDS XAMPP tutorial updated

cheetahblog Tags:  idsadmin oat
In March an article on installing IDSAdmin in a XAMPP environment was published on this blog. This article has just been updated by Inge Halilovic with instructions and screen shots for the latest available version of OpenAdmin Tool downloadable from the Free Product Download Site. The new version of OAT has improved graphics, simpler installation, new features such as support for Mach 11 topology, and several bug fixes.

Guy Bowerman

Automatic re-prepare in IDS 11.10

cheetahblog Tags:  online index re-optimize re-prepare recompile -710
Auto Reprepare is a feature in IDS 11.10 to minimize the occurrence of -710 errors in applications.

What is a -710 error?
User applications can get -710 errors while executing an explicitly prepared statement if tables referenced have been changed after the statement was prepared. Applications can get -710 errors while executing a stored procedure if the tables referenced indirectly by them have been changed after the procedure was optimized. Let us look at some examples of prepared statements where you can get a -710.
    $prepare s1 from "select c1, c2 from t710";
    $execute s1;
    $create index i1 on t710(c1);
    $execute s1;    -------> -710 error

    $prepare s1 from "select * from t710 where c1 = 10";
    $declare curs1 cursor for s1;
    $create index i1 on t710(c1);
    $open curs1;    -------> -710 error

    create procedure p1(c_a int, c_b int) returning integer;
    insert into A values(1001, 1001);
    update A set b=c_b where A.a=c_a;  ==> (an update trigger defined on A inserts into table B)
    return 0;
    end procedure;

    create procedure p2() returning integer;
    define i integer;
    let i=p1(56, 56);
    create index i1 on B(b);
    return p1(56, 56);    --> -710 error when p1 is executed
    end procedure;
When this feature is enabled, if the server detects at execution time that there has been a change to the table or tables referenced by a prepared statement, it dynamically re-prepares and re-optimizes that statement. The statement is executed if the re-optimization succeeds. Similarly, if the server detects that there has been a change to a table or tables referenced by a stored procedure at execution time, it dynamically re-optimizes the stored procedure. Note that stored procedures get dynamically optimized in some cases even when the feature is disabled. Let us look at the previous examples with this feature enabled:
    $prepare s1 from "select c1, c2 from t710";
    $execute s1;
    $create index i1 on t710(c1);
    $execute s1;    -------> re-prepares and re-optimizes the statement and executes it

    $prepare s1 from "select * from t710 where c1=10";
    $declare curs1 cursor for s1;
    $create index i1 on t710(c1);
    $open curs1;    -------> re-prepares and re-optimizes the statement and executes it

    create procedure p1(c_a int, c_b int) returning integer;
    insert into A values(1001, 1001);
    update A set b=c_b where A.a=c_a;  ==> (an update trigger defined on A inserts into table B)
    return 0;
    end procedure;

    create procedure p2() returning integer;
    define i integer;
    let i=p1(56, 56);
    create index i1 on B(b);
    return p1(56, 56);    --> re-optimizes p1() and executes it
    end procedure;
If you have queries that reference tables on remote IDS servers, as long as the changes are on tables that are directly referenced in your SQL statement, you will not get any -710 errors. But if a table referenced indirectly in your SQL statement changes, you will get -710 errors unless you have Auto Re-prepare enabled on the remote IDS server.

You can still get -710 errors in your applications in one of the following scenarios:
  • The number and type of columns in your SELECT list have changed.
  • If you are executing a prepared DDL statement, you might get -710 errors.
  • A rare race condition in the server can lead to a -710 error.

This feature is enabled by default. To disable it, set the ONCONFIG parameter AUTO_REPREPARE to 0 and re-start IDS. You can also disable it at the session level using the SQL statement "set environment IFX_AUTO_REPREPARE '0'".

Suma Vinod

Oracle PL/SQL compatibility in IDS

cheetahblog Tags:  pl/sql continue exit label for loop while
IDS 11.10 supports some of the PL/SQL syntax, thus expanding the capability of IDS. It also facilitates easy migration of applications that use many stored procedures and triggers written in PL/SQL.

This version supports the following syntax:

GOTO statement and Label syntax

    define x integer;
    LET x = 0;
    BEGIN
        <<increment_x>>
        BEGIN
            LET x = x + 1;
        END;

        IF x < 10 THEN
            GOTO increment_x;
        END IF;
    END;
Restrictions on GOTO:
  • A GOTO statement cannot be used in an EXCEPTION block.
  • Labels cannot be defined within an EXCEPTION BLOCK.
  • A label must be unique within a stored procedure.

Loop .. End Loop syntax

    LOOP
        IF credit_rating IS NULL THEN
            CONTINUE;    -- or CONTINUE LOOP
        END IF
        IF credit_rating < 3 THEN
            EXIT;        -- or EXIT LOOP; exits the loop immediately
        END IF
    END LOOP;

Labeled WHILE loop: <<whloop_label>> WHILE … LOOP … END LOOP whloop_label

    WHILE (i < 10) LOOP
        LET i = i + 1;
        IF i < 2 THEN
            CONTINUE;
        END IF
        IF i > 5 THEN
            EXIT;
        END IF
    END LOOP;
You can re-write the above while loop using a label, as shown below:
    <<mywhile>>
    WHILE (i < 10) LOOP
        LET i = i + 1;
        IF i < 2 THEN
            CONTINUE;
        END IF
        IF i > 5 THEN
            EXIT;
        END IF
    END LOOP mywhile;

Labeled FOR loop: <<for_label>> FOR … LOOP … END LOOP for_label

    FOR i IN (1 TO 5) LOOP
        LET i = i + 1;
        IF i < 2 THEN
            CONTINUE;
        END IF
        IF i > 5 THEN
            EXIT;
        END IF
    END LOOP;

Expression-based EXIT statement
  • EXIT WHEN <expr>;
  • EXIT loop_label WHEN <expr>; -- Label Based Exit.
This is especially useful in nested loop statements. Here is an example of a nested loop using labels and EXIT WHEN statements:
    <<outer>>
    LOOP
        LET x = x+1;
        <<inner>>
        FOR i IN (1 TO 10) LOOP
            LET x = x+1;
            EXIT inner WHEN x = 2;
            EXIT outer WHEN x > 3;
        END LOOP inner;
        LET x = x+1;
    END LOOP outer;
Please note that for WHILE and FOR loops you can use both the old and the new syntax.

Suma Vinod

Administration Mode Enhancements in IDS 11.10

cheetahblog Tags:  users admin single onmode user oninit mode
Some customers find the single user mode in IDS 10 limited because only informix and the DBSA group have access to the database server. In IDS 11.10, you can give access to other DBAs and specific users in addition to informix and the DBSA group members when the server is in single user mode. I have listed the steps to give access below:

1. Make sure that the onconfig variable ADMIN_USER_MODE_WITH_DBSA is set to 1.

2. Set the onconfig variable ADMIN_MODE_USERS to a comma-separated user list.
    ADMIN_MODE_USERS tom,don

    When the database server switches to single user mode, informix, the DBSA group, tom and don will have access to
    the database.

3. Start the server in single user mode.
  • oninit -j
    user informix, the DBSA group and ADMIN_MODE_USERS have access

  • oninit -j -U fred, bill
    user informix, DBSA Group and users fred and bill have access

    You can use this command if you do not want to use ADMIN_MODE_USERS or if you want to override its value.

4. Dynamically change the server mode using onmode.
  • onmode -j
    user informix, DBSA group and ADMIN_MODE_USERS have access

  • onmode -j -U fred, bill
    user informix, the DBSA Group, users fred and bill have access

    You can use this command if you do not want to use ADMIN_MODE_USERS or if you want to override its value.

  • onmode -j -U ' '
    user informix and the DBSA group have access

    You can use this command to clear the user list.

Suma Vinod


Terminate Idle Users with the Database Admin System

cheetahblog Tags:  scheduler idsadmin oat openadmin
The Database Admin System is a framework that can simplify many tasks for DBAs, application developers, and end users. In addition, these tasks can be seamlessly integrated into a graphical admin system, such as the OpenAdmin Tool for IDS.

We will examine how a DBA can take advantage of the Database Admin System to solve a real life problem. The problem we are going to explore is how to remove users who have been idle for more than a specified length of time, but only during work hours. Prior to the Database Admin System, a DBA would utilize several different operating system tools, such as shell scripting and cron. In addition, if this is a pre-packaged system, these new scripts and cron entries would have to be integrated into an install script. Lastly, this needs to be portable across all supported platforms.

If you utilize the Database Admin System, you only have to add a few lines to your schema file and you are done. Since this is only SQL, you have the advantage of being portable across different flavors of UNIX and Windows.

The components we are going to utilize are:
  • Database Scheduler
  • Alert System
  • User Configurable Thresholds
  • SQL Admin API

We are going to break the above problem into three separate parts.
  1. Creating a tunable threshold for the idle time out
  2. Develop a stored procedure to terminate the idle users
  3. Schedule this procedure to run at regular intervals

Lastly we are going to view screen shots of the completed work in the OpenAdmin Tool for IDS. This will graphically depict all the tasks in the system and allow users to drill down and see the scheduling details and parameters of a specific task.

Creating an Idle Timeout Threshold

To create a threshold that can be easily changed, we insert a row into the ph_threshold table in the sysadmin database. This table stores all the threshold and configuration values used by the scheduler. When creating a threshold we must supply the following information:

    name: The parameter name
    task_name: The name of the task in the ph_task table associated with this threshold
    value: The value associated with the parameter
    value_type: The data type of this parameter (STRING or NUMERIC)
    description: A description of what this threshold does
    /*
     *  Insert a new parameter that can be updated to
     *  reflect the current idle timeout period,
     *  saving the work of having to re-write the
     *  stored procedure if conditions change.
     *  Allow the OpenAdmin tool to display
     *  this as a configurable item for the task.
     */
    INSERT INTO ph_threshold
        (name, task_name, value, value_type, description)
    VALUES
        ("IDLE TIMEOUT",
         "Idle Timeout",
         "60",
         "NUMERIC",
         "Maximum amount of time in minutes for non-informix users to be idle.");
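Because the timeout value lives in the ph_threshold table rather than in the procedure body, it can be tuned later with a single update. A sketch (the 30-minute value is only an illustration):

```sql
-- Shorten the idle timeout from 60 to 30 minutes (illustrative value)
UPDATE ph_threshold
SET value = "30"
WHERE name = "IDLE TIMEOUT";
```

The scheduler picks up the new value the next time the task runs; no change to the stored procedure is needed.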

Developing a stored procedure to terminate idle users

When creating a stored procedure to be called by the scheduler, it can optionally take two parameters that let the procedure know when it was invoked and what its unique task id is. This is often useful when passing information on to other parts of the Database Admin System, such as the Alert System.
     CREATE FUNCTION idle_timeout( task_id INT, task_seq INT) 

Next we need to retrieve the threshold from the ph_threshold table. This is done using a simple select and casting the result into our desired data type (an integer).
        SELECT value::integer
        INTO time_allowed
        FROM ph_threshold
        WHERE name = "IDLE TIMEOUT"
The main part of our stored procedure is the select statement to find all users who have been idle for more than a specified number of minutes. We select from the systcblst table the last time a thread has executed on a virtual processor. If this time is longer than our predetermined idle threshold, and the thread is an sqlexec thread (i.e. not a system thread), then we pass the session id (sid) to the SQL Admin API call admin(). The admin function has been set up to call onmode -z to terminate the session.
           SELECT admin("onmode","z",A.sid), A.username, A.sid, hostname
           INTO rc, sys_username, sys_sid, sys_hostname
           FROM sysmaster:sysrstcb A, sysmaster:systcblst B,
                sysmaster:sysscblst C
           WHERE A.tid = B.tid
           AND C.sid = A.sid
           AND lower(name) IN ("sqlexec")
           AND CURRENT - DBINFO("utc_to_datetime", last_run_time) >
               time_allowed UNITS MINUTE
           AND lower(A.username) NOT IN ("informix", "root")

The last piece of the stored procedure checks the return code of the admin() procedure to see if the session was successfully terminated. If it was, an alert is inserted, logging the termination of the idle user. The optional arguments to the stored procedure, which uniquely identify the task and the task sequence, are required by the Alert System: they let the Alert System know who generated the alert and when it was generated. Several other items are also required by the Alert System, such as the alert_type (ERROR, WARNING, INFO), the alert_color (GREEN, YELLOW, RED), and a message indicating what has happened.
           INSERT INTO ph_alert
           (
               ID, alert_task_id, alert_task_seq,
               alert_type, alert_color,
               alert_state,
               alert_object_type, alert_object_name,
               alert_message,
               alert_action
           ) VALUES (
               0, task_id, task_seq,
               "INFO", "GREEN",
               "ADDRESSED",
               "USER", "TIMEOUT",
               "User "||TRIM(sys_username)||"@"||TRIM(sys_hostname)||
                   " sid("||sys_sid||")"||
                   " terminated due to idle timeout.",
               NULL
           );

Once an alert is created, the OpenAdmin Tool for IDS will display it under the Health Center menu option, Show Alerts. If you do not see this alert at first, make sure that you have the ADDRESSED checkbox checked.

Showing the OpenAdmin Tool for IDS Alerts

Complete Stored Procedure

Below you will find the complete code for the idle_timeout stored procedure.
    /*
     **************************************************************
     *  Create a function which will find all users that have
     *  been idle for the specified time. Call the SQL Admin API to
     *  terminate those users. Create an alert so we can track which
     *  users have been terminated.
     **************************************************************
     */
    CREATE FUNCTION idle_timeout( task_id INT, task_seq INT )
    RETURNING INTEGER

    DEFINE time_allowed INTEGER;
    DEFINE sys_hostname CHAR(16);
    DEFINE sys_username CHAR(257);
    DEFINE sys_sid INTEGER;
    DEFINE rc INTEGER;

    {*** Get the maximum amount of time to be idle ***}
    SELECT value::integer
    INTO time_allowed
    FROM ph_threshold
    WHERE name = "IDLE TIMEOUT";

    {*** Find all users who are idle longer than the threshold ***}
    FOREACH SELECT admin("onmode","z",A.sid), A.username, A.sid, hostname
        INTO rc, sys_username, sys_sid, sys_hostname
        FROM sysmaster:sysrstcb A, sysmaster:systcblst B,
             sysmaster:sysscblst C
        WHERE A.tid = B.tid
        AND C.sid = A.sid
        AND lower(name) IN ("sqlexec")
        AND CURRENT - DBINFO("utc_to_datetime", last_run_time) >
            time_allowed UNITS MINUTE
        AND lower(A.username) NOT IN ("informix", "root")

        {*** If we successfully terminated a user, log ***}
        {*** the information into the alert table      ***}
        IF rc > 0 THEN
            INSERT INTO ph_alert
            (
                ID, alert_task_id, alert_task_seq,
                alert_type, alert_color,
                alert_state,
                alert_object_type, alert_object_name,
                alert_message,
                alert_action
            ) VALUES (
                0, task_id, task_seq,
                "INFO", "GREEN",
                "ADDRESSED",
                "USER", "TIMEOUT",
                "User "||TRIM(sys_username)||"@"||TRIM(sys_hostname)||
                    " sid("||sys_sid||")"||
                    " terminated due to idle timeout.",
                NULL
            );
        END IF
    END FOREACH;

    RETURN 0;

    END FUNCTION;


Scheduling a Procedure to Run at Regular Intervals

The last piece is to schedule the idle_timeout stored procedure above to run at regular intervals. This is accomplished by inserting a row into the ph_task table in the sysadmin database. While there are many scheduling options, we are going to keep it simple. In this example, the idle_timeout procedure is set to run every day between the hours of 6AM and 6PM at 10-minute intervals. While most of the insert statement below looks straightforward, the one key point to highlight is that the tk_execute column receives the name of the stored procedure to run.

      tk_name: The name of the task; must be a unique name
      tk_type: The type of task (TASK, SENSOR, STARTUP TASK, STARTUP SENSOR)
      tk_group: The name of the group to associate the task with, for organization purposes; see the tk_group table for more details
      tk_description: A comment to describe what this task is doing
      tk_execute: A function name or SQL statement to execute
      tk_start_time: Time of day to start executing this task
      tk_stop_time: Time of day to stop executing this task
      tk_frequency: How often to execute this task
    /*
     **************************************************************
     *  Create a task which will schedule the idle_timeout SPL
     *  to be run between 6 AM and 6 PM.
     **************************************************************
     */
    INSERT INTO ph_task
        (tk_name, tk_type, tk_group, tk_description, tk_execute,
         tk_start_time, tk_stop_time, tk_frequency)
    VALUES
        ("Idle Timeout",
         "TASK",
         "USER",
         "Remove all idle users from the system.",
         "idle_timeout",
         DATETIME(06:00:00) HOUR TO SECOND,
         DATETIME(18:00:00) HOUR TO SECOND,
         INTERVAL ( 10 ) MINUTE TO MINUTE);
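Likewise, because the schedule is just row data in ph_task, it can be adjusted later without dropping and re-inserting the task. A sketch (the 5-minute interval is only an illustration):

```sql
-- Tighten the schedule to run every 5 minutes (illustrative value)
UPDATE ph_task
SET tk_frequency = INTERVAL ( 5 ) MINUTE TO MINUTE
WHERE tk_name = "Idle Timeout";
```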

Viewing the Task in OpenAdmin Tool for IDS

Once we have built these three components, we can view how the OpenAdmin Tool for IDS displays the tasks. If we look at the current task overview, the task we added, Idle Timeout, is displayed.

Showing the OpenAdmin Tool for IDS Task Overview

If you drill down into the details of the Idle Timeout task, you will see a complete list of scheduling details along with any associated parameters. In this example we have one parameter called IDLE TIMEOUT.

Showing the OpenAdmin Tool for IDS Task Details

John F. Miller III [Read More]

Installing the Developer Workbench.

cheetahblog Tags:  developer workbench development tools 4,785 Views
Note if you have missed any articles in this series, check Introduction and Table of Contents

Installation for IDS

Once you have downloaded the Developer Workbench, you need to install it. I will be discussing this from a Windows XP perspective, although Linux is also straightforward. Now that you have downloaded the client, you should have a single .zip file called:

The first step is to unzip this file into a subdirectory of your choosing. After that, run install_win32.exe from that subdirectory. Once through the splash screen you should see a screen like the following:

Picture 1 of Install for DWB

Please note that this install includes a fair amount of Java, so the final install size will be around 2.8 gigabytes. The next screen is just the basic license agreement. Since DataServer Developer Workbench is built upon the Eclipse framework, and includes several plugins, there are two licenses listed. If you want to use the product, read the agreement, select the yes radio button, and then click next. At this point you should see the next screen, which should be the following:

Picture 2 for install of DWB

From the above you can specify where your shared resources go. I usually go with the default, but you may wish to make the name more meaningful.
After this screen you are given the option to either use an existing package group or create a new one. When I was setting up, I just created a new package group and took the default settings. Finally, the last install screen allows you to install this into an existing copy of Eclipse, extending it. Please note that your version of Eclipse must have everything in the 3rd-party license file installed and up to date for this to work. I went ahead and just installed a new version for the purposes of this review.

We now move on to the "Features" tab shown above; the first screen lets you choose which spoken languages you want installed. After selecting your languages (I only selected English), you move to the following page:

Picture 3 for install of DWB

I went ahead from this page and added the two WebSphere pieces. The final page is a summary page, showing everything you are installing, and it makes the "Install" button active. From there just hit install. It took about 5 minutes to install on my trusty old T42p laptop.

Mark Jamison[Read More]

New Common Development tool for IDS and DB2

cheetahblog Tags:  developer client tool common 4,884 Views

Common Development Tool

One of the big ideas recently proposed by IBM, and further proof of our commitment to IDS, has been the idea of a common set of client tools (like a Java Common Client (JCC), for example). If you are going to have a common set of clients, or programming interfaces, it might not be a bad idea to have a common development tool that lets you use these common clients for both DB2 and IDS. Well, IBM has released in beta an Eclipse-based common tool that allows just that. For those who want to jump right in, here is the link:

IBMData Server Developer Workbench 9.5

As with many things IBM, since this is an open beta, the actual name at the time of release may change. I will be blogging on this product over the next few weeks, covering its capabilities from the IDS side of the tool. So far there will be six parts to this report, not counting this introduction.

Part 1 - Installation for IDS
Part 2 - Setting up your workspace and connecting to IDS.
Part 3 - The Database Explorer window.
Part 4 -  ER Diagramming
Part 5 - The SQL Builder and The SQL Editor Windows.
Part 6 - PureQuery and IDS

Mark Jamison

[Read More]

Setting up your workspace and connecting to IDS

cheetahblog Tags:  tools developer workbench development 5,252 Views
Note if you have missed any articles in this series, check Introduction and Table of Contents

Setting up your workspace and connecting to IDS

Now that you have installed the Developer Workbench, it is time to run it. On the first run, you should be asked to set up your workspace. This should give you a screen like the following:

Setting Up a Workspace

Once you select this, the Workbench takes a little while to build your workspace. Once done, however, you get the classic Eclipse Welcome screen. I have highlighted your workspace in the following screen shot, because that is the next place you want to go:

Welcome Screen

Once you click on the workbench, you get the generic workbench. The next step at that point is to start a project and establish a connection.
Setting up a new project to handle queries is fairly straightforward and can be reached very quickly; all you have to do is select the Data Development Project option, as in the following screen shot:

Creating a new project

Selecting the above takes you to the new Data Development Project screen, as shown below:

Data Development Project

As you can see, there are some DB2-flavored words here. A schema in DB2 is the equivalent of a database in IDS, so keep the default radio button set, as seen in the screen shot above. I would recommend that you give your project a meaningful name, but that is completely under your control. In my case I listed it as "Mark's Test project for Blog".
This brings us to the next page, which assigns an existing database connection or creates a new one. Since this is our first time through, just hit the next button, as we will need to create a new connection.

The connection parameters screen gives you several options for connections, as well as several data server choices. Take a look at the screen shot below:

 Connection Settings

This is a standard Informix JDBC connection string at this point. The screen shot above differs slightly from the default settings, as I have selected the "Informix JDBC driver" instead of the "IBM Data Server Driver for JDBC and SQLJ." In short, I took the JDBC driver instead of the JCC driver. The main reason is that JCC itself is in beta, and I wanted to use something I knew how to connect with and trace. JCC also requires a DRDA listener thread, and I don't have one set up for the instance I am using. Update: Check out Establishing JCC connections in 11.10 if you want to use JCC connections.

Fill the remaining fields above with the normal Informix connection settings, and test your connection. Once your connection is working, click on the next button.

This final screen allows you to filter your schema. Here is the screen shot:

 Schema Filter

Please note: the default is only for users who have IDS on a Windows platform. Since a schema id is actually a user id in IDS, filtering on INFORMIX means that no information of any kind will return for any database objects if your IDS instance is on a UNIX platform. The reason, of course, is that the user id is case sensitive, and informix != INFORMIX. Quite honestly, I suggest you just disable filtering for now; when I cover the Database Explorer I will show you how to change the filter how you want.

As soon as you have selected your filter objects, you will finally see the workbench laid out and ready to be used for a Data Development project. Here is mine:

Project Setup Completed

In the next article I will discuss the Database Explorer window in the lower left-hand corner.

Mark Jamison[Read More]

Establishing JCC connections in 11.10

cheetahblog Tags:  jdbc development jcc connectivity 4,821 Views

EstablishingJCC connections for IDS

While working with the open beta of IBM Data Server Developer Workbench 9.5, I noticed that in addition to that product you also become an open beta participant for the Java Common Client (JCC) driver. In fact, the Workbench uses the JCC driver by default. On the one hand, this is absolutely great, because it helps you see where IBM's long-range direction is with clients. On the other hand, JCC requires some special setup at the IDS instance level. This article will detail how to set up your instance to handle JCC connections, and then will use the Developer Workbench to show how you connect using JCC.

What happened to my SQLI?

The reason why JCC connections require engine configuration changes is that the default means by which IDS instances communicate is the SQLI protocol, whereas JCC needs the DRDA protocol. Currently IDS does not allow both SQLI and DRDA to connect through the same listener thread, so we need to configure a listener thread and port to accept DRDA calls.

This is done by making a simple change to the SQLHOSTS file, as shown below:

<instance name> drsoctcp <hostname> <service/port>


<instance name> drsipcshm <hostname> <service/port>

As you can see, the protocols look the same as our old ones, except instead of the prefix ol, you use dr.

A full example is below:

cheetah_dr drsoctcp gofish sqlexec2

After changing the SQLHOSTS file, all you need to do is place this entry in your onconfig file, in the DBSERVERALIASES variable. So if you use the example SQLHOSTS entry above, and had no aliases currently, you would have the following in your onconfig file:

DBSERVERALIASES cheetah_dr
Now just bring your instance off-line and then back on-line, and we are ready to use the Developer Workbench.

Setting up your JCC Driver for 11.10 in Developer Workbench 9.5

Now that you have set up your IDS instance to accept JCC driver connections, let's establish one using Developer Workbench 9.5.

In your Database Explorer window, select Connection->New Connection as below:

This will bring you to the Connection Settings window. Below is an example of the JCC driver with all fields filled in:

JCC Connection Settings

As you can see, you do not need an INFORMIXSERVER for your connection, just the Database, Host, and Port Number. Once you configure this for your server, just click the Test Connection button. After that, just click the Finish button. You now have a JCC connection that you can use for your application running against IDS 11.10.

Mark Jamison
[Read More]

The Database Explorer Window

cheetahblog Tags:  dba database developer explorer 4,828 Views
Note if you have missed any articles in this series, check Introduction and Table of Contents

The Database Explorer Window

Now that you have a workspace and a connection established, it is time to take a quick look around the options available to you in the Database Explorer window. By default this window is located in the lower left-hand corner, and on start-up looks like this:

Database Explorer

As you can see, this has a tool bar as well as the Connections folder. Let's take a quick look at the options you have on the tool bar. Traveling from left to right, you have:

  • Collapse all: This allows you to quickly collapse all the folders you have opened in your explorer, as shown in the above dialog.
  • Link: This allows you to link data in the explorer to an open editor.
  • SQL Scrapbook: This lets you open up a file that saves the SQL you are testing.
  • Disconnect: Will disconnect any connection you have highlighted.
  • New Connection: Will take you to the connection tool, so you can add more connections and databases to your Database Explorer.
  • SQL Editor: This lets you go straight to the SQL Editor window.
  • Export Connection: This lets you export an existing connection to XML, making it easy to deploy connections to multiple machines.
  • Import Connection: Allows you to import the connection settings from an XML file. So if, for instance, someone were to export the connection settings for a development database, rather than having to gather all the information to fill in the connection string yourself, all you have to do is import the connection.

Now let's open up our Connections folder in this window. We will be looking at the stores_demo database. First we open the Connections folder, then the stores_demo connection folder, and finally the stores_demo database folder. When the folders are open, you will see something like the following:

Stores Demo Database Folder Opened

As you can see, each connection tells you what type of instance you are connecting to; in my case all of my instances are 11.10, though Developer Workbench reports them as 11.0. You can see that the database folder has three sub-folders. Those folders are:

Click on each link to get a detailed overview, and examples, on whateach of these folders can do.

Mark Jamison
[Read More]

The Role Folder.

cheetahblog Tags:  explore dba developer workbench database roles 5,294 Views

Roles Folder:

The Roles folder, as its name implies, gives you a view of all the roles created in the database, and a visual means to create new roles or alter existing ones. To get a quick look at how this folder works, let's create a test role that will have complete access to the customer table.

  1. Select Create Role from the folder by right-clicking on the folder and following the menu options. It should look like the following:

    Create Role Step 1

  2. This should now bring up the Data Object Editor window in the center portion of your workspace. You will start in the General tab, and you will see that the role name has been pre-filled. Let's change that to customer_role, so now you should see something like the following:

    Create Role Step 2

  3. Let's now make sure we grant privileges on customer for this role. To do this, move down to the Privilege tab, click on the table tab, and then click on "Grant New Privileges", which looks like a yellow star. That will bring up the Grant New Privileges window. While it looks like you can just type a table name here, you actually have to press the "browse" button and select the customer table from the browser. Once you do that, you should see the window as shown below:

    Grant New Privileges

    Go ahead and check all the boxes listed, and then check OK.

  4. You will now be back in the main window. Just click on the Run DDL button, and the customer_role role will be created.
Now that you have a role in your Roles folder, let's look at what you can actually do. You should see customer_role in the folder; right-click on it, and you should see the following:

The Customer Role

As you can see you can do the following:
  • Alter the role
  • Drop the role
  • Generate the DDL for the role.
  • View Membership
The two not listed above (Analyze Impact and Compare With) are not actually used by roles. As I'm sure you have noticed, you cannot add users to roles from the Roles folder; that will come with our discussion of the Users folder.

Mark Jamison[Read More]

OpenAdmin Tool XAMPP Installation on Linux

cheetahblog Tags:  idsadmin oat linux 3 Comments 15,838 Views

OpenAdmin Tool for IDS, the PHP-based administration console for IDS, is now available for download from the Open Source download site.

Updated 2/14/08 with new recommended PHP and XAMPP versions which are good for Cheetah and Cheetah 2.

Note: This article is based upon the “Installing IDSAdmin in a Windows XAMPP environment” article by Guy Bowerman and Inge Halilovic. The Windows version of this article is available on the IBM IDS Experts Blog.

Installation of OpenAdmin can appear complex; this article provides an example of one way of installing OpenAdmin. This example uses an XAMPP package on Linux. XAMPP is a bundle of open-source utilities (such as Apache, PHP and PEAR) that are used by OpenAdmin. Installing XAMPP simplifies the OpenAdmin install, but many other installation and configuration options are possible.

High-level Summary of Install steps
Detailed steps are provided below. This is the high-level sequence of steps you will follow:

  1. Install CSDK 3.00.UC1 (or later)
  2. Download and extract the XAMPP 1.6.4 runtime package and development package
  3. Download and extract PDO_INFORMIX 1.1.0
  4. Configure PDO_INFORMIX
  5. Update the php configuration file
  6. Download and extract the OpenAdmin package
  7. Start the Apache web server
  8. Install and configure OpenAdmin
1. Install Informix Client SDK

If not already installed, install the Informix Client SDK and set INFORMIXDIR in your environment to be the location where CSDK is installed. (Note: CSDK 3.00.UC1 or later is recommended.)

To install Informix CSDK, extract the product files and, as root, run installclientsdk. Follow the instructions on screen to install CSDK.

Important: Even if you are installing on a Linux-x86_64 machine, you will need a Linux 32-bit version of Informix CSDK. This is because Linux XAMPP is a 32-bit build; so to build a compatible PDO_INFORMIX module, you will need the Linux 32-bit version of Informix CSDK.

2. Download XAMPP and extract files.

Important: You must download both the Linux runtime and development packages; the development package is necessary in order to be able to compile the PDO_INFORMIX driver.

  • To install, all you need to do is untar both XAMPP packages in the /opt directory. (If you do not want to install directly in the /opt directory, you can create a symbolic link which links /opt/lampp to the directory you choose for installation.)
    tar xvfz xampp-linux-1.6.4.tar.gz -C /opt
    tar xvfz xampp-linux-devel-1.6.4.tar.gz -C /opt
  • Now XAMPP is installed in the /opt/lampp directory

3. Download and extract PDO_INFORMIX

  • Untar the PDO_INFORMIX tar file into the /opt/lampp/lib/php/extensions directory.
    tar xvfz PDO_INFORMIX-1.1.0.tgz -C /opt/lampp/lib/php/extensions

4. Configure PDO_INFORMIX

  • Set INFORMIXDIR in your environment to the directory where Informix Client SDK is installed.
  • Make sure your PATH is set so that the php and phpize executables are picked up from the /opt/lampp/bin directory. If you have another version of PHP installed and your PATH is not set correctly to pick up php from /opt/lampp/bin, you may get errors when executing make for PDO_INFORMIX due to version incompatibility.
  • Important: If you are installing on a Linux-x86_64 machine, you also need to set CFLAGS in your environment to “-m32”. This is because XAMPP for Linux is a 32-bit build and therefore you will need to compile PDO_INFORMIX into a 32-bit shared object file instead of the default 64-bit version. This also means that the version of Informix CSDK must be the Linux 32-bit version.
  • To configure the PDO_INFORMIX module, execute the following 5 commands:
    cd /opt/lampp/lib/php/extensions/PDO_INFORMIX-1.1.0
    /opt/lampp/bin/phpize
    ./configure
    make
    make install

    Example output for these commands is shown below.

    Sample output after the “/opt/lampp/bin/phpize” command:

    This illustration is a screen capture after executing the /opt/lampp/bin/phpize command.

    Sample output after the “./configure” command:

    This illustration is a screen capture after executing the ./configure command.

    Sample output after the "make” and “make install” commands:

    This illustration is a screen capture after executing the make commands.

  • After executing “make install”, verify that the compiled pdo_informix shared object was copied into your extension directory /opt/lampp/lib/php/extensions/no-debug-non-zts-20060613/
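The environment requirements called out in the bullets above can be collected into a short pre-build setup. This is a sketch: /opt/informix is only a placeholder for wherever installclientsdk put your CSDK, and CFLAGS is needed only on Linux-x86_64.

```shell
# Placeholder CSDK location -- adjust to your actual install directory
export INFORMIXDIR=/opt/informix
# Make sure XAMPP's php and phpize are found before any other PHP install
export PATH=/opt/lampp/bin:$PATH
# Linux-x86_64 only: force a 32-bit build to match the 32-bit XAMPP
export CFLAGS=-m32
```

With these set, run the five build commands from the bullet list above in the same shell session.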

Note: For more information about building the PDO_INFORMIX module, go to the DeveloperWorks article on that topic.

5. Update the php configuration file (php.ini)

Navigate to the /opt/lampp/etc directory, and open the php.ini file in a text editor.

Edit php.ini to enable the Informix PDO driver:

  1. Locate the “Dynamic Extensions” section in the php.ini file and add the pdo_informix extension.

  2. Modify the memory_limit parameter to be at least 256M.
    memory_limit = 256M
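Taken together, the two php.ini edits amount to the following lines. This is a sketch: the extension filename assumes the default pdo_informix build output from the previous step.

```ini
; In the "Dynamic Extensions" section
extension=pdo_informix.so

; Raise the memory limit to at least 256M
memory_limit = 256M
```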

6. Download and Extract the OpenAdmin tar file in the htdocs directory.

  1. Download the OpenAdmin tar file from the Open Source download site to a temporary location.
  2. Create a new directory under /opt/lampp/htdocs to contain the OpenAdmin Tool files (For example: /opt/lampp/htdocs/openadmin)
  3. Untar oatids.tar to this new directory.
  4. Within the OpenAdmin directory, grant write permissions to the conf and install directories and all of their contents.
    cd /opt/lampp/htdocs/openadmin
    chmod -R ugo+w install conf

7. Start the Apache web server.

  1. Verify INFORMIXDIR is set to the Informix CSDK directory.
  2. Start the web server by running /opt/lampp/lampp start as root.

    You should see output similar to the following:

    Starting XAMPP for Linux 1.6.4...
    XAMPP: Starting Apache with SSL (and PHP5)...
    XAMPP: Starting MySQL...
    XAMPP: Another FTP daemon is already running.
    XAMPP for Linux started.

8. Install and Configure OpenAdmin.

Using a web browser, go to http://< machine_name >/openadmin/install where < machine_name > is the name of the machine where you have installed the products. Follow the installation instructions to install OpenAdmin Tool. After the first install screen a check will be made to ensure you have the correct PDO drivers:

This image is a screen capture of a web browser displaying the OpenAdmin Tool installation screen. It shows that PDO, pdo_informix, and pdo_sqlite PHP modules have been found and are ready for installation.

Enter the base URL for your machine, language defaults, and optionally the Google Maps key for your machine URL:

This image is a screen capture of a web browser showing the OpenAdmin Tool configuration parameters, including the default language to use, the directory for the connections database, the BASEURL for links, where everything is installed, the number of seconds to pause with a redirect, the Google map key, and how often to check server status (in seconds).

9. Start using OpenAdmin

Once installation is complete, you can go to the main page (http://< machine_name >/openadmin in this example), click Admin, and start adding IDS 11.10 connections. Important: If you are using a version of CSDK prior to 3.00.UC1, there must be a SQLHOSTS entry on the web server machine for each connection you add.

This image is a screen capture of the OpenAdmin tool login screen. The user is required to fill out the following fields: Informix Server, Host Name, Port, Username, and Password.

Once a connection is defined you can return to the main screen and connect:

This image is a screen capture of the main OpenAdmin Tool after a successful login.

The connection administration allows you to create groups of connections, and assign a password for each group, making it easier to administer a large number of instances.

Erika Von Bargen[Read More]

Global Control of Temp Table Logging

cheetahblog Tags:  table temp logging control 3 Comments 7,517 Views
In previous IDS versions, the logging mode of a temp table matches the logging mode of the database in which it is created. In other words, temp tables created in a logged database will be logging by default, and temp tables created in a non-logged database will be non-logging by default. If you do not want your temp tables to be logging in your logged database, you have to use the "with no log" clause when you create them.

For example the following SQL creates the temp table temptab1 with no logging.
    create temp table temptab1 with no log;

In IDS 11, you can control the default logging mode of temporary tables. If you set the configuration parameter TEMPTAB_NOLOG to 1, temp tables created in a logged database will be non-logging by default. This is useful in replication environments where you create your databases logged but do not want to replicate your temp tables.

If TEMPTAB_NOLOG is set to 0, this feature is disabled.
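As a sketch, the onconfig entry is a single line:

```
# onconfig fragment: temp tables in logged databases default to non-logging
TEMPTAB_NOLOG 1
```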

Suma Vinod

[Read More]

XML Support in IDS 11 - Part 1

cheetahblog Tags:  sql xml 2 Comments 7,275 Views
IDS 11 provides support for XML publishing, thus enabling applications to publish SQL results as XML, query XML data and return XML. It also provides the framework to map an XML schema into a relational schema and to shred an XML document into relational tables.

The XML functions available in IDS 11 are summarized in the following table:

Function Name: Description
genxml, genxmlclob: return SQL results as XML elements
genxmlelem, genxmlelemclob: return column values as XML elements
genxmlschema, genxmlschemaclob: return schema as XML
genxmlquery, genxmlqueryclob: return result set as XML
genxmlqueryhdr, genxmlqueryhdrclob: return result set as XML with header
extract, extractxmlclob: evaluate XPATH expression
extractvalue, extractxmlclobvalue: return value of XML node
existsnode: verify whether a node exists in XML doc
idsxmlparse: parse XML doc to determine if it is well-formed

To use the XML functions in IDS 11:
  • Start an XML VP by doing one of the following

    Add VPCLASS idsxmlvp,num=1 in the onconfig file to start the XML VP when the server starts


    Use onmode -p +1 idsxmlvp command to add an XML VP dynamically after the server has started

    Note: An XML VP is required only if you use XPATH functions such as extract(), extractxmlclob(), extractvalue(),
    extractxmlclobvalue(), existsnode(), idsxmlparse(). XML functions starting with gen do not require the XML VP.

  • Verify that $INFORMIXDIR/lib/libxml.udr is read-only

  • Make sure a default sbspace exists.

    Default sbspace is specified by the SBSPACENAME configuration parameter. You can check the output of onstat -d to
    verify that the space exists.
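If no default sbspace exists yet, one illustrative way to set it up is sketched below; the space name, path, and size are hypothetical, and SBSPACENAME must name the space before the server is started:

```
# onconfig fragment -- name of the default sbspace (hypothetical name)
SBSPACENAME sbspace1
```

```
# create the smart blob space; path and size are placeholders
onspaces -c -S sbspace1 -p /vol/dbspaces/sbspace1 -o 0 -s 100000
```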

I will talk about each of the above functions with examples in Part 2.

Suma Vinod[Read More]

XML support in IDS 11 - Part 2

cheetahblog Tags:  sql xml 2 Comments 6,004 Views
Here we will look at each of the XML functions given in Part 1 with examples.

genxml() & genxmlclob()
You can use these functions to create an XML row element for each row of a SQL query result. Each column in the row is an attribute of the row element. genxml() is used when the returned row values are LVARCHAR(32739) or less. For larger values, you should use genxmlclob(), which returns a CLOB.
These functions process the rows without any specific order. If the order of the rows is important, you can use derived table queries to get the result set in the correct order, and then apply the functions on the result set.

    select  genxml(row (customer_num, fname), "row") from customer;
    Output from the above sql:
    <row customer_num="101" fname="Ludwig    "/><row customer_num="102" fname="Carole    "/><row customer_num="103" fname="Philip    "/><row customer_num="104" fname="Anthony   "/>
    To order the results on fname, use the following sql:
    select  genxml(row(num,name),"row") from (select customer_num, fname from customer order by fname) as vt(num, name);
    To select all columns in the table, use the following sql:
    select genxml(customer, "row") from  customer;
genxmlelem() & genxmlelemclob()
These functions return each column value as a separate element, in contrast to genxml(), which returns column values as attributes of the row element.

    select genxmlelem(row(customer_num,fname), "cust") from customer;
    Output from the above sql:
    <cust><row><customer_num>101</customer_num><fname>Ludwig        </fname></row><row><customer_num>102</customer_num><fname>Carole        </fname></row><row><customer_num>103</customer_num><fname>Philip        </fname></row><row><customer_num>104</customer_num><fname>Anthony       </fname></row></cust>
    You can select all columns in a table by passing the table name as the first argument as shown below:
    select genxmlelemclob(customer, "cust") from customer;
genxmlschema() & genxmlschemaclob()
They are like genxml() but generate a full XML schema including the XML header and data. An XML header specifies document properties such as the document encoding, the document type definition (DTD), and the XML stylesheet (XSL). The following example shows a select using genxmlschema and its output:

    select genxmlschema(customer, "cust") from customer;
    Output from the above sql:
    <?xml version="1.0" encoding="en_US.819" ?> <xs:schema xmlns:xs="" targetNamespace="" xmlns="" elementFormDefault="qualified"> <xs:element name="cust">   <xs:complexType>     <xs:sequence>       <xs:element name="customer_num"  type="xs:serial"/>       <xs:element name="fname"  type="xs:char(15)"/>       <xs:element name="lname"  type="xs:char(15)"/>       <xs:element name="company"  type="xs:char(20)"/>       ...
genxmlquery() & genxmlqueryclob()
They are versatile functions that take a SQL query as an argument and return the result set in XML.

    execute function genxmlquery('cust','SELECT customer_num FROM customer');
    Output from the above sql:
    <cust><row><customer_num>101</customer_num></row><row><customer_num>102</customer_num></row><row><customer_num>103</customer_num></row><row><customer_num>104</customer_num></row></cust>
genxmlqueryhdr() & genxmlqueryhdrclob()
These functions return the same data as genxmlquery() but with an XML header.

    execute function genxmlqueryhdr('cust','SELECT customer_num FROM customer');
    Output from the above sql:
    <?xml version="1.0" encoding="en_US.819" ?><xs:schema xmlns:xs=""><cust><row><customer_num>101</customer_num></row><row><customer_num>102</customer_num></row><row><customer_num>103</customer_num></row><row><customer_num>104</customer_num></row>
extract() and extractxmlclob()
These functions evaluate an XPATH expression on an XML column, document, or string. These functions are identical except that extractxmlclob() returns a CLOB instead of an LVARCHAR. They are used to return an XML fragment of the evaluated XML column, document, or string. They are compatible with the Oracle extract() function.

    execute function extract('<name><first>fred</first></name>','/name/first');
    Output from the above sql:
    (expression)  <first>fred</first>
extractvalue() and extractxmlclobvalue()
These functions return the value of the XML node in contrast to extract(), which returns the XML node. They are compatible with the Oracle extractvalue() function.

    execute function extractvalue('<name><first>fred</first></name>','/name/first');
    Output from the above sql is just the name without the XML tags:
    (expression)  fred
existsnode()
This function returns 1 if the specified XML node exists in an XML document. It is compatible with the Oracle existsnode() function.

    execute function existsnode('<name><first>fred</first></name>','/name/first');
    Output from the above sql:
    (expression)  1
idsxmlparse()
This function parses an XML document or fragment to determine whether it is well formed. It returns the XML document or fragment if the input XML is well formed; otherwise an error is returned.

    SELECT idsxmlparse('<purchaseOrder poNo="124356"><customerName>ABC Enterprises</customerName><itemNo>F123456</itemNo></purchaseOrder>') AS PO FROM systables where tabid = 1;
    Output from the above sql:
    <purchaseOrder poNo="124356"><customerName>ABC Enterprises</customerName><itemNo>F123456</itemNo></purchaseOrder>
    Note that the string in the quotes should not have any carriage returns.
[Read More]

Enhanced variable length row compression

cheetahblog Tags:  varchar max_fill_data_pages 5 Comments 5,693 Views
In IDS, when a new row containing a column that can vary in length is to be inserted on a page, the row is inserted only if the page contains sufficient space to allow the new row to grow to its maximum length. This can result in poor disk space utilization. In IDS 11, you can change this default behavior by setting the configuration parameter MAX_FILL_DATA_PAGES to 1. When MAX_FILL_DATA_PAGES is enabled, the server will add a new row to a page if, after adding the row, at least 10 percent of the page is free for future expansion of the rows. The database server needs to be restarted after changing this configuration parameter.

To take advantage of this setting:
  • Existing tables with variable-length rows must be reloaded
  • Existing pages must be modified, followed by further inserts

Advantages of enabling MAX_FILL_DATA_PAGES are:
  • More data can be stored in less disk space
  • Enables the server to use the buffer pool more efficiently
  • Reduces fetch times as more rows can be accessed by reading fewer pages

The possible disadvantages of enabling MAX_FILL_DATA_PAGES are:
  • Allowing more variable-length rows per page might store rows in a different physical order
  • As the page fills, updates made to the variable-length columns in a row could cause the row to expand so it no longer completely fits on the page. This causes the server to split the row onto two pages, increasing the access time for the row

Example: Suppose you have a table with a column of type lvarchar(6000). If the dbspace in which the table resides has a 4K or 8K page size, you might see a lot of unused space in the pages. If you do not have to make updates that would cause most of the rows to expand to their full length, you might want to enable MAX_FILL_DATA_PAGES to make better use of the space in each page.
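Putting this together, a minimal sketch of enabling the parameter (the restart commands assume a Unix-style install; use your normal startup method):

```
# onconfig fragment
MAX_FILL_DATA_PAGES 1

# restart the server so the new value takes effect
onmode -ky
oninit
```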

Suma Vinod[Read More]

IDS "Cheetah 2" Beta on Mac OS X (Leopard)

cheetahblog Tags:  mac 4,807 Views
Very aptly termed as... "The Big Cats Unite!".

This was officially announced today at MacWorld'08 - IDS "Cheetah 2" first beta version on MacOS 10.5 (Leopard) is now available for download here.

Here's the press announcement.

Mirav Kapadia[Read More]

Report from MacWorld

cheetahblog Tags:  macos 4,802 Views
MacWorld was an amazing event this past week in San Francisco at the Moscone Center. Although this is generally a consumer oriented event there were many companies with an enterprise focus showing their solutions. IBM was in the smallest size pedestal in the Enterprise Solutions Area -- People would constantly walk by and say - "IBM ---- the biggest company with the smallest pedestal" - We smiled and told them we make the biggest impact!!!!

As you know, we just announced our beta for Mac OS X this week and were at the show to start to get the word out and meet customers and business partners who are interested in an "industrial strength" database for their applications. There was a ton of traffic in the ped -- people were at first surprised and then pleased to see us there, as they have been looking for a dependable database that is secure and scalable to build their applications on. Many developers who have a long history using Informix were thrilled to see the announcement and anxious to try the beta.

  • Mac users are very happy to see an enterprise-class database on the latest Mac OS.
  • Many are looking for something better than what is currently available on the Mac, which just hasn't been reliable and has not scaled to meet their needs.
  • Some really like the idea of OAT being open source, allowing users to customize it -- especially the free part.
  • One mid-sized system integrator commented: "We are glad to see IBM supporting the Mac platform as we are building applications to take into government and financial markets. We need a database that our customers can depend on. IDS is exactly what we are looking for."

Terri Gerber[Read More]

Introduction to GLSLIB

cheetahblog Tags:  gls 2 Comments 5,347 Views

The GLSLIB is a library that is used to internationalize IBM Informix products such as IBM Informix Dynamic Server and IBM Informix Client SDK. Each IBM Informix product is bundled with a specific version of GLSLIB. This GLS library supports English, Western European, Eastern European, Asian and African languages. Additional locales that are not included in the GLS library can be installed from the International Language Supplement (ILS). ILS provides all available GLS locales and code-set conversion files. It also includes error messages to support several languages. The important point about ILS is that it is a platform-independent product. The languages supported by ILS are Czech, German, French, Spanish, Russian, Polish, Slovak, Japanese, Simplified Chinese, Traditional Chinese and Korean.

The following tables provide information about the version of GLSLIB bundled with IBM Informix Dynamic Server and IBM Informix Client SDK. They also provide information about the version of ILS that is compatible with each specific version of the IBM Informix product.

The tables can be used to:
1) Find the version of the GLS library bundled with a specific version of an IBM Informix product.
2) Determine the compatible version of ILS that should be used with that version of the product.


IDS Version
GLSLib Version
Compatible ILS Version

CSDK Version
GLSLib Version
Compatible ILS Version

Parag A Sheth[Read More]

Manual installation of OpenAdmin Tool (OAT) on Mac OS

cheetahblog 4 Comments 7,528 Views


Setting Up OpenAdmin Tool(OAT) on Mac OS




The Open Admin Tool (OAT) is an open source web-based administration tool for Informix Dynamic Server (IDS). It is a web console where users can manage dbspaces, schedule tasks, monitor MACH 11 clusters, etc. through their web browsers.


OAT is a web console that is hosted on a single web server. Other machines that have access to the web server's network can simply open up a web browser, type in the web server machine's URL, and use OAT to manage their database servers.


In order to host the Open Admin Tool website, the following prerequisites have to be set up:


  1. Informix Client SDK or Informix Connect
  2. A web server, such as Apache 2.0.
  3. PHP 5.2.4 dynamically loaded as an Apache 2.0 module, with the following extensions enabled:
    • Pcre
    • Pdo
    • Pdo_sqlite
    • Pdo_informix
    • Soap
    • Gd   (requires libpng and libjpeg libraries)
    • Openssl  (requires libssl and libcrypto libraries)
    • Sockets
    • Libxml  (requires libxml2 libraries)
    • Session
    • Simplexml


The Open Admin Tool can be hosted on any platform that has all of the above set up.


This article will provide the steps to set up OAT on a Mac OS X platform.


Setting up Open Admin Tool (OAT) on Mac OS


First step is to install Informix Client SDK for Mac OS.


The Mac OS will already have an Apache web server installed. We can use the Apache web server that comes with Mac OS. We do not need to install other web servers.


The Mac OS will also have PHP installed; however, the PHP that comes with Mac OS does not have all the extensions that OAT requires. We cannot use the PHP that comes with Mac OS. We have to compile the PHP binaries from source.


Our goal here is to build our own PHP binaries and dynamically load them into the Apache web server that comes with Mac OS.


Compiling PHP on Mac OS


Before starting to compile PHP, make sure that you have libpng, libjpeg, openssl and libxml2 installed into /usr/local/ or /usr/. Some PHP extensions require these libraries in order to compile.


Obtain the latest PHP source code from php.net. Decompress the source code package and issue the following commands.


cd /path/to/php/source/code


./configure --prefix=/path/to/where/you/want/to/install/php --with-apxs2=/usr/sbin/apxs --enable-soap --enable-pdo --with-pdo-sqlite --with-gd --with-pcre-regex --enable-session --with-openssl --enable-sockets --enable-libxml --enable-simplexml

make
The "configure" command will generate a makefile specific to your machine's build environment. The "make" command will use the generated makefile to create the PHP binaries.


The "configure" and "make" commands will not always run smoothly. It depends on your machine's setup. Please see the following section about debugging the "configure" command for more information.


If the "configure" and "make" commands ran smoothly, you can run the "make install" command. This command will copy the PHP binaries to /path/to/where/you/want/to/install/php. It will also overwrite the PHP Apache handler (the glue between Apache and PHP, called libphp5.so, at /usr/libexec/apache2/). You may want to back up the Mac OS's default libphp5.so before running "make install".


##Be sure that you have backed up /usr/libexec/apache2/libphp5.so##

make install


After the "make install" command, the PHP binaries can be found under /path/to/where/you/want/to/install/php/.
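At this point you can sanity-check the build by listing the compiled-in modules; the path below is whatever you passed to --prefix:

```
/path/to/where/you/installed/php/bin/php -m
```

The output should include the extensions OAT needs, such as pcre, pdo_sqlite, soap, gd, openssl, sockets, session and simplexml (pdo_informix will only appear after the later section that builds it).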


Right now we have compiled the PHP binaries that have all the extension modules that OAT requires, EXCEPT the pdo_informix extension. This extension DOES NOT come with the PHP source code from php.net. We have to compile it from source code separately from PHP, and enable it as a dynamic PHP extension module. We will go through the process of compiling pdo_informix in the later section "Compiling PDO_INFORMIX extension and enabling it in PHP".


Right now we have also compiled the PHP Apache handler (the glue between PHP and Apache). This will appear as a file called "libphp5.so", under /usr/libexec/apache2/. This file will be loaded as a dynamic Apache module, as described in the later section "Enabling PHP in Apache".


The next thing you have to do is create a PHP configuration file. Copy the sample php.ini file from /path/to/php/source/code/php.ini-dist to /path/to/where/you/installed/php/lib.


cd /path/to/php/source/code/

cp ./php.ini-dist /path/to/where/you/want/to/install/php/lib/php.ini


Then edit the php.ini file. Make the following changes to the php.ini file.


memory_limit = 128M      Change this to:  memory_limit = 300M

extension_dir = "./"     Change this to:  extension_dir = "/path/to/where/you/installed/php/lib/php/extensions"


OAT requires a greater memory limit. Thus we increased the memory limit here.


The extension_dir is the directory where dynamic PHP extension modules will reside. Dynamic PHP extensions have to be placed under extension_dir, and enabled in the php.ini configuration file. The pdo_informix extension module will be handled in this fashion.


Debugging Information about the "configure" and "make" commands


The --prefix tag lets you specify where you want to install the PHP binaries. You have to make sure that you have write permissions to that directory before configuring PHP.


The --with-apxs2 tag lets you specify where to find your Apache binaries, so that the Apache handler (i.e. the glue between Apache and PHP) can be built. This Apache handler will appear as a file called "libphp5.so" after the compilation. It will appear under /usr/libexec/apache2/. This document will talk about how to dynamically load this handler into the Apache web server in the later section "Enabling PHP in Apache". Here we are using the Mac OS default Apache web server, thus this flag should be set to point to /usr/sbin/apxs.


The --enable and --with flags specify what extensions you want to enable. Pdo_informix will be enabled separately after we compile the PHP binaries.


The "configure" and "make" commands will not always run smoothly. One possible reason is that the prerequisite libraries are not present. For example, the PHP gd extension will not compile if the libpng and libjpeg libraries are not found on your machine. The openssl extension will not compile if the libssl and libcrypto libraries are not found. The libxml extension will not compile if the libxml2 libraries are not found. Make sure that you have libpng, libjpeg, openssl and libxml2 installed into /usr/local/ or /usr/ before running the configure command.


Another possible reason is that the library architecture and the compilation flag do not match. If your libpng, libjpeg, libssl, libcrypto and libxml libraries are in 32-bit mode, you have to set the compilation flag CFLAGS to -m32. If your libraries are in 64-bit mode, you have to set CFLAGS to -m64. For example, if you want to see whether your libxml2 library is 32-bit or 64-bit, run the following commands:


cd /usr/lib
file libxml2.dylib

##If your libraries are 64bit, set CFLAGS to be -m64##

setenv CFLAGS -m64

##If your libraries are 32bit, set CFLAGS to be -m32##

setenv CFLAGS -m32


If you encounter problems, you can look into the config.log file generated by the configure script. You can also open up the configure script (i.e. /path/to/php/source/code/configure) or the makefile (i.e. /path/to/php/source/code/Makefile) with a text editor, search for the error message and debug the problem. Sometimes the configure script cannot find libraries because they are not installed in /usr/ or /usr/local/. In that case, hardcoding your library paths into the configure script's library search path will solve the problem.


You may also add the --disable-all tag to the configure command. This will disable all other extensions that we did not specify in the configure command. Compiling fewer extensions will give us fewer errors. Since OAT doesn't require those extra extensions, it is okay to add the --disable-all flag.


Compiling PDO_INFORMIX extension and enabling it in PHP


Download the latest pdo_informix source code from PECL (pecl.php.net).


Before you compile pdo_informix, make sure that CFLAGS is set accordingly. If your Client SDK/I-Connect libraries are 32-bit, then set CFLAGS to -m32. If your Client SDK/I-Connect libraries are 64-bit, then set CFLAGS to -m64.


Decompress the pdo_informix source code and run the following commands.


cd /path/to/pdo_informix/source/code/

phpize

./configure --with-pdo-informix=/path/to/your/csdk/installation

make

This compiles the pdo_informix extension module. It will appear as a file "pdo_informix.so" under /path/to/pdo_informix/source/code/modules/.


Copy pdo_informix.so to the PHP extension_dir (we have set this to be /path/to/where/you/installed/php/lib/php/extensions/ in previous sections).


cd /path/to/pdo_informix/source/code/modules/

cp ./pdo_informix.so /path/to/where/you/installed/php/lib/php/extensions/


Add the following line to the php.ini configuration file (under /path/to/where/you/installed/php/lib/php.ini):

extension=pdo_informix.so

This will enable the pdo_informix extension in PHP.


Enabling PHP in Apache


After PHP has been built successfully, the PHP Apache handler (the glue between PHP and Apache) will be created. This will appear as a file called "libphp5.so", under /usr/libexec/apache2/.


We have to edit the Apache configuration file (httpd.conf) to load libphp5.so. This can be done by adding the following three lines in /etc/apache2/httpd.conf. These lines might already exist in httpd.conf but are commented out; in that case you only have to uncomment them.


LoadModule php5_module libexec/apache2/libphp5.so

AddType application/x-httpd-php .php

AddType application/x-httpd-php-source .phps



You also have to set up the Apache environment variables. You will have to edit the /usr/sbin/envvars file. Add the following lines in the envvars file.
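A plausible sketch of the envvars additions, assuming a hypothetical CSDK install location of /Applications/IBM/informix:

```
# hypothetical CSDK location -- adjust to your installation
INFORMIXDIR=/Applications/IBM/informix
export INFORMIXDIR

# let Apache (and PHP's pdo_informix) find the CSDK shared libraries
DYLD_LIBRARY_PATH=$INFORMIXDIR/lib:$INFORMIXDIR/lib/esql:$INFORMIXDIR/lib/cli:$DYLD_LIBRARY_PATH
export DYLD_LIBRARY_PATH
```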











You may now start the Apache web server. Issue the following command with root user access permissions.


/usr/sbin/apachectl -f /etc/apache2/httpd.conf -k start



Unpack the OAT source code at /Library/WebServer/Documents/OpenAdminTool. The ownership of all files under this directory has to be changed too. Make sure these files are owned by the user and group that runs Apache. (These should match the user and group settings in /etc/apache2/httpd.conf.)


Visit http://<hostname>.<domainname>/OpenAdminTool/index.php and continue with the OAT web install. OAT should be operational after the OAT web install.

Leo Chan


[Read More]

Redistributing IBM Informix Client products

cheetahblog Tags:  visual iconnect c++ esql/c csdk 5,038 Views

Scope: This article covers redistributing ESQL/C based demos and applications. The steps required to redistribute other Informix client applications by copying files are being investigated.

Depending on how you deploy your Informix applications there is sometimes a need to bypass the Informix Client SDK or I-Connect installation process and copy the Informix library and API files directly to a target computer. Though the officially recommended and supported approach is to use the supplied CSDK/I-Connect installer, these instructions are provided as an alternative approach involving copying files for scenarios where using the installer is not possible. This article demonstrates how and where to copy all the required CSDK files and Microsoft Windows DLLs to a target computer in order to deploy applications.

Note that while this is not an officially recommended approach, these instructions have been tested by IBM and demonstrated to work. If you encounter problems with this method and need to talk to IBM technical support you may be asked to try installing via the installer to rule out other problems with your configuration.


  1. Install Client SDK 3.50 or I-Connect on your Windows development computer. After successful installation, verify that all the shortcuts in the programs groups are created and registry keys are updated.
  2. Make a copy of the entire CSDK installation folder (INFORMIXDIR) and transfer those to the target computer (For example by zipping the files and unzipping on the target computer). Choose any location on target computer for copying files for example c:\informix.
  3. Copy the required Microsoft Windows runtime DLLs.
    Since the Informix product is not being installed via the regular installer the required runtime DLLs may not be present on the target computer. As a result applications such as setnet32.exe, ilogin.exe and finderr may not run correctly.
  4. If a manifest is present in your application but a required Visual C++ library is not installed in the WinSxS folder, you may get one of the following error messages depending on the version of Windows on which you try to run your application:

    • The application failed to initialize properly (0xc0000135).
    • This application has failed to start because the application configuration is incorrect. Reinstalling application may fix this problem.
    • The system cannot execute the specified program.

    Recommended approach: The Visual C++ Redistributable Package (VCRedist_x86.exe, VCRedist_x64.exe, VCRedist_ia64.exe) has to be executed on the target system as a prerequisite to deployment of the application. The Microsoft Visual C++ 2005 SP1 Redistributable Package (x86) installs runtime components of Visual C++ Libraries required to run applications developed with Visual C++ on a computer that does not have Visual C++ 2005 installed.

    Alternative approach: If your deployment requirements prevent you from installing the Visual C++ Redistributable Package directly, from the development computer, copy the %WINDIR%\WinSxS\x86_Microsoft.VC80.CRT_1fc8b3b9a1e18e3b_8.0.50727.762_x-ww_6b128700 directory to same location on the target computer. (Create the same directory structure on the target computer as the development computer if it does not exist.)

    Also copy the policy files from the development computer %WINDIR%\WinSxS\Policies\x86_policy.8.0.Microsoft.VC80.CRT_1fc8b3b9a1e18e3b_x-ww_77c24773 to the same location on target computer. (Again, create the same directory structure on target computer as the development computer if it doesn’t exist.)

    Note: This workaround is only applicable to Microsoft Visual C++ 2005 SP1 runtime components. If later versions of the Client SDK are built with a later version of Visual Studio then the corresponding version of the runtime components would be required. Check your release notes to see which version of Visual Studio is required.


Running ESQL/C demos:

If you do not have the required Visual C++ libraries installed in the WinSxS folder, then while running the demo1 program the following error pops up.


Once you install the Visual C++ libraries or copy the runtime DLLs and policy files into the C:\WINDOWS\WinSxS folder, you should be able to run the ESQL/C demos successfully, as shown below.

ESQL/C Demo 1

Snigdha Sahu[Read More]

IBM OpenAdmin Tool for IDS -- Version 2.20 – Available Now!!

cheetahblog Tags:  idsadmin openadmin oat 4,197 Views

OpenAdmin Tool for IDS has been greatly enhanced in version 2.20 with a completely redesigned user interface, a new automated installer, and lots of new IDS admin functionality. OpenAdmin Tool for IDS (OAT) is a PHP-based Web browser administration tool for IDS 11 and IDS 11.5 that provides the ability to administer multiple database server instances from a single location. OAT makes administration easy by allowing you to drill down on resource usage and events, view query performance statistics, and much more. And since the tool is written in PHP and available as open source, you can customize it with your own business logic and installation requirements.

New feature highlights of OpenAdmin Tool for IDS version 2.20 include:

  • Automated Installer: For users who want an easy, hassle-free install, there is a new automated GUI installer that will install and configure Apache, PHP, I-Connect, and OAT. For users who want to use or install their own Apache/PHP configurations, OAT is also released as a stand-alone package.

  • All-New Look: OAT version 2.20 has a completely new and improved UI

  • Automated Update Statistics: Configure rules and define a schedule to have IDS automatically maintain your database server's statistics

  • Onconfig Recommendations: Get recommendations for your onconfig settings that are tuned to your specific database server instance; modify dynamic onconfig parameters directly through OAT

  • Enhanced Mach11 Support: Including a Connection Manager interface, an SDS Setup Wizard, and the ability to start and stop Mach11 servers remotely

  • Historical Performance Graphs: View and monitor historical trends of key performance measurements

  • Task Scheduler Wizard: Use OAT’s new wizard to define new tasks in the scheduler

  • Read-Only Access: Define groups in OAT that can monitor IDS database servers but not perform administrative tasks

  • And the list goes on…. Virtual Processor Administration, System Integrity Checks, a Privileges Manager, environment variable support on OAT connections, the ability to kill database sessions remotely through OAT, and more!

Download OAT version 2.20 now at, or at in the Members Area.

Erika Von Bargen[Read More]

Inplace Upgrade of IDS on Windows

cheetahblog 3,398 Views
Depending on your existing Database Server setup, you might be able to upgrade directly to the
current version by basically installing the product in a new directory, copying a few configuration
files, and starting the new server to convert your database data. You can upgrade directly from any
of the following products: Dynamic Server Version 11.10, 10.00, 9.40, 9.30, 9.21, or 7.31.

Upgrading is an in-place migration method that uses your existing test and production hardware. The
operating system on those machines must be supported by Dynamic Server Version 11.50. Also, you must
have enough space for the system database data conversion.

Upgrading consists of these steps:
    1. Prepare your system. That includes removing outstanding in-place alters, closing all transactions,
    verifying the integrity of the data with oncheck, and performing a level-0 backup. If you are
    using Enterprise Replication or High Availability Data Replication, stop replication.

    2. Install the new product on the machine.
    Important: Do not install it over the existing product.

    3. Copy the ONCONFIG file to the target and set parameters that are new for the current release.

    4. Start the Dynamic Server Version 11.50 instance. The database data is automatically converted.

    5. Run UPDATE STATISTICS MEDIUM for non-leading index columns, then run UPDATE STATISTICS FOR PROCEDURE.
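Step 5 can be scripted through dbaccess; a sketch, with a placeholder database name and dbaccess reading the statements from standard input:

```
dbaccess mydatabase - <<EOF
UPDATE STATISTICS MEDIUM;
UPDATE STATISTICS FOR PROCEDURE;
EOF
```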
This type of migration minimizes the risk of introducing errors. You can always revert from the new
server to the old one. In the event of a problem during reversion, you can restore the level-0 backup.

This method works perfectly on Windows though it is not documented well.
    1. When installing IDS 11.50 on Windows, choose the option "Install to a default/different directory".


    2. Make sure that you do not initialize the server when installing.

    Initialize Screen

    3. Copy the ONCONFIG file to the target and set parameters that are new for the current release.

    4. Bring the server up using Control Panel->Services or any other method without initializing.

    5. Monitor the online.log for the Conversion Successful message.

Once the upgrade has completed successfully, you can remove the old instance. When you run the
uninstaller make sure that you select "Retains all databases, but removes server binaries".


If for some reason you want to revert to the previous instance, restore it from the level-0 backup.

The only time this will not work on Windows is when you are upgrading from an 11.10.UC1 version to
another 11.10 version, say 11.10.UC2.

Suma Vinod[Read More]

Using multiple copies of CSDK on the same machine

cheetahblog Tags:  odbc multiple .net csdk oledb 3 Comments 7,532 Views

In some cases it can be useful to have different copies of the same version of CSDK installed on the same machine, though this is not an officially recommended approach by IBM technical support. This document identifies the most common problems when using different copies of the same version of CSDK on the same machine and how to solve them. The first problem you will encounter if you try to install another copy of the same version of CSDK is with the Windows Installer. By design, the Windows Installer detects whether you have a pre-existing installation of CSDK and gives you the options of Modify, Repair or Remove.

Windows keeps a database of products installed on the machine. Every time you install a product using the Windows Installer, a new key is added to the registry with specific information about the product, the install media location and the options used for the install.


This is the database used when you run the “Add or Remove Programs” applet from the “Control Panel”.

When you launch the install process for a product, Windows Installer checks if there is already an entry in the Uninstall registry key with the same id; if so, it assumes that the product is already installed and launches the installer in Maintenance mode (with options Modify, Repair and Remove).

Different major versions of CSDK have unique product ids, which means you can easily install CSDK 2.90.TC1 and CSDK 3.00.TC1, but if you try to install two releases of the same minor version (e.g. 3.00.TC1 and 3.00.TC3) the installer will not allow it. Currently the Windows installer supports only a single installation of each CSDK version, in different locations on the same host.

To overcome this design restriction, you can use a Microsoft tool like Windows Installer CleanUp to remove the registry entry corresponding to the version. Note that this will not uninstall the binaries; it only removes the entry from the Uninstall database. This may then allow you to reinstall the product.

Due to these shortcomings, sometimes there can be a need to bypass the Informix Client SDK or I-Connect installation process and copy the Informix library and API files directly to a machine. Though the officially recommended and supported approach is to use the supplied CSDK/I-Connect installer, these instructions are provided as an alternative approach, involving copying files, for scenarios where using the installer is not possible.

CSDK 3.x is a bundle containing the following application-programming interfaces:

IBM Informix Object Interface for C++
IBM Informix ESQL/C
IBM Informix ODBC Driver
IBM Informix OLE DB Provider
IBM Informix .NET Provider

While some of these components can easily be used and redistributed without the complete CSDK bundle, there are other components that are more challenging to redistribute due to the way they interact with the Windows operating system. APIs such as the ODBC driver and the OLE DB provider depend on additional CSDK libraries to work. These libraries must be accessible to your application.

In most cases, Windows searches the directories included in the PATH variable to load these libraries. One of the main problems when supporting multiple versions of CSDK is mixing libraries between versions (e.g. the .NET provider from a newer version loading the ODBC library from an old version). In the same way that Windows uses the PATH variable to search for DLLs, Informix components use the INFORMIXDIR directory to load additional resources such as GLS conversion files or message files.

The primary rule when using multiple versions is to have the INFORMIXDIR and PATH environment variables pointing to the directory containing the version you want to use. You can accomplish this by creating a batch file (e.g. a .BAT or .CMD file) to start your application, or by running the application as a user that has different environment settings.

Here is an example (the install path is hypothetical):

---- test.cmd ----
set INFORMIXDIR=D:\infx\CSDK300TC1
set PATH=D:\infx\CSDK300TC1\bin;%PATH%
---- test.cmd ----

Interface for C++, ESQL/C and LIBMI

If you want to use the Object Interface for C++, ESQL/C or LIBMI, you can follow the steps in the following IDS Experts article: Redistributing IBM Informix Client products.

You will need to ensure that both the PATH and INFORMIXDIR variables are pointing to the CSDK installation folder of the version you want to use.

ODBC Driver

Using different versions of the IBM Informix ODBC driver can be more complex. Any ODBC driver on a Windows machine must have a unique name. In the case of CSDK 3.50 and 3.00 the name for the ODBC driver is “IBM INFORMIX ODBC DRIVER”. Windows keeps the list of installed drivers in the ODBCINST.INI registry key.


Note that in previous versions of the CSDK package the name for the ODBC driver was different (e.g. IBM INFORMIX 3.82 32 BIT).

When you create a DSN using the ODBC Data Source Administrator tool (odbcad32.exe), an entry is created in the registry with all the values used in your DSN and the full path of the ODBC library used to create the DSN.

Windows uses the directory in the “Driver” registry key to load the ODBC library. You can use different versions of the ODBC driver by manually changing the directory in the “Driver” key.

Even if you can update this key from the console or a shell script before your application starts, the best approach is to have a different DSN for each driver you want to use, or to use a relative path (e.g. “.\iclit09b.dll”) and rely only on the Windows DLL loader mechanism.
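
As a sketch, switching a system DSN to a different driver copy could be done with a .reg fragment like the one below. The DSN name “MyInformixDSN” and the install path are hypothetical; iclit09b.dll is the Informix ODBC library mentioned above.

```
Windows Registry Editor Version 5.00

; Hypothetical DSN: point it at the ODBC library of the CSDK copy you want
[HKEY_LOCAL_MACHINE\SOFTWARE\ODBC\ODBC.INI\MyInformixDSN]
"Driver"="D:\\infx\\CSDK300TC1\\bin\\iclit09b.dll"
```

Importing this fragment with regedit overwrites only the “Driver” value; the rest of the DSN settings are untouched.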

The ODBC Data Source Administrator tool always loads the ODBC library specified in the “ODBCINST.INI\%DRIVERNAME%\Setup” key, even if the DSN entry you are modifying has a different path in the “Driver” key.

For other APIs, the values of the PATH and INFORMIXDIR variables need to be changed depending on which DSN you want to use.

OLE DB Provider

The OLE DB provider library is called Ifxoledbc.dll. Windows uniquely identifies an OLE DB provider based on the class id (CLSID) and the provider name (e.g. Ifxoledbc). During the CSDK install process the OLE DB provider is automatically registered in the Windows registry. At any point you can manually register a different version of the provider using the REGSVR32.EXE Microsoft tool.

This tool creates the needed keys in the Windows registry so you can load the OLE DB provider using the “Ifxoledbc” name.

There are two places where the full path of the OLE DB library is stored: one for the “Ifxoledbc” provider and a second for “IfxoledbcErrorLookup”.


{A6D00422-FD6C-11D0-8043-00A0C90F1C59} Ifxoledbc

{A6D00423-FD6C-11D0-8043-00A0C90F1C59} IfxoledbcErrorLookup

Note: the CLSID could change with different versions.

The registry key that contains the path for the OLE DB library is “InProcServer32”.

The best approach to use different versions of the Ifxoledbc.dll library is to use a relative path rather than the full path.

Remember to update both keys (Ifxoledbc and IfxoledbcErrorLookup) with the desired value. When an application wants to load the “Ifxoledbc” provider, it follows the Windows operating system rules to search for the library. You should point your PATH and INFORMIXDIR to the version you want to use.
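
As a sketch, the relative-path change for both keys could look like the following .reg fragment. The CLSIDs are the ones listed above and, as noted, may differ between versions:

```
Windows Registry Editor Version 5.00

; Ifxoledbc provider: relative DLL name, resolved via the DLL search order
[HKEY_CLASSES_ROOT\CLSID\{A6D00422-FD6C-11D0-8043-00A0C90F1C59}\InProcServer32]
@="Ifxoledbc.dll"

; IfxoledbcErrorLookup: same library
[HKEY_CLASSES_ROOT\CLSID\{A6D00423-FD6C-11D0-8043-00A0C90F1C59}\InProcServer32]
@="Ifxoledbc.dll"
```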

Further information related to the DLL search order can be found at the following link.

Dynamic-Link Library Search Order

.NET Provider

Windows has a central place for .NET assemblies called the Global Assembly Cache, or GAC. During the CSDK install process the IBM Informix .NET Provider is installed in the GAC with the unique name (strong name) “IBM.Data.Informix”.

You can have multiple versions of the .NET Provider if theassembly version is different.

The assembly version for CSDK 2.81 and CSDK 2.90 is 2.81.0.0, and for CSDK 3.00 and CSDK 3.50 the version is different. (Note that versions such as 9.0.0.1 correspond to the IBM Data Server Provider.)

The assembly version of the .NET provider is the same across minor releases (e.g. 3.00.TC1, 3.00.TC3, etc.), which means that only one of them can be registered in the global cache at a time.

To be able to use more than one .NET provider, you first need to uninstall the IBM.Data.Informix assembly from the GAC. To do this you can use “gacutil.exe” with the “/u” flag for uninstall, or use the Explorer context menu.

When a .NET assembly is not in the global cache, the .NET runtime tries to load the class from the same directory as the executable. (You can find more information about assembly loading at the following link: How the Runtime Locates Assemblies.)

Copy the .NET Provider (IBM.Data.Informix.dll) to the directory from which your application starts; this allows you to use different versions of the .NET provider.
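
For example, as a Windows command sketch (both paths are hypothetical):

```
rem Copy the provider next to your application's executable
copy "D:\infx\CSDK300TC1\bin\IBM.Data.Informix.dll" "C:\MyApp\"
```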

Alternatively, you can specify the location of the assembly at runtime using the Assembly.LoadFile method and the AssemblyResolve event. Further information on this topic can be found in the following MSDN links: Assembly.LoadFrom and AppDomain.AssemblyResolve

Before contacting IBM technical support

The officially recommended approach is to uninstall the previous version of CSDK before installing the new one. If you need to run different versions of any of the CSDK components on the same machine, you should be aware of the potential issues.

Always check that the versions of the libraries loaded by your application belong to the same CSDK version. (You can use a tool like TLIST.EXE from Microsoft to find which libraries your process is using.) - Debugging Tools for Windows


Javier Sagrera

Snigdha Sahu

[Read More]

IBM OpenAdmin Tool for IDS -- Version 2.21 – Available Now!!

cheetahblog 5 Comments 5,377 Views

OAT 2.21 has incorporated IDS Enterprise Replication monitoring, a plugin manager to allow customization of OAT functionality and an automated installer on Mac OS X. OpenAdmin Tool for IDS (OAT) is a PHP-based Web browser administration tool for IDS 11 and IDS 11.5 that provides the ability to administer multiple database server instances from a single location. OAT makes administration easy by allowing you to drill down on resource usage and events, view query performance statistics, and much more. And since the tool is written in PHP and available as open source, you can customize it with your own business logic and installation requirements.

New feature highlights of OpenAdmin Tool for IDS version 2.21 include:

  • Mac OS X Automated Installer: Automatically sets up Apache, PHP, I-Connect and OAT on Mac OS X. For users who want to use or install their own Apache/PHP configurations, OAT is also released as a stand-alone package.

  • IDS Enterprise Replication Monitoring: Monitors spool disk usage, send and receive queues, receiving/apply statistics, routing topology, node details and much more. OAT Enterprise Replication is provided as a separate plugin. The latest automated installer will install the ER plugin for you; alternatively, you can manually place the ER plugin zip file under the OAT plugin_install directory and install it with the plugin manager (requires the PHP zip extension).

  • Plugin manager: A simple way to customize OAT functionality. You can download customized OAT plugins and install them with the plugin manager. Sample plugin code is also provided, and you can follow the sample when creating your own plugins. Note that the plugin manager requires the PHP zip extension. The latest OAT 2.21 automated installer includes the PHP zip extension, but the OAT 2.20 installer does not.

Download OAT version 2.21 now at .

Click here to see OAT Enterprise Replication at work!

Leo Chan[Read More]


cheetahblog 4,414 Views


HTTPS protects the OAT webserver from eavesdropping, tampering, and message forgery, and from hackers who try to secretly listen to or interfere with OAT network traffic. Enabling HTTPS encrypts OAT clients’ messages before they are sent to the OAT webserver, which prevents hackers from listening on the line and stealing sensitive information. Sometimes hackers will set up a fake OAT webserver and deceive OAT clients as to who the real OAT host is. Enabling HTTPS also allows OAT clients to authenticate the OAT host, so that hackers cannot deceive OAT clients with a fake OAT webserver.

Please note that HTTPS only encrypts communication between the OAT webserver and the OAT client. It does not encrypt communication between the IDS server and the OAT webserver.

IBM® Informix® Dynamic Server, Version 9.4 and later, enables encryption of data between the IDS server and the OAT webserver using an encryption communication support module. You can find more information at the following link:

Enabling HTTPS in OAT will involve the following steps:

  • 1. Replacing OAT’s Apache webserver with another mod_ssl enabled Apache webserver.
  • 2. Creating an encryption key and a certificate for your new OAT webserver, so that OAT clients can authenticate your webserver based on your certificate.
  • 3. Configuring httpd.conf (the Apache configuration file) to enable HTTPS.

Replacing OAT’s Apache Webserver with another mod_ssl enabled Apache Webserver

In order to use OAT with HTTPS, you will need an Apache webserver, PHP with some extensions, and the Apache mod_ssl module. HTTPS functionality is provided by the Apache mod_ssl module. The OAT automated installer will install an Apache webserver and PHP with all the required PHP extensions; however, the Apache webserver bundled with the installer is NOT shipped with the mod_ssl module.

Therefore you have to install another Apache webserver compiled with the mod_ssl module to replace OAT’s Apache webserver, and then dynamically load OAT’s PHP Apache handler (on Windows, a file named php5apache2_2.dll; it is the Apache-and-PHP ‘glue’) into your new Apache webserver.

Dynamically loading the mod_ssl module into OAT’s original Apache webserver is not possible because the version of OAT’s original Apache webserver is 2.2.4. The mod_ssl dynamic module only supports the older Apache 1.3.x; it does not support OAT’s Apache webserver.

On Linux and Mac OS X,

In order to compile Apache with mod_ssl support, you need to have OpenSSL installed. OpenSSL might already be installed on your Linux distribution, in which case you only need to find the installation directory. Note that the OpenSSL binaries have to be 32-bit, because the binaries from the OAT automated installer are 32-bit. We suggest doing this only on a 32-bit machine so that there will be no architectural mismatch among the binaries.

If OpenSSL has not been installed already, you can download the latest source code release from here:

Then you have to compile the OpenSSL source code. Note that you have to compile 32-bit OpenSSL binaries because the binaries from the OAT automated installer are 32-bit. This can be done by setting CFLAGS to -m32. Use the following commands:

cd /path/to/openssl/source/code
export CFLAGS=-m32
./config --prefix=/openssl/installation/directory/ --openssldir=/openssl/installation/directory/
make
make install

Stop OAT’s Apache webserver by running the /oat/installation/directory/StopApache script. Rename OAT’s Apache_2.2.4 directory to Apache_2.2.4_noSSL. This will serve as a backup copy of the Apache binaries. You will also need to use some configuration files from this backup Apache directory in later steps, so do make a copy.

Then you have to compile the latest Apache with mod_ssl support. Download the latest Apache source code and compile it with the following commands. Note that you have to compile 32-bit Apache binaries because the binaries from the OAT automated installer are 32-bit. This can be done by setting CFLAGS to -m32. The compilation prefix should match the old OAT Apache directory (/oat/installation/directory/Apache_2.2.4/).

cd /path/to/apache/source/code
export CFLAGS=-m32
./configure --prefix=/oat/installation/directory/Apache_2.2.4/ --enable-so --enable-ssl --with-ssl=/openssl/installation/directory/
make
make install

Right now Apache should be installed with mod_ssl support. Use the following commands to check if mod_ssl is enabled in Apache.

cd /oat/installation/directory/Apache_2.2.4/bin/
./httpd -M
(Here you will see a list of Apache modules. See if the SSL module is on the list)

Then you have to change the Apache configuration file to load OAT’s PHP Apache handler (the PHP and Apache ‘glue’). Edit the Apache configuration file (/oat/installation/directory/Apache_2.2.4/conf/httpd.conf) and add the following lines. Uncomment them if these lines exist but are commented out.

LoadModule php5_module /oat/installation/directory/PHP_5.2.4/
AddType application/x-httpd-php .php
PhpIniDir '/oat/installation/directory/PHP_5.2.4/lib'
SetEnv INFORMIXDIR '/oat/installation/directory/Connect_3.50/'

Search for the line “Listen 80” in the httpd.conf file. This indicates the port number that the OAT webserver should run on. You should use the same port number as OAT’s original Apache webserver that comes with OAT installer.

Search for the line “ServerName” in the httpd.conf file. This indicates the name and the port that the server uses to identify itself. You should use the same ServerName as the original non-SSL Apache webserver that comes with OAT installer.

Search for the line “DirectoryIndex index.html” in the httpd.conf file. This sets the files that Apache will serve if a directory is requested. Change this line to “DirectoryIndex index.html index.php”

Copy the file /oat/installation/directory/Apache_2.2.4_noSSL/bin/envvars to /oat/installation/directory/Apache_2.2.4/bin/envvars. Apache reads this file to set up the environment variables OAT needs to run.

Copy the entire directory /oat/installation/directory/Apache_2.2.4_noSSL/htdocs/openadmin/ to /oat/installation/directory/Apache_2.2.4/htdocs/openadmin/. All the OAT source code resides in this directory.

Run the following commands to make sure that the PHP Apache handler is properly loaded.

cd /oat/installation/directory/Apache_2.2.4/bin/
./httpd -M
(Here you will see a list of Apache modules. See if the php5 module is on the list)

Right now your new webserver should be properly set up for OAT. You may start your webserver by running /oat/installation/directory/StartApache and visit OAT using your web browser.

Note that this server has mod_ssl enabled but HTTPS is not switched on yet. A few more steps are needed to enable HTTPS in the following sections.

On Windows,

First we have to download and install Openssl from the following website:

Then we need to setup Apache with mod_ssl support. Download the latest Win32 Binary including OpenSSL 0.9.8g (MSI Installer). This should be available here:

Stop the OAT Apache webserver. There should be an OpenAdmin shortcut in the start menu; you should be able to stop the OAT Apache webserver from there. Make sure that the Apache Monitor is not running in your system tray.

Rename OAT’s Apache_2.2.4 directory to Apache_2.2.4_noSSL. This will serve as a backup copy of the Apache binaries. You will also need to use some configuration files from this Apache directory in later steps, so do make a copy.

Run the Apache MSI installer. Do a typical install and choose the installation directory to be /oat/installation/directory/Apache_2.2.4

Edit the Apache configuration file (/oat/installation/directory/Apache_2.2.4/conf/httpd.conf). Add or uncomment the following lines in the httpd.conf file:

LoadModule php5_module "c:\oat\installation\dir\PHP_5.2.4\php5apache2_2.dll"
LoadModule ssl_module modules/
AddType application/x-httpd-php .php
PhpIniDir 'c:\oat\installation\dir\PHP_5.2.4'

Search for the line “Listen 80” (or “Listen 8080”) in the httpd.conf file. This indicates the port number that the OAT webserver should run on. You should use the same port number as OAT’s original Apache webserver that comes with OAT installer.

Search for the line “ServerName” in the httpd.conf file. This indicates the name and the port that the server uses to identify itself. You should use the same ServerName as OAT’s original Apache webserver that comes with OAT installer.

Search for the line “DirectoryIndex index.html” in the httpd.conf file. This sets the files that Apache will serve if a directory is requested. Change this line to “DirectoryIndex index.html index.php”

Search for the line “setenv INFORMIXDIR” in OAT’s original Apache configuration file (c:\oat\installation\dir\Apache_2.2.4_noSSL\conf\httpd.conf). This line sets the INFORMIXDIR variable in the Apache environment for OAT; it has NOT been set in your new Apache webserver yet. Copy this line to your new Apache webserver’s httpd.conf file. You can put it at the end of the file.

Copy the entire directory c:\oat\installation\dir\Apache_2.2.4_noSSL\htdocs\openadmin\ to c:\oat\installation\dir\Apache_2.2.4\htdocs\openadmin\. All the OAT source code resides in this directory.

Run the following commands in a command prompt to make sure that the PHP Apache handler and the mod_ssl modules are properly loaded.

cd c:\oat\installation\dir\Apache_2.2.4\bin\
httpd.exe -M
(Here you will see a list of Apache modules. See if the php5 module and the ssl_module are on the list)

Right now your new webserver should be properly set up for OAT. You may now click ‘Start Apache Service for OpenAdmin’ in the OpenAdmin menu under the Start Menu. You should be able to access OAT using your web browser.

Note that this server has mod_ssl enabled but HTTPS is not switched on yet. A few more steps are needed to enable HTTPS in the following sections.

Creating an Encryption Key and a Certificate for your OAT Webserver

Keys are used in encryption and decryption. They usually come in pairs, the public key and private key. Public keys are used to encrypt messages and private keys are used to decrypt messages. The message encrypted by a public key can only be decrypted by its associated private key.

An HTTPS-enabled webserver has its own pair of public and private keys. The webserver makes its public key available to all clients, but nobody except the webserver knows its private key. Thus all clients are able to encrypt messages, but only the webserver can decrypt them. If a client wants to send encrypted messages to the webserver, it encrypts the message with the public key provided by the webserver and then sends it. The webserver uses its secret private key to decrypt the client’s message. Hackers who are trying to listen on the network and steal the client’s message will not be able to decrypt it, because they do not have the private key.

A certificate is a document authenticating a party to be who it claims to be. An HTTPS-enabled webserver has a certificate, signed by a trusted certificate authority, to authenticate itself as the real webserver the clients are talking to. Before a client talks to the webserver, it is prompted to view and accept the webserver’s certificate, so the client can make sure that the certificate is signed by a trusted certificate authority before proceeding with the communication. This prevents hackers from setting up fake webservers to deceive clients.

Once a webserver is HTTPS-enabled, clients get to choose whether they want a normal connection or a secure connection to the webserver. For a normal connection, they type “http://webserver_url” in their web browser; for a secure connection, they type “https://webserver_url”. In a secure connection, the webserver sends the client its certificate and its public key. The client is prompted to view and accept the webserver’s certificate before the webpage is loaded, so that the client can be sure the webserver is not a fake created by hackers to deceive them. Once the client accepts the certificate, the client’s web browser uses the public key received from the webserver to encrypt communication. Only the webserver has the associated private key, thus only the webserver can decrypt the client’s encrypted communication, and hackers cannot secretly listen in.

To generate private/public key pairs and the certificate, we use the openssl executable. You should be able to find it under the bin directory of your openssl installation.

To generate a private key, use the following command:

openssl genrsa -des3 -out privkey.pem 2048

The private key will be stored in the privkey.pem file. Store this file in a secure location, because this is the webserver’s secret decryption key. This file is also used to generate the associated public key: when we generate the certificate, openssl reads this file, derives the associated public key, and includes the public key in the certificate.

To obtain a certificate, we need to create a certificate signing request and send it to a trusted certificate authority (such as VeriSign). The authority will then issue you a certificate. Use the following command to generate a certificate request. For more information about the process of signing certificate requests, contact your certificate authority.

openssl req -new -key privkey.pem -out cert.csr

If you don’t want to deal with another certificate authority and you just want to create a certificate for yourself, you can create a self-signed certificate. Note that this is not the recommended way of creating a certificate.

openssl req -new -x509 -key privkey.pem -out cacert.pem -days 1095

Openssl will prompt you to enter your personal information. After that, the certificate will be stored in the cacert.pem file. This file will be displayed to web clients to verify your identity. It also includes the public key for web clients, so they can start encrypting communication.

Openssl will also prompt you to enter a pass phrase. This is an extra layer of security. You will need to enter your pass phrase when you start your webserver.
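
The key and certificate steps above can also be run non-interactively, which is convenient for scripting. This is only a sketch: the pass phrase and the subject fields below are placeholder values, not recommendations.

```shell
# Non-interactive sketch of the key/self-signed-certificate flow.
# "example-pass" and the subject fields are placeholders.
openssl genrsa -des3 -passout pass:example-pass -out privkey.pem 2048
openssl req -new -x509 -key privkey.pem -passin pass:example-pass \
        -subj "/C=US/O=Example/CN=oat.example.com" \
        -out cacert.pem -days 1095
openssl x509 -in cacert.pem -noout -subject    # inspect the certificate
```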

For more information about encryption keys, please consult the OpenSSL documentation:

For more information about certificates, please consult the OpenSSL documentation:

Configure httpd.conf (the Apache configuration file) to enable HTTPS

Edit the Apache configuration file. The file should be /oat/installation/directory/Apache_2.2.4/conf/httpd.conf

Search for the following line:

#Include conf/extra/httpd-ssl.conf

This line is commented out by default. Uncomment it so that httpd.conf will include the apache ssl configuration file.

Then edit the Apache SSL configuration file. The file should be /oat/installation/directory/Apache_2.2.4/conf/extra/httpd-ssl.conf

HTTPS requires one more port for SSL. By default, the SSL port number is 443. Make sure this port is available. If you wish to change this port, edit the Listen directive and the VirtualHost section of the httpd-ssl.conf file.

Search for the ‘SSLCertificateKeyFile’ directive and the ‘SSLCertificateFile’ directive in the httpd-ssl.conf file. These two directives indicate the location of your private key file and the certificate file. Make sure that they point to the privkey.pem and the cacert.pem created in the previous section.
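
Put together, the relevant portion of httpd-ssl.conf might look like the following sketch (the port and the file locations are example values; adjust them to where you stored your key and certificate):

```
Listen 443
<VirtualHost _default_:443>
    SSLEngine on
    SSLCertificateFile /oat/installation/directory/Apache_2.2.4/conf/cacert.pem
    SSLCertificateKeyFile /oat/installation/directory/Apache_2.2.4/conf/privkey.pem
</VirtualHost>
```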

Search for the ‘SSLCipherSuite’ directive in the httpd-ssl.conf file. This directive indicates the ciphers your HTTPS webserver accepts. By default the HTTPS webserver will accept all encryption ciphers. If you wish to accept only strong ciphers, you can restrict the directive, for example:

SSLCipherSuite HIGH

or else you can keep the default configuration.

For more information about HTTPS configurations, please refer to the following website:

Final Testing

Your OAT webserver should be secured with HTTPS right now. Restart your webserver and launch OAT with your web browser. Instead of using http://hostname:portnumber/openadmin, try using https://hostname:ssl_portnumber/openadmin. You will be prompted to view and accept OAT webserver’s certificate before the OAT login page is launched.

On Linux, you can restart your webserver by running the StopApache script and then the StartApache script in the OAT installation directory. You will be prompted to enter the pass phrase before you can start your webserver.

On Windows, you have to restart your webserver from the command prompt. Starting the webserver with ApacheMonitor.exe or with the start menu shortcuts doesn’t work, because they do not support pass phrases. Use the following command to restart your webserver.

httpd -k restart

If you wish to use ApacheMonitor.exe or the start menu shortcuts to control your webserver, you have to disable pass phrases. Note that this reduces the level of security.

Run the following command:

cd c:/openssl/installation/directory/bin/
openssl rsa -in privkey.pem -out privkey_nopassphrase.pem

Now an unencrypted copy of your private key is stored in the privkey_nopassphrase.pem file. Edit the SSLCertificateKeyFile directive in your httpd-ssl.conf file to use privkey_nopassphrase.pem instead of privkey.pem. You will then no longer be prompted for a pass phrase when starting your webserver, and ApacheMonitor.exe and the start menu shortcuts should work.

Leo Chan

[Read More]

New Release of IBM’s OpenAdmin Tool for IDS -- Version 2.22

cheetahblog Tags:  oat openadmin 2 Comments 6,699 Views

The newest version of OpenAdmin Tool for IDS, Version 2.22, is available now! The latest features center on Enterprise Replication (ER) monitoring and security.

ER Plug-in Version 1.1: Version 1.1 of OAT’s ER plug-in greatly enhances OAT’s Enterprise Replication monitoring capability.

  • The ER Routing Topology page has been transformed to allow monitoring of all nodes in the ER domain from a single page without having to drill-down on each node. Users can set thresholds for key ER statistics and then use the Routing Topology page to monitor alerts and profile data for each node in their domain. (Requires IDS server version 11.50xC2.)

  • Screenshot of OAT ER Plug-in version 1.1

  • The ER Node Details pages have been expanded to show errors for the current node or the entire ER domain (Errors tab) and to list current values of the ER configuration parameters (Configuration tab).

  • Check out a demo of the newest ER monitoring features here: ER Monitoring with Alerts Demo

Secure SQLToolbox: OAT admins can now choose to turn on an additional level of security for the SQL Toolbox pages. If “Secure SQLToolbox” is turned on, OAT users will have to re-authenticate in order to view schema data or use the SQL Editor. This additional layer of security can be used to ensure that OAT users are not automatically allowed free access to databases or tables as the user informix.

Download OAT Version 2.22 now at

For additional information on OpenAdmin Tool for IDS, including feature details, screenshots and demos, go to

Also check out the new DeveloperWorks article on writing custom plug-ins for OAT:

Erika Von Bargen[Read More]

Installing multiple copies of IDS on Windows - 11.50.xC2

cheetahblog Tags:  install installation 4,475 Views

Introduction: Multiple installation of IBM Informix Dynamic Server is now enabled in IDS 11.50.xC2. With this, users are able to install multiple copies of IDS on a single machine, with the ability to create one or more instances per installation. For example, a user can have 11.50.xC1 and 11.50.xC2 co-existing on the same machine, with instances related to each installation running on that machine.

With this new capability, a user who has an installation of IDS 11.50 can invoke setup.exe and will be presented with the existing installations of IDS on the machine. The user can then choose between installing a new copy of IDS and maintaining one of the existing copies. If there is an existing installation of IDS 11.50 on a machine, a user who launches setup.exe in graphical mode will be presented with a panel similar to the one below:

Multiple Installation Screenshot

However, with an existing installation of IDS 11.50, launching setup.exe in silent mode requires a maintenance response file. Without a maintenance response file, or with an installation response file, the installer will abort. To install a subsequent copy of IDS 11.50 in silent mode, the user should pass the '-multiple' command line option to setup.exe.

In addition to the "-multiple" command line option, there are two command line options for maintenance, namely "-path" and "-instnum". These two new command line options are used to launch maintenance mode for a specified installation. Since they perform the same function, they can be used interchangeably but should not be used in conjunction with one another.

Usages of the newly implemented options:

  • -multiple usage

  • setup.exe -multiple

    This option can be used in conjunction with other installation command line options. E.g.

    setup.exe -multiple -s -f1"C:\TEMP\server.ini" -f2"C:\TEMP\server.log"

  • -path usage

  • setup.exe -path [installation path]

    The user invokes setup.exe with the ‘-path’ option passing the installation path for which maintenance is required. This option can be used in conjunction with other maintenance command line options. E.g.

    setup.exe -path [installation path] -s -f1"C:\uninst.ini"

  • -instnum usage

setup.exe -instnum [installation number]

    The user invokes setup.exe with the ‘-instnum’ option, passing the installation number for which maintenance is required. Note that each installation is tagged with an installation number in the shortcut menu (except the first installation, which has no tag). This option can be used in conjunction with other maintenance command line options. E.g.

    setup.exe -instnum [installation number] -s -f1"C:\uninst.ini"

    For more information on installing IDS, visit

    Tosin Ajayi

    [Read More]

    Limiting Connections in IDS 11.50

    cheetahblog Tags:  connections sessions limitnumsessions limit 5,468 Views


    Through online forums and technical support cases, Informix users have requested the ability to limit the number of connections to IDS servers.


    Available now in IDS 11.50, the ONCONFIG parameter LIMITNUMSESSIONS can be used to define the maximum number of sessions that you want connected to IDS.

    This ONCONFIG parameter uses the following syntax:

    LIMITNUMSESSIONS maximum_number_of_sessions, print_warning

    The maximum_number_of_sessions option defines the maximum number of sessions allowed and can be set to a value in the range 0 to 2,097,152. The value 0 is the default and indicates that this feature is turned off.

    The print_warning option can be set to 1 or 0. When this value is set to 1, a warning message is written to the online.log when the number of sessions approaches the limit. These warning messages are not reported when the value is set to 0. The default is 0.

    The limit imposed by LIMITNUMSESSIONS takes effect when the database server is shut down and restarted. It can also be started, modified, or stopped dynamically by using onmode -wf or onmode -wm.

    When the number of connections reaches this limit, new connection requests are rejected with error -25571 (Cannot create a user thread) until the number of connections falls below the limit. A message is reported to the online.log when the limit has been reached.

    Database Server Administrators (DBSAs) are the exception to this limit. When the limit has been reached, a DBSA user will still be allowed to connect to IDS.


    The following example specifies that you want a maximum of 100 sessions to connect to the server and that you would like a warning message printed to the online.log when the number of connected sessions approaches 100.
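Based on the syntax described above, such a setting would look like the following ONCONFIG entry (a sketch; as noted above, the same value can also be set dynamically with onmode -wm or made persistent with onmode -wf):

```
# Allow at most 100 sessions; print a warning as the limit is approached
LIMITNUMSESSIONS 100,1
```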


    For more information:


    This feature is not intended to enforce terms or conditions of the User License Agreement.


    Jeff Laube

    [Read More]

    Polling Performance and IDS 7.x to IDS 10.x Upgrade

    cheetahblog Tags:  performance polling pollthread configuration fastpoll 1 Comment 7,498 Views


    This article describes a customer’s experience with their poll thread configuration while upgrading from IDS 7.31.FD9 to IDS 10.00.FC6. This particular upgrade involved their busiest IDS server, running on an HP Superdome. Typically, one could observe upwards of 3000 short-lived soctcp connections on this system.

    Original IDS 7.31 Configuration:

    Key configuration settings that were active in the IDS 7.31 environment were initially used after upgrading to IDS 10.00:

    • 23 CPU VPs
    • 20 poll threads running on NET VPs
    • multiple server aliases

    Optimal IDS 10.00 Configuration:

    An optimal configuration was ultimately determined and incorporated the following configuration settings in the IDS 10.00 server:

    • 23 CPU VPs
    • a handful (3-5) of poll threads running on NET VPs
    • enable new IDS 10.00 ONCONFIG parameter, FASTPOLL
    • multiple server aliases

    Relevant Testing:

    Stress testing supportive of this optimal configuration was conducted on a 16-processor/32-core HP server using the latest IDS 7.31 and IDS 10.00 64 bit products. The testing involved a multi-threaded ESQL/C application that would spawn 3000 threads over 3 server aliases. Each thread would connect to the server, complete a small amount of read-only work and disconnect from the server 30 times. These 90,000 total connections mimicked the customer’s workload and considered poll threads running both on CPU VPs (inline) and on NET VPs against servers configured with 23 CPU VPs. The following chart shows results from the stress testing that were considered for the optimal customer configuration:

    Stress Testing Results

    Jeff Laube[Read More]

    Reduce the size of Shared Memory Dump File

    cheetahblog Tags:  size shared bufferpool dumpshmem nobuffers dump memory nobuffs 7,673 Views

    Most of us are fairly familiar with errno -28 (No space left on device) during an Assertion Failure (AF), while Informix Dynamic Server (IDS) generates diagnostic data (the AF file and a shared memory dump). Diagnostic data is critical for determining the root cause of a failure. AF files are generally not too big, whereas shared memory dumps are often huge, almost the same size as the total memory used by the IDS instance. A lack of disk space can cause a partial dump of the shared memory file, which adds little or no value when diagnosing the failure.

    In large IDS systems, the amount of space required to dump the shared memory is excessive because of the gigantic size of the resident segment, most of which is BUFFERPOOL information. A large shared memory dump file not only creates a space issue; it also makes it difficult for technical support to extract useful information in a timely manner.

    IDS version 11.50 provides some flexibility to control how much memory is written to a dump file. We can exclude the buffer pool information from the resident segment to significantly reduce the size of the shared memory dump file. The DUMPSHMEM configuration parameter and onstat both provide new options to control the shared memory dump size.

    Use the DUMPSHMEM configuration parameter to automatically create a dump file during an AF. Set DUMPSHMEM to 2 to create a shared memory dump that excludes the buffer pool. You can dynamically change the value of DUMPSHMEM with onmode -wm and onmode -wf. DUMPSHMEM can take the following values:

        0 -  Do not dump shared memory during AF
        1 -  Dump full shared memory (default)
        2 -  Dump shared memory without bufferpool (new option)

    The 'onstat -o' command also allows you to dump a shared memory file on demand. Use the new ‘nobuffs’ option with 'onstat -o' to generate a shared memory dump without the bufferpool. If you use 'onstat -o' without the 'nobuffs' option, the DUMPSHMEM configuration parameter controls the content of the shared memory file: a value of 0 or 1 generates a full shared memory dump file, and 2 excludes the buffer pool information.
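Putting the two mechanisms together, a typical sequence might look like this (a sketch; onmode and onstat must be run as user informix against a running instance):

```
# Exclude the buffer pool from automatic AF dumps,
# dynamically (-wm) and in the ONCONFIG file (-wf)
onmode -wm DUMPSHMEM=2
onmode -wf DUMPSHMEM=2

# Take an on-demand shared memory dump without the buffer pool
onstat -o nobuffs
```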

    All onstat options work on a shared memory dump file without the buffer pool, except options that access buffer information, e.g. -b, -B, -P.

    Typically onstat shows segments as "FACADE" while working with a full shared memory dump, whereas shared memory without the buffer pool shows as "FACADE NOBUFFERS".

    Sanjit Chakraborty[Read More]

    Getting Windows installer Error 1706 in IDS 11.50.xC1 and xC2 during IDS installation?

    cheetahblog Tags:  install installation 3 Comments 9,446 Views

    Introduction: IDS 11.50 makes use of IBM Global Security Kit (GSKit) for data encryption and SSL communication. GSKit is deployed as part of the IDS installation.


    Due to the manner in which GSKit is deployed by the IDS installer, you may get a Windows installer error during IDS installation stating:

    Error 1706. No valid source could be found for product IBM Informix Dynamic Server. The Windows installer cannot continue.

    Windows installer error 1706

    If GSKit 7 is not present on the machine, it will not be deployed during IDS installation. Furthermore, this error breaks the invisibility of IDS deployments that use the silent mode of installation. When present on the machine, GSKit is deployed in [PROGRAM_FOLDER]\IBM\gsk7

    After IDS installation, INFORMIXDIR (the IDS installation directory) will contain the GSKit-related msi file "IBM Informix Dynamic Server.msi". Prior to any subsequent installation of IDS, clean the entries related to this msi from the Microsoft installer database by running the following command:

    msiexec.exe /x "IBM Informix Dynamic Server.msi" /qn

    Note: The command above will not uninstall IDS. "IBM Informix Dynamic Server.msi" does not contain the IDS application files or IDS-related information. Therefore, uninstalling IDS is not an alternative workaround.

    Below are the repro steps for the problem:

  • On a clean machine, install IDS, OR
  • On a machine with no IDS installation present, run the Windows Installer CleanUp utility and clean up all references to IBM Informix Dynamic Server.
  • After installation, uninstall IDS and clean up all remnants of INFORMIXDIR.
  • Install IDS in a separate directory from the previous installation.

  • The problem will be fixed in the IDS 11.50.xC3 release.

    Tosin Ajayi

    [Read More]

    Upgrade to IDS 11.50 in Windows

    cheetahblog Tags:  windows 11.50 11.10 upgrade 6,075 Views

    Upgrading Informix Dynamic Server Version 10.00 to Versions 11.10 or 11.50:

    When installing IDS 11.10 or 11.50, if you choose the option "Upgrade from the previous version", all the server binaries will be upgraded to the newer version automatically.

    Upgrading Informix Dynamic Server from Version 11.10 to 11.50:

    When installing IDS 11.50 on Windows, a direct upgrade from IDS 11.10 is not supported. If “Upgrade from the previous version” is selected, the following message pops up, and the installer stops at this point.

    Since both 11.10 and 11.50 can be installed on the same machine, it is recommended to choose the option “Install into a default/different directory” (shown in the panel below) to install IDS 11.50. Note: This will not upgrade IDS 11.10 to IDS 11.50.

    When providing a folder name to install the product, a completely new path must be supplied. If the selected folder already contains binaries from IDS 11.10, the following message pops up.

    The user can respond to the question “Do you want to select another folder?”. Clicking “Yes” returns the installer to the destination panel, where a different folder can be provided. Clicking “No” takes the installer to the finish panel.

    The recommended approach to upgrade to 11.50 is:

    1. Uninstall IDS 11.10 using the "Retain database, but remove server binaries" option.
    2. Save all the registry keys in HKEY_LOCAL_MACHINE\SOFTWARE\Informix\OnLine\
    3. Install the later version (IDS 11.50) to the same path.
    4. Make sure that you do not initialize the server when installing.
    5. Copy the ONCONFIG file to the target and set parameters that are new for the current release.
    6. Bring the server up using Control Panel->Services or any other method, without initializing.
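For step 2, one way to save the registry keys is the standard Windows reg utility (a sketch; the output file name here is an arbitrary choice):

```
reg export "HKEY_LOCAL_MACHINE\SOFTWARE\Informix\OnLine" informix-online-keys.reg
```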


    Snigdha Sahu[Read More]

    IDS 11.50 Installation Enhancements

    cheetahblog Tags:  installation enhancements 5,402 Views
    IDS 11.50 made significant improvements to the installation process. Starting with IDS 11.50, Informix introduced a new versatile installation wizard (installer), which can automatically create a customized database server configuration file (ONCONFIG) suitable for your system environment and create a demonstration database server during the installation process.

    As part of the installation process, the installer captures input from the user. It then evaluates the input values to ensure the settings are valid, and it calculates values for several other configuration parameters based on the hardware settings and database instance needs. You can use the Instance Configuration Wizard in GUI or console installation mode to use this new feature. However, make sure you choose the option “Creating Demonstration Database Server Instance” during the installation process to automatically create the customized configuration file.

    There are small differences between the UNIX and Windows installation wizards.


    • On the UNIX platform, check 'Yes' for 'Create an IDS demonstration database server instance'
    • Choose the option to customize the default configuration file

    • On the Windows platform, the Instance Configuration Wizard is only available with the custom setup in GUI mode.
    • You must check 'Demos' under 'Select the features you want to install'.
    • 'Enable a custom configuration file' must also be checked.

    The following restrictions apply during installation:

    • The installation directory must be empty.
    • The root chunk file must be empty or must not exist; the installer will create it.
    • Parameters affected by the Instance Configuration Wizard are not available for silent installation.

    During installation, the Instance Configuration Wizard captures the following input from the user:

    • Installation Directory
    • Database Server Name
    • Database Server Number
    • Rootpath
    • Rootsize
    • No. of central processing units (CPUs)
    • Memory: System RAM dedicated to the IDS server (in MB)
    • No. of online transaction clients
    • No. of decision support clients

    Based on the above information, the installer sets the corresponding configuration parameters.

    If the Instance Configuration Wizard encounters a problem while validating or calculating configuration parameters for the customized configuration file, the configuration file is created with default, workable configuration parameters and an appropriate message is displayed.

    The customized configuration file is saved as 'onconfig.<DBSERVERNAME>' in the etc directory under <INFORMIXDIR>.

    The "Creating Demonstration Database Server Instance" option in the installation process creates a demo database instance. Once the installation process completes, the installer generates two script files, profile_settings (Korn shell) and profile_settings.csh (C shell), that set the environment variables for the demo server instance; they are placed in the $INFORMIXDIR/demo/server (%INFORMIXDIR%\demo\server on Windows) directory.
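To pick up the demo instance environment in a Korn or Bourne-compatible shell, the generated script can then be sourced (a sketch assuming the default location mentioned above):

```
# Load the environment variables for the demo server instance
. $INFORMIXDIR/demo/server/profile_settings
```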

    Following is a screenshot of instance configuration wizard on UNIX platform (console installation mode):
    Console Installation Mode on UNIX platform

    You must choose '1 - Yes' to create an IDS demonstration database server instance and '3 - Customize the default configuration file to suit your needs and hardware' in console installation mode.

    Following are some screenshots of the Instance Configuration Wizard on the Windows platform (GUI installation mode):
    Custom  Installation in GUI Mode
    You must select 'Custom' in GUI mode

    List of Product Components
    Make sure 'Demos' is checked to create an IDS demonstration database instance

    Server Configuration Setup
    Make sure 'Enable a custom configuration file to suit your needs and hardware' is also checked

    Server Configuration Setup
    An example of the server configuration setup. You need to insert values to customize the database server.

    Sanjit Chakraborty

    [Read More]

    Configure Distributed Relational Database Architecture (DRDA) connection during IDS installation

    cheetahblog Tags:  drda configuration 4,876 Views
    IDS 11.50 introduced a new installation wizard (installer), which makes it easier than in past releases to set up an instance to use a variety of database clients. In this section we will discuss how to set up a DRDA connection during the installation process.

    The installation wizard is extremely user friendly and easy to navigate. Options are available in the installation wizard to configure a database server alias (DBSERVERALIASES) and a port for clients that use the DRDA protocol. You can easily set up this DRDA connection via a checkbox when creating the demonstration database server instance during installation. We will see some screenshots later in this section to get a better understanding of the DRDA configuration.

    There are small differences between the UNIX and Windows installation wizards.


    • On the UNIX platform, select the option to use the default configuration file to install the database server with DRDA.
    • Do not select the option to customize the default configuration file, as that option does not allow setting up DRDA.

    • On the Windows platform, the server installation includes the DRDA setup by default.
    • You must select the custom installation to exclude the DRDA setup.

    On the UNIX platform you need to answer 'Yes' to the question Do you want to create an IDS demonstration database server instance. Similarly, you need to choose option 2 - Use the default configuration file under the demonstration database instance configuration. You may use the default server name and server alias values or change them as per your requirements.

    Following is a DRDA configuration screenshot on UNIX platform:
    DRDA configuration in Console Mode

    On the Windows platform, the typical installation process (the default) automatically includes the DRDA configuration. It sets the configuration parameter DBSERVERALIASES to "svc_drda". A custom installation is required if you want to disable the DRDA setup in the demonstration server instance.

    Following are some DRDA configuration screenshots on the Windows platform:
    Custom  Installation in GUI Mode

    The custom installation process provides control over initializing the demonstration server. You can exclude the DRDA configuration if you wish. By default, DRDA is selected.
    DRDA configuration Setup in GUI Mode

    Note that if you enable a custom configuration file on the Windows platform, DRDA support is removed.
    DRDA configuration Setup in GUI Mode

    By default IDS configures TCP/IP port 9089 and the drtlitcp or drsoctcp connection protocol for the DRDA configuration.
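As a sketch, the resulting connectivity entry (sqlhosts on UNIX) for the default DRDA alias would look something like the following; the host name here is hypothetical:

```
# dbserveralias   protocol   host     service/port
svc_drda          drsoctcp   myhost   9089
```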

    Sanjit Chakraborty

    [Read More]

    Specifying CLIENT_LOCALE/ DB_LOCALE value.

    cheetahblog Tags:  locale client_locale gls db_locale 8,427 Views

    The value of a locale variable, CLIENT_LOCALE or DB_LOCALE, can be broken into 4 parts.

         1          2                   3                      4    
    <language>_<territory>.<Code set name/Code set number>[@modifier]
    -------- -------- ----------------------------- --------

    Conventionally, I represent this as ll_tt.xxxx@xyz, where ...

      ll ............ represents the Language

      tt ........... represents the Territory, or cultural convention.

      xxxx ....... represents the Code set Name or the Code set Number supported by the locale and

      xyz ......... represents the Modifier. This is the only optional part in a locale value.

      The modifier, sometimes referred to as a variant, modifies the cultural-convention settings that the language and territory settings imply. It usually indicates a special localized collating order that the locale supports.

    Let us look at an example.

      CLIENT_LOCALE = de_at.cp1252@euro, and
      CLIENT_LOCALE = de_at.1252@euro

      • Here, both CLIENT_LOCALE values represent the same locale.

      • 1252 is the Code set number for Code set name, cp1252. We can specify either Code set name or the Code set number in a locale value.

        - de ........... represents the German language
        - at ............ the territory, Austria
        - cp1252 ... the code set used for the encoding and
        - euro ....... the modifier used for the locale

      So, this is German language locale, for Austria, using cp1252 encoding and euro modifier.

    Now, to check whether this locale file exists, where do we look?

      All locale files reside under the directory $INFORMIXDIR/gls/lc11

      To look up locale files for a language (ll) and territory (tt), we check under the $INFORMIXDIR/gls/lc11/ll_tt directory.

        In our example, to look up locale files for the German language (de) and the territory Austria (at), we look in the $INFORMIXDIR/gls/lc11/de_at directory

        Next, under the specified locale directory, look for a file whose name is the hex value of the code set name/code set number, followed by the modifier name if a modifier is specified, with the extension .lco

        In our example, the hex value for code set cp1252 is 04e4 and the modifier euro is used. So, we will look for the file 04e4euro.lco under the directory $INFORMIXDIR/gls/lc11/de_at.

    How and where do we find the hex value for a code set name?

      For any code set name, its code set number and hex value can be looked up in the file $INFORMIXDIR/gls/cm3/registry.

        Let us find the hex value for the code set name Latin-3.

        We can find the information in the registry file.

          In the registry file ...
          - the first column is the code set name,
          - the second column is the code set number,
          - the third column is the hex value of the code set number, and
          - the fourth column is either blank or has a comment about the code set.

        Let us look up the code set Latin-3 in the registry file and see what we find.
        We get the following value:

           Latin-3            57346              0xe002
           (code set name)    (code set number)  (hex value; no comment in this case)


    • Locale values are case-insensitive.

      • Example:
          DB_LOCALE = de_de.cp1252, DB_LOCALE = de_de.CP1252.
          Both locale values are valid and represent the same code set.

    • You can specify either the code set name or the code set number in a locale value, but you cannot use the hex value of the code set number.

      • Example:
          DB_LOCALE = fr_ca.57372 or fr_ca.utf8 ........ both values are valid and represent the same code set.

          DB_LOCALE = de_de.cp1252 or de_de.1252 .... both values are valid and represent the same code set.

          DB_LOCALE = de_de.04e4 ............... this is invalid. A code set's hex value cannot be used in a locale value.

    • If a modifier is not specified in the locale variable, for example ...

    • CLIENT_LOCALE = de_at.cp1252

        - to locate the locale file, look for the .lco file under the language_territory directory. In this case, we look for the following file ...

    • If a modifier is specified in the locale variable, for example ...

    • CLIENT_LOCALE = de_at.cp1252@euro

        - to locate the locale file, look for the .lco file under the language_territory directory. In this case, we look for the following file ...

    • A code set name, its corresponding code set number, and its hex value are specified in the file $INFORMIXDIR/gls/cm3/registry.

    • The locale territory/country code and language code can be looked up in file

    • Conventionally, for a LOCALE variable having the value ll_tt.xxxx[@xyz], the following locale file should exist:
      • $INFORMIXDIR/gls/lc11/ll_tt/<hex value of xxxx>[xyz].lco
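As a sketch, the naming convention above can be expressed in a few lines of Python; the helper function and its name are illustrative, not part of any Informix API:

```python
# Build the expected locale file path from language, territory,
# code set number, and optional modifier, per the convention above.
def locale_file(lang, terr, codeset_num, modifier=""):
    hexval = format(codeset_num, "04x")  # e.g. 1252 -> "04e4", 57372 -> "e01c"
    return "gls/lc11/%s_%s/%s%s.lco" % (lang, terr, hexval, modifier)

# de_at.cp1252@euro -> gls/lc11/de_at/04e4euro.lco
print(locale_file("de", "at", 1252, "euro"))
# fr_ca.utf8 (code set number 57372) -> gls/lc11/fr_ca/e01c.lco
print(locale_file("fr", "ca", 57372))
```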

      Seema Kumari
      [Read More]

    DB_LOCALE and CLIENT_LOCALE Compatibility:

    cheetahblog Tags:  gls db_locale locale client_locale 13,221 Views

    DB_LOCALE and CLIENT_LOCALE are compatible if ...

    • the code sets of both locales are identical,

    OR

    • code set conversion is possible between the CLIENT_LOCALE's and the DB_LOCALE's code sets,

    AND

    • a locale consisting of the language_territory from CLIENT_LOCALE and the code set from DB_LOCALE exists.

      Note: For any code set name, its corresponding code set number and hex value can be found in the file $INFORMIXDIR/gls/cm3/registry.

    Example 1:

      DB_LOCALE = en_us.850 (code set 850, its hex value 0352)
      CLIENT_LOCALE = de_de.850

      Here, both locales use the same code set, 850, which is supported by both en_us and de_de, that is, US English and Germany's German.
      Hence, they are compatible.

    Example 2:

      CLIENT_LOCALE = fr_ca.850 (code set 850, its hex value 0352)
      DB_LOCALE = de_de.cp1252 (code set 1252, its hex value 04e4)

      Here, for the two locales to be compatible, there are two requirements:

        i) Canadian French (fr_ca) must support the code set 1252 encoding.

          In other words, the corresponding locale file must exist.

        ii) Conversion between the CLIENT_LOCALE's code set, 850, and
          the DB_LOCALE's code set, cp1252, must be supported.

          In other words, the corresponding conversion files must exist.

        If you have the locales fr_ca and de_de, for code sets 850 and cp1252, installed on the machine, you should see that these files exist.
        Hence, they are compatible locales.

    Example 3:

      CLIENT_LOCALE = fr_ca.850 (code set 850, its hex value 0352)
      DB_LOCALE = cs_cz.8859-2 (code set 8859-2, its hex value 0390)

      Here, the locales are not compatible because ...
      • Canadian French (fr_ca) does not support 8859-2 encoding, and
      • conversion between code set 850 and 8859-2 is not supported.

    Example 4:

      CLIENT_LOCALE = iw_il.8859-8 (code set 8859-8, its hex value 0394)
      DB_LOCALE = en_us.utf8 (code set utf8, its hex value e01c)

      Here, even though code set conversion is possible between code sets 8859-8 and utf8, these locales are not compatible because Israeli Hebrew (iw_il) does not support the utf8 encoding.

      That is, there is no locale file to support utf8 encoding for Israeli Hebrew.
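The rule and the four examples above can be sketched in Python; the lookup tables here are hypothetical stand-ins for the installed locale files and conversion files:

```python
# Locales installed on this hypothetical system: (language_territory, code set)
SUPPORTED = {
    ("en_us", "850"), ("de_de", "850"), ("fr_ca", "850"),
    ("de_de", "cp1252"), ("fr_ca", "cp1252"),
    ("cs_cz", "8859-2"), ("iw_il", "8859-8"), ("en_us", "utf8"),
}
# Code set conversions available (assumed symmetric)
CONVERSIONS = {frozenset(("850", "cp1252")), frozenset(("8859-8", "utf8"))}

def compatible(client, db):
    """client and db are (language_territory, code_set) pairs."""
    c_terr, c_cs = client
    d_terr, d_cs = db
    if c_cs == d_cs:  # identical code sets
        return True
    # a conversion must exist AND the client territory must support the DB code set
    return frozenset((c_cs, d_cs)) in CONVERSIONS and (c_terr, d_cs) in SUPPORTED

print(compatible(("de_de", "850"), ("en_us", "850")))      # Example 1: True
print(compatible(("fr_ca", "850"), ("de_de", "cp1252")))   # Example 2: True
print(compatible(("fr_ca", "850"), ("cs_cz", "8859-2")))   # Example 3: False
print(compatible(("iw_il", "8859-8"), ("en_us", "utf8")))  # Example 4: False
```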

    Note: If a locale file is not available in your INFORMIXDIR, check the Informix International Language Supplement (ILS) to see if the locale is supported.

    Seema Kumari

    [Read More]

    New ONINIT startup option

    cheetahblog Tags:  oninit 5,660 Views
    Suppose you have an application that depends on IDS. Your business requirement is that the application must start only after a successful IDS initialization. Previously, there was no way to validate the return code of the oninit process to decide whether or not IDS initialized successfully. So, if you have a script that automatically starts IDS and then the application, it is possible the application may start even though IDS failed to initialize.

    A new oninit option introduced in IDS 11.50 generates a return code. Based on the return code you can customize the script and automate the startup process.

    The 'oninit -w' command generates the following return codes:

    • 0 - success
    • 1 - initialization failed or the timeout value was exceeded

    The 'oninit -w' command forces IDS to wait until it successfully initializes before returning to a shell prompt. You can also provide an additional argument for the timeout value with the '-w' option. Without a timeout value, IDS uses the default of 10 minutes. If IDS cannot initialize within the timeout period, oninit returns 1 and writes the following error message to the online.log file:

      Warning: wait time expired

    The syntax of the new command is as follows:

    • oninit -w
    • oninit -w [timeout]

    You can use the '-w' option in combination with any other oninit initialization options.
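A startup script can then branch on the return code; this sketch assumes a hypothetical start_app command for the dependent application:

```
#!/bin/sh
# Wait for IDS to finish initializing before starting the application.
if oninit -w; then
    start_app                      # hypothetical application startup
else
    echo "IDS failed to initialize; application not started" >&2
    exit 1
fi
```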

    A couple of points to remember:

    • In a high-availability environment, you can use the 'oninit -w' command only on the primary server; it is not valid on secondary servers.
    • The oninit command returns success when the sysmaster, sysutils, sysuser, and sysadmin databases have been successfully created.

    Sanjit Chakraborty[Read More]

    Local System Account Support in Windows IDS 11.50 Installation

    cheetahblog Tags:  system server user informix dynamic local 1 Comment 10,963 Views
    In IDS releases prior to 10.00.TC5, the Informix Dynamic Server (IDS) Windows Service was allowed to log on only as user informix. Launching the IDS installation program setup.exe with the -system command line argument (only) will install IDS and create a new instance running as the Windows Local System user, bypassing the screens that prompt for the informix user password (the informix user does not get created when this option is used).

    Starting with version 11.50, you can install IDS on Windows as the local system user account by selecting the Local System User option right on the IDS Server configuration Setup panel. This option is available only in custom mode of installation.

    Screenshot

    Typically this option provides the same privileges as the informix user account; however, it uses an internal account representing a pseudo-user that does not require a password. The local system account is used by the operating system and services running under Windows during the installation of Dynamic Server. The informix user is still created by default.

    However, a user can choose not to create an informix user account at all. Note that Enterprise Replication between Dynamic Server on UNIX and IDS on Windows will not work if the informix user and the Informix-Admin group are not present. Along with the “Start database server as Local System User” checkbox, another checkbox, “Do not create user informix account”, is provided on the same panel; it is greyed out unless the user picks the Local System User option.

    Screenshot

      Some of the reasons why people would want to do this:
    • If the machine has a password-expiry policy and it is not convenient to change the password of the IDS service, this saves having to do it.
    • Some security policies require that the informix user not be a member of the Administrators group. If the IDS service logs on as the Local System user, the informix user no longer has to be an administrator.

    Snigdha Sahu[Read More]

    Getting error messages with using Finderr Utility?

    cheetahblog Tags:  csdk windows error server vista 2008 message informix finderr ids utility 6,470 Views
    Introduction: In most cases the Informix error codes are very good at explaining why there is a problem and how to fix it. finderr (the Error Message Utility) is a utility shipped with IBM Informix Dynamic Server and the client products (CSDK and IConnect) that helps you check an error code and returns the error message corresponding to an IBM Informix error number.

    Problem: finderr uses WinHelp. The Help for this program was created in the Windows Help format, which was used in previous versions of Windows and is not supported in newer Windows operating systems such as Vista or Windows Server 2008. Windows Vista and Windows Server 2008 do not ship with the WinHelp application. So you will get error messages popping up while executing the finderr utility shipped with IDS 11.50 on Windows Server 2008 and Vista:

    Error message screenshot

    The GUI finderr program supplied with CSDK and IConnect also doesn't work on Windows Vista or Windows Server 2008. Executing the finderr utility in the client products pops up the following error messages.




    The workaround is to download the Windows Help program (WinHlp32.exe) from the following Microsoft support Web site.

    For Windows 2008:

    For Vista:

    The problem has been fixed in the IDS 11.50.xC3 and CSDK 3.50.xC3 releases, so you can always upgrade to these releases to get the finderr utility working on newer versions of Windows.

    Snigdha Sahu[Read More]

    JDBC 3.50 can now be installed on Windows 64-bit with 64-bit JRE

    cheetahblog Tags:  install jdbc 2008 informix ids 5,673 Views
    Previous versions of JDBC 3.50 could not be installed on 64-bit Windows using a 64-bit JRE. Installing JDBC 3.50 gave the error "Directory not writable" for all directories. The same error was received even after running the whole process as administrator with UAC (User Account Control) turned off. The workaround was to use a 32-bit JRE on 64-bit Windows to install JDBC 3.50.

    This problem was solved in IDS 11.50. JDBC 3.50 now uses a newer version of InstallShield, which allows JDBC 3.50 to be installed on 64-bit Windows using a 64-bit JRE. Users no longer have to set up a 32-bit JRE on 64-bit Windows just to install the driver.

    Bhadrik Patel

    [Read More]

    Program Group folder pop-up and start menu entries can be suppressed in IDS 11.50

    cheetahblog Tags:  embedability 2008 install informix ids 2 Comments 8,449 Views
    During the installation of IDS 11.50, a Program Group folder is opened and Start Menu shortcuts are added. If these actions were optional, IDS would be more embeddable with applications that use IDS in the background. Customers who embed IDS as part of their software package usually desire near-complete invisibility.

    This request was met by adding a new command-line option, "hidden", to the installation. This option prevents creation of Start Menu shortcuts and suppresses the Program Group folder from popping up. To use it, invoke setup.exe (the IDS 11.50 installer) from the command line and supply the "-hidden" option.

    Command-line Usage:

    $ setup.exe -hidden

    Bhadrik Patel[Read More]

    OAT Version 2.23 Available Now!

    cheetahblog Tags:  openadmin oat 4,559 Views

    The newest version of OpenAdmin Tool for IDS, Version 2.23, has just been released! Download the new version today to get the enhanced SQL Explorer and version 2.0 of the ER Plugin.

    New feature highlights:

    The SQL Explorer has been redesigned. New features include:

    • Filters and search fields for viewing SQL tracing data.
    • Support for different SQL trace levels - global or user - so you can manage what kinds of SQL information are traced for IDS V11.50.xC1 servers.
    • Support for suspending and resuming history tracing, without releasing resources, on IDS V11.50.xC1 servers.

    ER Plug-in Version 2.0: Version 2.0 of OAT’s ER plug-in includes a new Replicate Explorer and becomes the first step in supporting ER administration graphically through OAT.

    • The Replicate Explorer now lets you monitor your replicates and replicate sets.

    • The Node Details -> Configuration page now supports editing and updating ER configuration parameters.

    Download OAT Version 2.23 now at

    For additional information on OpenAdmin Tool for IDS, including feature details, screenshots and demos, go to

    Erika Von Bargen

    [Read More]

    Commonly Encountered GLS Errors:

    cheetahblog Tags:  -23101 gls -23197 error -23104 7,870 Views

    --> Error -23101      Unable to load locale categories.

      If you have the variables CLIENT_LOCALE and DB_LOCALE set, the following files must exist; otherwise the result is error -23101.

        - $INFORMIXDIR/gls/lc11/DB_LOCALE's(ll_tt)/(hex value of db's code set).lco
        - $INFORMIXDIR/gls/lc11/CLIENT_LOCALE's(ll_tt)/(hex value of db's code set).lco
        - $INFORMIXDIR/gls/lc11/CLIENT_LOCALE's(ll_tt)/(hex value of client's-codeset).lco

      where ll_tt represents the language and the territory specified by the locale.

      Hex value for any code set can be looked up in file $INFORMIXDIR/gls/cm3/registry

      For example:
      say, DB_LOCALE = en_us.utf8 ......................... (hex value for code set utf8 is e01c)
             CLIENT_LOCALE = zh_cn.gb18030-2000 ..... (hex value for code set gb18030-2000 is 1570)

      following files must exist:
        - $INFORMIXDIR/gls/lc11/en_us/e01c.lco
        - $INFORMIXDIR/gls/lc11/zh_cn/e01c.lco
        - $INFORMIXDIR/gls/lc11/zh_cn/1570.lco

      Missing any of these files will cause error -23101.

      When GL_USEGLU is set (i.e. when using ICU implementation), you need to check the following as well.
        - Verify $INFORMIXDIR is set correctly.
        - Ensure that ICU files are loaded correctly.

        ICU files:
          - $INFORMIXDIR/gls/dll/* ........... (all 4 libraries must exist and have execute permission)
          - $INFORMIXDIR/gls/etc/* ........... (all 4 files must exist)
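
    As a sketch, the file checks above can be scripted. The helper below is hypothetical (its name and the /opt/informix fallback path are assumptions); the locale and hex values repeat the example above, so look up your own values in $INFORMIXDIR/gls/cm3/registry.

```shell
# Hypothetical helper (illustrative name): print the .lco file that must exist
# for a given locale and code-set hex value. INFORMIXDIR falls back to
# /opt/informix purely for illustration.
lco_path() {
  ll_tt="${1%%.*}"   # keep the ll_tt part of "ll_tt.codeset"
  printf '%s/gls/lc11/%s/%s.lco\n' "${INFORMIXDIR:-/opt/informix}" "$ll_tt" "$2"
}

# DB_LOCALE=en_us.utf8 (utf8 = e01c), CLIENT_LOCALE=zh_cn.gb18030-2000 (= 1570)
lco_path en_us.utf8 e01c            # DB locale directory, DB code set
lco_path zh_cn.gb18030-2000 e01c    # client locale directory, DB code set
lco_path zh_cn.gb18030-2000 1570    # client locale directory, client code set
```

    Each printed path can then be checked with `test -f`; any missing file accounts for the -23101.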

    --> Error -23104      Error opening required code-set conversion object file.
      Error -23104 is encountered if any of these files are missing:
        - $INFORMIXDIR/gls/cv9/ccccdddd.cvo
        - $INFORMIXDIR/gls/cv9/ddddcccc.cvo

      where cccc is the hex value for CLIENT_LOCALE's code set, and
               dddd is the hex value for DB_LOCALE's code set,

      assuming the code sets for CLIENT_LOCALE and DB_LOCALE are not the same.

      Having the same code set for CLIENT_LOCALE and DB_LOCALE means no code-set conversion is involved, and hence no -23104 error.

      For example:
      If DB_LOCALE = en_us.utf8 ............................ (hex value of utf8 is e01c)
          CLIENT_LOCALE = zh_cn.gb18030-2000 ....... (hex value of gb18030 is 1570)

      following files must exist, else it results in error -23104.
        - $INFORMIXDIR/gls/cv9/e01c1570.cvo
        - $INFORMIXDIR/gls/cv9/1570e01c.cvo
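
    A similar hypothetical sketch derives the two .cvo file names from the code-set hex values (the helper name and the /opt/informix fallback path are again assumptions; the hex values are from the example above):

```shell
# Hypothetical helper: print the two code-set conversion object (.cvo) files
# needed for conversion between the CLIENT_LOCALE and DB_LOCALE code sets.
cvo_paths() {
  # $1 = client code-set hex (cccc), $2 = db code-set hex (dddd)
  gls="${INFORMIXDIR:-/opt/informix}/gls/cv9"
  printf '%s/%s%s.cvo\n' "$gls" "$1" "$2"   # client-to-db conversion object
  printf '%s/%s%s.cvo\n' "$gls" "$2" "$1"   # db-to-client conversion object
}

# client gb18030-2000 (1570), db utf8 (e01c)
cvo_paths 1570 e01c
```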

    --> Error -23197      Database locale information mismatch.
      Error -23197 is encountered if:
        - the code set specified by DB_LOCALE is not the same as the code set of the database locale.
        - the code set of the locale_spec used in a SET COLLATION SQL statement is not the same as the code set of the database locale.

      * Database locale is the locale used when the database was created.

      * To find your database locale, look up the dbs_collate value in the sysmaster database by executing the following command:
        SELECT dbs_collate FROM sysmaster:sysdbslocale WHERE dbs_dbsname = "<your database name>";


      * Remember, if DB_LOCALE is not set, CLIENT_LOCALE is used as DB_LOCALE.

    Seema Kumari[Read More]

    Data Studio Developer 2.1 and pureQuery for IDS

    cheetahblog Tags:  purequery tooling surange data_studio 5,566 Views

    To All Informix users, I would like to encourage you to take a look at our new IBM Data Studio Developer 2.1 offering.  There are significant enhancements we think that IDS developers will like.

    We now have new tools to view update statistics and perform object management tasks easily, such as specifying data partitions (either round robin or expression-based) and raw tables. Also, trigger support has been greatly improved to take advantage of the ability in IDS to have before, after, and ‘for each’ triggers, whereas only ‘for each’ was supported previously. These capabilities are available at no charge. Guy Bowerman talks a bit more about these features in his blog.

    In Data Studio Developer, which includes the base tooling and all the value-add capabilities, we also have many new features that IDS developers can take advantage of.

    Using our previous releases, IDS developers could improve productivity by using the SQL content assist and validation capabilities in the Java editor and to generate applications quickly using the pureQuery API. You could capture SQL from any  Java application and correlate which SQL was issued to the database and from where in the application. Using Data Studio, you could gain insight into which database tables and columns were used by the SQL. Developers and DBAs could easily isolate poorly performing queries in the application and work better together, to assess changes needed in the application as a result of database schema changes.

    Data Studio Developer 2.1 has even more to offer for Informix developers and DBAs:
    • You can now understand more about how the SQL in your application is performing using visual displays of elapsed time per SQL statement, and then you can change the SQL that is issued without having to change the application. Use these features from Data Studio and pureQuery Runtime together to help develop enterprise-ready applications before throwing them over the wall into production. 
    • You can reduce or eliminate the risk of SQL injection because it is now possible to ensure that only SQL that was previously captured and approved is allowed to execute. This is available for dynamic SQL, not just static, so by using it with pureQuery Runtime, Informix applications can take advantage of this.
    • You can share database connections, reducing the amount of communication required to share details of connection information between administrators and all the developers who need to use the databases.  Here's an entire article just on this topic.
    See my article for more details on what's available for Informix developers and DBAs. We even have a set of videos that you can check out that follow along with the article.

    The download for Data Studio Developer 2.1 should be available soon.

    This single package contains the no-charge, perpetual features available and supported for IDS customers with up-to-date maintenance contracts, as well as 30-day trial features of the value-added capabilities of the priced Data Studio Developer product. The trial features are optional during installation, but I think you'll be happy if you give them a test drive.
    In the meantime don’t hesitate to send me any feedback or suggestions that you may have. Then let me know either here or on the Data Studio Forum what you think of the new release.
     -- Sonali Surange

    [Read More]

    JDBC connectivity to IDS on Mac OS X

    cheetahblog Tags:  os jcc x jdbc mac connectivity 4 Comments 6,872 Views

    Purpose:  In this blog entry I will explain the steps to set up JDBC connectivity for IDS on Mac OS X.


        All testing for this document was prepared with the following versions of software:
    -Mac OS X:  10.5.5
    -Java:  1.5.0_16 or 1.6.0_07
    -IDS:  11.50.FC3W2
    -JCC driver:  Versions 3.52.78 or 4.2.51


    $INFORMIXDIR = directory where your Informix instance is installed.

    <"element"> = data that must be filled out specifically from your setup.  The word "element" can be replaced by "host", "port", etc.

    JCC driver =  jar and zip files necessary.
    (The JCC driver may be obtained on:   It's the file identified as "IBM Data Server Driver for JDBC and SQLJ Version 3.52 (This is platform independent)".)

    sqlhosts file = file pointed to by the INFORMIXSQLHOSTS variable in your server setup.

    {JCC_JAR_FILE} = The .jar file with all the classes for the JCC driver.  (Usually named db2jcc.jar or db2jcc4.jar).


    *Have IDS installed with a DRDA connection set up.
    *Have the JCC drivers. (The Informix sqli driver is NOT officially supported on Mac OS X.)
    *For testing purposes, install the "stores_demo" database by running "dbaccessdemo -log" (the script is available in the $INFORMIXDIR/bin directory).

    Steps to follow:

    1-Obtain the following information about your IDS server
    You want to find out the host, port, database, username and password:
    (They are needed for the JDBC URL, which is in the form of: 
    jdbc:ids://<host>:<port>/<database>:user=<username>;password=<password>; )
    host:  IP address or name of the machine running the database.  You may find this in the sqlhosts file: the third value on any line for a server hosted on the computer you want to connect to is the "host".  Write it down.  

    port:  The port on which your server is listening.  It's very important that the port be one that uses the 'drsoctcp' protocol.  To verify this, check the sqlhosts file: the second value on the line should be that protocol, and the number at the end of the line is the "port" you'll need.  Write it down.

    database:  The database is the name of the database which you will want to access.  For example, if you are using the database that the script "dbaccessdemo" creates, then the "database" name would be "stores_demo".

    username:  The user name of the user that will be connecting to the database

    password:  The password of the previously mentioned user
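
    As an illustrative sketch, the five values collected in this step plug into the URL template like so (the helper name and the sample values are made up for the example):

```shell
# Hypothetical helper: assemble the JDBC URL from host, port, database,
# username, and password, exactly as described in step 1.
jdbc_url() {
  printf 'jdbc:ids://%s:%s/%s:user=%s;password=%s;\n' "$1" "$2" "$3" "$4" "$5"
}

jdbc_url myhost 1234 test_db informix mypass
# prints: jdbc:ids://myhost:1234/test_db:user=informix;password=mypass;
```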

    2-Determine the appropriate version of the JCC driver to use.
      If you have JDK 6, you should use JCC version 4; the file for this version is "db2jcc4.jar".  If you have an earlier version of the JDK, you should use JCC version 3; the related file is "db2jcc.jar".  A way to determine which JCC version a jar file contains is:
    "java -cp {JCC_JAR_FILE} com.ibm.db2.jcc.DB2Jcc -version"

    Something to keep in mind about your version is that if you are using JCC version 4, then you don't need to use the Class.forName() method.
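
    The jar choice described above can be sketched in shell (a hypothetical helper; only the db2jcc.jar and db2jcc4.jar file names come from the text):

```shell
# Hypothetical helper: pick the JCC jar based on the Java version string,
# e.g. the output of `java -version` such as "1.6.0_07" or "1.5.0_16".
pick_jcc_jar() {
  case "$1" in
    1.6*|1.7*) echo db2jcc4.jar ;;   # JDK 6 or later: JCC version 4
    *)         echo db2jcc.jar  ;;   # earlier JDKs: JCC version 3
  esac
}

pick_jcc_jar "1.6.0_07"   # prints db2jcc4.jar
pick_jcc_jar "1.5.0_16"   # prints db2jcc.jar
```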

    3-Now it's time to create a sample app.  To see a few good samples, see:

    To test out your installation, try out the included sample java class, which is just a slightly modified version of the sample found on:

    A few things to consider about the sample:
    You must give as a command line input the information to complete the URL.  The full URL needed is in the form of:
    (to see how to obtain the information needed to fill out the URL, see step 1.)

    The sample code already includes the "jdbc:ids" part of the URL, so you must pass the rest of the URL.  For example, if the host is 'myhost', the port is '1234', the database is 'test_db', the username is 'informix' and the password is 'mypass', then to run the sample application (with it being named EzJava), you'd run it as follows:

    java EzJava '//myhost:1234/test_db:user=informix;password=mypass;'

    NOTE:  This is supposing that the appropriate jar file is included in the CLASSPATH environment variable.  If not, you must include it with the "-cp" flag.

    Keep in mind that the query executed by the sample app will probably not work with your db, so change the query accordingly to test it with a table of your own.  For this example, your database must be created with logging (buffered or unbuffered); otherwise you'll get an exception when trying to commit your connection.  If you want to use the "stores_demo" database, run "dbaccessdemo -log" to create it.

    4-Make your own app.  If you were able to compile and run the sample app without a problem, congratulations, you are now ready to develop your own app.  Enjoy!

    External links:

    Sample app:


    Info center link to JCC:

    Establishing JCC connections with IDS:

    IBM Data Server Driver for JDBC and SQLJ Version 3.52 (This is platform independent) (JCC driver):

    IDS on Mac (free trial available here):

    Update on February 26, 2009:

    The code used for this report can be found at:

    --Omar Ferrer

    [Read More]

    IBM Support Experience - Need your feedback.

    cheetahblog Tags:  support ibm esupport informix experience 5,654 Views


    We would appreciate your help in building a more robust, efficient, and client-focused IBM Electronic Support system by filling out this questionnaire:

    It should take only approximately five to ten minutes to complete, but the results will help us make your IBM Support experience the best it can be.

    Thank you for your valuable input!

    The Enterprise IBM Electronic Support team

    [Read More]

    Informix Warehouse

    cheetahblog Tags:  informix sqw ibm warehouse 6,239 Views

    IBM has just announced the Informix Warehouse feature, which consists of a powerful set of tools to build a warehouse infrastructure platform. Here is a quote from the announcement:


    From Kevin Brown, lead architect for IBM, and Jim Kobielus from Forrester Research:


    "This can save weeks of effort into just a few hours," Brown said. "In addition, customers often did without information because of the cost of effort to get the information. The lost opportunity cost savings is harder to quantify, but can be significant once they use their warehouse platform for smarter decision-making."


    The Warehouse feature includes the following components:


    SQW Design Studio - An Eclipse-based common design environment for connecting to source and target databases, creating and reverse-engineering physical data models, and building SQL-based data flows and control flows.


    SQL Warehousing Tool - a graphical environment that works with the Design Studio to enable you to quickly and easily build in-database data movements and transformations into your warehouse.


    SQW Administration Console - a web-based application for managing and monitoring the data flows that you design using the Design Studio. The Administration Console allows you to:

    • Manage common resources such as database connections and machine resources
    • Schedule the execution of control flows
    • Monitor the execution status


    Press release at:

    Informix Warehouse page on the IBM site:

    Sanjit Chakraborty

    [Read More]

    Rebuild the sysadmin database manually without restarting IDS

    cheetahblog Tags:  scheduler rebuild sysadmin dbscheduler 5 Comments 14,882 Views

    There may be situations when you need to rebuild the sysadmin database without restarting IDS. For example, IDS was previously started with the '$INFORMIXDIR/etc/sysadmin/stop' file present (or the Windows equivalent) and no sysadmin database exists in the instance. 


    Here is an example of rebuild sysadmin database manually:

    1.  cd $INFORMIXDIR/etc/sysadmin
    2.  dbaccess - db_create.sql
    3.  dbaccess sysadmin db_install.sql
    4.  dbaccess sysadmin sch_tasks.sql
    5.  dbaccess sysadmin sch_aus.sql
    6.  dbaccess sysadmin sch_sqlcap.sql
    7.  dbaccess sysadmin start.sql

    If you encounter any problems during the above process, restarting IDS will automatically rebuild the sysadmin database, provided the '$INFORMIXDIR/etc/sysadmin/stop' file does not exist.

    Sanjit Chakraborty

    [Read More]

    Data modeling - not just for architects

    cheetahblog Tags:  data_modeling infosphere_data_architect rational_data_architect gopal 5,527 Views

    Data modeling is a method used to define and analyze data requirements needed to support the business processes of an organization. Ideally, the data requirements are recorded as a conceptual model that is not platform-specific (the logical data model) with associated definitions for the entities and attributes. This logical model can serve as the ‘information hub’ to help define and enforce consistency across enterprise data. From the logical model, multiple database-specific implementations can be generated (via physical models).

    InfoSphere Data Architect (IDA) - formerly called Rational Data Architect (RDA) – is a great modeling tool for any shop that includes Informix Dynamic Server databases in its mix. Even if you don’t have a ‘data architect’, this product can help DBAs communicate with developers and also help you keep tabs on what is and isn’t “real” any more (by comparing the model of what you think you have with what your database actually does have).

    I recently updated a tutorial on using IDA specifically from the perspective of using it with IDS.   There are several enhancements to the model elements for IDS in IDA 7.5.1:
    • You can set/view extra properties for Table elements (extent size, raw table type, lock level)
    • Enhancements to constraints to support modes (enabled/disabled/filtering)
    • Trigger enhancements suited to IDS syntax including support for INSTEAD OF triggers on views
    • Generate DDL enhancements for all of the above
    Anyway, if you get a chance, download the IDA trial and check out the latest version of the tutorial and let me know what you think.

    Link to IDA trial download
    Link to an IDA demo

    [Read More]

    Live from the IIUG 2009 Conference

    cheetahblog Tags:  informix 2009 kansas iiug lenexa conference 6,487 Views

    Well - it's finally here - the 2009 Informix user group conference got underway in Kansas City today. This has all the makings of another exciting conference, and as usual we have all our Informix gurus under one roof… The conference has always been a great place to share your experiences, to learn new skills, and to meet the development and support folks who strive to give you the best possible product and service in the industry, and this year is no different. We have an impressive line-up of keynotes and technical sessions that will no doubt challenge you in terms of what to attend. 


    Today there were several tutorial sessions, from the IDS workshop to the Troubleshooting series to getting ready for the IDS 11 certification exam (you will be able to take these certifications free of charge at the conference - a $150 value). Several engineers from the support organization presented these tutorials, which is always special for me, including John Miller, Ron Privett, Mark Jamison, Randy House, David Kolbinger, Suma Vinod, Sanjit Chakraborty and Manjula Panthagani. Hopefully these sessions were very well received, and we look forward to your feedback in making this better.

    My comments would not be complete without commending the people who make this conference happen. There are several of you who have been tirelessly working to make this a success and I would like to thank each and every one of you for this opportunity. Hopefully you all get to enjoy Kansas City over the next 3 days and above all have a blast at the conference and enjoy all the sessions.

    Bye for now !


    [Read More]

    OpenAdmin Tool Security - HTTPS and ONSOCSSL

    cheetahblog Tags:  https onsocssl encryption openadmin oat security 8,113 Views

    The two major ways to secure OAT's network communication are HTTPS and ONSOCSSL.

    HTTPS:
    • Encrypts communication between web browsers and OAT's webserver.
    • Uses digital certificates to certify the identity of OAT's webserver.
    • OAT webmasters can set up OAT's webserver so that only trusted users can access OAT.
    Click here for detailed steps on how to setup HTTPS on OAT

    ONSOCSSL:
    • Encrypts communication between OAT's webserver and the IDS database.
    • Uses the IBM Global Security Kit (IBM GSKit is bundled with the IDS server).
    • onsocssl is supported on all Informix platforms, excluding MacOS.
    Click here for detailed steps on how to setup ONSOCSSL on OAT

    Leo Chan Wai Hon

    [Read More]

    Live from IIUG 2009 - Day 1 and 2.

    cheetahblog Tags:  kansas iiug informix 5,677 Views

    Combining Day 1 and Day 2 activities: it has been a busy two days with some great announcements. We started off with Arvind announcing xC4 and, most importantly, our storage optimization (i.e., compression) feature and warehouse feature.

    Compression can save you as much as 80% of disk space, and the demo at the upgrade kiosk showed a compression rate of 73%. Check out our compression site at:

    This site features an ROI tool and a white paper that is a must-read for all. It clearly articulates the value proposition of compression and how easy it is to set up and use. The OAT sessions and demos showcased this feature and its ease of use very well, and these sessions were amongst the popular ones.

    The warehouse announcement is another exciting one; it includes tooling that greatly simplifies your warehouse design and deployment and aids in your BI decisions. The demo sessions highlighting the warehouse feature were also a great attraction. Check out our warehouse site at:

    The site features a white paper that is a great read… We had another great keynote with Jerry and Kevin, who walked us through the Informix roadmap. I am sure this session gave everyone an idea of where Informix is heading and how each of you can help drive key features within the product as well. The rest of the day as usual had some great sessions, and I just did not want to highlight one over the other. Let me just state that there were almost 25 great sessions that took place through the day.

    On Tuesday, Ananth gave his keynote on cloud computing and how it can be a critical success factor in your IT plans. This is an exciting space that IBM is in, and the prospects and potential seem endless in terms of its use, especially with its cost effectiveness and ease of deployment. Guy's blog has some great entries on this, and I would encourage everyone to read up on the same.

    The sessions again were great, and the feedback has been astounding in terms of their quality. There is a reason why the IIUG conference is an attractive proposition and why you should make this a yearly event on your calendar.

    Oh, by the way, there is an interesting contest going on, sponsored by Advanced Data Tools, called the Fastest DBA, where the goal is to take an SQL statement that runs in about 30 minutes and make it run faster. The last I checked, the time to beat was 0.54 seconds or somewhere around there… We definitely need more of these at future conferences…

    The certification room was receiving its regular stream of test takers as well, and at least the ones I saw came out smiling, which is good :)

    Tomorrow is the last day of the conference and is filled with some exciting sessions. The keynote is a Q&A session that will raise some good discussions about marketing, support, sales and dev… I will sum up the conference tomorrow… enjoy the rest of the conference…


    [Read More]

    Live from IIUG - Day 3.

    cheetahblog Tags:  informix kansas iiug 1 Comment 7,785 Views

    The IIUG conference has now come to an end. Day 3 ended strong as well, and I heard very good comments on the afternoon sessions - it is always good to see folks attend the sessions right through to the last one. The morning started with a Q&A session with Alyse, Bernie, Rachel, Al and Jerry. There were some very good questions that have been at the back of every user's mind around pricing and some recent moves within support, specifically related to entitlement. We received a lot of good feedback as well, and I am sure there will be some follow-up to these. Rachel talked about our community investments, how there were several avenues to pursue to participate within the community, and how IBM could assist in these efforts. I hope we get to grow our user community even more and have a stronger presence going forward.


    The Day 3 sessions were equally good. Some sessions of note were the compression and disaster recovery sessions, which were great hits among the audience. If you were not able to attend a session and want to get your hands on the material, it will be available soon in electronic format.


    BTW, congratulations to Spokey on winning the Fastest DBA challenge - 54 seconds… {Thanks to Spokey for the clarification}


    So in summary - IIUG 2009 was yet another successful conference. We had great announcements, great speakers, great sessions and, above all, a great conference planning committee. See you all again in 2010, hopefully back in Kansas City!

    [Read More]

    940 is now End of Support

    cheetahblog Tags:  940 support eos 6,767 Views

    Please note that IDS 9.40 [All Versions] is now "End of Support". If you are planning to remain on 9.4, you should start discussing a service extension purchase with your IBM Representative; otherwise, please consider upgrading to the current IDS release.

    IDS 11.50.xC4 has several new features like storage optimization and warehouse that you can take advantage of.

    The upgrade portal is a good resource for your upgrade plans and is a definite recommendation. If you need any additional assistance or have a quick question, please feel free to email

    Happy upgrades!!


    [Read More]

    OpenAdmin Tool Version 2.24 Available Now!

    cheetahblog Tags:  openadmin oat 1 Comment 8,238 Views


    It's been five months since the last release of OAT, but hopefully it's been worth the wait.  OpenAdmin Tool version 2.24 is now here to make IDS administration even easier!

    What's new in OAT 2.24?

    • Compression   
    • The Storage Optimization feature is new to IDS in 11.50.xC4, and OAT 2.24 provides the graphical interface that makes saving disk space easy.


      Compress, uncompress, repack, and shrink operations are all available through OAT.  Save disk space by compressing tables and table fragments, consolidate free space in tables and fragments through a repack operation, and return free space to the dbspace through the shrink operation.  OAT also helps you decide which tables or fragments to compress by graphically showing you the estimated amount of space that compression will save.

    • SQL Trace for Historical Data   
    • The popular SQL Explorer feature just keeps getting better.  In OAT 2.24, the SQL Explorer now allows you to permanently store SQL Trace data in the sysadmin database.  Then you can use the SQL Explorer to perform a query drill-down to gather statistical information not only on live SQL statements, but historical SQL as well!

      Also new to the SQL Explorer is a Query Optimization Workbench, which allows you to create and save external directives to modify a query's behavior.

    • Query By Example
    • Query By Example is a new addition to OAT's SQL Toolbox. Query By Example is an easy-to-use CRUD interface that can be used to perform the standard SQL operations on a table: query, insert, update, and delete.

    • Import/Export Connections
    • OAT now supports importing and exporting IDS connection information using XML.  This feature, available on the OAT administration page, will save you time in setting up OAT, as it allows you to export OpenAdmin Tool connection information to an XML file and import it into another instance of OAT.

    • Enterprise Replication Plug-in version 2.1
    • The Enterprise Replication plug-in has been growing with each release.  Version 2.1 of the ER plug-in marks a major stride forward with its support for ER administration and setup.  The ER plug-in can now be used to do remote graphical setup, administration, and monitoring of Enterprise Replication - on the domain level, on the node level, and on the replication level.

      The new ER setup and administration support in OAT includes the ability to:   

      • Define ER servers
      • Define replicates and replicate sets
      • Define and realize templates
      • Start, stop, suspend, and resume replicates and replicate sets
      • Check replicates and replicate sets for data inconsistencies
      • Repair data inconsistencies in replicates and replicate sets
      • Synchronize data to repair inconsistencies in replicates and replicate sets
      • Repair failed transactions by using Aborted Transaction Spooling (ATS) and Row Information Spooling (RIS) files


    Download OAT Version 2.24 now at

    For additional information on OpenAdmin Tool for IDS, including feature details, screenshots and demos, go to  To post comments or questions about OAT, use the IIUG OAT forum.

    Erika Von Bargen

    [Read More]

    Tips for Upgrading OAT

    cheetahblog Tags:  openadmin oat 6,990 Views

    This article will provide tips on the easiest ways to upgrade your OpenAdmin Tool installation.

    But first, we must start with describing the two different ways to install OAT:

    1. Automated Installer

       The automated OAT installer provides an automated way to install and configure OAT and all of its prerequisites.  The following products are included as part of the automated installer:

      • OpenAdmin Tool for IDS (and the ER plug-in)
      • IBM Informix I-Connect 3.50
      • Apache 2.2.4
      • PHP 5.2.4
      • PDO_INFORMIX 1.2.6

      What platforms? The automated installer is only available on the following platforms: Windows, Linux, MacOS.

      When to use it?  The automated installer is really intended for first-time installations.  Its purpose is to drastically simplify the initial step of installing and configuring OAT's prerequisites.  The automated installer is limited to users who do not require custom web server configurations and who are installing on Windows, Linux, or MacOS.

    2. Manual Installation

       With a manual installation, you install and configure your own Apache, PHP, PDO_INFORMIX, and Informix CSDK or I-Connect prior to installing OAT.  Under this installation method, OAT comes as a zip or tar file that you simply extract into the web server's document root directory after all prerequisites have been installed and configured.

      What platforms?  Any platform on which you can get OAT's prerequisites installed, although for some platforms this can be tricky.

      When to use it?  You are installing on a platform other than Windows, Linux, or MacOS.  Or you have special web server configuration needs that are not supported by the Apache web server shipped with the automated installer.  Or you already have a web server/PHP setup, which is exactly your situation if you are upgrading OAT!

    Upgrading OAT

    So now that we are clear on the initial installation choices, we can finally get to talking about upgrading OAT.

    The Automated Installer only supports full installations.  It does not support installing individual components (e.g. OAT only), or upgrading components of the installation.  Therefore using the automated installer for upgrading OAT only makes sense if you want a completely new installation of the webserver, PHP, and OAT.  (Note: on Windows, this will require completely removing your old installation first.)

    Therefore, the recommended method for upgrading OAT is to use the manual install package.  This makes sense because you already have the web server, PHP, PDO_INFORMIX, and I-Connect installed and configured.  When upgrading, all you need is the new version of the OAT product -- which is just a set of PHP files.  To upgrade, download the OAT zip or tar file, extract it into your web server's document root directory, and then visit the OAT install URL (http://<servername>/<oat_directory>/install).
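    The extract-and-visit steps above can be sketched as a short shell session.  The document root, archive name, and directory names here are all placeholders (and the script fabricates a tiny stand-in tar so the sketch runs anywhere); substitute your real paths and the OAT file you actually downloaded.

```shell
#!/bin/sh
set -e
# Placeholder paths: DOCROOT stands in for your web server's real
# document root (e.g. /var/www/html).
DOCROOT="${DOCROOT:-$(mktemp -d)}"
OAT_TAR="${OAT_TAR:-oat_new.tar}"

# For illustration only: build a fake OAT "release" tar if none exists,
# so this sketch is runnable without a real OAT download.
if [ ! -f "$OAT_TAR" ]; then
    STAGE=$(mktemp -d)
    mkdir -p "$STAGE/openadmin"
    printf '<?php /* placeholder */\n' > "$STAGE/openadmin/index.php"
    tar -cf "$OAT_TAR" -C "$STAGE" openadmin
fi

# Extract the new OAT version into the document root...
tar -xf "$OAT_TAR" -C "$DOCROOT"

# ...then visit http://<servername>/openadmin/install in a browser to
# upgrade the OAT connections database.
echo "Extracted OAT under $DOCROOT/openadmin"
```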

    When you take the manual upgrade route of extracting the new version of OAT into your web server's document root directory, you have two choices:

    1. Extract over the existing installation of OAT

       This method is fast and easy.  You just extract the newest OAT tar or zip file over the existing OAT directory, visit the OAT install URL (http://<servername>/<oat_directory>/install) to upgrade your OAT connections database, and you're on your way.  The disadvantage is that you lose your previous OAT installation, including any custom changes you may have made to the files shipped as part of the OAT product.

    2. Extract into a new directory in your web server's document root (Recommended)

       This allows you to preserve your existing OAT installation should you ever want to go back to it.  But what about preserving the connection information from your previous OAT installation?  Again there are two options:

      • You can make a copy of your previous connections.db (see the OAT Admin Config page for its location) and point your new installation of OAT to the copy.  When you visit the OAT install URL (http://<servername>/<oat_directory>/install) of the new installation, it will automatically upgrade the connections.db to the latest version and preserve all of your connection information.  However, because connections.db stores OAT plug-in information in addition to IDS server connection information, you will either have to install the same plug-ins into the new version of OAT or copy the OAT plug-in directory from your previous installation into the new version.
      • The other option for preserving IDS server connection information when upgrading OAT is to use the new OAT 2.24 import/export connections feature.  This allows you to export OAT's IDS connection information into an XML file and then import it into another installation of OAT.  Of course, here too, if you want the same plugins as your previous installation of OAT, you'll still need to use the Plug-in Manager to install the plug-ins in the new OAT installation.
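    The first option above, copying connections.db and the plug-in directory into the new installation, might look like the following shell sketch.  All paths here are placeholders (the script fabricates stand-in files so it runs anywhere); check the OAT Admin Config page for your real connections.db location.

```shell
#!/bin/sh
set -e
# Placeholders for the old and new OAT installation directories.
OLD_OAT="${OLD_OAT:-$(mktemp -d)}"
NEW_OAT="${NEW_OAT:-$(mktemp -d)}"

# For illustration only: fabricate the old layout so the sketch runs.
mkdir -p "$OLD_OAT/conf" "$OLD_OAT/plugin/er" "$NEW_OAT/conf"
[ -f "$OLD_OAT/conf/connections.db" ] || : > "$OLD_OAT/conf/connections.db"

# 1. Copy the previous connections database into the new installation;
#    visiting the OAT install URL afterwards upgrades it in place.
cp "$OLD_OAT/conf/connections.db" "$NEW_OAT/conf/connections.db"

# 2. connections.db also records plug-in information, so carry the
#    plug-in directory over (or re-install the plug-ins instead).
cp -R "$OLD_OAT/plugin" "$NEW_OAT/"
```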

    For additional information on OpenAdmin Tool for IDS, including feature details, screenshots, demos, and the download link, go to  Any questions about OAT or its installation/upgrade can be posted on the IIUG OAT forum.

    Erika Von Bargen

    [Read More]

    New onstat Utility Quick Reference and Portal

    cheetahblog Tags:  quick onstat reference 3 Comments 12,110 Views

    Overwhelmed by all of the onstat options? Need to find which commands work with ER and which work with HDR? With generous help from Tech Support, the ID team has produced an onstat Quick Reference Card and an onstat portal that lists onstat utility commands by functional category. Categories include:

    • Archive Information Options
    • Cache Information Options
    • Debugging Options
    • Enterprise Replication Options
    • High-Availability Replication Options
    • I/O Options
    • Locks and Latches Options
    • Logs Options
    • Memory Options
    • Other useful onstat Utility Options
    • Network Options
    • Performance Checks (First Tier)
    • Performance Checks (Second Tier)
    • Table Options
    • Thread Options
    • User/Session Options
    • Virtual Processor Options
    • Waiting Options

    Each category represents a different IDS feature for which onstat commands provide troubleshooting and performance information. Commands that appear in bold typeface are especially useful for providing troubleshooting information. Certain onstat commands are specific to one category while others provide general information and are listed in more than one category. The card is designed as a double-sided trifold brochure and looks best when printed in color.
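    A few of the standard onstat invocations behind these categories, all run against a live IDS instance, are shown below for orientation (the category mappings are this article's grouping, not flags of their own):

```shell
onstat -g ath    # thread options: list all server threads
onstat -u        # user/session options: user thread activity
onstat -g ses    # user/session options: per-session detail
onstat -k        # locks options: currently held locks
onstat -l        # logs options: physical and logical log status
onstat -d        # dbspace and chunk status
onstat -p        # performance checks: server profile counters
onstat -g mem    # memory options: pool usage
onstat -m        # recent online message log entries
```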

    The quick reference card can be found here (you'll need to log in with your IBM ID):

    The onstat portal (with links to the documentation for each command) can be found here:

    Bill Belisle


    [Read More]

    OpenAdmin Tool Version 2.25 Available Now!

    cheetahblog Tags:  oat openadmin 1 Comment 9,763 Views

    What's new in OAT 2.25?

    • Schema Manager Plug-in version 1.0
    • Introducing a new plug-in for OAT: the Schema Manager.  Use this plug-in for a dashboard-like view of the databases and tables on your IDS database server.  For databases, see stored procedures, functions, sequences, UDTs, privileges, DataBlades, aggregates, casts, and opclasses from one single view.  For tables, views, and synonyms, the Schema Manager shows you information about columns, indexes, references, constraints, privileges, UDTs, fragments, and triggers.  The Schema Manager also lets you drill down on any of its pods for more detailed information.


      A demo of the new Schema Manager functionality is available here.  More enhancements to this plug-in will be coming soon, so stay tuned!

    • Enterprise Replication Plug-in version 2.2   
    • The Enterprise Replication plug-in administration capability has been expanded to allow the modification of objects:

      • Modify a replicate set by adding or deleting replicates
      • Modify a replicate set by changing the replication frequency for the replicates in the set

      Download OAT Version 2.25 now at And start following OAT on Twitter:!

      For additional information on OpenAdmin Tool for IDS, including feature details, screenshots and demos, go to  To post comments or questions about OAT, use the IIUG OAT forum.

      Erika Von Bargen

    [Read More]

    OpenAdmin Tool Version 2.27 Has Arrived!

    cheetahblog Tags:  openadmin oat 8,531 Views

    OpenAdmin Tool Version 2.27 Has Arrived!

    What's new in OAT 2.27?

    • UI Translations in 13 languages
    • Brazilian Portuguese, Chinese Simplified, Chinese Traditional, Czech, German, Hungarian, Italian, Japanese, Korean, Polish, Russian, Slovak, and Spanish

    • Connection Manager Proxy Support
    • The Connection Manager wizard on OAT's MACH page now allows you to configure the Connection Manager as a proxy server when clients connect to Informix data servers from outside a firewall (new to 11.50.xC6).

    • Onstat Utility Support
    • Ever want to run onstat commands remotely? Now you can. OAT 2.27 supports running any onstat command you like directly from your web browser!


    • Schema Manager Plug-in version 1.2
    • The Schema Manager plug-in arrives with its first installment of 'admin' actions.

      • Create tables, including external tables
      • Load and unload data from external tables (11.50.xC6 only)
      • Monitor the status of load and unload tasks (11.50.xC6 only)
      • Drop tables
      • Truncate tables


      A demo of the new create table and load from external table functionality is available at

      More schema management and administration actions to come, so stay tuned!

    • Enterprise Replication Plug-in version 2.4
    • ER plug-in version 2.4 now contains virtually all administration commands available for IDS Enterprise Replication!  Besides rounding out the suite of supported administration commands, the focus of ER plug-in version 2.4 was usability.

      • Improve the performance of the consistency check and synchronization tasks, using WHERE clauses and parallelism options.
      • Change the master replicate server for a replicate.
      • Instantiate a template on multiple ER servers simultaneously.
      • Delete ER objects including servers, replicates, replicate sets, and templates. Also delete information about completed check and synchronization tasks.

      Download OAT Version 2.27 today, for free, at

      More information on OpenAdmin Tool for IDS, including demos, can be found at

      To post comments or questions about OAT, use the IIUG OAT forum. Feedback is always welcome!

      Erika Von Bargen