datetime(1970-01-01 00:00:00) year to second +
(dbinfo('utc_current')/86400)::int::char(9)::interval day(9) to day +
(mod(dbinfo('utc_current'), 86400))::char(5)::interval second(5) to second
The Informix time types do not embed time zone information within the values themselves, so once you have a value, you must know its time zone or you don't really know when the event happened in the real world. One solution is to save the time zone information separately; another is to store the value as a string with the time zone appended; a third is to always store time values in one known time zone. A natural choice for that known time zone is Coordinated Universal Time (UTC). However, it is not necessarily simple to generate UTC values from within SQL; TODAY and CURRENT return values in the server's time zone.
The DBINFO function can be used to obtain the current time in UTC as an integer value that is the number of seconds since the epoch. (See the Informix Guide to Syntax: DBINFO Function > Using the 'utc_current' Option.) Getting from this count of seconds to a DATETIME value may not be straightforward, but here is an example:
This expression works by creating a DATETIME at the epoch and adding to it the number of seconds that have passed since then. This is complicated by two factors: there is no cast (conversion) from INTEGER to INTERVAL, and the number of seconds that have passed since the epoch contains more digits than can fit in an INTERVAL SECOND(n) TO SECOND. There is, however, a conversion from CHAR to INTERVAL, which solves the first problem. The precision problem can be solved by breaking the original value into two parts: a number of days and a number of seconds. There are 86400 seconds in a day (24 * 60 * 60 = 86400). The second part of the expression adds the number of whole days since the epoch; the last part adds the remaining seconds.
DBINFO('utc_current') is evaluated at the start of statement execution, so there is no problem with the two calls to DBINFO returning different values.
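If you want to check the day/second arithmetic outside SQL, here is a Python analogue of the expression above (Python and the function name are ours, for illustration only):

```python
from datetime import datetime, timedelta

# Mirror the SQL expression: split the epoch-second count returned by
# DBINFO('utc_current') into whole days and leftover seconds, then add
# both intervals to a DATETIME at the epoch.
def utc_from_epoch_seconds(utc_current):
    days, seconds = divmod(utc_current, 86400)   # 86400 seconds per day
    return datetime(1970, 1, 1) + timedelta(days=days, seconds=seconds)
```

The divmod call performs the same split the CHAR-to-INTERVAL casts do: the quotient becomes the DAY interval and the remainder becomes the SECOND interval.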
The fetch buffer size (FET_BUF_SIZE, FBS) setting applies to Informix client-server communications when using the SQLI protocol. While it can be set for connection types other than TCP (shared memory or pipes), as we will see, it is only really useful for TCP connections.
When the server responds to a client request for data, it arranges the data in a buffer of the size requested by the client. The data is arranged as tuples, or rows. When the buffer is full, or the server has come to the end of the rows being fetched, the buffer is copied over the communication channel to a like buffer within the client application's process memory. The client API (ODBC, JDBC, ESQL...) reads from the buffer, and copies or transforms the data into application variables as the application requests rows. When the client comes to the end of the buffer and there is still more data in the result set, the client requests another buffer of data from the server.
The fetch buffer size setting is intended to allow the application to reduce the number of times the client API has to request another buffer. This setting is very much in the domain of speed versus size tradeoffs. Since at least one buffer has to be sent, there is no advantage to setting the buffer size to be larger than the typical, expected result set. Also, each buffer is associated with one executing statement; so, if there are lots of statements being processed concurrently, the size of the buffer can have an impact on the application's memory footprint.
A simple way of thinking about how the fetch buffer size affects application performance is in terms of the number of buffers per result set multiplied by the inherent delay between the client and server. If the result set is small, or the communication channel has little to no delay (which is always the case with shared memory or pipe connections), there is no significant benefit to larger buffer sizes. What follows are the results of a series of tests in which the result set size, fetch buffer size, and network delay are varied.
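The buffers-times-delay intuition can be written down as a toy cost model. The function below is an illustration only: the names and the default bandwidth figure are made up for the sketch, not something measured in these tests.

```python
import math

# Toy model: total fetch time is the raw transfer time plus one network
# round-trip delay for each buffer the client has to request.
def fetch_time(result_bytes, buf_bytes, delay_s, bandwidth_bps=100e6):
    buffers = max(1, math.ceil(result_bytes / buf_bytes))
    transfer_s = result_bytes * 8 / bandwidth_bps
    return transfer_s + buffers * delay_s
```

With a small result set the buffer count is 1 regardless of buffer size, so the delay term is constant; with a large result set over a 100 ms link, shrinking the buffer multiplies the delay term dramatically. This matches the shape of the test results below.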
These tests were written using the ODBC interface, but the pattern of results should be the same across all Informix APIs when using the SQLI protocol. The client and server are on the same physical machine, with the client running under Windows and the Informix server on a Linux virtual machine. The network delay is controlled using Linux's network emulation (netem) functionality. The delay is in milliseconds; it is roughly equivalent to what a 'ping' command will report as the delay between the host and the client.
The data do not actually exist in the database; they are manufactured at query execution time through the use of views created to multiply the base result set.
This configuration is not representative of any real-world scenario. The actual performance of this set of queries is not important; the important aspect of these tests is the performance of one condition relative to another. So the data have been converted to measurements of relative efficiency by dividing each data point within a series by the maximum value within that series. The most efficient setting tested has a relative efficiency of 1.0, and the rest fall between 0.0 and 1.0.
The empirical results match predictions. As the size of the result set increases, increasing the fetch buffer size reduces the effect of slow communication between the client and the server. When sending large results to the client, it is possible to compensate for a slow network with a large fetch buffer. When the result set is small, or the network has little delay between the sending and receiving of packets, the fetch buffer size setting has little effect.
Example 1: Small result set size - 1K
Increasing the fetch buffer size beyond the size of the result set has no meaningful effect, across all network conditions.
Table 1. Result set 1K data
Example 2. Moderate result set size – 256K
When there is a negligible network delay, the size of the fetch buffer has negligible effect. As the network delay increases, the effect of increasing the fetch buffer size becomes more significant. When the network delay is 100ms, increasing the fetch buffer can increase performance by over 6 times.
Table 2. Result set 256K data
Example 3. Large result set size – 4MB
When working with large result sets over a slow network, setting a large fetch buffer size can be crucial to performance. Notice that the data transfer rate of the largest buffer over the slowest network is only about 15% slower than the fastest condition, and 12 times faster than the same transfer using a small buffer.
Table 3. Result set 4MB data
The results above should provide a useful guide to an appropriate fetch buffer size setting, based on the actual queries being performed and the network infrastructure in use. The main drawback of a large fetch buffer size is limited to the point at which the memory footprint of the application starts to impinge on the memory requirements of other processes.
Both CSDK version 3.50 and CSDK 3.70 are supported on the Windows 7 operating system, which makes it difficult to decide which one to use. A useful way to decide which CSDK to use is:
The 'deadlock' mutex ensures that there is never more than one session checking for a deadlock situation at any given time. Whenever a lock is requested by a session that has used the SET LOCK MODE TO WAIT [seconds] statement, the deadlock-checking code gets executed.
The deadlock-checking code starts by marking all the sessions in the instance with an internal flag saying 'not in deadlock'. Then it walks through the list of all the locks held by the current session (including the newly requested one), and for each lock it identifies whether any sessions are waiting for it. If such a session (called a waiter) is found, its internal flag is changed to 'deadlock candidate' and the waiter is added to the so-called 'deadlock list'. Each of the waiters in this list is then checked in the same way: the code checks whether any of the waiters holds a lock the current session is waiting for. If such a lock is found, a deadlock is detected.
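The walk described above can be sketched in Python. This is a simplified illustration of the algorithm as described, not Informix source code; the data structures (dictionaries mapping sessions to held locks and awaited locks) are invented for the sketch.

```python
def detects_deadlock(current, holds, waiting_on):
    # holds:      session -> set of lock ids the session holds
    # waiting_on: session -> lock id the session is waiting for (or absent)
    deadlock_list = []
    flagged = set()
    # for each lock held by the current session, find sessions waiting on it
    for lock in holds.get(current, ()):
        for s, w in waiting_on.items():
            if w == lock and s != current and s not in flagged:
                flagged.add(s)            # mark as 'deadlock candidate'
                deadlock_list.append(s)
    while deadlock_list:
        waiter = deadlock_list.pop()
        # does this waiter hold the lock the current session is waiting for?
        if waiting_on.get(current) in holds.get(waiter, ()):
            return True                   # deadlock detected
        # otherwise follow the chain: sessions waiting on the waiter's locks
        for lock in holds.get(waiter, ()):
            for s, w in waiting_on.items():
                if w == lock and s != current and s not in flagged:
                    flagged.add(s)
                    deadlock_list.append(s)
    return False
```

In the classic two-session case (A holds L1 and waits for L2, B holds L2 and waits for L1), the check started for A flags B as a candidate, finds that B holds the lock A is waiting for, and reports a deadlock.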
- Tomas Zahradnik
There is a new utility, introduced in 11.70.xC4, that compares two Informix configuration (ONCONFIG) files and reports the differences. It is really helpful during the upgrade process. You can use it to compare your current ONCONFIG to an ONCONFIG from a newer version.
The onconfig_diff utility is located in $INFORMIXDIR/bin. There are two ways to run onconfig_diff, but this document focuses only on the direct comparison of two ONCONFIG files. Here are some ways you can use the utility:
The usage of the onconfig_diff utility is as follows:
Here is an example of onconfig_diff:
$ onconfig_diff -c -f onconfig.std -s onconfig.production
Following is an output from the above command:
At present, the onconfig_diff utility is not available on the Windows platform, but hopefully it will be available soon.
When products are built with different versions of CSDK, or include a different version of CSDK, they must be installed in separate directories. Informix server 11.50 is shipped with a Client SDK (CSDK) whose version matches the server version: Informix 11.50.xC9 ships with CSDK 3.50.xC9, where 'x' is F for 64-bit systems and U, T, or H for 32-bit systems. Likewise, Informix server 11.70 is shipped with a Client SDK whose version matches the server version; Informix 11.70.xC5 ships with CSDK 3.70.xC5, and so on.
Tools such as the 4GL compiler, debugger, rapid development system, and ISQL (and their respective runtimes) are also built with CSDK. The version of CSDK used depends on the current release level of CSDK at build time, and on whether the tool is intended to be used with Informix 11.50 or Informix 11.70.
Because of product architectural issues, 3.50.* and 3.70.* libraries cannot be mixed. Doing so will lead to application failure. Failure symptoms may include, but are not limited to, blank form screens and core dumps. Application program failures do not appear to affect data in the database. When you install tools in the same directory, you may overwrite the library of one product with the same library from the second product. This is not a problem as long as the products are built with the same version of CSDK, down to the minor release level (e.g. 3.50.xC5, where C5 is the minor release). If the products are built with different versions of CSDK, it may cause problems which lead to application program failure.
If you have products which have been built with different versions of CSDK, and installing them in the same directory has led to problems such as blank screens and core dumps, then segregate the products by CSDK build version. Set your PATH and LD_LIBRARY_PATH variables as appropriate for the users. You can find the version of CSDK used to build a tool by untarring the product file and checking the machine notes file under $INFORMIXDIR/release/<language>, or by going online to the Information Center for the appropriate Informix server and drilling down to Release Information, then Informix 4GL, then the particular release version, then Release Notes. In that file you will find the version of CSDK used to build the tool.
Note that the version of the tool and the version of CSDK used are not necessarily the same. For example, 4GL 7.50.xC6 was built with CSDK 3.70.xC4.
The fetch buffer is a memory location in a client-server environment where cursor data received from the server is held before being transmitted to the application. Setting a bigger fetch buffer size can significantly improve query performance by reducing network traffic. It is the amount of data that the client accepts from the server without pausing to send a "received" message that makes the difference. Starting with Informix version 11.70.xC5, you can set the maximum size of the fetch buffer to 2 GB to increase query performance.
The FET_BUF_SIZE environment variable is used to set the size of the application fetch buffer. This environment variable can override the default fetch buffer size for all data types except BYTE and TEXT values. An ANSI database needs additional consideration: you must set transactions to READ ONLY mode for the FET_BUF_SIZE environment variable to improve performance; otherwise rows are returned one by one.
A valid FET_BUF_SIZE setting is in effect for the local database server and for any remote database server from which you retrieve rows through a distributed query in which the local server is the coordinator and the remote database is subordinate. The greater the size of the buffer, the more rows can be returned, and the less frequently the client application must wait while the database server returns rows. A large buffer can improve performance by reducing the overhead of filling the client-side buffer.
If the FET_BUF_SIZE setting is invalid, Informix does not generate an error; it simply uses the default fetch buffer size.
The combination of the fetch buffer, the communication buffer (the "b" option in the sqlhosts file), the network driver buffer size, and the application cursor can enhance query performance further.
You can use HTTPS to protect the IBM OpenAdmin Tool (OAT) for Informix web server from eavesdropping, tampering, and message forgery. When HTTPS is enabled, messages from OAT clients are encrypted before they are sent to the OAT web server. Encryption prevents hackers from listening over the line and stealing sensitive information. When HTTPS is enabled, OAT clients can also authenticate with the OAT host, so that hackers cannot deceive OAT clients with fake OAT web servers.
For detailed, step-by-step instructions on how to set up HTTPS for the OAT web server on Linux and Windows, download the "Securing the OpenAdmin Tool for Informix web server with HTTPS" PDF file.
- Erika Von Bargen
The other day I was setting up a high availability cluster environment and ran into an interesting problem. I followed all the necessary instructions for setting up an RSS server. However, the RSS server got stuck in the recovery process, and the message log on the primary server reported an error saying it could not send a log. For example, after executing the following command on the RSS server to set the data replication type:
onmode -d RSS <primary server name>
The RSS server was stuck in recovery mode, and the message log on the primary server showed the following messages:
RSS Server <RSS server name> - state is now connected
The log number mentioned in the error message was not close to the current log on either the primary or the RSS server. For example, the current log on the primary was 7438 and on the RSS server 7436, but the message log stated 'Can not send log 825241904'. So where was the server getting an out-of-sequence log number?
Initially I thought it was some kind of corruption. However, after some investigation I figured out that I was using delayed application (DELAY_APPLY) on the RSS server, and the directory specified with the LOG_STAGING_DIR configuration parameter was holding some unwanted file(s); for example, the file 'ifmxUniqueLog_825241904' in LOG_STAGING_DIR. So during recovery the RSS server requested log number 825241904 from the primary, but that log did not exist on the primary server.
I will reiterate a few existing features in Informix that you can use to optimize the connection to the database. Establishing a connection to a database is always an expensive operation; the goal should be for applications to perform as many operations as possible while connected. The following are some connection attributes that can be used to tune application performance by reducing network traffic:
Insert optimization (SQL_ENABLE_INSERT_CURSOR): Creating an insert cursor is a good way to efficiently insert rows into a table in bulk. You need to set the SQL_ENABLE_INSERT_CURSOR attribute with SQLSetStmtOption(), then call SQLParamOptions() with the number of rows as a parameter. An insert cursor creates a buffer in memory to hold a block of rows. The buffer receives data from the application program; once the buffer is full, the data is sent to the database server in a block. The buffer reduces the amount of communication between the program and the database server. As a result, the insertions go faster.
Automatically free cursors (AUTOFREE): In general, when an application uses a cursor, it sends CLOSE and FREE statements to the database server to deallocate memory. Execution of these statements involves a round trip of message requests between the application and the database server. The automatic free feature (AUTOFREE) reduces the number of round trips by one. You can use the IFX_AUTOFREE environment variable or the SQL statement SET AUTOFREE to enable this feature.
I have only provided a few touch points here; there are several other ways to improve the network and the connection to the database. Please check "Improve Application Performance" in the Informix Information Center for details.
How often do you purge Informix log files? I am sure most of you clean up your log files on a regular basis. However, for some people, purging log files happens only as a reaction when troubleshooting a full file system. Did you know there is a way within Informix to programmatically manage common log files, e.g., the message log, bar_act.log, bar_debug.log, etc.? Informix introduced this capability in version 11.70.xC3 to target embedded environments, where disk space is fixed and the database instance must run for years without intervention from administrators. Nevertheless, administrators can take advantage of this functionality to organize Informix log files outside the embedded environment as well. The Auto Rotate Log feature provides a platform-independent way to automatically manage Informix log files. This feature is implemented via the database scheduler, which provides the flexibility to use it from dbaccess using the SQL admin API, as well as from OAT.
Using this feature you can rotate, truncate, or delete log files. It can be run manually or automatically; however, to perform these operations automatically you need to enable the feature first. Several new admin API commands were added for this feature:
Manually Rotate Log File
The rotate operation switches to a new log file and increments the ID numbers of the previous log files by one. When the maximum number of log files is reached, the oldest log file is deleted. To rotate "/usr/informix/online.log" and keep the last 10 files, use:
EXECUTE FUNCTION task("file rotate", "/usr/informix/online.log", 10);
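The rotation scheme can be illustrated with a small Python sketch. This is our own illustration of rotate-style numbering, with an assumed naming convention (rotated copies kept as <log>.1, <log>.2, ..., highest number oldest), not the product's implementation.

```python
import os

# Illustrative rotation: shift every existing numbered copy up by one,
# drop the copy that would exceed max_files, move the live log to .1,
# and start a fresh, empty live log.
def rotate(path, max_files):
    oldest = f"{path}.{max_files}"
    if os.path.exists(oldest):
        os.remove(oldest)                      # delete the oldest copy
    for i in range(max_files - 1, 0, -1):      # increment IDs by one
        src = f"{path}.{i}"
        if os.path.exists(src):
            os.rename(src, f"{path}.{i + 1}")
    if os.path.exists(path):
        os.rename(path, f"{path}.1")           # current log becomes .1
    open(path, "w").close()                    # switch to a new log file
```
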
Manually Truncate Log File
The truncate operation removes the contents but keeps the log file. To truncate "/usr/informix/bar_act.log", specify:
EXECUTE FUNCTION task("file truncate", "/usr/informix/bar_act.log");
Manually Delete Log File
The delete operation removes the file. To delete the entire "/usr/informix/bar_debug.log" file, specify:
EXECUTE FUNCTION task("file delete", "/usr/informix/bar_debug.log");
Automatic Rotate Log
Three tasks were added to the sysadmin:ph_task table to automatically rotate log files: online_log_rotate, bar_act_log_rotate, and bar_debug_log_rotate. To rotate a log automatically, the tk_enable column in the sysadmin:ph_task table must be enabled for the appropriate task. You only need to enable each task once.
You can manage log files from OAT v.2.73 and higher. The rotate, delete and truncate log operations are available under “Logs” section on the left pane.
We all know how critical it is to keep accurate and up-to-date database server statistics. Update statistics are a key part of getting consistent performance. A very common question is: which UPDATE STATISTICS commands should one run?
The following SQL script offers a way to generate a list of all the necessary UPDATE STATISTICS commands for a database. You just need to replace the "<Database Name>" string with an appropriate database name. The script creates an output file called 'update_stat_cmd.sql' in the current directory.
----- Beginning of the SQL script -----
SELECT name, value FROM ph_threshold
WHERE name IN ("AUS_AGE","AUS_AUTO_RULES")
INTO TEMP temp_threshold;

UPDATE ph_threshold
SET value = -1
WHERE name = "AUS_AGE";

UPDATE ph_threshold
SET value = 1
WHERE name = "AUS_AUTO_RULES";

SELECT tk_next_execution FROM sysadmin:ph_task
WHERE tk_name = "Auto Update Statistics Evaluation"
INTO TEMP temp_ph_task;

EXECUTE FUNCTION sysadmin:exectask ("Auto Update Statistics Evaluation");

UNLOAD TO 'update_stat_cmd.sql' DELIMITER ';'
SELECT aus_cmd_exe FROM aus_command
WHERE aus_cmd_dbs_partnum = (
    SELECT partnum FROM sysmaster:sysdatabases
    WHERE name = "<Database Name>"
    AND bitand(flags, 3) > 0);

UPDATE ph_threshold
SET value =
( SELECT value FROM temp_threshold
  WHERE name = "AUS_AGE" )
WHERE name = "AUS_AGE";

UPDATE ph_threshold
SET value =
( SELECT value FROM temp_threshold
  WHERE name = "AUS_AUTO_RULES" )
WHERE name = "AUS_AUTO_RULES";

UPDATE sysadmin:ph_task
SET tk_next_execution =
( SELECT tk_next_execution FROM temp_ph_task
  WHERE tk_name = "Auto Update Statistics Evaluation" )
WHERE tk_name = "Auto Update Statistics Evaluation";
----- End of the SQL script -----
You can run the above SQL script against the sysadmin database to generate UPDATE STATISTICS commands according to the Informix guidelines.
The above SQL script depends on the sysadmin database and the database scheduler; it will fail if the database scheduler is turned off.
Informix and the Minguo “Y1C Problem”
The “Y1C Problem” affects Minguo dates, a calendar currently used in the Republic of China (Taiwan). The issues are similar to those encountered in the Y2K Problem.
The Minguo calendar begins with year 1 in 1912, the year the Republic of China was founded. Months and days in the Minguo calendar are the same as the Gregorian calendar. 2010 is year 99 of the Minguo calendar, and year 2011 will be year 100 - the year the Y1C problem is encountered.
The first Minguo century is from Minguo year 1 (1912) through 100 (2011) and the second Minguo century is from Minguo year 101 through 200. The century part of the Minguo year refers to the digit in the hundreds numeric position, not the century the year belongs in (just as the year 2010 is in the 21st century but the century part of the year 2010 is 20).
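The year arithmetic above is simple enough to state as code (the helper names below are ours, for illustration only):

```python
# Minguo year 1 = Gregorian 1912, so the offset between the calendars is 1911.
def gregorian_to_minguo(year):
    return year - 1911

# The 'century part' is the hundreds digit of the Minguo year,
# not the ordinal century the year belongs to.
def minguo_century_digit(minguo_year):
    return minguo_year // 100
```

So 2010 maps to Minguo 99 (century digit 0), while 2011 maps to Minguo 100 (century digit 1) — the rollover that triggers the Y1C problem.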
The Y1C problem in IDS has two issues:
3 digit year support:
Prior to a change made in October 2008, only 2 or 4 digit year formats were supported for Minguo years, e.g. DMY2/C1 and DMY4/C1. The change added support for the DMY3/C1 format specifier. The fix was implemented in versions 11.10xC3, 10.00xC10 and 11.50xC1.
Minguo support of the DBCENTURY environment variable
The second fix adds DBCENTURY support for Minguo years. Prior to the fix (11.50xC7), DBCENTURY settings were ignored when formatting Minguo dates. If you specified a 2 digit year in a Minguo date and formatted it to a 3 digit year, it would always have returned a year with the century digit set to 0. With the fix, the century part of a 2 digit year formatted to a 3 or 4 digit year depends on the DBCENTURY setting.
DBCENTURY settings and their meanings:
R – Current century - century from the current date
P – Past century - century for the date earlier than current date
F – Future century - century for the date later than current date
C – Closest century - century for date closest to current date
The following table demonstrates the results of formatting Minguo dates with 2 digit years into Minguo dates with 3 digit years with various DBCENTURY settings and current dates. The “Current Date” column is the date the action occurs on.
Environment variables (other than DBCENTURY):
Without the fix, the 3 digit formatted dates would all have had a 0 in the century (100’s) place. Note that if a -1 century would be returned, we will return a 0 instead since there is no -1 century in the Minguo calendar.
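As a rough, year-only sketch of how the four DBCENTURY settings choose a century for a 2 digit Minguo year: the function below is our simplification for illustration. The real resolution works on full dates, not bare years, so treat this only as a mnemonic for R/P/F/C.

```python
def resolve_year(yy, current_year, dbcentury):
    """Expand a 2-digit Minguo year yy relative to the current Minguo year."""
    cc = current_year // 100
    # candidate expansions in the previous, current, and next century;
    # there is no -1 century in the Minguo calendar, so clamp at 0
    cands = [c * 100 + yy for c in (cc - 1, cc, cc + 1) if c >= 0]
    if dbcentury == "R":                   # current century
        return cc * 100 + yy
    if dbcentury == "P":                   # nearest past (or equal) year
        return max(y for y in cands if y <= current_year)
    if dbcentury == "F":                   # nearest future (or equal) year
        return min(y for y in cands if y >= current_year)
    if dbcentury == "C":                   # year closest to the current one
        return min(cands, key=lambda y: abs(y - current_year))
    raise ValueError("unknown DBCENTURY setting")
```

For example, with the current Minguo year at 99 (2010), a 2 digit year of 48 resolves to 48 under R or P, but to 148 under F or C.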
Minguo DBCENTURY support is included in 11.50xC7.
Example of use:
Create the following two files, then run test.sh.
if [ "x$INFORMIXSERVER" = x ]; then
    echo "This file must be run from within an IDS environment"
    exit 1
fi
cat minguo.sql | dbaccess - -
cat minguo.sql | dbaccess - -
cat minguo.sql | dbaccess - -
cat minguo.sql | dbaccess - -
drop database zhtw;
create database zhtw;
create table t1(c1 date);
insert into t1 values('08/08/99');
insert into t1 values('08/08/02');
insert into t1 values('080848') ;
insert into t1 values('080852') ;
select * from t1;
Informix Enterprise Replication made easy with OpenAdmin Tool
Check out this new developerWorks article to learn how to use OpenAdmin Tool to configure and monitor IDS Enterprise Replication: Informix Enterprise Replication made easy with OpenAdmin Tool
Erika Von Bargen
OpenAdmin Tool Version 2.27 Has Arrived!
What's new in OAT 2.27?
Brazilian Portuguese, Chinese Simplified, Chinese Traditional, Czech, German, Hungarian, Italian, Japanese, Korean, Polish, Russian, Slovak, and Spanish
The Connection Manager wizard on OAT's MACH page now allows you to configure the Connection Manager as a proxy server when clients connect to Informix data servers from outside a firewall (new to 11.50.xC6).
Ever want to run onstat commands remotely? Now you can. OAT 2.27 supports running any onstat command you like directly from your web browser!
The Schema Manager plug-in arrives with its first installation of 'admin' actions.
A demo of the new create table and load from external table functionality is available at www.openadmintool.org/oat_227_demo.swf
More schema management and administration actions to come, so stay tuned!
ER plug-in version 2.4 now contains virtually all administration commands available for IDS Enterprise Replication! Besides rounding out the suite of supported administration commands, the focus of ER plug-in version 2.4 was usability.
Download OAT Version 2.27 today, for free, at https://www14.software.ibm.com/webapp/iwm/web/preLogin.do?lang=en_US&source=swg-informixfpd.
More information on OpenAdmin Tool for IDS, including demos, can be found at www.openadmintool.org.
To post comments or questions about OAT, use the IIUG OAT forum. Feedback is always welcome!
Erika Von Bargen