There is a new utility introduced in 11.70.xC4 that compares two Informix configuration (ONCONFIG) files and reports the differences. It is really helpful during the upgrade process: you can use it to compare your current ONCONFIG to the ONCONFIG from a newer version.
The onconfig_diff utility is located in $INFORMIXDIR/bin. There are two ways to run onconfig_diff, but this document focuses only on the direct comparison of two ONCONFIG files. Here are some ways you can use the utility:
- Compare your current ONCONFIG with the onconfig.std of same version
- Compare your current ONCONFIG with the onconfig.std of a newer version
- Compare two ONCONFIGs from different servers
The usage of the onconfig_diff utility is as follows:

onconfig_diff [-d] | [-c -f file_name_1 -s file_name_2]
  -d    compare current onconfig to defaults
  -c    compare file to another file
  -f    first file name
  -s    second file name
Here is an example of onconfig_diff:
$ onconfig_diff -c -f onconfig.std -s onconfig.production
The following is the output from the above command:
File 1: onconfig.std
File 2: onconfig.production
Parameters Found in File 1, not in File 2
Parameters Found in File 2, not in File 1
Parameters Found in both files, but different
File 1: $INFORMIXDIR/tmp/demo_on.rootdbs
File 2: /usr2/support/grantf/g1150fc8/rootdbs
File 1: 6
File 2: 10
File 1: 10000
File 2: 3000
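The comparison onconfig_diff performs is conceptually simple; here is a toy Python sketch of the same idea (an illustration only, not the real utility, and the sample parameter values are made up):

```python
# Toy illustration of an ONCONFIG-style diff (not the real onconfig_diff):
# parse "NAME value" lines, then compute the three difference sets the
# utility reports.
def parse_onconfig(text):
    params = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue                      # skip blanks and comments
        parts = line.split(None, 1)
        params[parts[0]] = parts[1] if len(parts) > 1 else ""
    return params

def diff_onconfig(text1, text2):
    p1, p2 = parse_onconfig(text1), parse_onconfig(text2)
    only_in_1 = sorted(set(p1) - set(p2))
    only_in_2 = sorted(set(p2) - set(p1))
    different = sorted(k for k in set(p1) & set(p2) if p1[k] != p2[k])
    return only_in_1, only_in_2, different

result = diff_onconfig("ROOTNAME rootdbs\nBUFFERS 10000\n",
                       "ROOTNAME rootdbs\nBUFFERS 3000\nNEWPARAM 1\n")
print(result)  # -> ([], ['NEWPARAM'], ['BUFFERS'])
```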
At present, the onconfig_diff utility is not available on the Windows platform, but hopefully it will be available soon.
On certain operating systems (for example, Linux) you may notice that dbaccess (and other client applications) always does a DNS lookup while connecting to a database, even after finding the hostname or IP address in the local hosts file. This behavior can cause slow connections if you have DNS-related problems. The following excerpt of strace output shows the sequence of files accessed by a dbaccess request:
$ strace dbaccess sysadmin -e | grep open
open("/etc/services", O_RDONLY) = 6
open("/etc/resolv.conf", O_RDONLY) = 6
open("/etc/host.conf", O_RDONLY) = 6
open("/etc/hosts", O_RDONLY) = 6
open("/etc/ld.so.cache", O_RDONLY) = 6
open("/lib64/libnss_dns.so.2", O_RDONLY) = 6
open("/lib64/libresolv.so.2", O_RDONLY) = 6
open("/etc/hosts", O_RDONLY) = 6
The question is why a connection request from dbaccess to the database server tries to access the DNS server for hostname resolution after it has already found the hostname in the local hosts file.
Traditionally, hostname and service name resolution were performed by functions such as gethostbyname(), getservbyname(), etc. These traditional lookup functions are still available; however, they are not forward compatible with IPv6. Instead, the IPv6 socket API provides new lookup functions that consolidate the functionality of several traditional functions. These new lookup functions are also backward compatible with IPv4, so a programmer can use the same translation algorithm in an application for both IPv4 and IPv6. getaddrinfo() is the new primary lookup function, and a connection request from dbaccess ultimately calls this socket API. You can pass several parameters to getaddrinfo(); one of those parameters is the addrinfo structure. By default, dbaccess passes the value AF_INET6 for addrinfo.ai_family. The ai_family field indicates the protocol family associated with the request, and will be PF_INET6 for IPv6 or PF_INET for IPv4.
If ai_family is set to AF_INET6 (IPv6), getaddrinfo() will query the DNS every time. If ai_family is set to AF_INET, it does not query the DNS server. You can consult the man page for getaddrinfo() for detailed information.
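To see the effect of the address-family hint outside of dbaccess, here is a minimal Python sketch of getaddrinfo() behavior (an illustration only, not the Informix client internals; the helper name is made up):

```python
import socket

# Resolve a host with an explicit address-family hint. Restricting the
# hint to AF_INET is effectively what IFX_DISABLE_IPV6=1 does: the
# resolver no longer needs AAAA (IPv6) answers, so a name found in the
# local hosts file does not trigger an extra trip to the DNS server.
def resolve(host, family=socket.AF_INET):
    results = socket.getaddrinfo(host, None, family, socket.SOCK_STREAM)
    # Each result tuple is (family, type, proto, canonname, sockaddr).
    return [(fam, sockaddr[0]) for fam, _, _, _, sockaddr in results]

print(resolve("127.0.0.1"))
```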
Beginning with Informix 10.00.xC4 and Client SDK 2.90.xC4, the database server checks, on startup, whether IPv6 is supported in the underlying operating system. If IPv6 is supported it is used. If the underlying operating system does not support IPv6, the IPv4 address is used.
In case of a problem with DNS lookup and slow connections to the database, you can use the environment variable IFX_DISABLE_IPV6 (IFX_DISABLE_IPV6=1) to disable IPv6. This sets ai_family to AF_INET only, so no subsequent query is made to the DNS server.
Informix Enterprise Replication made easy with OpenAdmin Tool
Check out this new developerWorks article to learn how to use OpenAdmin Tool to configure and monitor IDS Enterprise
Replication: Informix Enterprise Replication made easy with OpenAdmin Tool
Erika Von Bargen
The value of a locale variable, CLIENT_LOCALE or DB_LOCALE, can be broken into four parts:

<language>_<territory>.<code set name/code set number>[@modifier]

Conventionally, I represent this as ll_tt.xxxx@xyz, where:
- ll represents the language
- tt represents the territory, or cultural convention
- xxxx represents the code set name or the code set number supported by the locale
- xyz represents the modifier; this is the only optional part in a locale value
The modifier, sometimes referred to as a variant, modifies the cultural-convention settings that the language and territory settings imply. It usually indicates a special localized collating order that the locale supports.
Let us look at an example:

CLIENT_LOCALE = de_at.cp1252@euro, and
CLIENT_LOCALE = de_at.1252@euro

- Here, both CLIENT_LOCALE values represent the same locale.
- 1252 is the code set number for the code set name cp1252. We can specify either the code set name or the code set number in a locale value.
- de represents the German language
- at represents the territory, Austria
- cp1252 is the code set used for the encoding, and
- euro is the modifier used for the locale

So, this is a German-language locale, for Austria, using cp1252 encoding and the euro modifier.
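The four-part breakdown above can be captured with a small parser (a sketch only; the helper name and regular expression are my own, not part of any Informix API):

```python
import re

# Split an Informix locale value ll_tt.xxxx[@xyz] into language,
# territory, code set (name or number), and optional modifier.
def parse_locale(value):
    m = re.match(r"^([^_]+)_([^.]+)\.([^@]+)(?:@(.+))?$", value)
    if not m:
        raise ValueError("not a valid locale value: " + value)
    return m.groups()   # (language, territory, codeset, modifier)

print(parse_locale("de_at.cp1252@euro"))  # -> ('de', 'at', 'cp1252', 'euro')
print(parse_locale("de_at.1252"))         # -> ('de', 'at', '1252', None)
```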
Now, to check whether this locale file exists, where do we look?
All locale files reside under the directory $INFORMIXDIR/gls/lc11.
To look up the locale files for a language (ll) and territory (tt), we check under the $INFORMIXDIR/gls/lc11/ll_tt directory.
In our example, to look up the locale files for the German language (de) and the territory Austria (at), we look under the $INFORMIXDIR/gls/lc11/de_at directory.
Next, under the specified locale directory, look for a file whose name is the hex value of the code set name/code set number, followed by the modifier name if a modifier is specified, with the extension .lco.
In our example, the hex value for code set cp1252 is 04e4 and the modifier euro is used. So, we look for the file 04e4euro.lco under the directory $INFORMIXDIR/gls/lc11/de_at.
How and where do we find the hex value for a code set name?
For any code set name, its code set number and hex value can be looked up in the file $INFORMIXDIR/gls/cm3/registry.
Let us find the hex value for the code set name Latin-3. In the registry file ...
- the first column represents the code set name,
- the second column is the code set number,
- the third column is the hex value of the code set number, and
- the fourth column is either blank or has a comment about the code set.
Let us look up the code set Latin-3 in the registry file and see what we find. We get the following values:

Latin-3    57346    0xe002

Here Latin-3 is the code set name, 57346 is the code set number, and 0xe002 is the hex value; in this case, there is no comment.
- Locale values are case-insensitive.
DB_LOCALE = de_de.cp1252, DB_LOCALE = de_de.CP1252
Here, both locale values are valid, representing the same code set.
- You can specify either code set name or code set number in a locale value, but you cannot use the hex value of the code set number.
DB_LOCALE = fr_ca.57372 or fr_ca.utf8 - both values are valid and they represent the same code set.
DB_LOCALE = de_de.cp1252 or de_de.1252 - both values are valid and they represent the same code set.
DB_LOCALE = de_de.04e4 - this is invalid. A code set's hex value cannot be used in a locale value.
- If a modifier is not specified in the locale variable, say ...
CLIENT_LOCALE = de_at.cp1252
- to locate the locale file, look for the .lco file under the language_territory directory. In this case, we look for the following file ...
- If a modifier is specified in the locale variable, like ...
CLIENT_LOCALE = de_at.cp1252@euro
- to locate the locale file, look for the .lco file under the language_territory directory. In this case, we look for the following file ...
- A code set name and its corresponding code set number and hex value are specified in the file $INFORMIXDIR/gls/cm3/registry.
- The locale territory/country code and language code can be looked up in file
- Conventionally, for a LOCALE variable having the value ll_tt.xxxx[@xyz], the following locale file should exist:
$INFORMIXDIR/gls/lc11/ll_tt/<hex value of xxxx>[xyz].lco
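Putting the lookup rules together, the expected file path can be derived mechanically (a sketch; the /opt/informix path and the helper name are assumptions for illustration, and the hex value still has to come from the registry file):

```python
import os

# Build the locale-object file path for a locale value ll_tt.xxxx[@xyz],
# given the code set's hex value from $INFORMIXDIR/gls/cm3/registry.
def locale_file(informixdir, language, territory, codeset_hex, modifier=None):
    name = codeset_hex + (modifier or "") + ".lco"
    return os.path.join(informixdir, "gls", "lc11",
                        language + "_" + territory, name)

print(locale_file("/opt/informix", "de", "at", "04e4", "euro"))
# -> /opt/informix/gls/lc11/de_at/04e4euro.lco
```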
Most of us are fairly familiar with errno -28 (No space left on device) during an Assertion Failure (AF), when Informix Dynamic Server (IDS) generates diagnostic data (an AF file and a shared memory dump). Diagnostic data is critical for determining the root cause of a failure. AF files are generally not too big, whereas shared memory dumps are often huge, almost the same size as the total memory used by the IDS instance. A lack of disk space can cause a partial dump of the shared memory file, which adds little or no value for diagnosing the failure.
In large IDS systems, the amount of space required to dump the shared memory is excessive because of the gigantic size of the resident segment, most of which is BUFFERPOOL information. A large shared memory dump file not only creates space issues; it also makes it difficult for technical support to extract useful information in a timely manner.
IDS version 11.50 provides some flexibility to control how much memory is written to a dump file. We can exclude the buffer pool information from the resident segment to significantly reduce the shared memory dump file size. The DUMPSHMEM configuration parameter and onstat both provide new options to control the shared memory dump size.
Use the DUMPSHMEM configuration parameter to control the dump file that is automatically created during an AF. Set DUMPSHMEM to 2 to create a shared memory dump that excludes the buffer pool. You can dynamically change the value of DUMPSHMEM with onmode -wm and onmode -wf. DUMPSHMEM can take the following values:
0 - Do not dump shared memory during AF
1 - Dump full shared memory (default)
2 - Dump shared memory without the buffer pool (new option)
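For example, to exclude the buffer pool from shared memory dumps, the ONCONFIG setting and its dynamic onmode equivalents would look like the following (a sketch based on the values above; verify the exact syntax against your version's documentation):

```
DUMPSHMEM 2              # ONCONFIG: dump shared memory without the buffer pool

onmode -wm DUMPSHMEM=2   # change the value in memory only
onmode -wf DUMPSHMEM=2   # change the value and record it in the ONCONFIG file
```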
The 'onstat -o' command also allows you to dump the shared memory file on demand. Use the new 'nobuffs' option with 'onstat -o' to generate a shared memory dump without the buffer pool. If you use 'onstat -o' without the 'nobuffs' option, the DUMPSHMEM configuration parameter controls the content of the shared memory file: a configuration value of 0 or 1 generates a full shared memory dump file, and 2 excludes the buffer pool information.
All onstat options work on a shared memory dump file without the buffer pool, except options that access buffer information, e.g. -b, -B, -P.
Typically onstat shows segments as "FACADE" when working with a full shared memory dump, whereas a shared memory dump without the buffer pool shows "FACADE NOBUFFERS".

Sanjit Chakraborty
One of the new features in Cheetah is the ability to enable tracing for SQL. While this has been available in a limited way through the use of the IBM product I-SPY, the drawback has always been the performance hit of actually monitoring this information. That limitation is effectively eliminated with this feature in Cheetah. So how is SQL tracing enabled? It's actually pretty simple; one way is to just put SQLTRACE in your ONCONFIG. One configuration I use most often is:
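A setting of the following shape keeps a circular trace buffer of the most recent statements (these particular values mirror the task() call shown for dynamic tracing; treat them as an illustration rather than a prescription):

```
SQLTRACE level=low,ntraces=1000,size=2,mode=global
```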
This allows me to keep track of the last 1000 SQL statements on the instance.
However, the ONCONFIG is not the only way to enable tracing; you can also do this through the sysadmin database by running the following:
EXECUTE FUNCTION task("set sql tracing on", 1000, 2,"low","global");
So by using either of the above methods you have now enabled SQLTRACE, but how do you validate it? The easiest method is by running:
onstat -g his
Below is a sample of onstat -g his output.
In Part 2 we will look at how to read this in SQL, and possible options for the data.
The newest version of OpenAdmin Tool for IDS, Version 2.22, is available now! The latest features center on Enterprise Replication (ER) monitoring and security.
ER Plug-in Version 1.1: Version 1.1 of OAT’s ER plug-in greatly enhances OAT’s Enterprise Replication monitoring capability.
- The ER Routing Topology page has been transformed to allow monitoring of all nodes in the ER domain from a single page without having to drill-down on each node. Users can set thresholds for key ER statistics and then use the Routing Topology page to monitor alerts and profile data for each node in their domain. (Requires IDS server version 11.50xC2.)
- The ER Node Details pages have been expanded to show errors for the current node or the entire ER domain (Errors tab) and to list current values of the ER configuration parameters (Configuration tab).
- Check out a demo of the newest ER monitoring features here: ER Monitoring with Alerts Demo
Secure SQLToolbox: OAT admins can now choose to turn on an additional level of security for the SQL Toolbox pages. If “Secure SQLToolbox” is turned on, OAT users will have to re-authenticate in order to view schema data or use the SQL Editor. This additional layer of security can be used to ensure that OAT users are not automatically allowed free access to databases or tables as the user informix.
Download OAT Version 2.22 now at https://www14.software.ibm.com/webapp/iwm/web/preLogin.do?lang=en_US&source=swg-informixfpd
For additional information on OpenAdmin Tool for IDS, including feature details, screenshots and demos, go to www.openadmintool.org.
Also check out the new developerWorks article on writing custom plug-ins for OAT: www.ibm.com/developerworks/db2/library/techarticle/dm-0808vonbargen

Erika Von Bargen
Well - it's finally here - the 2009 Informix user group conference got underway in Kansas City today. This has all the makings of another exciting conference and as usual we have all our Informix gurus under one roof… The conference has always been a great place to share your experiences, to learn new skills, and to meet the development and support folks who strive to give you the best possible product and service in the industry, and this year is no different. We have an impressive lineup of keynotes and technical sessions that will no doubt challenge you in terms of what to attend.
Today there were several tutorial sessions, from an IDS workshop to a troubleshooting series to getting ready for the IDS 11 certification exam (you will be able to take these certifications free of charge at the conference - a $150 value). Several engineers from the support organization presented these tutorials, which is always special for me, including John Miller, Ron Privett, Mark Jamison, Randy House, David Kolbinger, Suma Vinod, Sanjit Chakraborty and Manjula Panthagani. Hopefully these sessions were well received, and we look forward to your feedback in making this better.
My comments would not be complete without commending the people who make this conference happen. There are several of you who have been tirelessly working to make this a success and I would like to thank each and every one of you for this opportunity. Hopefully you all get to enjoy Kansas City over the next 3 days and above all have a blast at the conference and enjoy all the sessions.
Bye for now !
Please note that IDS 9.40 [All Versions] is now "End of Support". If you are planning on remaining on 9.40, you should start discussing purchasing a service extension with your IBM representative; otherwise, please consider upgrading to the current IDS release.
IDS 11.50.xC4 has several new features, like storage optimization and warehousing, that you can take advantage of.
The upgrade portal is a good resource for your upgrade plans and is a definite recommendation. If you need any additional assistance or have a quick question, please feel free to email getIDS11@us.ibm.com.
I will reiterate a few existing features in Informix that you can use to optimize connections to the database. Establishing a connection to a database is always an expensive operation. The goal should be that applications perform as many operations as possible while connected to the database. The following are some connection attributes that can be used to tune application performance by reducing network traffic:
insert optimization (SQL_ENABLE_INSERT_CURSOR): Creating an insert cursor is a good way to efficiently insert rows into a table in bulk. You need to set the SQL_ENABLE_INSERT_CURSOR attribute with SQLSetStmtOption(), then call SQLParamOptions() with the number of rows as a parameter. An insert cursor creates a buffer in memory to hold a block of rows. The buffer receives data from the application program; once the buffer is full, the data is sent to the database server in a block. The buffer reduces the amount of communication between the program and the database server. As a result, the insertions go faster.
free cursors (AUTOFREE): In general, when an application uses a cursor, it usually sends CLOSE and FREE statements to the database server to deallocate memory. Execution of these statements involves a round trip of message requests between the application and the database server. The automatic free feature (AUTOFREE) reduces the number of round trips by one. You can use the IFX_AUTOFREE environment variable or the SQL statement SET AUTOFREE to enable this feature.
message transfer (OPTMSG): This feature allows you to minimize message transfers between the database server and the application. Applications can optimize message transfers by chaining messages together and even eliminating some small message packets. You can enable this feature by setting the OPTMSG environment variable on the application side.
close optimization (OPTOFC): An application uses DECLARE and OPEN statements to execute a cursor; each statement involves a round trip of message requests between the application and the database server. The open-fetch-close optimization (OPTOFC) reduces the number of network round trips. With this feature, the OPEN statement from the application does not open the cursor immediately. Instead, it saves any input value that was supplied in the USING clause of the OPEN statement and sends this input value along with the next FETCH statement. The database server opens the cursor and returns the first value from the cursor. Once the database server reaches the last value of an open cursor, it automatically closes the cursor after it sends the data to the client application. You can use the OPTOFC environment variable to enable this feature.
I have just provided some touch points here; there are several other ways to improve network performance and connections to the database. Please check the Improve Application Performance section in the Informix Information Center for details.
In most cases the Informix error codes are very good at explaining why there is a problem and how to fix it. Finderr (the Error Message Utility) is a utility shipped with IBM Informix Dynamic Server and the client products (CSDK and IConnect) that helps you check an error code and returns the error message corresponding to an IBM Informix error number.
Problem: Finderr uses WinHelp. The help for this program was created in the Windows Help format, which was used in previous versions of Windows and is not supported in newer versions of the Windows operating system, such as Vista or Windows Server 2008. Windows Vista and Windows Server 2008 do not ship with the WinHelp application, so you will get error messages popping up while executing the finderr utility shipped with IDS 11.50 on Windows Server 2008 and Vista:
The GUI finderr program supplied with CSDK and IConnect also doesn't work on Windows Vista or Windows Server 2008. Executing the finderr utility in the client products pops up the following error messages.
The workaround is to download the Windows Help program (WinHlp32.exe) from the following Microsoft support Web site.
For Windows 2008:
The problem has been fixed in the IDS 11.50.xC3 and CSDK 3.50.xC3 releases, so you can always upgrade to these releases to get the finderr utility working on newer versions of the Windows operating system.
Snigdha Sahu
There is good news for those who were downloading the Node 1.0 DataBlade from developerWorks and using it as-is. As per policy, there is no support for as-is software downloaded from developerWorks.
The Node DataBlade creates an Informix DataBlade with a node opaque data type that addresses the problem of transitive closure/hierarchical data queries.
From IDS 11.10 onwards, the Node 2.0 DataBlade (with some added functionality) is bundled as one of the default DataBlades along with the server. That means that, moving forward, the Node 2.0 DataBlade (and subsequent higher versions) is eligible for support.

Anup Nair
IBM has just announced the Informix Warehouse feature, which consists of a powerful set of tools to build a warehouse infrastructure platform. Here is a quote from the announcement:
From Kevin Brown, lead architect for IBM, and Jim Kobielus from Forrester Research:
"This can save weeks of effort into just a few hours," Brown said. "In addition, customers often did without information because of the cost of effort to get the information. The lost opportunity cost savings is harder to quantify, but can be significant once they use their warehouse platform for smarter decision-making."
The Warehouse feature includes the following components:
SQW Design Studio - An Eclipse-based common design environment for connecting to source and target databases, creating and reverse-engineering physical data models, and building SQL-based data flows and control flows.
SQL Warehousing Tool - a graphical environment that works with the Design Studio to enable you to quickly and easily build in-database data movements and transformations into your warehouse.
SQW Administration Console - a web-based application for managing and monitoring the data flows that you design using the Design Studio. The Administration Console allows you to:
- Manage common resources such as database connections and machine resources
- Schedule the execution of control flows
- Monitor the execution status
Press release at: http://www-03.ibm.com/press/us/en/pressrelease/26840.wss
Informix Warehouse page on the IBM site: http://www-01.ibm.com/software/data/informix/warehouse/
Data modeling is a method used to define and analyze the data requirements needed to support the business processes of an organization. Ideally, the data requirements are recorded as a conceptual model that is not platform-specific (the logical data model), with associated definitions for the entities and attributes. This logical model can serve as the 'information hub' to help define and enforce consistency across enterprise data. From the logical model, multiple database-specific implementations can be generated (via physical models).
InfoSphere Data Architect (IDA) - formerly called Rational Data Architect (RDA) - is a great modeling tool for any shop that includes Informix Dynamic Server databases in its mix. Even if you don't have a 'data architect', this product can help DBAs communicate with developers and also help you keep tabs on what is and isn't "real" any more (by comparing the model of what you think you have with what your database actually does have).
I recently updated a tutorial on using IDA, specifically from the perspective of using it with IDS. There are several enhancements to the model elements for IDS in IDA 7.5.1:
- You can set/view extra properties for Table elements (extent size, raw table type, lock level)
- Enhancements to constraints to support modes (enabled/disabled/filtering)
- Trigger enhancements suited to IDS syntax including support for INSTEAD OF triggers on views
- Generate DDL enhancements for all of the above
Anyway, if you get a chance, download the IDA trial and check out the latest version of the tutorial and let me know what you think. Link to IDA trial download Link to an IDA demo
A new version of OpenAdmin Tool for IDS
(formerly known as IDSAdmin) is available to download from the IDS 11.10 Open Beta site
(sign-in required). This new version has a simpler installation, new graphics, and many new features.
Please keep in mind that:
- The product is still listed as IDSAdmin on the Beta download site.
- The Readme file downloadable from the Beta site is (at the time of writing) the out-of-date one from March - do not use it; instead, unzip the oatids package and refer to the Readme in there.
Here is a copy of the current Readme:
OpenAdmin Tool For IDS - v2.10 - July 2007
OpenAdmin Tool For IDS (OAT) is a PHP-based administration console which can be used to administer one or more IBM Informix Dynamic Server 11.10 instances.
Dependencies
OAT requires the following products to be installed:
Note: the versions in brackets indicate the versions that OAT has been tested with.
- A Webserver (Apache 2.2.3)
- IBM I-Connect or CSDK (3.00)
- PHP 5 compiled with PDO, PDO_SQLITE, GD and SOAP enabled. (5.2.2)
- Informix PDO Module.
Installation Instructions
1. Install and set up a working web server that has been configured to serve PHP pages.
(For more information see the NOTES section below.)
2. Update the PHP configuration file:
- edit the php.ini
- add two new lines to the 'extension' section of the configuration file if they are not present:
- modify the parameter memory_limit to 256M.
3. Install IBM I-Connect or Client SDK.
4. Install the OpenAdmin Tool For IDS package:
- Extract the OAT package into your web server document root directory.
5. Change the ownership of the "<OAT>/install" directory to the user and group that runs the apache (httpd) server:
- Find the user and group that runs the apache (httpd) server from the httpd.conf.
- chown <user>:<group> <OAT>/install (Unix/Linux)
6. Start the webserver, making sure INFORMIXDIR points to the Client SDK or I-Connect install location in the webserver environment.
7. Launch the OAT installer by visiting the web page:
SERVERNAME = the name of your machine
LOCATION = where you extracted the tar file (e.g. oat )
Follow the instructions in the OAT installation screens, including obtaining an optional Google Maps API key for your domain if required.
Once installation is complete, point your browser at the OAT root URL. From there you can click on Admin and add a new IDS 11.10 connection to the default group, or create a new group. If using an I-Connect or Client SDK version prior to 3.0, then for each new connection you add there needs to be a corresponding SQLHOSTS entry for that connection on the webserver machine.
Once a connection is created, the "Get Servers" link from the OAT root page will retrieve the list of connections you have created.
Installing an Apache webserver with PHP is not a straightforward task; thankfully there are third-party solutions available. A popular choice is XAMPP.
Using XAMPP
XAMPP is an easy-to-install Apache distribution containing MySQL, PHP and Perl. XAMPP can be obtained from:
The XAMPP version 1.6.2 for Windows contains everything you need to run OAT (with the exception of IBM I-Connect or CSDK); download and installation information is available at
XAMPP (also called LAMPP) is also available for Linux; however, you will need to download the 'Development Package' in order to compile the Informix PDO driver. Download and installation information is available at
Informix PDO Driver
The following developerWorks article contains useful information to assist with building an Informix PDO driver:
We all know how critical it is to keep accurate and up-to-date database server statistics. Updated statistics are a key part of getting consistent performance. A very common question is: which UPDATE STATISTICS commands should one run?
The following SQL script offers a way to generate a list of all the necessary UPDATE STATISTICS commands for a database. You just need to replace the "<Database Name>" string with an appropriate database name. It will create an output file called 'update_stat_cmd.sql' in the current directory.
----- Beginning of the SQL script -----
SELECT name, value
FROM sysadmin:ph_threshold
WHERE name IN ("AUS_AGE","AUS_AUTO_RULES")
INTO TEMP temp_threshold;

UPDATE sysadmin:ph_threshold
SET value = -1
WHERE name = "AUS_AGE";

UPDATE sysadmin:ph_threshold
SET value = 1
WHERE name = "AUS_AUTO_RULES";

SELECT tk_next_execution FROM sysadmin:ph_task
WHERE tk_name = "Auto Update Statistics Evaluation"
INTO TEMP temp_ph_task;

EXECUTE FUNCTION sysadmin:exectask ("Auto Update Statistics Evaluation");

UNLOAD TO 'update_stat_cmd.sql' DELIMITER ';'
SELECT aus_cmd_exe FROM sysadmin:aus_command
WHERE aus_cmd_dbs_partnum = (
SELECT partnum FROM sysmaster:sysdbstab
WHERE name = "<Database Name>"
AND bitand(flags, 3 ) > 0);

UPDATE sysadmin:ph_threshold
SET value =
( SELECT value FROM temp_threshold
WHERE name = "AUS_AGE" )
WHERE name = "AUS_AGE";

UPDATE sysadmin:ph_threshold
SET value =
( SELECT value FROM temp_threshold
WHERE name = "AUS_AUTO_RULES" )
WHERE name = "AUS_AUTO_RULES";

UPDATE sysadmin:ph_task
SET tk_next_execution =
( SELECT tk_next_execution FROM temp_ph_task
WHERE tk_name = "Auto Update Statistics Evaluation" )
WHERE tk_name = "Auto Update Statistics Evaluation";
----- End of the SQL script -----
You can run the above SQL script against the sysadmin database to generate UPDATE STATISTICS commands according to the Informix guidelines, which are as follows:
- All tables are updated in LOW mode.
- All leading index keys are updated in HIGH mode.
- All non-leading index keys are updated in MEDIUM mode.
- The minimum resolution for MEDIUM mode is 2.0.
- The minimum confidence for MEDIUM mode is 0.95.
- The minimum resolution for HIGH mode is 0.5.
The above SQL script depends on the sysadmin database and the database scheduler; it will fail if the database scheduler is turned off.
- Sanjit Chakraborty
Note if you have missed any articles in this series, check Introduction and Table of Contents
Setting up your workspace and connecting to IDS
Now that you have installed the Developer Workbench, it is time to run it. On the first run, you should be asked to set up your workspace. This should give you a screen like the following:
Once you select this, the Workbench takes a little while to build your workspace. Once done, however, you get the classic Eclipse Welcome screen. I have highlighted your workspace in the following screen shot, because that is the next place you want to go:
Once you click on the workbench you get the generic workbench. The next step at that point is to start a project and establish a connection.
Setting up a new project to handle queries is fairly straightforward, and can be reached very quickly; all you have to do is select the Data Development Project option as in the following screen shot:
Selecting the above puts you at the new Data Development Project screen, as shown below:
As you can see, there are some DB2-flavored words here. A schema in DB2 is the IDS equivalent of a database, and as such, keep the default radio button, as seen in the screen shot above, set. I would recommend that you give your project a meaningful name, but that is completely under your control. In my case I listed it as "Mark's Test project for Blog".
This brings us to the next page which assigns an existing databaseconnection, or creates a new one. Since this is our first time through,just hit the next button, as we will need to create a new connection.
The Connection parameters gives you several options for connections, aswell as several data server choices. Take a look at the below screenshot:
This is a standard Informix JDBC connection string at this point. Thescreen shot above differs slightly from what you see as the defaultsettings, as I have selected the "Informix JDBC driver" instead of the"IBM Data Server Driver for JDBC and SQLJ." In short I took the JDBCdriver instead of the JCC driver. The main reason is because JCC itselfis in beta, and I wanted to use something I knew how to connect withand trace. JCC also requires a DRDA listener thread, and I don't have one set up for the instance I am using. Update : Check out Establishing JCC connections in 11.10 if you want to use JCC connections.
Fill the remaining fields above with the normal Informix connection settings, and test your connection. Once your connection is working, click on the next button.
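As a rough sketch of what those fields become under the covers, the Informix JDBC driver assembles them into a connection URL of the form below. The host, port, database, and server names here are placeholders for illustration, not values from my setup:

```java
public class IfxConnect {
    // Assemble an Informix JDBC (IfxDriver) connection URL from its parts.
    // All values passed in are placeholders -- substitute your own.
    static String buildUrl(String host, int port, String db, String server) {
        return "jdbc:informix-sqli://" + host + ":" + port + "/" + db
                + ":INFORMIXSERVER=" + server;
    }

    public static void main(String[] args) {
        String url = buildUrl("myhost", 9088, "stores_demo", "ol_myserver");
        System.out.println(url);
        // With ifxjdbc.jar on the classpath, the actual connection would be:
        //   Class.forName("com.informix.jdbc.IfxDriver");
        //   java.sql.Connection con =
        //       java.sql.DriverManager.getConnection(url, "informix", "password");
    }
}
```

The wizard fills in those same pieces for you; the only real decision on this screen is which driver gets to assemble them.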
This final screen allows you to filter your schema. Here is the screen shot:
Please note: the default is only for users who have IDS on a Windows platform. Since a schema id is actually a user id in IDS, filtering on INFORMIX means that no information of any kind will return for any database objects if your IDS instance is on a Unix platform. The reason, of course, is that the user id is case sensitive, and informix != INFORMIX. Quite honestly, I suggest you just disable filtering for now; when I cover the Database explorer I will show you how to change the filter how you want to.
As soon as you have selected your filter objects, you will finally see the workbench laid out and ready to be used for a Data Development project. Here is mine:
In the next article I will discuss the Database Explorer window in the lower left hand corner.
- Mark Jamison
The 'deadlock' mutex ensures that there is never more than one session checking for a deadlock situation at any given time. Whenever a session that has issued the SET LOCK MODE TO WAIT [seconds] statement requests a lock it has to wait for, the deadlock checking code gets executed.
The deadlock checking code starts by marking all the sessions in the instance with an internal flag saying 'not in deadlock'. Then it walks through the list of all the locks held by the current session (including the newly requested one) and for each lock identifies whether any sessions are waiting for it. If such a session (called a waiter) is found, its internal flag is changed to 'deadlock candidate' and the waiter is added to the so-called 'deadlock list'. Each of the waiters in this list is then checked in the same way: the code checks whether any of the waiters holds any lock the current session is waiting for. If such a lock is found, a deadlock is detected.
Each execution of the deadlock checking code depends on a consistent state of the internal 'deadlock' flag in each session. In other words, there cannot be multiple sessions checking for a deadlock at the same time, as they would overwrite one another's internal flags. To ensure this, each session has to acquire the 'deadlock' mutex before the deadlock checking code can be executed, and release it once the deadlock detection is finished.
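The walk described above can be sketched roughly as follows. This is a simplified toy model for illustration only, not the actual server code; the maps, method name, and session labels are all made up:

```java
import java.util.*;

// A toy model of the deadlock walk described above. The data structures
// are illustrative only -- not the real IDS internals.
class DeadlockCheck {
    // locksHeld:  session -> set of lock ids it holds
    // waitingFor: session -> the single lock id it is waiting on (or null)
    static boolean wouldDeadlock(String current,
                                 Map<String, Set<Integer>> locksHeld,
                                 Map<String, Integer> waitingFor) {
        // 'candidates' plays the role of the 'deadlock candidate' flag;
        // 'deadlockList' is the list of waiters still to be examined.
        Deque<String> deadlockList = new ArrayDeque<>();
        Set<String> candidates = new HashSet<>();

        // Seed the list with sessions waiting on any lock the current
        // session holds (including the newly requested one).
        for (int lock : locksHeld.getOrDefault(current, Set.of())) {
            for (Map.Entry<String, Integer> e : waitingFor.entrySet()) {
                if (e.getValue() != null && e.getValue() == lock
                        && candidates.add(e.getKey())) {
                    deadlockList.add(e.getKey());
                }
            }
        }

        // Walk the list: if any candidate holds the lock the current
        // session is waiting for, we have a cycle -> deadlock.
        Integer wanted = waitingFor.get(current);
        while (!deadlockList.isEmpty()) {
            String waiter = deadlockList.poll();
            if (wanted != null
                    && locksHeld.getOrDefault(waiter, Set.of()).contains(wanted)) {
                return true;
            }
            // Extend the walk to sessions waiting on the waiter's locks.
            for (int lock : locksHeld.getOrDefault(waiter, Set.of())) {
                for (Map.Entry<String, Integer> e : waitingFor.entrySet()) {
                    if (e.getValue() != null && e.getValue() == lock
                            && candidates.add(e.getKey())) {
                        deadlockList.add(e.getKey());
                    }
                }
            }
        }
        return false;
    }
}
```

With two sessions each holding the lock the other is waiting for, the walk seeded from the current session's locks reaches the other session and finds the cycle; the real server performs this walk only while holding the 'deadlock' mutex.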
- Tomas Zahradnik
Combining Day 1 and Day 2 activities: it has been a busy two days with some great announcements. We started off with Arvind announcing xC4 and, most importantly, our storage optimization (compression) feature and warehouse feature.
Compression can save you as much as 80% of disk space, and the demo at the upgrade kiosk showed a compression rate of 73%. Check out our compression site at: http://www-01.ibm.com/software/data/informix/compression/
This site features an ROI tool and a white paper that is a must read for all. It clearly articulates the value proposition of compression and how easy it is to set up and use. The OAT sessions and the demos showcased this feature and its ease of use very well, and these sessions were amongst the popular ones.
The warehouse announcement is another exciting one. It includes tooling that greatly simplifies your warehouse design and deployment and aids in your BI decisions. The demo sessions highlighting the warehouse feature were also a great attraction. Check out our warehouse site at: http://www-01.ibm.com/software/data/informix/warehouse/
The site features a white paper that is a great read… We had another great keynote with Jerry and Kevin, who walked us through the Informix roadmap. I am sure this session gave everyone an idea of where Informix is heading and how each of you can help with driving key features within the product as well. The rest of the day as usual had some great sessions, and I just did not want to highlight one over the other. Let me just state that there were almost 25 great sessions that took place through the day.
On Tuesday, Ananth gave his keynote on cloud computing and how it can be a critical success factor in your IT plans. This is an exciting space that IBM is in, and the prospects and the potential seem endless in terms of its use, especially with its cost effectiveness and ease of deployment. Guy's blog has some great entries on this, and I would encourage each and every one of you to read up on them: http://www.ibm.com/developerworks/blogs/page/gbowerman
The sessions again were great, and the feedback has been astounding in terms of the quality of the sessions. There is a reason why the IIUG conference is an attractive proposition and why you should make this a yearly event on your calendar.
Oh btw, there is an interesting contest going on that Advanced Data Tools is sponsoring, called the Fastest DBA, where the goal is to take an SQL statement that runs in about 30 minutes and make it run faster. The last I checked, the time to beat was 0.54 sec or somewhere around there… We definitely need more of these at future conferences…
The certification room was receiving its regular stream of test takers as well, and at least the ones I saw came out smiling, which is good :)
Tomorrow is the last day of the conference and is filled with some exciting sessions. The keynote is a Q&A session that will raise some good discussions about marketing, support, sales, and dev… will sum up the conference tomorrow… enjoy the rest of the conference…
Multi-node Active Cluster for High Availability (MACH11) is a code name for the new feature called Continuous Availability introduced in Informix Dynamic Server (IDS) 11. This new feature significantly expands the high availability options in IDS to provide increased failover, capacity, flexibility, and scalability.
Traditionally, IDS has provided multiple robust solutions for supporting high availability data replication. Previous releases of IDS have supported two replication technologies: Enterprise Replication (ER) and High Availability Data Replication (HDR). By using these technologies together, customers are able to achieve very high levels of data availability. Both replication technologies can be integrated with each other and coexist with other availability solutions such as disk mirroring.
HDR and ER have been features of IDS for many years and have proven to be highly reliable and low-maintenance technologies. IDS now adds support for two new types of secondary servers:
- Secondary servers that share the same physical disk. The Shared Disk Secondary (SDS) servers provide increased availability by allowing one or more instances of the IDS server to attach to the same disk subsystem, providing redundancy for the server in addition to data redundancy solutions.
- Additional remote secondary servers. The Remote Standalone Secondary (RSS) servers extend HDR to provide multiple local or remote backup servers that also replicate the data.
Both SDS and RSS servers provide customers a way to obtain increased capacity by distributing workload across multiple servers. Customers can choose any of these solutions on their own, but they become even more powerful when all three types of topologies are combined. By adding Enterprise Replication, a completely customized availability solution can be delivered to meet each unique availability requirement. These configurations are simple to set up and maintain, and are highly scalable.
The following image depicts a Continuous Availability solution with 3-tier protection. For example, if the Primary in Building-A in New Orleans went down for some reason, the role of the primary can easily be switched to one of the SDS servers running on the blade server in Building-B. This would cause all other secondary servers to automatically connect to the new primary server. If both the servers in New Orleans died, Memphis becomes your Primary, Denver can be made the HDR secondary, and you may even add some SDS servers to the Memphis blade server for load balancing.
To learn more about this feature, please read the following white paper written by the architects of this feature: Whitepaper on MACH11 technology.
- Vijay Lolabattu