This article describes a customer’s experience with their poll thread configuration while upgrading from IDS 7.31.FD9 to IDS 10.00.FC6. The upgrade involved their busiest IDS server, running on an HP Superdome; typically, one could observe upwards of 3000 short-lived soctcp connections on this system.
Original IDS 7.31 Configuration:
The following key configuration settings, active in the IDS 7.31 environment, were initially carried over after upgrading to IDS 10.00:
23 CPU VPs
20 poll threads running on NET VPs
multiple server aliases
Optimal IDS 10.00 Configuration:
An optimal configuration was ultimately determined and incorporated the following configuration settings in the IDS 10.00 server:
23 CPU VPs
a handful (3-5) of poll threads running on NET VPs
the new IDS 10.00 ONCONFIG parameter FASTPOLL enabled
multiple server aliases
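For orientation, the optimal settings above might look like the following ONCONFIG fragment. This is only a sketch: the NETTYPE values (number of connections per poll thread, in particular) are illustrative assumptions, not the customer’s actual file.

```
VPCLASS  cpu,num=23          # 23 CPU VPs
NETTYPE  soctcp,4,300,NET    # a handful of poll threads running on NET VPs
FASTPOLL 1                   # new IDS 10.00 parameter, enabled
```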
Stress testing in support of this optimal configuration was conducted on a 16-processor/32-core HP server using the latest IDS 7.31 and IDS 10.00 64-bit products. The testing involved a multi-threaded ESQL/C application that spawned 3000 threads over 3 server aliases. Each thread connected to the server, completed a small amount of read-only work, and disconnected, repeating this 30 times. These 90,000 total connections mimicked the customer’s workload, and the tests covered poll threads running both on CPU VPs (inline) and on NET VPs against servers configured with 23 CPU VPs. The following chart shows results from the stress testing that were considered for the optimal customer configuration:
The other day I was setting up a high availability cluster environment and ran into an interesting problem. I followed all the necessary instructions for setting up an RSS server. However, the RSS server got stuck in the recovery process, and the message log on the primary server reported an error saying it could not send a log. For example, after executing the following command on the RSS server to set up data replication:

onmode -d RSS <primary server name>

the RSS server got stuck in recovery mode, and the message log on the primary server showed:

RSS Server <RSS server name> - state is now connected
Can not send log <log number>

The log number mentioned in the error message was nowhere near the current log on the primary or RSS server. For example, the current log on the primary was 7438 and on the RSS server 7436, but the message log stated ‘Can not send log 825241904’. So where was the server getting an out-of-sequence log number?
Initially I thought it was some kind of corruption. However, after some investigation I figured out that I was using delayed application (DELAY_APPLY) on the RSS server, and the directory specified by the LOG_STAGING_DIR configuration parameter was holding some unwanted file(s); for example, the file 'ifmxUniqueLog_825241904' in LOG_STAGING_DIR. So during recovery the RSS server requested log number 825241904 from the primary, but that log did not exist on the primary server.

Once I removed all files from the LOG_STAGING_DIR directory on the RSS server, I was able to successfully set up the high availability cluster environment. In conclusion: the next time you set up an RSS server with DELAY_APPLY, make sure nothing is left in the LOG_STAGING_DIR on the RSS server.
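The cleanup itself is trivial. Here is a minimal shell sketch; the ifmxUniqueLog_* file name pattern comes from the example above, while the demo directory standing in for the real LOG_STAGING_DIR value is hypothetical:

```shell
# Hypothetical directory standing in for the RSS server's LOG_STAGING_DIR.
STAGING_DIR=/tmp/rss_staging_demo
mkdir -p "$STAGING_DIR"
touch "$STAGING_DIR/ifmxUniqueLog_825241904"   # simulate a leftover staged log

# Before running 'onmode -d RSS <primary server name>' on the RSS server,
# clear out anything left from a previous DELAY_APPLY configuration.
rm -f "$STAGING_DIR"/ifmxUniqueLog_*
ls -A "$STAGING_DIR" | wc -l   # 0 when the directory is clean
```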
During the installation of IDS 11.50, the Program Group folder is opened and Start Menu shortcuts are added. If these actions were optional, it would make IDS more embeddable with other applications that use IDS in the background. Customers who are embedding IDS as part of their software package usually desire near-complete invisibility.

This request was met by adding a new command-line option, “hidden”, to the installation. This option prevents creation of the Start Menu shortcuts and suppresses the Program Group folder from popping up. To use this option, invoke setup.exe (the IDS 11.50 installer) from the command line and supply the “-hidden” option.
There may be a situation when you need to rebuild the sysadmin database without restarting IDS. For example, IDS was previously started with the '$INFORMIXDIR/etc/sysadmin/stop' file present (or its Windows equivalent), so no sysadmin database exists in the instance.

Here is an example of rebuilding the sysadmin database manually:

If you encounter any problem during the above process, restarting IDS automatically rebuilds the sysadmin database, provided the '$INFORMIXDIR/etc/sysadmin/stop' file does not exist.
Scope: This article covers redistributing ESQL/C based demos and applications. The steps required to redistribute other Informix client applications by copying files are being investigated.
Depending on how you deploy your Informix applications there is sometimes a need to bypass the Informix Client SDK or I-Connect installation process and copy the Informix library and API files directly to a target computer. Though the officially recommended and supported approach is to use the supplied CSDK/I-Connect installer, these instructions are provided as an alternative approach involving copying files for scenarios where using the installer is not possible. This article demonstrates how and where to copy all the required CSDK files and Microsoft Windows DLLs to a target computer in order to deploy applications.
Note that while this is not an officially recommended approach, these instructions have been tested by IBM and demonstrated to work. If you encounter problems with this method and need to talk to IBM technical support you may be asked to try installing via the installer to rule out other problems with your configuration.
Install Client SDK 3.50 or I-Connect on your Windows development computer. After a successful installation, verify that all the shortcuts in the program groups are created and the registry keys are updated.

Make a copy of the entire CSDK installation folder (INFORMIXDIR) and transfer it to the target computer (for example, by zipping the files and unzipping them on the target computer). Choose any location on the target computer for the copied files, for example c:\informix.

Copy the required Microsoft Windows runtime DLLs. Since the Informix product is not being installed via the regular installer, the required runtime DLLs may not be present on the target computer. As a result, applications such as setnet32.exe, ilogin.exe and finderr may not run correctly.
If a manifest is present in your application but a required Visual C++ library is not installed in the WinSxS folder, you may get one of the following error messages depending on the version of Windows on which you try to run your application:
The application failed to initialize properly (0xc0000135).
This application has failed to start because the application configuration is incorrect. Reinstalling application may fix this problem.
Alternative approach: If your deployment requirements prevent you from installing the Visual C++ Redistributable Package directly, copy the %WINDIR%\WinSxS\x86_Microsoft.VC80.CRT_1fc8b3b9a1e18e3b_8.0.50727.762_x-ww_6b128700 directory from the development computer to the same location on the target computer. (Create the same directory structure on the target computer as on the development computer if it does not exist.)
Also copy the policy files from the development computer %WINDIR%\WinSxS\Policies\x86_policy.8.0.Microsoft.VC80.CRT_1fc8b3b9a1e18e3b_x-ww_77c24773 to the same location on target computer. (Again, create the same directory structure on target computer as the development computer if it doesn’t exist.)
Note: This workaround is only applicable to Microsoft Visual C++ 2005 SP1 runtime components. If later versions of the Client SDK are built with a later version of Visual Studio then the corresponding version of the runtime components would be required. Check your release notes to see which version of Visual Studio is required.
Running ESQL/C demos:
If you do not have the required Visual C++ libraries installed in the WinSxS folder then while running the demo1 program the following error pops up.
Once you install the Visual C++ libraries or copy the runtime DLLs and policy files into the C:\WINDOWS\WinSxS folder, you should see the ESQL/C demos run successfully, as shown below.
Most of us are fairly familiar with errno -28 (No space left on device) during an Assertion Failure (AF), when Informix Dynamic Server (IDS) generates diagnostic data (an AF file and a shared memory dump). Diagnostic data is critical for determining the root cause of a failure. AF files are generally not very big, whereas shared memory dumps are often huge, almost as large as the total memory used by the IDS instance. A lack of disk space can result in a partial shared memory dump file, which adds little or no value when diagnosing the failure.

In large IDS systems, the amount of space required to dump the shared memory is excessive because of the gigantic size of the resident segment, most of which contains BUFFERPOOL information. A large shared memory dump file not only creates space issues; it also makes it difficult for technical support to extract useful information in a timely manner.

IDS version 11.50 provides some flexibility to control how much memory is written to a dump file. You can exclude the buffer pool information from the resident segment to significantly reduce the shared memory dump file size. The DUMPSHMEM configuration parameter and onstat both provide new options to control the shared memory dump size.
Use the DUMPSHMEM configuration parameter to automatically create a dump file during an AF. Set DUMPSHMEM to 2 to create a shared memory dump that excludes the buffer pool. You can dynamically change the value of DUMPSHMEM with onmode -wm and onmode -wf. DUMPSHMEM can take the following values:
0 - Do not dump shared memory during an AF
1 - Dump full shared memory (default)
2 - Dump shared memory without the bufferpool (new option)
The 'onstat -o' command also allows you to dump a shared memory file on demand. Use the new 'nobuffs' option with 'onstat -o' to generate a shared memory dump without the bufferpool. If you use 'onstat -o' without the 'nobuffs' option, the DUMPSHMEM configuration parameter controls the content of the shared memory file: a value of 0 or 1 generates a full shared memory dump file, and 2 excludes the buffer pool information.
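As a quick reference, the corresponding ONCONFIG fragment looks like this (a sketch; set the value that fits your diagnostics needs):

```
# Dump shared memory on an assertion failure, but exclude the buffer pool
DUMPSHMEM 2
```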
All oncheck options work on a shared memory dump file without the buffer pool, except the options that access buffer information, e.g. -b, -B, -P.

Typically, onstat shows segments as "FACADE" when working with a full shared memory dump, whereas a dump without the buffer pool shows "FACADE NOBUFFERS".
From IDS 10.0x onwards, Enterprise Replication (ER) supports alter operations on a replicated table while replication is active; however, RENAME was not one of the supported operations. That changes in Cheetah: IDS 11.x supports RENAME on ER columns, tables, and databases. This feature simplifies DBA tasks and increases data availability. Without it, the DBA would have to plan a time at which ER could be removed from the object so that the rename could be performed, requiring advance planning and scheduling during non-peak hours.
When a RENAME operation is performed, affected replicate definitions are updated to reflect the impact of the RENAME, and a control message is sent to the other servers informing them of the rename. The RENAME operation is allowed only on a mastered replicate. ER does not propagate the RENAME command itself, although there are plans to implement that in the future; the user simply issues the rename DDL statement on each of the servers affected by the RENAME. If the rename is of a column or a table, the replicate is cycled (cdr stop/start replicate) in much the same way as with a table ALTER. If a database rename occurs, however, ER itself is cycled (cdr stop/start). This cycling only occurs if the rename affects something on the local node. In all cases, the rename causes a control message to be sent to the other servers within the replication domain so that the syscdr database is correctly updated.
MacWorld was an amazing event this past week in San Francisco at the Moscone Center. Although it is generally a consumer-oriented event, there were many companies with an enterprise focus showing their solutions. IBM had the smallest pedestal in the Enterprise Solutions Area. People would constantly walk by and say, "IBM: the biggest company with the smallest pedestal." We smiled and told them we make the biggest impact!

As you know, we just announced our beta for Mac OS X this week, and we were at the show to start getting the word out and to meet customers and business partners interested in an "industrial strength" database for their applications. There was a ton of traffic in the pedestal. People were at first surprised and then pleased to see us there, as they have been looking for a dependable database that is secure and scalable to build their applications on. Many developers with a long history of using Informix were thrilled to see the announcement and eager to try the beta.

Mac users are very happy to see an enterprise-class database on the latest Mac OS.

They are looking for something better than what is currently available on the Mac, which just hasn't been reliable and has not scaled to meet their needs.

Some really like the idea that OAT is open source and allows users to customize it, especially the free part.

One mid-sized system integrator commented, "We are glad to see IBM supporting the Mac platform as we are building applications to take into government and financial markets. We need a database that our customers can depend on. IDS is exactly what we are looking for."
As you may know, onbar backs up some critical administrative files during a storage space backup. These critical files are as follows:
- The onconfig file
- UNIX: The sqlhosts file
- The ON-Bar emergency boot file: ixbar.servernum
- The server boot file: oncfg_servername.servernum
There may be a situation when you need to restore one of these files, for example when replacing disks or restoring to a second computer system (an imported restore). However, previously there was no easy way to restore these files from an onbar backup; you had to depend on Informix Technical Support to perform this operation.

Starting with Informix version 12.10.xC2, you can restore the critical files that you backed up with the onbar utility. An onbar cold restore now has an option to restore these files, or you can restore only these files without performing the storage space restore.

Use 'onbar -r -cf yes' to restore the critical files during a cold restore. Alternatively, use 'onbar -r -cf only' to extract only the critical files while the Informix server is offline.
You can use HTTPS to protect the IBM OpenAdmin Tool (OAT) for Informix web server from eavesdropping, tampering, and message forgery. When HTTPS is enabled, messages from OAT clients are encrypted before they are sent to the OAT web server. Encryption prevents hackers from listening over the line and stealing sensitive information. When HTTPS is enabled, OAT clients can also authenticate with the OAT host, so that hackers cannot deceive OAT clients with fake OAT web servers.
Now that you have installed the Developer Workbench, it is time to run it. On the first run, you should be asked to set up your workspace. This should give you a screen like the following:

Once you select this, the Workbench takes a little while to build your workspace. Once done, however, you get the classic Eclipse Welcome screen. I have highlighted your workspace in the following screen shot, because that is the next place you want to go:

Once you click on the workbench, you get the generic workbench. The next step at that point is to start a project and establish a connection. Setting up a new project to handle queries is fairly straightforward and can be reached very quickly; all you have to do is select the Data Development Project option, as in the following screen shot:

Selecting the above takes you to the new Data Development Project screen, as shown below:

As you can see, there are some DB2-flavored words here. A schema in DB2 is the IDS equivalent of a database, so keep the default radio button set, as seen in the screen shot above. I would recommend that you give your project a meaningful name, but that is completely under your control. In my case I listed it as "Mark's Test project for Blog".
This brings us to the next page, which assigns an existing database connection or creates a new one. Since this is our first time through, just hit the Next button, as we will need to create a new connection.

The connection parameters page gives you several options for connections, as well as several data server choices. Take a look at the screen shot below:

This is a standard Informix JDBC connection string at this point. The screen shot above differs slightly from what you see as the default settings, as I have selected the "Informix JDBC driver" instead of the "IBM Data Server Driver for JDBC and SQLJ". In short, I took the JDBC driver instead of the JCC driver. The main reason is that JCC itself is in beta, and I wanted to use something I knew how to connect with and trace. JCC also requires a DRDA listener thread, and I don't have one set up for the instance I am using. Update: check out Establishing JCC connections in 11.10 if you want to use JCC connections.

Fill in the remaining fields above with the normal Informix connection settings, and test your connection. Once your connection is working, click the Next button.
This final screen allows you to filter your schema. Here is the screen shot:

Please note: the default is only appropriate for users who have IDS on a Windows platform. Since a schema id is actually a user id in IDS, filtering on INFORMIX means that no information of any kind will be returned for any database objects if your IDS instance is on a Unix platform. The reason, of course, is that the user id is case sensitive, and informix != INFORMIX. Quite honestly, I suggest you just disable filtering for now; when I cover the Database Explorer I will show you how to change the filter however you want.

As soon as you have selected your filter objects, you will finally see the workbench laid out and ready to be used for a Data Development project. Here is mine:

In the next article I will discuss the Database Explorer window in the lower left-hand corner.
Informix has four protocols, or communication channels, that you can use to access the server: SQLI, DRDA, JSON, and REST. If you are familiar with one of these and want to know how to perform the same basic operations using one of the others, consider this post to be your Rosetta stone.

The sample application is written in Java. It iterates through the Informix JDBC driver (which uses the SQLI protocol), the DB2 JCC driver (which uses DRDA), the MongoDB driver for Java (JSON), and native Java HTTP classes (REST). For each of these protocols, it goes through the steps to:
Create a user
Create a database
Create a table/collection
Create a row or document (entry)
Read an entry
Update an entry
Drop a table/collection
Drop a database
Drop a user
The code should be self-explanatory, but there are some differences between the APIs worth noting.
Using the SQLI and DRDA protocols, connections are stateful: an open channel is maintained between the port listening for commands on the server and the port on the client sending them. Transaction states are therefore easily maintained, and multiple commands can be executed without re-authenticating. Most operations are identical between these two protocols, except that the URLs differ and Informix JDBC requires an INFORMIXSERVER setting. You cannot drop the database you are currently accessing. Authentication takes place at the server level, allowing you to access whatever databases you have appropriate privileges for.
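For illustration, the connection URLs for the two stateful protocols look roughly like this; the host, port, server, and database names are placeholders:

```
# SQLI via the Informix JDBC driver (note the INFORMIXSERVER property):
jdbc:informix-sqli://myhost:9088/mydb:INFORMIXSERVER=myserver

# DRDA via the DB2 JCC driver:
jdbc:db2://myhost:9089/mydb
```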
The MongoDB clients are what I'll call semi-stateful. Client and db objects have the appearance of maintaining a connection with the database server, but really the connection information is cached on the client side and as far as the listener is concerned every operation is independent. (Exceptions to this model are beyond the scope of this write-up.) The database you are dropping must be the current database. Authentication takes place at the database level.
Fitting the REST model, there is no concept of a stateful connection in the RESTful API. However, there are two ways to authenticate: Providing a user name and password as is common, and sending a session id cookie from a previously authenticated operation. Using a session cookie is how you can, say, authenticate against the admin database, and use your userAdminAnyDatabase privilege to create a new user within another database.
Users with the ability to create other users must already exist for this demo to work. Examples:
Informix DBSA: create user dbsauser with password 'a1b2c3d4e5' properties user 'nobody' authorization(DBSA)
The download file contains the source code and Eclipse project that can be used to build it. It also contains all the required drivers; it is your choice whether to use the ones included or use more modern ones you have installed. Included in the project is an ANT script for building the jar.
Run the demo with the command: java -jar basicChecks.jar connSettings.properties
Running the demo without the required properties file will give you instructions for what the file should contain.
$ java -jar basicChecks.jar
Usage: java Main <connection properties file>
The connection properties file should contain lines of <name>=<value> pairs.
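A connection properties file might look something like the following. The property names shown here are purely hypothetical placeholders; run the demo without a file, as shown above, to get the authoritative list.

```
# Hypothetical example only -- the actual property names are printed by the
# demo's usage message.
host=myhost.example.com
user=dbsauser
password=a1b2c3d4e5
```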
There are new functions in Informix 12.10 designed to improve application compatibility with other vendors. Some of these are packaged in the excompat (External Compatibility) library, and within that library are functions for enabling tracing or logging of user routines. These are the DBMS_OUTPUT functions, which are:
System-defined routines available in the DBMS_OUTPUT package
The following is a short example of how these might be used.
-- Setup example conditions
-- Register the compatibility library if not already done. Uncomment these lines
-- if the DBMS_OUTPUT routines are not found.
--EXECUTE FUNCTION sysbldprepare('excompat.*', 'drop');
--EXECUTE FUNCTION sysbldprepare('excompat.*', 'create');
-- Include tracing information on an event or a routine.
create trigger if not exists customer_insert
    insert on customer
    for each row (
        execute procedure dbms_output_put_line(
            'customer row inserted by session ' || dbinfo('sessionid')));
-- For our purposes, it is useful to be able to fetch back a message buffer
-- line from an SQL call; so, creating a procedure to enable that.
drop function if exists read_trace_buffer();
create dba function read_trace_buffer()
    returning lvarchar;
    define buffer lvarchar(2000);
    define line_found integer;
    let buffer = '';
    let line_found = 0;
    execute procedure dbms_output_get_line(buffer, line_found);
    return buffer;
end function;
-- Enable tracing and test; tracing can be enabled and disabled at runtime.
-- (The buffer-size argument to dbms_output_enable is assumed here.)
execute procedure dbms_output_enable(2000);

insert into customer (lname, fname) values ('Barker', 'Bob');

-- Demonstrate that information was put in the message buffer by the
-- INSERT trigger.
execute function read_trace_buffer();
On the call to read_trace_buffer(), something like the following should be returned.
(expression) customer row inserted by session 71
This is a simple example to demonstrate the setup and enablement of using the DBMS_OUTPUT messaging buffer.
Also check out the newly redesigned landing page for IBM Informix! All the IBM Informix resources you need are in one place - product offerings, select customer references, videos, demos, white papers, code samples, and downloads.
The value of a locale variable, CLIENT_LOCALE or DB_LOCALE, can be broken into four parts:
     1          2                     3                       4
<language>_<territory>.<Code set name/Code set number>[@modifier]
Conventionally, I represent this as ll_tt.xxxx@xyz, where ...
ll ............ represents the Language
tt ........... represents the Territory, or cultural convention.
xxxx ....... represents the Code set Name or the Code set Number supported by the locale and
xyz ......... represents the Modifier. This is the only optional part in a locale value.
The modifier, sometimes referred to as a variant, modifies the cultural-convention settings that the language and territory settings imply. It usually indicates a special localized collating order that the locale supports.
Let us look at an example.
CLIENT_LOCALE = de_at.cp1252@euro, and CLIENT_LOCALE = de_at.1252@euro
Here, both CLIENT_LOCALE values represent the same locale.
1252 is the Code set number for Code set name, cp1252. We can specify either Code set name or the Code set number in a locale value.
- de ........... represents the German language
- at ............ the territory, Austria
- cp1252 ... the code set used for the encoding and
- euro ....... the modifier used for the locale
So, this is German language locale, for Austria, using cp1252 encoding and euro modifier.
Now, to check whether this locale file exists, where do we look?

All locale files reside under the directory $INFORMIXDIR/gls/lc11.

To look up the locale files for a language (ll) and territory (tt), we check under the $INFORMIXDIR/gls/lc11/ll_tt directory.

In our example, to look up the locale files for the German language (de) and the territory Austria (at), we look in the $INFORMIXDIR/gls/lc11/de_at directory.

Next, under that locale directory, look for a file whose name is the hex value of the code set name/code set number, followed by the modifier name if a modifier is specified, with an extension of .lco.

In our example, the hex value for code set cp1252 is 04e4 and the modifier euro is used, so we look for the file 04e4euro.lco under the directory $INFORMIXDIR/gls/lc11/de_at.
How and where do we find the hex value for a code set name?
For any Code set name, its Code set number and hex value can be looked-up in file $INFORMIXDIR/gls/cm3/registry.
Let us find the hex value for Code set name Latin-3.
We can find the information in the file $INFORMIXDIR/gls/cm3/registry.

In the registry file ...
- the first column represents the code set name,
- the second column is the code set number,
- the third column is the hex value of the code set number, and
- the fourth column is either blank or has a comment about the code set.
Let us look up the code set Latin-3 in the registry file and see what we find. We get the following entry:
Latin-3          57346             0xe002
---------        -----------       -----------
code set name    code set number   hex value     (in this case, there is no comment)
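The hex value in the third column is simply the code set number written in hexadecimal, which you can verify from a shell:

```shell
# The registry's hex column is the code set number in hexadecimal.
printf "0x%04x\n" 57346   # Latin-3 -> 0xe002
printf "0x%04x\n" 1252    # cp1252  -> 0x04e4
```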
Locale values are case-insensitive.

DB_LOCALE = de_de.cp1252 and DB_LOCALE = de_de.CP1252 are both valid locale values, representing the same code set.
You can specify either code set name or code set number in a locale value, but you cannot use the hex value of the code set number.
DB_LOCALE = fr_ca.57372 or fr_ca.utf8, ........ both values are valid and they represent the same code set.
DB_LOCALE = de_de.cp1252 or de_de.1252 .... both values are valid and they represent the same code set
DB_LOCALE = de_de.04e4 ............... this is invalid. A code set's hex value cannot be used in a locale value.
If a modifier is not specified in the locale variable, for example ...
CLIENT_LOCALE = de_at.cp1252
- to locate the locale file, look for the <hex value of the code set>.lco file under the language_territory directory. In this case, we look for the following file: $INFORMIXDIR/gls/lc11/de_at/04e4.lco
If a modifier is specified in the locale variable, for example ...
CLIENT_LOCALE = de_at.cp1252@euro
- to locate the locale file, look for the <hex value of the code set><modifier>.lco file under the language_territory directory. In this case, we look for the following file: $INFORMIXDIR/gls/lc11/de_at/04e4euro.lco
A code set name, its corresponding code set number, and its hex value are specified in the file $INFORMIXDIR/gls/cm3/registry.
Locale Territory/ Country code and Language code can be looked up in file
Conventionally, for a LOCALE variable with the value ll_tt.xxxx[@xyz], the following locale file should exist:
$INFORMIXDIR/gls/lc11/ll_tt/<hex value of xxxx>[xyz].lco
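Putting the pieces together, a small shell sketch can derive the expected file name for our de_at.cp1252@euro example. The code set number 1252 for cp1252 is taken from the registry, as above; the install location is an assumption.

```shell
# Build the expected locale file path for ll_tt.xxxx@xyz.
INFORMIXDIR=${INFORMIXDIR:-/opt/informix}   # assumed install location
lang_terr="de_at"
codeset_hex=$(printf "%04x" 1252)           # cp1252 -> 04e4
modifier="euro"
echo "$INFORMIXDIR/gls/lc11/$lang_terr/${codeset_hex}${modifier}.lco"
```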