Please note that IDS 9.40 [All Versions] is now "End of Support". If you are planning to remain on 9.40, you should start discussing a service extension purchase with your IBM representative; otherwise, please consider upgrading to the current IDS release.
IDS 11.50.xC4 has several new features, such as storage optimization and warehouse functionality, that you can take advantage of.
The upgrade portal is a good resource for your upgrade plans and is a definite recommendation. If you need any additional assistance or have a quick question, please feel free to email getIDS11@us.ibm.com.
The best-fit Java class for an Informix DATETIME value is java.sql.Timestamp. java.sql.Timestamp technically inherits from java.util.Date, although there are some semantic differences that are not typical of inheritance relationships. All the constructors, getters, and setters that are not deprecated for Date and Timestamp use values based on Greenwich Mean Time (GMT); they are offsets in milliseconds from the epoch, January 1, 1970 00:00:00 GMT. So, it would be logically consistent for Timestamps to convert to DATETIMEs using the GMT reference frame by default. However, the JDBC Tutorial and Reference specifies that, when no Calendar is provided, Timestamp values are converted using the time zone of the JVM in which the application is running. The difficulties surrounding Timestamps and time zones are not limited to the scope of the Informix JDBC driver; they are generic to all of JDBC because that is the specification.
This model creates problems in a variety of ways. For distributed applications, the Date or Timestamp values might be streaming into the database from JDBC drivers located in different time zones. With multi-tier applications, you might find yourself contending with a user interface in time zone A, the JDBC driver in time zone B, and the database server in time zone C. The driver is the library where external types are converted to internal types; so, where it is running is where the local time zone comes into play. It can be difficult for either the front end or the database to know in what time zone the driver is operating. Dealing with the conversion from Timestamp in GMT to DATETIME in the JDBC driver locale has to be done in the application layer directly interfacing with the JDBC driver.
The conversion done by the driver applies in both directions, and this can lead to complications if anything other than the JDBC Timestamp class is used in conjunction with DATETIME values. If strings representing DATETIMEs or DATETIME literals are used anywhere in your application SQL (including CURRENT), the values will not be converted on the way in, but will be converted on the way out. Similarly, different APIs will not convert in either direction. If you insert new Timestamp(86400001) (January 2, 1970 00:00:00.001 GMT) through JDBC, and select it back from within the same JVM, you will get that same value back. So, you might expect that selecting the same row-column from, say, DB-Access would also give you the same value, but it will not, because the value was converted to the JVM time zone on the way in through JDBC, but not converted on the way out through ESQL/C. If you are in the U.S., you will get some time on January 1st, not even the same day. The reverse also applies: you may find yourself with values inserted through other APIs, which perform no conversion, that are converted to some other time zone when selected through JDBC.
DATETIMEs, of course, do not contain time zone information. There are basically two ways to deal with this: record the time zone information in another column along with the DATETIME, or establish a convention that all the DATETIMEs in the database are in the same time zone. Establishing a convention is the simpler approach, both from an application implementation perspective and in terms of being able to easily compare values in different rows, columns, or tables. If you know that every Java application which will ever access the data will operate in its own JVM, you can code your applications to set the default time zone of the JVM, but this is a problematic solution when more than one independent application is running in the same JVM, such as servlets in a web server. It is unclear what would happen with different threads of execution changing the time zone for all other threads, but if they were actually able to do that, it would not be pretty.
At JDK 1.1, two things happened: all the getters and setters for java.util.Date whose meaning was ambiguous with respect to time zone were deprecated, and the java.util.Calendar class was introduced. Along with the Calendar class itself, methods accepting a Calendar object were added to the JDBC interface, for example PreparedStatement.setTimestamp(int parameterIndex, Timestamp x, Calendar cal) and ResultSet.getTimestamp(int columnIndex, Calendar cal). In these methods, the Calendar object establishes the frame of reference used to convert between the GMT offset in the Timestamp object and a DATETIME value. If you create a constant Calendar object in your applications, and use it every time you read or write a Timestamp object to a DATETIME, the DATETIME values will remain consistent with each other, and the meaning of the value will not change depending on the time zone of the JVM.
Using a GMT calendar would be efficient because it requires fewer operations to convert between the Java classes and the value in the database. Also, the value will not change if other APIs, like ODBC or ESQL/C, are used. Conceptually, what is stored in the database is January 1, 1970 00:00:00.000 GMT + tstamp.getTime() milliseconds.
Calendar cal = Calendar.getInstance(TimeZone.getTimeZone("GMT"));
Coordinated Universal Time (UTC) is a close relative to GMT and is just as viable.
In this model, all date and time values in the database are on the same time line, and conversion to other time lines or calendar systems are handled at the user interface level. Using UTC or GMT for all DATETIME values provides a simple model to implement in applications. It also removes all the problems related to knowing the time zone of the JDBC driver runtime or if applications written in anything other than Java will ever be used to access the data.
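To make the approach concrete, here is a minimal sketch, assuming a table events with a DATETIME YEAR TO FRACTION(3) column named created (both names are illustrative) and a JDBC URL passed on the command line:

import java.sql.*;
import java.util.Calendar;
import java.util.TimeZone;

public class GmtTimestampDemo {
    public static void main(String[] args) throws SQLException {
        // One constant GMT calendar, used for every read and write.
        Calendar gmt = Calendar.getInstance(TimeZone.getTimeZone("GMT"));
        Connection conn = DriverManager.getConnection(args[0]);

        // On the way in: the stored DATETIME is the epoch + ts.getTime() milliseconds, GMT frame.
        PreparedStatement ins = conn.prepareStatement("INSERT INTO events(created) VALUES (?)");
        ins.setTimestamp(1, new Timestamp(86400001L), gmt); // January 2, 1970 00:00:00.001 GMT
        ins.executeUpdate();

        // On the way out: the same calendar returns the same instant, regardless of JVM time zone.
        Statement st = conn.createStatement();
        ResultSet rs = st.executeQuery("SELECT created FROM events");
        while (rs.next()) {
            System.out.println(rs.getTimestamp(1, gmt).getTime()); // prints 86400001
        }
        conn.close();
    }
}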
Some customers find the single user mode in IDS 10 limited because only informix and the DBSA group have access to the database server. In IDS 11.10, you can give access to other DBAs and specific users in addition to informix and the DBSA group members when the server is in single user mode. I have listed the steps to give access below; a short command sketch follows the list:
1. Make sure that the onconfig variable ADMIN_USER_MODE_WITH_DBSA is set to 1.
2. Set the onconfig variable ADMIN_MODE_USERS to a comma-separated user list, for example tom,don. When the database server switches to single user mode, informix, the DBSA group members, tom, and don will have access to the database server.
3. Start the server in single user mode.
4. Dynamically change the server mode using onmode.
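Assuming the ONCONFIG settings above list tom and don, a hedged sketch of steps 3 and 4 from the command line might look like this:

onmode -j              # step 3: bring the server into single user (administration) mode
onmode -j -U tom,don   # step 4 alternative: grant the users dynamically when switching modes
onmode -m              # return the server to multi-user (online) mode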
Suppose you have upgraded to Informix 11.70.xC3 or a higher version, and after the upgrade some of your queries run slowly. A common observation is that a query was using an index path prior to the upgrade and running faster, while after the upgrade the same query is doing a sequential scan. With an index directive, the query uses the index and runs much faster than the sequential scan; however, the index path has a higher estimated cost associated with it than the sequential scan.
Auto Reprepare is a feature in IDS 11.10 to minimize the occurrence of -710 errors in applications.
What is a -710 error?
User applications can get -710 errors while executing an explicitly prepared statement if the tables it references have been changed after the statement was prepared. Applications can get -710 errors while executing a stored procedure if the tables referenced indirectly by the procedure have been changed after the procedure was optimized. Let us look at some examples of prepared statements where you can get a -710.
$prepare s1 from "select c1, c2 from t710";
$execute s1;
$create index i1 on t710(c1);
$execute s1; -------> -710 error

When this feature is enabled, if the server detects at execution time that there has been a change to the table or tables referenced by a prepared statement, it dynamically re-prepares and re-optimizes that statement. The statement is executed if the re-optimization succeeds. Similarly, if the server detects that there has been a change to a table or tables referenced by a stored procedure at execution time, it dynamically re-optimizes the stored procedure. Note that in some cases stored procedures get dynamically re-optimized even when the feature is disabled. Let us look at the previous examples with this feature enabled:
$prepare s1 from "select c1, c2 from t710";
$execute s1;
$create index i1 on t710(c1);
$execute s1; -------> re-prepares and re-optimizes the statement and executes it

If you have queries that reference tables on remote IDS servers, as long as the changes are on tables that are directly referenced in your SQL statement, you will not get any -710 errors. But if a table referenced indirectly in your SQL statement changes, you will get -710 errors unless you have Auto Reprepare enabled on the remote IDS server.
You can still get -710 errors in your applications in one of the following scenarios:
This feature is enabled by default. To disable it, set the ONCONFIG parameter AUTO_REPREPARE to 0 and restart IDS. You can disable it at the session level using the SQL statement "set environment IFX_AUTO_REPREPARE '0'".
Suma Vinod
With Cheetah, IDS now allows a DBA to use ontape to point to a directory instead of just a particular file. This can be handy when you have multiple backups that you want to keep in the same directory.

It involves a little extra administration though, so here's a quick primer on it.
Setting it up for archives
It's actually pretty easy to set IDS up to do it. The basic setup is sketched below.
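A minimal sketch, assuming backups should land in /backups/ids (the path is illustrative): point TAPEDEV in the ONCONFIG file at an existing directory that user informix can write to, then take the archive as usual:

TAPEDEV /backups/ids
ontape -s -L 0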
When you take an archive, by default it will use <HOSTNAME_SERVERNUM> as the standard prefix. For example: my machine name is ryleh and my SERVERNUM is 11. When I do an archive and then look at the file, it is: ryleh_11_L0

If you want to name it something different, you can set the undocumented environment variable IFX_ONTAPE_FILE_PREFIX.

You will have to recycle the engine after you set it for it to take effect.
This way you can have multiple archives stored in the same directory.
All in all, backing up to a directory can be a very handy utility to have for an administrator that has multiple instances, especially for such things as development and test instances. You can save an archive under the name of, say, "benchmark", and then when you need to load it up, you don't have to look high and low for the file or the tape. It's right there, and convenient.
Joe Baric
So you want to get the BTS DataBlade running on a database? Well, here is a step-by-step quick reference to get your database ready to use the BTS DataBlade.
SELECT company FROM customer WHERE bts_contains(company, '%all');
And so now you should have the BTS set up and ready to run.
In our final blog on the BTS, we will go over some performance tuning and how to troubleshoot some issues.
Mark Jamison
So now that we have set up the Basic Text Search Blade, or BTS for short, the next question is: what performance tuning tools are available, and how do we troubleshoot problems?
The biggest performance tuning tool is bts_index_compact. From an administrative standpoint, we treat BTS indices the same way we treat btree indices, with one exception: there is no automated thread that goes out and cleans the BTS indices, whereas for btree indices we have btscanner threads. bts_index_compact is the BTS version of what the btscanner thread does. So from a performance standpoint, you should frequently run bts_index_compact in order to decrease the size of the index, both in disk space and in number of elements, and thus increase performance. Please note that you can make this process automatic by creating your index with delete='immediate' in your statement, as sketched below. This will eliminate the need for compacting the index, but will make all deletes run longer. A better option would be to set up a function using the admin API to periodically run bts_index_compact on your indices, thus creating a poor man's btscanner for BTS, so to speak.
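For reference, a hedged sketch of creating a BTS index with immediate delete (the customer table and the extspace name bts1 come from the examples below; check your version's documentation for the exact parameter syntax):

create index company_bts on customer(company bts_char_ops) using bts(delete="immediate") in bts1;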
Another performance tuning tip is to make sure that you have created your extspaces on fast disk. If you create an extspace on an NFS mount, you are basically asking for horrible performance. A little bit about extspaces may be in order. External spaces, or extspaces, are exactly what they sound like: spaces external to the database proper. What this means is that you cannot use the normal onstat options to monitor an extspace.
Troubleshooting BTS Issues:
So now you have run into a problem; maybe it is something as simple as not remembering where your extspace is actually located, or maybe you are hitting bigger problems. In either case, this is when you need to enable BTS tracing. This process is actually fairly straightforward, as BTS provides two functions to handle it. First you call bts_tracefile to set your trace file location, then you call bts_tracelevel, which controls the information dumped to the file you specified.

Let's say that you had just forgotten the syntax and location of your BTS index. The quickest way would be to run the following:
execute function bts_tracefile('/tmp/bts.log');
execute function bts_tracelevel(10);
select * from customer where bts_contains(company,'all~');
After that just read the file bts.log
Below is a snippet from the trace file generated from the above query:
Taking a quick look at this you can see what your extspace info is.
database name = stores_demo
owner name = informix
ext space name = bts1
You also know now that your extspace is located in /work1/informix/11.10.FC1/bts_idx/stores_demo/informix
A quick word on extspaces: they are spaces external to the database, and as such there is no onstat that monitors them. If you forget the name of your extspace or forget its location, the above method is essentially your only option for finding them. One other important note about extspaces: if you want to back them up, there is only one supported method, which is as follows:
onmode -c block
External backup of external space
External backup of the instance
onmode -c unblock
What the above means is that, rather than going through the above procedure to safely back up your BTS indices, you should probably just plan on rebuilding your BTS indices in the case of a restore.
One other consideration when troubleshooting BTS issues: BTS does not allow for multiple bts_contains predicates in the predicate portion of an SQL statement.
select * from customer where bts_contains(company, 'fred') or bts_contains(company, 'all')
would fail with the error:
(BTS22) - bts_error bts_contains requires an index on the search column
This concludes my series on the BTS blade.
Sanjit Chakraborty
--> Error -23101      Unable to load locale categories.
- $INFORMIXDIR/gls/lc11/(CLIENT_LOCALE's ll_tt)/(hex value of DB_LOCALE's code set).lco
- $INFORMIXDIR/gls/lc11/(CLIENT_LOCALE's ll_tt)/(hex value of CLIENT_LOCALE's code set).lco
where ll_tt represents the language and the territory specified by the locale.
The hex value for any code set can be looked up in the file $INFORMIXDIR/gls/cm3/registry.
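For example (a hedged illustration, assuming a POSIX shell):

grep -i gb18030 $INFORMIXDIR/gls/cm3/registry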
say, DB_LOCALE = en_us.utf8 (the hex value for code set utf8 is e01c)
CLIENT_LOCALE = zh_cn.gb18030-2000 (the hex value for code set gb18030-2000 is 1570)
the following files must exist:
When GL_USEGLU is set (i.e. when using ICU implementation), you need to check the following as well.
- Ensure that ICU files are loaded correctly.
- $INFORMIXDIR/gls/etc/* (all 4 files must exist)
--> Error -23104      Error opening required code-set conversion object file.
where cccc is the hex value for CLIENT_LOCALE's code set,
dddd is the hex value for DB_LOCALE's code set,
assuming the code sets for CLIENT_LOCALE and DB_LOCALE are not the same.
Having the same code set for CLIENT_LOCALE and DB_LOCALE means no code set conversion is involved, and hence no -23104 error.
If DB_LOCALE = en_us.utf8 (the hex value of utf8 is e01c)
CLIENT_LOCALE = zh_cn.gb18030-2000 (the hex value of gb18030-2000 is 1570)
the following files must exist, or else error -23104 results.
--> Error -23197      Database locale information mismatch.
- the code set of the locale_spec used in a SET COLLATION SQL statement is not the same as the code set of the database locale.
* Database locale is the locale used when the database was created.
* To find your database locale, you can look up the dbs_collate value in the sysmaster database by executing the following command:
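For example (a hedged query; substitute your own database name):

select dbs_collate from sysmaster:sysdbslocale where dbs_dbsname = 'stores_demo';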
* Remember, if DB_LOCALE is not set, CLIENT_LOCALE is used as DB_LOCALE.
Seema Kumari
The fetch buffer size (FET_BUF_SIZE, FBS) setting applies to Informix client-server communications when using the SQLI protocol. While it can be set for connection types other than TCP (shared memory or pipe), as we will see, it is only really useful for TCP connections.

When the server responds to a client request for data, it arranges the data in a buffer of the size requested by the client. The data is arranged by tuples, or rows. When the buffer is full, or the server has come to the end of the rows being fetched, the buffer is copied over the communication channel to a like buffer within the client application process memory. The client API (ODBC, JDBC, ESQL...) reads from the buffer, and copies or transforms the data into application variables as the application requests rows. When the client comes to the end of the buffer, and there is still more data in the result set, the client requests that the server send it another buffer of data.
The fetch buffer size setting is intended to allow the application to reduce the number of times the client API has to request another buffer. This setting is very much in the domain of speed versus size tradeoffs. Since at least one buffer has to be sent, there is no advantage to setting the buffer size to be larger than the typical, expected result set. Also, each buffer is associated with one executing statement; so, if there are lots of statements being processed concurrently, the size of the buffer can have an impact on the application's memory footprint.
A simple way of thinking of how the fetch buffer size affects application performance is to think of it in terms of the number of buffers per result set multiplied by the inherent delay between the client and server. If the result set size is small, or the communication channel has little to no delay (which is always the case with shared memory or pipe connections), there is no significant benefit to larger buffer sizes. What follows are the results of a series of tests where the result set size, fetch buffer size, and network delay are varied.
These tests were written using the ODBC interface, but the pattern of results should be the same across all Informix APIs when using the SQLI protocol. The client and server are on the same physical machine, with the client running under Windows and the Informix server on a Linux virtual machine. The network delay is controlled using Linux's network emulation (netem) functionality. The delay is in milliseconds; it is roughly equivalent to what a 'ping' command will tell you is the delay between the host and the client.
The data do not actually exist in the database; they are manufactured at the time of query execution through the use of views created to multiply the base result set.
This configuration is not representative of any real world scenarios. The actual performance of this set of queries is not important; the important aspect of these tests is the performance of one condition relative to another. So, the data have been converted to measurements of relative efficiency by dividing each data point within a series by the maximum value within that series. The most efficient setting tested has a relative efficiency of 1.0, and the rest are between 1.0 and 0.0.
The empirical results match predictions. As the size of the result set increases, a larger fetch buffer increasingly compensates for slower communication between the client and the server. When sending large results to the client, it is possible to compensate for a slow network with a large fetch buffer. When the result size is small, or the network has little delay between the sending and receiving of packets, the setting of fetch buffer size has little effect.
Example 1: Small result set size - 1K
Increasing the fetch buffer size beyond the size of the result set has no meaningful effect, across all network conditions.
Table 1. Result set 1K data
Example 2. Moderate result set size – 256K
When there is a negligible network delay, the size of the fetch buffer has negligible effect. As the network delay increases, the effect of increasing the fetch buffer size becomes more significant. When the network delay is 100ms, increasing the fetch buffer can increase performance by over 6 times.
Table 2. Result set 256K data
Example 3. Large result set size – 4MB
When working with large result sets over a slow network, setting a large fetch buffer size can be crucial to performance. Notice that the data transfer rate of the largest buffer over the slowest network is only about 15% slower than the fastest condition, and 12 times faster than the same transfer using a small buffer.
Table 3. Result set 4MB data
The results above should provide a useful guide to an appropriate setting for fetch buffer size based on the actual queries being performed and the network infrastructure in use. The main drawback to a high fetch buffer size setting is limited to when the memory footprint of the application starts to crowd out the memory requirements of other processes.
IDS 11.50 introduced a new installation wizard (installer), which makes it easier than in past releases to set up an instance to use a variety of database clients. In this section we will discuss how to set up a DRDA connection during the installation process.
In IDS 11.10, a DBA or user informix can create sysdbopen() and sysdbclose() procedures to be executed when a database is opened and closed. These procedures can be used to change the properties of a session without changing the application that the session executes. Any statements that are valid in a UDR can be executed in these procedures to change the session behavior.
A DBA or user informix can create the following procedures in a database:
The owner name is not ignored when you create sysdbopen() and sysdbclose() procedures in non-ANSI databases, so you can create these procedures for specific users in non-ANSI databases.
The following procedure creates a table oltp_stat, sets the role to oltp and the PDQ priority to 10 for user oltp_user in a database:
create procedure oltp_user.sysdbopen()
create table oltp_stat(userid int, connect_time datetime year to second);
set role to oltp;
set pdqpriority 10;
end procedure;
The following procedure creates a table public_stat, sets the role to others and the PDQ priority to 1 for the PUBLIC group in a database:
create procedure public.sysdbopen()
create table public_stat(userid int, connect_time datetime year to second);
set role to others;
set pdqpriority 1;
end procedure;
You can clean up any activities started by sysdbopen() in sysdbclose(). For example, if you had created some tables in sysdbopen(), you can drop those tables in sysdbclose().
The following procedures drop the tables created by the sysdbopen() procedures given above:
create procedure oltp_user.sysdbclose()
drop table oltp_stat;
end procedure;

create procedure public.sysdbclose()
drop table public_stat;
end procedure;
A DBA or user informix can set the environment variable IFX_NODBPROC to any value, including 0, to prevent the execution of sysdbopen() and sysdbclose() procedures. When you set up sysdbopen() and sysdbclose() procedures, you can set the environment variable IFX_NODBPROC and execute the procedures directly to test whether they work as expected. You need to unset the environment variable IFX_NODBPROC after testing.
For more information, see the material on sysdbopen() and sysdbclose() in the IBM Informix Guide to SQL: Syntax.
Suma Vinod
I will reiterate a few existing features in Informix that you can use to optimize the connection to the database. Establishing a connection to a database is always an expensive operation. The goal should be for applications to perform as many operations as possible while connected to the database. The following are some connection attributes that can be used to tune application performance by reducing network traffic:
Insert optimization (SQL_ENABLE_INSERT_CURSOR): Creating an insert cursor is a good idea to efficiently insert rows into a table in bulk. You need to set the SQL_ENABLE_INSERT_CURSOR attribute with SQLSetStmtOption(), then call SQLParamOptions() with the number of rows as a parameter. An insert cursor creates a buffer in memory to hold a block of rows. The buffer receives data from the application program; once the buffer is full, the data is sent to the database server in a block. The buffer reduces the amount of communication between the program and the database server. As a result, the insertions go faster.
Automatically free a cursor (AUTOFREE): In general, when an application uses a cursor, it usually sends CLOSE and FREE statements to the database server to de-allocate memory. Execution of these statements involves a round trip of message requests between the application and the database server. The automatic free feature (AUTOFREE) reduces the number of round trips by one. You can use the IFX_AUTOFREE environment variable or the SQL statement SET AUTOFREE to enable this feature.
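For example, to enable the feature for the cursors in a session:

SET AUTOFREE ENABLED;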
I have just provided some touch points here; there are several other ways to improve network and connection performance. Please check the Improve Application Performance section in the Informix Information Center for details.
Like a lot of fellow system administrators, you may need to execute many system or administrative tasks automatically. Some of these tasks should occur at regular intervals. Others need to run only once, perhaps during off hours such as evenings or weekends. You can use the Informix DB Scheduler to create a user-defined task to run administrative work automatically in the background at predictable times. The DB Scheduler uses SQL statements instead of operating system job scheduling tools, e.g., cron jobs.

In this document you will see an example of a user-defined task that deletes some unwanted files on a periodic basis. For better understanding, you can perform this work in two phases.
Create a stored procedure
First, create a procedure in the 'sysadmin' database named 'delete_apb_documents_proc()' that removes all files matching 'apb_document*' from the /tmp directory that were accessed more than 60 minutes ago, as sketched below.
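A hedged sketch of such a procedure (the find options follow the description above):

create procedure delete_apb_documents_proc()
-- remove apb_document* files in /tmp last accessed more than 60 minutes ago
system 'find /tmp -name "apb_document*" -amin +60 -exec rm -f {} \;';
end procedure;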
Use the UNIX manual page for 'find' command (man find) to get details on options used in the above example.
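The second phase registers the procedure with the DB Scheduler. A hedged sketch, run against the 'sysadmin' database (the group, start time, and frequency are illustrative):

insert into ph_task (tk_name, tk_description, tk_type, tk_group, tk_execute, tk_start_time, tk_frequency)
values ('delete_apb_documents', 'Delete apb_document* files from /tmp periodically', 'TASK', 'MISC', 'delete_apb_documents_proc', datetime(02:00:00) hour to second, interval(1) day to day);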
Check the Informix Information Center to get more information on defining a task.
You can run all the above SQL statements against 'sysadmin' database using dbaccess utility.
Data modeling is a method used to define and analyze data requirements needed to support the business processes of an organization. Ideally, the data requirements are recorded as a conceptual model that is not platform-specific (the logical data model) with associated definitions for the entities and attributes. This logical model can serve as the ‘information hub’ to help define and enforce consistency across enterprise data. From the logical model, multiple database-specific implementations can be generated (via physical models).
InfoSphere Data Architect (IDA), formerly called Rational Data Architect (RDA), is a great modeling tool for any shop that includes Informix Dynamic Server databases in its mix. Even if you don't have a 'data architect', this product can help DBAs communicate with developers and also help you keep tabs on what is and isn't "real" any more (by comparing the model of what you think you have with what your database actually does have).
I recently updated a tutorial on using IDA specifically from the perspective of using it with IDS. There are several enhancements to the model elements for IDS in IDA 7.5.1:
Link to IDA trial download
Link to an IDA demo
To all Informix users: I would like to encourage you to take a look at our new IBM Data Studio Developer 2.1 offering. There are significant enhancements that we think IDS developers will like.
We now have new tools to view update statistics and perform object management tasks easily, such as specifying data partitions (either round robin or expression-based) and raw tables. Also, trigger support has been greatly improved to take advantage of the ability in IDS to have before, after, and ‘for each’ triggers, whereas only ‘for each’ was supported previously. These capabilities are available at no charge. Guy Bowerman talks a bit more about these features in his blog.
In Data Studio Developer, which includes the base tooling and all the value-add capabilities, we also have many new features that IDS developers can take advantage of.
Using our previous releases, IDS developers could improve productivity by using the SQL content assist and validation capabilities in the Java editor and to generate applications quickly using the pureQuery API. You could capture SQL from any Java application and correlate which SQL was issued to the database and from where in the application. Using Data Studio, you could gain insight into which database tables and columns were used by the SQL. Developers and DBAs could easily isolate poorly performing queries in the application and work better together, to assess changes needed in the application as a result of database schema changes.
Data Studio Developer 2.1 has even more to offer for Informix developers and DBAs:
The download for Data Studio Developer 2.1 should be available soon.
This single package contains both the no-charge, perpetual features available and supported for IDS customers with up-to-date maintenance contracts, and 30-day trial features of the value-added capabilities of the priced Data Studio Developer product. The trial features are optional during installation, but I think you'll be happy if you give them a test drive.
In the meantime don’t hesitate to send me any feedback or suggestions that you may have. Then let me know either here or on the Data Studio Forum what you think of the new release.
-- Sonali Surange
DB_LOCALE and CLIENT_LOCALE are compatible if ...
a locale consisting of the language_territory from CLIENT_LOCALE and the code set from DB_LOCALE exists.
DB_LOCALE = en_us.850
CLIENT_LOCALE = de_de.850
Here, both locales use the same code set, 850, which is supported by both en_us and de_de, that is, US English and German (Germany).
Hence, they are compatible.
DB_LOCALE = de_de.cp1252 (code set cp1252, its hex value 04e4)
ii) Conversion between CLIENT_LOCALE's code set, 850, and DB_LOCALE's code set, cp1252, must be possible; you should see that the files listed above exist.
Hence, they are compatible locales.
DB_LOCALE = cs_cz.8859-2 (code set 8859-2, its hex value 0390)
DB_LOCALE = en_us.utf8 (code set utf8, its hex value e01c)
For example, an Israeli Hebrew (iw_il) CLIENT_LOCALE and DB_LOCALE = en_us.utf8 are not compatible, because Israeli Hebrew (iw_il) does not support utf8 encoding.
That is, there is no locale file to support utf8 encoding for Israeli Hebrew.
Note: If a locale file is not available in your INFORMIXDIR, check the Informix International Language Supplement (ILS).
As mentioned in this blog, IDS 11.10 allows SELECT statements to include syntax that specifies a full select sub-query in the FROM clause. I would like to describe it with some examples here. In IDS 10, if you had to use sub-queries in the FROM clause, you had to cast the sub-query statement to a multiset and then to a table. Here are some example SELECT statements using the syntax from prior IDS versions:
SELECT * FROM table(multiset(select a, b, c from tab1, tab2 where tab1.a = tab2.c)) vt(va, vb, vc), emptab WHERE vt.va = emptab.id;
a, b, c: columns of any data type
SELECT * from table((select coll_dat from sales_tab)) AS c_table(s_month, s_day);
coll_dat: should be a collection data type column
SELECT * from table(function f1());
f1(): an iterator function
In IDS 11.10, the above statements can be written in the same way or you can re-write them as:
SELECT * FROM (select a, b, c from tab1, tab2 where tab1.a = tab2.c) vt(va, vb, vc), emptab WHERE vt.va = emptab.id;
f1(): f1() can be any function
These sub-queries in the FROM clause are called derived tables or table expressions. You can use simple, UNION, or joined sub-queries, including OUTER joins, and can include the ORDER BY clause. You can also write nested FROM clause queries and can combine old and new syntax in an SQL statement. Here is an example:
select * from (select * from (select col1 from tab3 where col1 = 10 order by col2 desc)), table(multiset(select col1 from tab2 where col1 = 20)) order by 1;
The derived table support in IDS 11.10 enhances its capability to execute queries that comply with the SQL-92 standard without any modifications.
Suma Vinod
Use the cdr change config command to replace the existing value of an Enterprise Replication configuration parameter with a new value while replication is active. All Enterprise Replication configuration parameters and environment variables can be changed with this command. The value persists while replication is active; the ONCONFIG file is not updated. If you want to make the change permanent, update the value in the ONCONFIG file instead. The configuration parameters and environment variables with which this command can be used are listed in Table 1.1.
For example, the CDR_SUPPRESS_ATSRISWARN configuration parameter is set to suppress the generation of error and warning messages 1, 2, and 10, so that it appears in the ONCONFIG file as: CDR_SUPPRESS_ATSRISWARN 1,2,10. The following command changes the suppressed error and warning messages to 2, 3, 4, 5, and 7:
Before changing the values you can confirm the existing values by doing:
$onstat -g cdr config | grep SUP
$cdr change config "CDR_SUPPRESS_ATSRISWARN 2-5,7"
WARNING:The value specifed updated in-memory only.
The warning clearly indicates that the changes are in-memory only; that means once replication is stopped and restarted, the original values from the ONCONFIG file will take effect. Now let's confirm the new values:
$onstat -g cdr config | grep SUP
CDR_SUPPRESS_ATSRISWARN configuration setting:2-5,7
Use the cdr add config command to add one or more values to an Enterprise Replication configuration parameter while replication is active. The value persists while replication is active; the ONCONFIG file is not updated. This option is available only for configuration parameters and environment variables that allow multiple values. The allowed configuration parameters and environment variables are listed in Table 1.1.
Following the "change config" example, if you want to add another error and warning message number for suppression to the existing list of 2, 3, 4, 5, and 7, say number 10, you can do that by using the following command:
$cdr add config "CDR_SUPPRESS_ATSRISWARN 10"
WARNING:The value specifed updated in-memory only.
We can confirm the new values from "onstat -g cdr config" as follows:
$onstat -g cdr config | grep SUP
CDR_SUPPRESS_ATSRISWARN configuration setting:2-5,7,10
Use the cdr remove config command to remove an existing value of an Enterprise Replication configuration parameter while replication is active. The value persists while replication is active; the ONCONFIG file is not updated. The allowed configuration parameters and environment variables are listed in Table 1.1.
Again, following the same example from "add config", let's now try to remove suppression for error and warning messages 2, 3, 4, and 5 and keep 7 and 10 by using the "cdr remove config" command as follows:
$cdr remove config "CDR_SUPPRESS_ATSRISWARN 2-5"
WARNING:The value specifed updated in-memory only.
Confirm your changes:
$onstat -g cdr config | grep SUP
CDR_SUPPRESS_ATSRISWARN configuration setting: 7,10
NOTE: The commands change configuration parameters only in memory; they do not update the ONCONFIG file. To update environment variables in the ONCONFIG file, use the CDR_ENV configuration parameter.
The following table shows which kinds of changes are valid for each Enterprise Replication configuration parameter.
In the IDS Cheetah release, you can encrypt communication between an HDR pair to secure the transmission of data over unsecured networks, including the internet. After you enable encryption, the HDR primary encrypts the data before sending it to the HDR secondary server. The HDR secondary server decrypts the data. Use the new ONCONFIG parameter ENCRYPT_HDR to enable encryption between the HDR pair. You can also customize encryption using the related parameters, as sketched below.
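A hedged ONCONFIG sketch (the values shown are illustrative; see the release documentation for the full list of parameters):

ENCRYPT_HDR 1                             # enable encryption for HDR traffic
ENCRYPT_CIPHERS allbut:<ecb>              # ciphers and modes to use
ENCRYPT_MAC medium                        # message authentication code level
ENCRYPT_MACFILE /usr/informix/etc/mac.key # MAC key file
ENCRYPT_SWITCH 60,60                      # cipher/key renegotiation times, in minutes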
Nilesh Ozarkar
The fetch buffer is a memory location in a client-server environment where cursor data received from the server is held before being transmitted to the application. Setting a bigger fetch buffer size can significantly improve query performance by reducing network traffic. It is the amount of data that the client accepts from the server without waiting to send a "received" message that makes the difference. Starting with Informix version 11.70.xC5, you can set the maximum size of the fetch buffer to 2 GB to increase query performance.
The FET_BUF_SIZE environment variable is used to set the size of the application fetch buffer. This environment variable can override the default setting for the size of the fetch buffer for all data types except BYTE and TEXT values. An ANSI database needs additional consideration: you must set transactions to READ ONLY mode for the FET_BUF_SIZE environment variable to improve performance; otherwise rows are returned one by one.
A valid FET_BUF_SIZE setting is in effect for the local database server and for any remote database server from which you retrieve rows through a distributed query in which the local server is the coordinator and the remote database is subordinate. The greater the size of the buffer, the more rows can be returned, and the less frequently the client application must wait while the database server returns rows. A large buffer can improve performance by reducing the overhead of filling the client-side buffer.
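For example, to request a 2 MB fetch buffer before starting a client application (the value is in bytes):

export FET_BUF_SIZE=2097152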
In case of an invalid FET_BUF_SIZE setting, Informix does not generate an error; it uses the default fetch buffer size.
The combination of the fetch buffer, the communication buffer (the "b" option in the sqlhosts file), the network driver size, and the application cursor can enhance query performance further.
In IDS, when a new row containing a column that can vary in length is to be inserted on a page, the row will be inserted only if the page contains sufficient space to allow the new row to grow to its maximum length. This can result in low disk space usage. In IDS 11, you can change this default behavior by setting the configuration parameter MAX_FILL_DATA_PAGES to 1. When MAX_FILL_DATA_PAGES is enabled, the server will add a new row to a page if, after adding the row, at least 10 percent of the page is free for the future expansion of the rows. The database server needs to be restarted after changing this configuration parameter.
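For example, add this line to the ONCONFIG file and then restart the server:

MAX_FILL_DATA_PAGES 1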
To take advantage of this setting:
Advantages of enabling MAX_FILL_DATA_PAGES are:
The possible disadvantages of enabling MAX_FILL_DATA_PAGES are:
Example: Suppose you have a table with a column of type lvarchar(6000). If the dbspace in which the table resides has a 4K or 8K page size, you might see a lot of unused space in the pages. If you do not have to make updates which would cause most of the rows to expand to their full length, you might want to enable MAX_FILL_DATA_PAGES to better use the space in each page.
Suma Vinod
Establishing JCC connections for IDS
While working with the open beta of IBM Data Server Developer Workbench 9.5, I noticed that in addition to that product you also become an open beta participant for the Java Common Client (JCC) driver. In fact, the Workbench uses the JCC driver by default. On the one hand, this is absolutely great, because it helps you see where IBM's long range direction is with clients. On the other hand, JCC requires some special setup at the IDS instance level. This article will detail how to set up your instance to handle JCC connections, and then will use the Developer Workbench to show how you connect using JCC.
What happened to my SQLI?

The reason JCC connections require engine configuration changes is that the default means by which IDS instances communicate is the SQLI protocol, whereas JCC needs the DRDA protocol. Currently IDS does not allow both SQLI and DRDA to connect through the same listener thread, so this means we need to configure a listener thread and port to accept DRDA calls.
This is done by making a simple change to the SQLHOSTS file, as shown below:
<instance name> drsoctcp <hostname> <service/port>
<instance name> drsipcshm <hostname> <service/port>
As you can see, the protocols look the same as our old ones, except instead of the prefix ol, you use dr.
A full example is below:
cheetah_dr drsoctcp gofish sqlexec2
After changing the SQLHOSTS file, all you need to do is place this entry in your onconfig file, in the DBSERVERALIASES variable. So if you used the example SQLHOSTS entry above and currently had no aliases, you would have the following in your onconfig file:

DBSERVERALIASES cheetah_dr
Now just bring your instance off-line and then back on-line, and we are ready to use the Developer Workbench.
Setting up your JCC Driver for 11.10 in Developer Workbench 9.5

Now that you have set your IDS instance up to accept JCC driver connections, let's establish one using the Developer Workbench 9.5.
In your Database Explorer window, select Connection->New Connectionas below:
This will bring you to the Connection Settings window. Below is an example of the JCC driver with all fields filled in:
As you can see, you do not need an INFORMIXSERVER for your connection, just the Database, Host, and Port Number. Once you configure this for your server, just click the Test Connection button. After that, just click the Finish button. You now have a JCC connection that you can use for your application running against IDS 11.10.
Mark Jamison