IBM has just announced the Informix Warehouse feature, which consists of a powerful set of tools to build a warehouse infrastructure platform. Here is a quote from the announcement:
From Kevin Brown, lead architect for IBM, and Jim Kobielus from Forrester Research:
"This can save weeks of effort into just a few hours," Brown said. "In addition, customers often did without information because of the cost of effort to get the information. The lost opportunity cost savings is harder to quantify, but can be significant once they use their warehouse platform for smarter decision-making."
The Warehouse feature includes the following components:
SQW Design Studio - An Eclipse-based common design environment for connecting to source and target databases, creating and reverse-engineering physical data models, and building SQL-based data flows and control flows.
SQL Warehousing Tool - a graphical environment that works with the Design Studio to enable you to quickly and easily build in-database data movements and transformations into your warehouse.
SQW Administration Console - a web-based application for managing and monitoring the data flows that you design using the Design Studio. The Administration Console allows you to:
- Manage common resources such as database connections and machine resources
- Schedule the execution of control flows
- Monitor the execution status
Press release at: http://www-03.ibm.com/press/us/en/pressrelease/26840.wss
Informix Warehouse page on the IBM site: http://www-01.ibm.com/software/data/informix/warehouse/
With Cheetah, IDS now allows a DBA to use ontape to point to a directory instead of just a particular file. This can be handy when you have multiple backups that you want to keep in the same directory.
It involves a little extra administration though, so here's a quick primer on it.

Setting it up for archives

It's actually pretty easy to set IDS up to do it. Here are the steps:
- Create your directory, for instance /opt/Informix/archives/, as informix (group informix). (I have the best luck putting the ending slash on the path.)
- chmod 777 the directory
- Set TAPEDEV in your $ONCONFIG to that directory.
- Run ontape to create your archive.

Setting it up for backup of your logical logs with the log_full.sh $INFORMIXDIR/etc/ALARMPROGRAM

- As above, create your directory and set permissions.
- Save a copy of your $INFORMIXDIR/etc/log_full.sh to log_full.org or something like that.
- Edit log_full.sh with your favorite editor (e.g. vi or emacs).
- Add the -y option to the ontape command (ontape -a -y).
- Recycle the engine.
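The archive setup above can be sketched as a short shell session. Paths here are illustrative (the real directory would be something like /opt/Informix/archives/ owned by informix:informix), and the ontape invocation is shown as a comment since it needs a live engine:

```shell
# Sketch of the setup, using /tmp paths so it can run anywhere;
# in real life the directory would be e.g. /opt/Informix/archives/.
ARCHIVE_DIR=/tmp/informix_archives/
mkdir -p "$ARCHIVE_DIR"
chmod 777 "$ARCHIVE_DIR"

# Point TAPEDEV at the directory (here a throwaway copy of ONCONFIG)
ONCONFIG_FILE=/tmp/onconfig.demo
printf 'TAPEDEV /dev/tapedev\nTAPEBLK 32\n' > "$ONCONFIG_FILE"
sed -i "s|^TAPEDEV .*|TAPEDEV $ARCHIVE_DIR|" "$ONCONFIG_FILE"
grep '^TAPEDEV' "$ONCONFIG_FILE"

# After recycling the engine, a level-0 archive is taken with:
#   ontape -s -L 0
# and lands in the directory as <hostname>_<servernum>_L0
```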
When you take an archive, by default it will use <HOSTNAME_SERVERNUM> as the standard prefix. For example: my machine name is ryleh and my SERVERNUM is 11. When I do an archive and then look at the file, it is named ryleh_11_L0.
If you want to name it something different, you can set the undocumented environment variable IFX_ONTAPE_FILE_PREFX. Example:
IFX_ONTAPE_FILE_PREFX=benchmark
You will have to recycle the engine after you set it for it to take effect.
This way you can have multiple archives stored in the same directory.

Conclusion:

All in all, backing up to a directory can be a very handy facility for an administrator that has multiple instances, especially for such things as development and test instances. You can save an archive under the name of, say, "benchmark", and then when you need to load it up, you don't have to look high and low for the file or the tape. It's right there, and convenient.

Joe Baric
The sysadmin database is created in the root dbspace at server initialization. This database is required for the Scheduler API and the Remote Administration feature. Until the B5 drop, there wasn't any way of moving the sysadmin database safely to any other dbspace. A new SQL Admin API command in the B6 drop now simplifies this task by allowing the "informix" user to drop and recreate this database in any other dbspace.
If the root dbspace does not have enough space for storing task properties and command history information, you can move the sysadmin database to a different dbspace by using the "reset sysadmin"
SQL Administration API command. This command drops the sysadmin database from root dbspace and recreates it in the specified dbspace.
Here's an example to move the sysadmin database...
1. Make sure the following message has appeared in the online message log after server startup:
SCHAPI: Started 2 dbWorker threads.
2. Create a special dbspace, admindbs, to store sysadmin. You can create it through the SQL Admin API:

database sysadmin;
execute function task("create dbspace", "admindbs", "/vol/dbspaces/admindbs", "1 GB", "0");

or, equivalently, with onspaces:

onspaces -c -d admindbs -p /vol/dbspaces/admindbs -s 1000000 -o 0
3. As user informix, run the following command:

dbaccess sysadmin -
execute function task("reset sysadmin", "admindbs");
The command will return the following message:
SCHAPI: 'sysadmin' database will be moved to 'admindbs'. See online message log.
The internal thread, bld_sysadmin (seen via onstat -g ath), waits up to five minutes to obtain exclusive access to the sysadmin database. The progress of the bld_sysadmin thread is logged in the online message log.
4. Our own session has the sysadmin database open. So, terminate the dbaccess session with the "close database" statement, or simply exit dbaccess.
On successful completion, the sysadmin database is dropped and recreated in the new dbspace. The dbScheduler and dbWorker threads are restarted automatically.

Mirav Kapadia
Upgrading Informix Dynamic Server Version 10.00 to Versions 11.10 or 11.50:
When installing IDS 11.10 or 11.50, if you choose the option "Upgrade from the previous version", all the server binaries will be upgraded to the newer version(s) automatically.
Upgrading Informix Dynamic Server from Version 11.10 to 11.50:
When installing IDS 11.50 on Windows, a direct upgrade from IDS 11.10 is not supported. If "Upgrade from the previous version" is selected, an error message pops up and the installer stops at that point.
Since both 11.10 and 11.50 can be installed on the same machine, it is recommended to choose the option "Install into a default/different directory" to install IDS 11.50. Note: this will not upgrade IDS 11.10 to IDS 11.50.
When providing a folder name to install the product, a completely new path must be supplied. If the selected folder already contains binaries from IDS 11.10, the installer asks "Do you want to select another folder?". Clicking Yes returns you to the destination panel, where you can provide a different folder; clicking No takes you to the finish panel.
The recommended approach to upgrade to 11.50 is:
- Uninstall IDS 11.10 using "Retain database, but remove server binaries option"
- Save all the Registry Keys in HKEY_LOCAL_MACHINE\SOFTWARE\Informix\OnLine\
- Install the later version (IDS 11.50) to the same path.
- Make sure that you do not initialize the server when installing.
- Copy the ONCONFIG file to the target and set parameters that are new for the current release.
- Bring the server up using Control Panel->Services or any other method without initializing.
A new version of OpenAdmin Tool for IDS
(formerly known as IDSAdmin) is available to download from the IDS 11.10 Open Beta site
(sign-in required). This new version has a simpler installation, new graphics, and many new features.
Please keep in mind that:
- The product is still listed as IDSAdmin on the Beta download site.
- The Readme file downloadable from the Beta site is (at the time of writing) the out-of-date one from March. Do not use it; instead, unzip the oatids package and refer to the Readme inside.
Here is a copy of the current Readme:
OpenAdmin Tool For IDS - v2.10 - July 2007
OpenAdmin Tool For IDS (OAT) is a PHP-based administration console which can be used to administer one or more IBM Informix Dynamic Server 11.10 instances.
Dependencies

OAT requires the following products to be installed:
Note: the versions in brackets indicate the versions that OAT has been tested with.
- A Webserver (Apache 2.2.3)
- IBM I-Connect or CSDK (3.00)
- PHP 5 compiled with PDO, PDO_SQLITE, GD and SOAP enabled. (5.2.2)
- Informix PDO Module.
Installation Instructions

1. Install and set up a working web server that has been configured to serve PHP pages.
(For more information see the NOTES section below.)
2. Install IBM I-Connect or Client SDK.
3. Update the PHP configuration file:
- edit the php.ini
- add two new lines to the 'extension' section of the configuration file if they are not present:
- modify the parameter memory_limit to 256M.
4. Install the OpenAdmin Tool For IDS package:
- Extract the OAT package into your web server document root directory.
5. Change the ownership of the "<OAT>/install" directory to the user and group that runs the apache (httpd) server:
- Find the user and group that runs the apache (httpd) server from the httpd.conf.
- chown <user>:<group> <OAT>/install (Unix/Linux)
6. Start the webserver, making sure INFORMIXDIR points to the Client SDK or I-Connect install location in the webserver environment.
7. Launch the OAT installer by visiting the web page:
http://SERVERNAME/LOCATION/install/
SERVERNAME = the name of your machine
LOCATION = where you extracted the tar file (e.g. oat)
Follow the instructions in the OAT installation screens, including obtaining an optional Google Maps API key for your domain if required.
Once installation is complete, point your browser at the OAT root URL. From there you can click on Admin and add a new IDS 11.10 Connection to the default group or create a new group. If using an I-Connect or Client SDK version prior to 3.0, then for each new connection you add there needs to be a corresponding SQLHOSTS entry for that connection on the webserver machine.
Once a connection is created, the "Get Servers" link from the OAT root page will retrieve the list of connections you have created.
Installing an Apache webserver with PHP is not a straightforward task; thankfully there are 3rd-party solutions available. A popular choice is XAMPP.

Using XAMPP

XAMPP is an easy-to-install Apache distribution containing MySQL, PHP and Perl. XAMPP can be obtained from,
The XAMPP version 1.6.2 for Windows contains everything you need to run OAT (with the exception of IBM I-Connect or CSDK); download and installation information is available at
XAMPP (also called LAMPP) is also available for Linux; however, you will need to download the 'Development Package' in order to compile the Informix PDO Driver. Download and installation information is available at

Informix PDO Driver

The following developerWorks article contains useful information to assist with building an Informix PDO driver:
Previous versions of the JDBC 3.50 installer could not be run on 64-bit Windows using a 64-bit JRE. Installing JDBC 3.50 gave a "Directory not writable" error for all directories. The same error message was received even after running the process as administrator with UAC (User Account Control) turned off. The workaround was to use a 32-bit JRE on 64-bit Windows to install JDBC 3.50.
This problem is solved in IDS 11.50. JDBC 3.50 now uses a newer version of InstallShield, which allows JDBC 3.50 to be installed on 64-bit Windows using a 64-bit JRE. Users no longer have to set up a 32-bit JRE on 64-bit Windows just to install the driver. Bhadrik Patel
The best-fit Java class for an Informix DATETIME value is java.sql.Timestamp. java.sql.Timestamp technically inherits from java.util.Date, although there are some semantic differences which are not typical of inheritance relationships. All the constructors, getters, and setters which are not deprecated for Date and Timestamp use values based on Greenwich Mean Time (GMT). They are offsets in milliseconds from the epoch, January 1, 1970 00:00:00 GMT. So, it would be logically consistent for Timestamps to convert to DATETIMEs using the GMT reference frame by default. However, the JDBC Tutorial and Reference specifies that, when no Calendar is provided, Timestamp values are converted to the time zone of the JVM in which the application is running. The difficulties surrounding Timestamps and time zones are not limited to the scope of the Informix JDBC driver; they are generic to all of JDBC because that is the specification.
This model creates problems in a variety of ways. For distributed applications, the Date or Timestamp values might be streaming into the database from JDBC drivers located in different time zones. With multi-tier applications, you might find yourself contending with a user interface in time zone A, the JDBC driver in time zone B, and the database server in time zone C. The driver is the library where external types are converted to internal types; so, where it is running is where the local time zone comes into play. It can be difficult for either the front end or the database to know in what time zone the driver is operating. Dealing with the conversion from Timestamp in GMT to DATETIME in the JDBC driver locale has to be done in the application layer directly interfacing with the JDBC driver.
The conversion done by the driver applies in both directions, and this can lead to complications if anything other than the JDBC Timestamp class is used in conjunction with DATETIME values. If strings representing DATETIMEs or DATETIME literals are used anywhere in your application SQL (including CURRENT), the values will not be converted on the way in, but will be converted on the way out. Similarly, different APIs will not convert in either direction. If you insert new Timestamp(86400001) (January 2, 1970 00:00:00.001 GMT) through JDBC, and select it back from within the same JVM, you will get that same value back. So, you might expect that selecting the same row-column from, say, DB-Access, would also give you the same value, but it will not, because the value was converted to JVM time zone on the way in through JDBC, but not converted on the way out through ESQL/C. If you are in the U.S., you will get some time on January 1st, not even the same day. The reverse also applies: you may find yourself with values inserted through other APIs, which perform no conversion, that are converted to some other time zone when selected through JDBC.
DATETIMEs, of course, do not contain time zone information. There are basically two ways to deal with this, record the time zone information in another column along with the DATETIME or establish a convention that all the DATETIMES in the database are in the same time zone. Establishing a convention is the simpler approach both from an application implementation perspective, and in terms of being able to easily compare values in different rows, columns, or tables. If you know that every Java application which will ever access the data will operate in its own JVM, you can code your applications to set the default time zone of the JVM, but this is a problematic solution when more than one, independent application is running in the same JVM, such as servlets in a web server. It is unclear what would happen with different threads of execution changing the time zone for all other threads, but if they were actually able to do that, it would not be pretty.
At JDK 1.1 two things happened: all the getters and setters for java.util.Date where the meaning was ambiguous with respect to time zone were deprecated, and the java.util.Calendar class was introduced. Along with the Calendar class itself, methods accepting a Calendar object were added to the JDBC interface, for example PreparedStatement.setTimestamp(int parameterIndex, Timestamp x, Calendar cal) and ResultSet.getTimestamp(int columnIndex, Calendar cal). In these methods, the Calendar object establishes the frame of reference used to convert from the GMT offset in the Timestamp object to or from a DATETIME value. If you create a constant Calendar object in your applications, and use it every time you read or write a Timestamp object to a DATETIME, the DATETIME values will remain consistent with each other, and the meaning of the value will not change dependent on the time zone of the JVM.
Using a GMT calendar is efficient because it requires fewer operations to convert between the Java classes and the value in the database. Also, the value will not change if other APIs, like ODBC or ESQL/C, are used. Conceptually, what is stored in the database is January 1, 1970 00:00:00.000 GMT + tstamp.getTime() milliseconds.
Calendar cal = Calendar.getInstance(TimeZone.getTimeZone("GMT"));
Coordinated Universal Time (UTC) is a close relative to GMT and is just as viable.
In this model, all date and time values in the database are on the same time line, and conversion to other time lines or calendar systems are handled at the user interface level. Using UTC or GMT for all DATETIME values provides a simple model to implement in applications. It also removes all the problems related to knowing the time zone of the JDBC driver runtime or if applications written in anything other than Java will ever be used to access the data.
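A minimal sketch of this convention follows; the class and method names are my own, and the JDBC calls appear only in comments because they need a live connection. The runnable part shows the reference-frame arithmetic: what the database would hold for a given Timestamp under the GMT convention, independent of the JVM's default time zone.

```java
import java.sql.Timestamp;
import java.text.SimpleDateFormat;
import java.util.Calendar;
import java.util.TimeZone;

public class GmtTimestampDemo {
    // A single fixed calendar used for every read and write keeps
    // DATETIME values independent of the JVM's default time zone.
    static final Calendar GMT_CAL =
        Calendar.getInstance(TimeZone.getTimeZone("GMT"));

    // With a PreparedStatement ps and ResultSet rs you would write:
    //   ps.setTimestamp(1, tstamp, GMT_CAL);
    //   Timestamp t = rs.getTimestamp(1, GMT_CAL);

    // What the database would store for a given Timestamp under the
    // GMT convention: the epoch offset rendered on the GMT time line.
    static String storedDatetime(Timestamp tstamp) {
        SimpleDateFormat fmt = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss.SSS");
        fmt.setTimeZone(TimeZone.getTimeZone("GMT"));
        return fmt.format(tstamp);
    }

    public static void main(String[] args) {
        // 86400001 ms after the epoch = January 2, 1970 00:00:00.001 GMT,
        // regardless of where the JVM is running.
        System.out.println(storedDatetime(new Timestamp(86400001L)));
    }
}
```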
Here we will look at each of the XML functions given in Part 1 with examples.

genxml()/genxmlclob()

You can use these functions to create an XML row element for each row of a SQL query result. Each column in the row is an attribute of the row element. genxml() is used for returned row values that are LVARCHAR(32739) or less. For larger values, you should use genxmlclob(), which returns a CLOB.
These functions process the rows without any specific order. If the order of the rows is important, you can use derived-table queries to get the result set in the correct order, and then apply the functions on the result set.
select genxml(row (customer_num, fname), "row") from customer;
Output from the above sql:
<row customer_num="101" fname="Ludwig "/>
<row customer_num="102" fname="Carole "/>
<row customer_num="103" fname="Philip "/>
<row customer_num="104" fname="Anthony "/>
To order the results on fname, use the following sql:
select genxml(row(num,name),"row") from (select customer_num, fname from customer order by fname) as vt(num, name);
To select all columns in the table, use the following sql:
select genxml(customer, "row") from customer;

genxmlelem()/genxmlelemclob()
These functions return each column value as a separate element, in contrast to genxml(), which returns column values as attributes of the row element.
select genxmlelem(row(customer_num,fname), "cust") from customer;
Output from the above sql:
<cust><row><customer_num>101</customer_num><fname>Ludwig </fname></row><row><customer_num>102</customer_num><fname>Carole </fname></row><row><customer_num>103</customer_num><fname>Philip </fname></row><row><customer_num>104</customer_num><fname>Anthony </fname></row></cust>
You can select all columns in a table by passing the table name as the first argument as shown below:
select genxmlelemclob(customer, "cust") from customer;

genxmlschema() & genxmlschemaclob()
They are like genxml() but generate a full XML schema including an XML header and data. An XML header specifies document properties such as the document encoding, the document type definition (DTD), and the XML stylesheet (XSL). The following example shows a select using genxmlschema and its output:
select genxmlschema(customer, "cust") from customer;
Output from the above sql:
<?xml version="1.0" encoding="en_US.819" ?> <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema" targetNamespace="http://www.ibm.com" xmlns="http://www.ibm.com" elementFormDefault="qualified"> <xs:element name="cust"> <xs:complexType> <xs:sequence> <xs:element name="customer_num" type="xs:serial"/> <xs:element name="fname" type="xs:char(15)"/> <xs:element name="lname" type="xs:char(15)"/> <xs:element name="company" type="xs:char(20)"/> ...

genxmlquery() & genxmlqueryclob()
These are versatile functions that take a SQL query as an argument and return the result set in XML.
execute function genxmlquery('cust','SELECT customer_num FROM customer');
Output from the above sql:
<cust><row><customer_num>101</customer_num></row><row><customer_num>102</customer_num></row><row><customer_num>103</customer_num></row><row><customer_num>104</customer_num></row></cust>

genxmlqueryhdr() & genxmlqueryhdrclob()
These functions return the same data as genxmlquery() but with an XML header.
execute function genxmlqueryhdr('cust','SELECT customer_num FROM customer');
Output from the above sql:
<?xml version="1.0" encoding="en_US.819" ?><cust><row><customer_num>101</customer_num></row><row><customer_num>102</customer_num></row><row><customer_num>103</customer_num></row><row><customer_num>104</customer_num></row></cust>

extract() and extractxmlclob()
These functions evaluate an XPATH expression on an XML column, document, or string. These functions are identical except that extractxmlclob() returns a CLOB instead of LVARCHAR. They are used to return an XML fragment of the evaluated XML column, document, or string. They are compatible with the Oracle extract() function.
execute function extract('<name><first>fred</first></name>','/name/first');
Output from the above sql:
(expression) <first>fred</first>

extractvalue() and extractxmlclobvalue()
These functions return the value of the XML node in contrast to extract(), which returns the XML node. They are compatible with the Oracle extractvalue() function.
execute function extractvalue('<name><first>fred</first></name>','/name/first');
Output from the above sql is just the name without the XML tags:
fred

existsnode()
This function returns 1 if the specified XML node exists in an XML document. It is compatible with the Oracle existsnode() function.
execute function existsnode('<name><first>fred</first></name>','/name/first');
Output from the above sql:
1
idsxmlparse()
Parses an XML document or fragment to determine whether it is well formed. This function returns the XML document or fragment if the input XML is well formed; otherwise an error is returned.
SELECT idsxmlparse('<purchaseOrder poNo="124356"><customerName>ABC Enterprises</customerName><itemNo>F123456</itemNo></purchaseOrder>') AS PO FROM systables where tabid = 1;
Output from the above sql:
<purchaseOrder poNo="124356"><customerName>ABC Enterprises</customerName><itemNo>F123456</itemNo></purchaseOrder>
Note that the string in the quotes should not have any carriage returns.
OAT 2.21 has incorporated IDS Enterprise Replication monitoring, a plugin manager to allow customization of OAT functionality and an automated installer on Mac OS X. OpenAdmin Tool for IDS (OAT) is a PHP-based Web browser administration tool for IDS 11 and IDS 11.5 that provides the ability to administer multiple database server instances from a single location. OAT makes administration easy by allowing you to drill down on resource usage and events, view query performance statistics, and much more. And since the tool is written in PHP and available as open source, you can customize it with your own business logic and installation requirements.
New feature highlights of OpenAdmin Tool for IDS version 2.21 include:
- Mac OS X Automated Installer: Automatically sets up Apache, PHP, I-Connect and OAT on Mac OS X. For users who want to use or install their own Apache/PHP configurations, OAT is also released as a stand-alone package.
- IDS Enterprise Replication Monitoring: Monitors spool disk usage, send and receive queues, receiving/apply statistics, routing topology, node details and much more. OAT Enterprise Replication is provided as a separate plugin. The latest automated installer will install the ER plugin for you; you can also manually place the ER plugin zip file under the OAT plugin_install directory and install it with the plugin manager (requires the PHP zip extension).
- Plugin manager: A simple way to customize OAT functionality. You can download customized OAT plugins and install them with the plugin manager. Sample plugin code is also provided, and you can follow the sample in creating your own plugins. Note that the plugin manager requires the PHP zip extension. The latest OAT 2.21 automated installer includes the PHP zip extension, but the OAT 2.20 installer does not.
Download OAT version 2.21 now at https://www14.software.ibm.com/webapp/iwm/web/preLogin.do?lang=en_US&source=swg-informixfpd .
Click here to see OAT Enterprise Replication at work!

Leo Chan
Onbar backup order on Cheetah
1. Root dbspace, SBLOB space, BLOB space, and normal dbspace onbar backup order.
The root dbspace is always first, then SBLOB spaces, then BLOB spaces; normal dbspaces are last.
The onstat -d output is as follows:
IBM Informix Dynamic Server Version 11.10.UC1 -- On-Line -- Up 00:19:47 -- 44672 Kbytes
address number flags fchunk nchunks pgsize flags owner name
44cdf7f0 1 0x60001 1 1 2048 N B informix rootdbs
45ca6ea8 2 0x40001 2 1 2048 N B informix ddbs1
45c26ea8 3 0x40001 3 1 2048 N B informix ddbs2
45ca7a00 4 0x48001 4 1 2048 N SB informix sbdbs
45bbeeb0 5 0x60011 5 1 8192 N BB informix bdbs
5 active, 2047 maximum
Note: For BLOB chunks, the number of free pages shown is out of date.
Run 'onstat -d update' for current stats.
address chunk/dbs offset size free bpages flags pathname
44cdf950 1 1 0 50000 15421 PO-B /home/informix/cheetah/dsk/rootdbs
45c98e38 2 2 0 5000 4947 PO-B ./dsk/ddbs1
45ca73b0 3 3 0 5000 3449 PO-B ./dsk/ddbs2
45ca7b60 4 4 0 5000 4587 4587 POSB ./dsk/sbdbs
Metadata 360 268 360
45bb0e48 5 5 0 5000 ~1250 1250 POBB ./dsk/bdbs
5 active, 32766 maximum
After running onbar -b -w, the backup order can be seen in the bar_act.log file:
2007-06-15 04:41:10 5218 5216 /home/informix/cheetah/bin/onbar_d -b -w
2007-06-15 04:41:10 5218 5216 Archive started on rootdbs, sbdbs, bdbs, ddbs2, ddbs1 (Requested Level 0).
2007-06-15 04:41:11 5218 5216 Begin level 0 backup rootdbs.
2007-06-15 04:41:11 5218 5216 Successfully connected to Storage Manager.
2007-06-15 04:41:11 5218 5216 Completed level 0 backup rootdbs (Storage Manager copy ID: 751421824 0).
2007-06-15 04:41:12 5223 5218 Process 5223 5218 successfully forked.
2007-06-15 04:41:12 5223 5218 Begin level 0 backup sbdbs.
2007-06-15 04:41:12 5223 5218 Successfully connected to Storage Manager.
2007-06-15 04:41:12 5224 5218 Process 5224 5218 successfully forked.
2007-06-15 04:41:12 5224 5218 Begin level 0 backup bdbs.
2007-06-15 04:41:12 5224 5218 Successfully connected to Storage Manager.
2007-06-15 04:41:12 5225 5218 Process 5225 5218 successfully forked.
2007-06-15 04:41:12 5225 5218 Begin level 0 backup ddbs2.
2007-06-15 04:41:12 5226 5218 Process 5226 5218 successfully forked.
2007-06-15 04:41:12 5226 5218 Begin level 0 backup ddbs1.
2. Normal dbspace backup order
The onbar -b -w backup order for normal dbspaces doesn't depend on the order in which the dbspaces were created, but on the amount of space used in each dbspace.
In the above example, ddbs2's usage is bigger than ddbs1's, so onbar backs up ddbs2 first and then ddbs1, without following the creation order.
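The size-based ordering can be checked directly from the onstat -d chunk output. The following sketch (field positions are assumptions based on the output shown above) computes used pages per data dbspace and sorts largest first:

```shell
# The data-dbspace chunk lines from the "onstat -d" output above
cat > /tmp/chunks.txt <<'EOF'
45c98e38 2 2 0 5000 4947 PO-B ./dsk/ddbs1
45ca73b0 3 3 0 5000 3449 PO-B ./dsk/ddbs2
EOF

# used pages = size (field 5) - free (field 6); sorting descending
# shows the largest consumer (ddbs2), which onbar backs up first
awk '{ print $8, $5 - $6 }' /tmp/chunks.txt | sort -k2,2 -rn
```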
Note: if you have missed any articles in this series, check the Introduction and Table of Contents.
Setting up your workspace and connecting to IDS

Now that you have installed the Developer Workbench, it is time to run it. On the first run, you should be asked to set up your workspace. This should give you a screen like the following:
Once you select this, the Workbench takes a little while to build your workspace. Once done, however, you get the classic Eclipse Welcome screen. I have highlighted the workbench in the following screen shot, because that is the next place you want to go:
Once you click on the workbench you get the generic workbench. The next step at that point is to start a project and establish a connection:
Setting up a new project to handle queries is fairly straightforward and can be reached very quickly; all you have to do is select the Data Development Project option as in the following screen shot:
Selecting the above takes you to the new Data Development Project screen, as shown below:
As you can see, there are some DB2-flavored words here. A schema in DB2 is the equivalent of a database in IDS; as such, keep the default radio button set, as seen in the screen shot above. I would recommend that you give your project a meaningful name, but that is completely under your control. In my case I listed it as "Mark's Test project for Blog".
This brings us to the next page which assigns an existing databaseconnection, or creates a new one. Since this is our first time through,just hit the next button, as we will need to create a new connection.
The Connection parameters gives you several options for connections, aswell as several data server choices. Take a look at the below screenshot:
This is a standard Informix JDBC connection string at this point. The screen shot above differs slightly from what you see as the default settings, as I have selected the "Informix JDBC driver" instead of the "IBM Data Server Driver for JDBC and SQLJ." In short, I took the JDBC driver instead of the JCC driver. The main reason is that JCC itself is in beta, and I wanted to use something I knew how to connect with and trace. JCC also requires a DRDA listener thread, and I don't have one set up for the instance I am using. Update: check out Establishing JCC connections in 11.10 if you want to use JCC connections.
Fill in the remaining fields above with the normal Informix connection settings, and test your connection. Once your connection is working, click on the Next button.
This final screen allows you to filter your schema. Here is the screen shot:
Please note: the default is only for users who have IDS on a Windows platform. Since a schema id is actually a user id in IDS, filtering on INFORMIX means that no information of any kind will be returned for any database objects if your IDS instance is on a Unix platform. The reason, of course, is that the user id is case sensitive, and informix != INFORMIX. Quite honestly, I suggest you just disable filtering for now; when I cover the Database Explorer I will show you how to change the filter however you want.
As soon as you have selected your filter objects, you will finally see the workbench laid out and ready to be used for a Data Development project. Here is mine:
In the next article I will discuss the Database Explorer window in the lower left-hand corner.

Mark Jamison
We would appreciate your help in building a more robust, efficient, and client-focused IBM Electronic Support system by filling out this questionnaire:
It should take only approximately five to ten minutes to complete, but the results will help us make your IBM Support experience the best it can be.
Thank you for your valuable input!
The Enterprise IBM Electronic Support team
Suppose you have an application which depends on IDS, and your business requirement is that the application must start only after IDS initializes its memory successfully. Previously there was no way to validate the return code of the oninit process to decide whether or not IDS initialized successfully. So, if you have a script that starts IDS and then the application, the application might start even though IDS failed to initialize.
A new oninit option has been introduced in IDS 11.50 that generates a return code. Based on the return code, you can customize the script and automate the startup process.
The 'oninit -w' command generates following return codes:
- 0 - on success
- 1 - when initialization fails or the timeout value is exceeded
The 'oninit -w' command forces IDS to wait until it successfully initializes before returning to a shell prompt. You can also provide an additional argument for a timeout value with the '-w' option. Without a timeout value, IDS uses the default of 10 minutes. If IDS cannot initialize within the timeout period, oninit generates return code 1 and writes the following error message to the online.log file:
Warning: wait time expired
The syntax of the new command is as follows:
You can use the '-w' option in combination with any other oninit initialization options.
A couple of points to remember:
- In a high-availability environment, you can only use the 'oninit -w' command on the primary server; it is not valid on secondary servers.
- The oninit command returns success once the sysmaster, sysutils, sysuser, and sysadmin databases are successfully created.
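Since 'oninit -w' now produces a meaningful exit code, a startup script can branch on it. Here is a minimal sketch of that pattern; because oninit exists only on an IDS host, a stand-in command is used below so the pattern is self-contained, and the timeout being given in seconds is an assumption (check your oninit usage message):

```shell
#!/bin/sh
# Sketch: start the application only after IDS initializes successfully.
# On a real IDS host, replace the stand-in below with:
#   oninit -w 600    # assumed: wait up to 600 seconds for initialization
true   # stand-in for: oninit -w 600
rc=$?

if [ "$rc" -eq 0 ]; then
    echo "IDS initialized successfully; starting application"
    # /path/to/start_application   # placeholder for your application
else
    echo "IDS initialization failed or timed out (rc=$rc)" >&2
    exit 1
fi
```

The same check works from cron or an init script; anything that can read an exit status can now sequence the application behind a successful IDS startup.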
Sanjit Chakraborty
Derived Tables in the FROM Clause of Queries
The SELECT statement can now include syntax that complies with ISO/IEC 9075:1992, the SQL-92 standard, to specify a full select subquery in the FROM clause as a data source for the query. These subqueries are called derived tables or table expressions. They can be simple, UNION, or joined subqueries, including OUTER joins, and can include the ORDER BY clause. In addition, AS correlation specifications in the FROM clause can declare temporary names for columns within the query. Informix-extension syntax, such as the FUNCTION keyword with iterator functions or the TABLE (MULTISET (SELECT ...)) keywords for collection-derived tables, can now be replaced in the FROM clause by SQL-92 syntax. This feature expands the capability of Informix Dynamic Server to run, without modification, queries that are interoperable on other database servers that support industry-standard SQL syntax.
You can find examples using derived tables in the demo directory: $INFORMIXDIR/demo/dbaccess/demo_ud/sel_sql99.sql.
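As a quick sketch (table and column names loosely follow the stores demo database, so treat them as illustrative), an SQL-92 derived table with AS correlation names for its columns looks like this:

```sql
-- Derived table in the FROM clause, with AS column correlation names,
-- replacing the older TABLE(MULTISET(SELECT ...)) Informix syntax
SELECT c.lname, t.order_total
FROM customer c,
     (SELECT customer_num, SUM(ship_charge) AS total
        FROM orders
       GROUP BY customer_num) AS t(customer_num, order_total)
WHERE c.customer_num = t.customer_num;
```

The AS t(customer_num, order_total) clause renames the subquery's result columns, so the outer query never needs to know the inner column names.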
Optimizer Directives in ANSI-Compliant Joined Queries
Earlier IDS versions supported optimizer directives in Informix-extension joined queries, but not in queries that used ANSI/ISO syntax to specify joins. For both inline directives and external directives, this release extends support in ANSI/ISO joined queries to the following classes of optimizer directives:
- Access-method directives (FULL, AVOID_FULL, INDEX, AVOID_INDEX, INDEX_SJ, AVOID_INDEX_SJ)
- Explain-mode directives (EXPLAIN, AVOID_EXECUTE)
- Optimization-goal directives (ALL_ROWS, FIRST_ROWS)
The join-order directive (ORDERED) is supported only in ANSI/ISO-compliant LEFT OUTER joins and INNER joins. Because of ordering requirements for OUTER joins, in ANSI-compliant joined queries that specify the RIGHT OUTER JOIN or FULL OUTER JOIN keywords, the ORDERED join-order directive is ignored, but it is listed under Directives Not Followed in the sqexplain.out file.
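For example, an inline access-method directive can now appear in an ANSI/ISO INNER JOIN query; the index name here is hypothetical:

```sql
-- Access-method directives in an ANSI/ISO joined query
SELECT --+ AVOID_FULL(c), INDEX(o ord_cust_ix)
       c.customer_num, o.order_num
FROM customer c
     INNER JOIN orders o ON c.customer_num = o.customer_num;
```

The same directives in the older Informix join syntax (WHERE c.customer_num = o.customer_num) behave identically; the change is that the ANSI/ISO form no longer causes them to be ignored.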
This feature does not support the join-method directives (USE_NL, AVOID_NL, USE_HASH, AVOID_HASH, /BUILD, and /PROBE) in ANSI/ISO joined queries, except in cases where the optimizer rewrites the query so that it no longer uses the ANSI/ISO syntax.
Trigger Enhancements
Several new features expand the syntax and the functionality of triggers on tables and on views:
- You can now define multiple INSERT, DELETE, UPDATE, and SELECT triggers on a table, and multiple INSTEAD OF triggers on a view.
- When a table, view, or column list has multiple triggers for a DML event type, Informix Dynamic Server executes all BEFORE triggered actions before the FOR EACH ROW actions, and executes all FOR EACH ROW actions before the AFTER actions.
- You can create SPL procedures that refer to applicable OLD and NEW trigger correlated values. Within the procedure you can access applicable OLD and NEW values and modify the NEW values, for example using LET statements. From a FOR EACH ROW trigger action, you can execute this SPL procedure [syntax: execute procedure foo() with trigger references].
- New Boolean operators (DELETING, INSERTING, SELECTING, and UPDATING) can be used in procedures executed from trigger action statements. These test whether the currently executing triggered action was triggered by the specified type of DML event and return a Boolean value. The IF statement of SPL and the CASE expression of SQL can specify these operators as the condition in a trigger routine.
These features make it easier to incorporate IDS triggers on tables and on views within a heterogeneous information management system where multiple applications need to share the table or view.
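A minimal sketch of these features working together (table, procedure, and column names are illustrative): two UPDATE triggers coexist on one table, and one of them runs an SPL procedure that reads the OLD and NEW correlated values and tests the UPDATING operator:

```sql
-- SPL procedure that can see the trigger's OLD/NEW values
CREATE PROCEDURE log_price_change()
  REFERENCING OLD AS o NEW AS n FOR items;
  IF (UPDATING) THEN
     INSERT INTO item_audit VALUES (o.item_num, o.price, n.price);
  END IF
END PROCEDURE;

-- Two UPDATE triggers on the same table, now allowed
CREATE TRIGGER items_audit UPDATE ON items
  FOR EACH ROW (EXECUTE PROCEDURE log_price_change() WITH TRIGGER REFERENCES);

CREATE TRIGGER items_touch UPDATE ON items
  FOR EACH ROW (EXECUTE PROCEDURE touch_summary());  -- another SPL routine
```

The WITH TRIGGER REFERENCES clause is what gives the procedure visibility into the OLD and NEW row values at execution time.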
You can find examples using multiple triggers in the demo directory: $INFORMIXDIR/demo/dbaccess/demo_ud/cr_trig.sql.
Index Self-Join Query Plans
In earlier Informix Dynamic Server versions, queries of tables with composite indexes performed inefficiently if the ratio of duplicate values to the number of distinct values was much higher for the leading columns than for subsequent columns of the index. A new feature of the query optimizer supports a new type of index scan, called an index self-join path, that uses only subsets of the full range of a composite index. The table is logically joined to itself, and the more selective non-leading index keys are applied as index bound filters to each unique combination of the leading key values. By default, the optimizer considers this type of scan.
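For instance, with a composite index on (state, zipcode) whose leading column state has very few distinct values, a query that filters only on the selective non-leading column is a candidate for this path; the INDEX_SJ directive can force it (index and table names here are illustrative):

```sql
-- Composite index cust_ix on (state, zipcode): 'state' is highly
-- duplicated, 'zipcode' is selective. Force the index self-join path:
SELECT --+ INDEX_SJ(c cust_ix)
       *
FROM customers c
WHERE c.zipcode BETWEEN '94000' AND '94999';
```

Conceptually, the optimizer probes the selective zipcode range once for each distinct value of the leading state key, instead of scanning the whole index range.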
The optimizer also supports two new join-method directives, INDEX_SJ and AVOID_INDEX_SJ. The INDEX_SJ directive forces an index self-join path using the specified index, or choosing the least costly index in a list of indexes, even if data distribution statistics are not available for the leading index key columns. The AVOID_INDEX_SJ directive prevents a self-join path for the specified index or indexes. This feature can improve query performance on tables with composite indexes.
Enhanced Concurrency with Committed Read Isolation
In the Committed Read isolation level, exclusive row-level locks held by other sessions can cause SQL operations to fail when attempting to read data in the locked rows. This release introduces a new LAST COMMITTED keyword option to the SET ISOLATION COMMITTED READ statement to reduce the risk of locking conflicts when attempting to read a table. This new syntax instructs IDS to return the most recently committed version of the rows, even if another concurrent session holds an exclusive row-level lock. This behavior can be extended to the Dirty Read, Read Uncommitted, and Read Committed isolation levels by setting the new USELASTCOMMITTED configuration parameter or through new options to the SET ENVIRONMENT statement.
This feature supports B-tree indexes and functional indexes, but not R-tree indexes. It does not support tables that are being accessed by DataBlade modules, tables with columns of collection data types, tables created using the Virtual Table Interface, tables with page-level locking, tables with exclusive table-level locks, unlogged tables, or tables in databases with no transaction logging.
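In practice the option is enabled per session; a sketch (the quoted value passed to SET ENVIRONMENT is an assumption based on the isolation levels named above):

```sql
-- Read the last committed version of locked rows instead of waiting
SET ISOLATION COMMITTED READ LAST COMMITTED;

-- Or extend the behavior to a chosen isolation level for this session
SET ENVIRONMENT USELASTCOMMITTED 'COMMITTED READ';
```

Readers in the session then see the pre-lock committed row image rather than blocking on, or failing against, another session's exclusive row lock.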
Enhanced Data Types and UDR Support in Cross-Server Distributed Queries
Earlier releases of IDS restricted the remote execution of UDRs in databases of other IDS instances to SPL routines that the EXECUTE FUNCTION or EXECUTE PROCEDURE statement invokes explicitly, and to SPL routines that queries and other DML operations invoke implicitly.
This release extends support for UDRs in cross-database and cross-server distributed operations to most contexts where a UDR is valid in the local database. In addition, external routines written in the C or Java languages are now valid in any distributed operation where an SPL routine is valid. This release also extends the data types that are valid as parameters or return values of cross-server UDRs, which were formerly restricted to non-opaque built-in SQL data types, by supporting these additional data types:
- DISTINCT of built-in types that are not opaque
- DISTINCT of BOOLEAN
- DISTINCT of LVARCHAR
- DISTINCT of the DISTINCT types listed above
These data types can be returned by SPL, C, or Java language UDRs that use these data types as parameters or as return values, if the UDRs are defined in all the participating databases. Any implicit or explicit casts defined over these data types must be duplicated across all the participating Dynamic Server instances. The DISTINCT data types must have exactly the same data type hierarchy defined in all databases that participate in the distributed query.
This feature does not relax existing restrictions on other opaque and DISTINCT types, or on large-object, serial, and collection data types, in locally or remotely executed SPL routines or external routines.
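As a sketch, here is a DISTINCT of a built-in type used by a UDR; for a cross-server call, the type, its casts, and the routine would have to be created identically in every participating database (all names here are illustrative):

```sql
-- DISTINCT of a non-opaque built-in type
CREATE DISTINCT TYPE dollars AS DECIMAL(12,2);

-- UDR taking and returning the DISTINCT type; the explicit casts
-- to and from the source type must exist in every participating database
CREATE FUNCTION apply_discount(amount dollars) RETURNING dollars;
  RETURN ((amount::DECIMAL(12,2)) * 0.9)::dollars;
END FUNCTION;

-- Invoked remotely, for example:
-- EXECUTE FUNCTION salesdb@remote_server:apply_discount(price);
```

If the dollars type or the casts differ on any participating instance, the distributed call fails, which is why the identical-hierarchy requirement above matters.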
Combining Day 1 and Day 2 activities: it has been a busy two days with some great announcements. We started off with Arvind announcing xC4 and, most importantly, our storage optimization (compression) feature and warehouse feature.
Compression can save you as much as 80% of disk space, and the demo at the upgrade kiosk showed a compression rate of 73%. Check out our compression site at: http://www-01.ibm.com/software/data/informix/compression/
This site features an ROI tool and a white paper that is a must-read for all. It clearly articulates the value proposition of compression and how easy it is to set up and use. The OAT sessions and demos showcased this feature and its ease of use very well, and these sessions were among the most popular.
The warehouse announcement is another exciting one; it includes tooling that greatly simplifies your warehouse design and deployment and aids in your BI decisions. The demo sessions highlighting the warehouse feature were also a great attraction. Check out our warehouse site at: http://www-01.ibm.com/software/data/informix/warehouse/
The site features a white paper that is a great read. We had another great keynote with Jerry and Kevin, who walked us through the Informix roadmap. I am sure this session gave everyone an idea of where Informix is heading and how each of you can help with driving key features within the product as well. The rest of the day as usual had some great sessions, and I just did not want to highlight one over the other. Let me just state that there were almost 25 great sessions that took place through the day.
On Tuesday, Ananth gave his keynote on cloud computing and how it can be a critical success factor in your IT plans. This is an exciting space that IBM is in, and the prospects and potential seem endless in terms of its use, especially with its cost effectiveness and ease of deployment. Guy's blog has some great entries on this, and I would encourage everyone to read up on them: http://www.ibm.com/developerworks/blogs/page/gbowerman
The sessions again were great, and the feedback has been astounding in terms of their quality. There is a reason why the IIUG conference is an attractive proposition and why you should make this a yearly event on your calendar.
Oh, by the way, there is an interesting contest going on that Advanced Data Tools is sponsoring, called the Fastest DBA, where the goal is to take an SQL statement that runs in about 30 minutes and make it run faster. The last I checked, the time to beat was 0.54 seconds, or somewhere around that. We definitely need more of these at future conferences.
The certification room was receiving its regular stream of test takers as well, and at least the ones I saw came out smiling, which is good :)
Tomorrow is the last day of the conference and is filled with some exciting sessions. The keynote is a Q&A session that will raise some good discussions about marketing, support, sales, and development. I will sum up the conference tomorrow. Enjoy the rest of the conference!
In IDS version 11.10, a significant amount of focus has been given to automatically keeping administrators up-to-date on database performance. Several performance advisory triggers have been added to the Informix server to flag the administrator once a threshold is reached. IDS automatically calculates different thresholds based on current database activity and writes an appropriate performance advisory to the message log (MSGPATH).
Following is a list of the performance advisories available in IDS version 11.10:
- Physical log too small for RTO_SERVER_RESTART
- IDS server boot time too long
- Physical log too small
- Logical log too small
- Long transactions blocking checkpoints
- Physical log too small to accommodate bufferpool flushing
- Logical log too small to accommodate bufferpool flushing
- Physical log too small to accommodate automatic checkpoints
- Logical log too small to accommodate automatic checkpoints
- Bufferpool flushing
In this article we will discuss the checkpoint-related advisories. The 'onstat -g ckp' command was introduced in IDS version 11.10 to display checkpoint information and checkpoint-related advisories. This command displays recommendations for checkpoint-related configuration parameters once a sub-optimal configuration is detected.
The following configuration parameters can significantly impact the performance of IDS:
During each checkpoint, IDS evaluates the checkpoint-related configuration parameters and produces a performance advisory if it detects a sub-optimal configuration. If these configuration parameters are not tuned properly, the database server can encounter blocking during checkpoints.
Following is an example of ‘onstat -g ckp’ command output:
The description of the 'onstat -g ckp' command output:
- The first section of the display describes checkpoint information
- The second section describes the checkpoint-related advisories
In the above example, the physical log file size (PHYSFILE) was set to 15000 KB. During the checkpoint, IDS evaluated the configuration parameters, found that the physical log size was small (which can cause checkpoint blocking), and suggested a PHYSFILE value of 16800 KB.
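Acting on such an advisory typically means raising the physical log size, for example via the PHYSFILE parameter in the onconfig file. A sketch using the values from the advisory above (on a live instance the physical log is usually resized with the onparams utility rather than by editing onconfig directly):

```
# onconfig fragment (illustrative)
PHYSFILE   16800    # physical log size in KB, raised from 15000
```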
The checkpoint performance advisory is also available in the message log file. Following is an example of a message log with a checkpoint performance advisory:
To all Informix users: I would like to encourage you to take a look at our new IBM Data Studio Developer 2.1 offering. There are significant enhancements that we think IDS developers will like.
We now have new tools to view update statistics and perform object management tasks easily, such as specifying data partitions (either round robin or expression-based) and raw tables. Also, trigger support has been greatly improved to take advantage of the ability in IDS to have before, after, and ‘for each’ triggers, whereas only ‘for each’ was supported previously. These capabilities are available at no charge. Guy Bowerman talks a bit more about these features in his blog
In Data Studio Developer, which includes the base tooling and all the value-add capabilities, we also have many new features that IDS developers can take advantage of.
Using our previous releases, IDS developers could improve productivity by using the SQL content assist and validation capabilities in the Java editor, and generate applications quickly using the pureQuery API. You could capture SQL from any Java application and correlate which SQL was issued to the database and from where in the application. Using Data Studio, you could gain insight into which database tables and columns were used by the SQL. Developers and DBAs could easily isolate poorly performing queries in the application and work better together to assess changes needed in the application as a result of database schema changes.
Data Studio Developer 2.1 has even more to offer for Informix developers and DBAs:
- You can now understand more about how the SQL in your application is performing, using visual displays of elapsed time per SQL statement, and then you can change the SQL that is issued without having to change the application. Use these features from Data Studio and pureQuery Runtime together to help develop enterprise-ready applications before throwing them over the wall into production.
- You can reduce or eliminate the risk of SQL injection, because now it is possible to ensure that only SQL that was previously captured and approved is allowed to execute. This is available for dynamic SQL, not just static, so by using it with pureQuery Runtime, Informix applications can take advantage of this.
- You can share database connections, reducing the amount of communication required to share details of connection information between administrators and all the developers who need to use the databases. Here's an entire article just on this topic.
See my article for more details on what's available for Informix developers and DBAs. We even have a set of videos that you can check out that follow along with the article.
The download for Data Studio Developer 2.1 should be available soon.
This single package contains the no-charge, perpetual features available and supported for IDS customers with up-to-date maintenance contracts, as well as 30-day trial features of the value-added capabilities of the priced Data Studio Developer product. The trial features are optional during installation, but I think you'll be happy if you give them a test drive.
In the meantime, don't hesitate to send me any feedback or suggestions that you may have. Let me know, either here or on the Data Studio Forum, what you think of the new release.
-- Sonali Surange
It is not uncommon for an inefficient SQL statement to cause an overall performance problem. In that scenario, you first need to identify the problematic SQL, and then find out the user and application that are executing it. Here is a glimpse of how to address the second part of the problem.
There are different possible ways to find out the user and application running a particular SQL statement. The 'onstat -g ses <session id>' command is a simple way to figure out the application that is running a SQL statement. Do you notice anything new in the following 'onstat -g ses <session id>' output?
IBM Informix Dynamic Server Version 11.70.FC5 -- On-Line -- Up 16:41:52 -- 1048576 Kbytes

session           effective                            #RSAM    total    used     dynamic
id       user     user      tty   pid    hostname      threads  memory   memory   explain
...      informix -         1     19387  garion        1        278528   217656   off

Program :
...

tid      name     rstcb      flags    curstk   status
...      sqlexec  145629268  B--PX--  13103    ...

Memory pools    count 2
name     class  addr       totalsize  freesize  #allocfrag  #freefrag
...      V      145b99040  274432     60064     ...         ...
...      V      145b0d040  4096       808       1           1

name           free   used      name           free   used
...            0      6576      scb            0      144
...            0      13880     filetable      0      ...
...            0      600       log            0      16536

scb        sqscb      optofc  pdqpriority  optcompind  directives
142e211c0  145b53028  0       0            0           1

Sess       SQL            Current            Iso    Lock       SQL  ISAM  F.E.
Id         Stmt type      Database           Lvl    Mode       ERR  ERR   Vers  Explain
...        INSERT         stores             NL     Not Wait   0    0     ...   ...

Current statement name : inscur

Current SQL statement (997) :
You are right. There is a new 'Program' section in the above onstat output. Now you can use the 'onstat -g ses <session id>' command to display the full path of the client program used in a session. Use the client program information to monitor or control application access to the database. One thing you should know: the program information is sent by the client itself, and the name of the program depends entirely on the client.
Previously there was no easy way to monitor onbar archiving progress. It was always a question how long an onbar process would take to complete an archive, or how much time onbar would spend transferring data between the server and the storage manager and vice versa.
Informix has introduced two new configuration parameters to help with onbar monitoring.
The BAR_PROGRESS_FREQ configuration parameter specifies, in minutes, the frequency of the progress messages in the bar activity log for backup and restore operations.
For example, if BAR_PROGRESS_FREQ is set to 5, onbar reports the percentage of the object backed up or restored every five minutes. Following is an excerpt of bar activity log that showing progress of rootdbs dbspace backup:
2007-05-09 16:12:58 13344 13342 /export/home/informix/bin/onbar_d -b -L 0
2007-05-09 16:12:59 13344 13342 Archive started on rootdbs (Requested Level 0).
2007-05-09 16:13:00 13344 13342 Begin level 0 backup rootdbs.
2007-05-09 16:13:00 13344 13342 Successfully connected to Storage Manager.
2007-05-09 16:18:00 13344 13342 1 percent of rootdbs has been backed up.
2007-05-09 16:23:01 13344 13342 6 percent of rootdbs has been backed up.
2007-05-09 16:28:02 13344 13342 11 percent of rootdbs has been backed up.
2007-05-09 16:33:03 13344 13342 16 percent of rootdbs has been backed up.
2007-05-09 16:38:04 13344 13342 21 percent of rootdbs has been backed up.
2007-05-09 16:43:05 13344 13342 26 percent of rootdbs has been backed up.
The default value of BAR_PROGRESS_FREQ is 0. If the value is set to 0, onbar does not write any progress messages to the bar activity log.
The BAR_PROGRESS_FREQ value cannot be less than five minutes when monitoring onbar progress.
If ON-Bar cannot determine the size of the backup or restore object, it reports the number of transfer buffers sent to the database server instead of the percentage of the object backed up or restored.
The BAR_PERFORMANCE configuration parameter specifies the type of performance statistics to report, and writes them to the bar activity log for backup and restore operations.
For example, if BAR_PERFORMANCE is set to 3, onbar reports the time spent transferring data between the Informix server and the storage manager, in the bar activity log.
The default value of BAR_PERFORMANCE is 0. If the value is set to 0, onbar does not report any performance statistics to the bar activity log.
Valid values of BAR_PERFORMANCE are 0, 1, 2, or 3:
- 0 - turn performance monitoring off
- 1 - display the time spent transferring data between the server and storage manager
- 2 - display sub-second accuracy in the timestamps
- 3 - display both timestamps and transfer statistics
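For example, to report progress every five minutes and collect full timing statistics, the onconfig file might contain:

```
# onconfig fragment (illustrative values)
BAR_PROGRESS_FREQ  5    # progress message every 5 minutes
BAR_PERFORMANCE    3    # timestamps plus transfer statistics
```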
Both the BAR_PROGRESS_FREQ and BAR_PERFORMANCE configuration parameters take effect when the onbar process starts.
Data modeling is a method used to define and analyze the data requirements needed to support the business processes of an organization. Ideally, the data requirements are recorded as a conceptual model that is not platform-specific (the logical data model), with associated definitions for the entities and attributes. This logical model can serve as the 'information hub' to help define and enforce consistency across enterprise data. From the logical model, multiple database-specific implementations can be generated (via physical models).
InfoSphere Data Architect (IDA) - formerly called Rational Data Architect (RDA) - is a great modeling tool for any shop that includes Informix Dynamic Server databases in its mix. Even if you don't have a 'data architect', this product can help DBAs communicate with developers and also help you keep tabs on what is and isn't "real" any more (by comparing the model of what you think you have with what your database actually does have).
I recently updated a tutorial on using IDA specifically from the perspective of using it with IDS. There are several enhancements to the model elements for IDS in IDA 7.5.1:
- You can set/view extra properties for Table elements (extent size, raw table type, lock level)
- Enhancements to constraints to support modes (enabled/disabled/filtering)
- Trigger enhancements suited to IDS syntax including support for INSTEAD OF triggers on views
- Generate DDL enhancements for all of the above
Anyway, if you get a chance, download the IDA trial, check out the latest version of the tutorial, and let me know what you think. Link to IDA trial download. Link to an IDA demo.