I will reiterate a few existing features in Informix that you can use to optimize the connection to the database. Establishing a connection to a database is always an expensive
operation. The goal should be for applications to perform as many operations as possible while connected to the database.
The following connection attributes can be used to tune application
performance by reducing network traffic:
insert optimization (SQL_ENABLE_INSERT_CURSOR): Creating an insert cursor is a good way to efficiently insert
rows into a table in bulk. You set the SQL_ENABLE_INSERT_CURSOR
attribute with SQLSetStmtOption(), then call SQLParamOptions() with the number of rows as a
parameter. An insert cursor creates a buffer in memory to hold a block of rows.
The buffer receives data from the application program; once the buffer is full, the data is
sent to the database server in a single block. The buffer reduces the amount of
communication between the program and the database server. As a result, the
insertions go faster.
freeing cursors (AUTOFREE): In general, when an application uses a cursor, it
sends CLOSE and FREE statements to the database server to deallocate memory.
Execution of these statements involves a round trip of message requests between
the application and the database server. The automatic free feature (AUTOFREE)
reduces the number of round trips by one. You can use the IFX_AUTOFREE
environment variable or the SQL statement SET AUTOFREE to enable this feature.
message transfer (OPTMSG):
This feature allows you to minimize message transfers
between the database server and the application. An application can optimize message
transfers by chaining messages together and even eliminating some small message
packets. You can enable this feature by setting the OPTMSG
environment variable on the application side.
close optimization (OPTOFC):
An application uses DECLARE and OPEN statements to execute a
cursor, and each statement involves a round trip of message requests between the
application and the database server. The open-fetch-close optimization (OPTOFC)
reduces the number of these network round trips. With this feature, the OPEN
statement from the application does not open the cursor immediately. Instead, it
saves any input value that was supplied in the USING clause of the OPEN
statement and sends this input value along with the next FETCH statement.
The database server then opens the cursor and returns the first value in it.
Once the database server reaches the last value of an open cursor, it
automatically closes the cursor after it sends the data to the client
application. You can use the OPTOFC environment variable to enable this feature.
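Taken together, the client-side features above are typically enabled through environment variables before the application starts. A minimal sketch (the values shown are illustrative assumptions; consult your version's documentation for the exact values supported):

```shell
# Enable automatic freeing of cursor resources (AUTOFREE)
export IFX_AUTOFREE=1
# Enable message chaining / optimized message transfers (OPTMSG)
export OPTMSG=1
# Enable open-fetch-close optimization (OPTOFC)
export OPTOFC=1

# The application started from this shell inherits these settings
echo "IFX_AUTOFREE=$IFX_AUTOFREE OPTMSG=$OPTMSG OPTOFC=$OPTOFC"
```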
These are just a few touch points; there are several
other ways to improve the network and the connection to the database. Please check the Improve
Application Performance section in the Informix Infocenter for details.
The fetch buffer is a memory location in a client-server environment where cursor data received from the server is held before being transmitted to the application. Setting a bigger fetch buffer size can significantly improve query performance by reducing network traffic. What makes the difference is the amount of data that the client accepts from the server without waiting to send a "received" message. Starting with Informix version 11.70.xC5, you can set the maximum size of the fetch buffer to 2 GB to increase query performance.
The FET_BUF_SIZE environment variable is used to set the size of the application fetch buffer. This environment variable can override the default fetch buffer size for all data types except BYTE and TEXT values. An ANSI database needs additional consideration: you must set transactions to READ ONLY mode for the FET_BUF_SIZE environment variable to improve performance; otherwise rows are returned one by one.
A valid FET_BUF_SIZE setting is in effect for the local database server and for any remote database server from which you retrieve rows through a distributed query in which the local server is the coordinator and the remote database is subordinate. The greater the size of the buffer, the more rows can be returned, and the less frequently the client application must wait while the database server returns rows. A large buffer can improve performance by reducing the overhead of filling the client-side buffer.
In case of an invalid FET_BUF_SIZE setting, Informix does not generate any error; it simply uses the default fetch buffer size.
The combination of the fetch buffer, the communication buffer (the "b" option in the sqlhosts file), the network driver size, and the application cursor can enhance query performance further.
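As a sketch, the fetch buffer size is set in the client environment before starting the application; the 512 KB value below is an illustrative assumption, not a recommendation:

```shell
# Set the fetch buffer to 512 KB (sizes up to 2 GB are allowed as of 11.70.xC5)
export FET_BUF_SIZE=524288
echo "FET_BUF_SIZE=$FET_BUF_SIZE"
```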
We all know how critical it is to keep accurate and up-to-date database server statistics. Updated statistics are a key part of getting consistent performance. A very common question is: which UPDATE STATISTICS commands should one run?
The following SQL script offers a way to generate a list of all necessary UPDATE STATISTICS commands for a database. You just need to replace the "<Database Name>" string with an appropriate database name. The script creates an output file called 'update_stat_cmd.sql' in the current directory.
----- Beginning of the SQL script -----
SELECT name, value FROM ph_threshold
WHERE name IN ("AUS_AGE","AUS_AUTO_RULES")
INTO TEMP temp_threshold;
UPDATE ph_threshold SET value = -1
WHERE name = "AUS_AGE";
UPDATE ph_threshold SET value = 1
WHERE name = "AUS_AUTO_RULES";
SELECT tk_next_execution FROM sysadmin:ph_task
WHERE tk_name = "Auto Update Statistics Evaluation"
INTO TEMP temp_ph_task;
EXECUTE FUNCTION sysadmin:exectask ("Auto Update Statistics Evaluation");
UNLOAD TO 'update_stat_cmd.sql' DELIMITER ';'
SELECT aus_cmd_exe FROM aus_command
WHERE aus_cmd_dbs_partnum = (
SELECT partnum FROM sysmaster:sysdatabases
WHERE name = "<Database Name>"
AND bitand(flags, 3 ) > 0);
UPDATE ph_threshold SET value =
( SELECT value FROM temp_threshold
WHERE name = "AUS_AGE" )
WHERE name = "AUS_AGE";
UPDATE ph_threshold SET value =
( SELECT value FROM temp_threshold
WHERE name = "AUS_AUTO_RULES" )
WHERE name = "AUS_AUTO_RULES";
UPDATE sysadmin:ph_task SET tk_next_execution =
( SELECT tk_next_execution FROM temp_ph_task
WHERE tk_name = "Auto Update Statistics Evaluation" )
WHERE tk_name = "Auto Update Statistics Evaluation";
----- End of the SQL script -----
You can run the above SQL script against the sysadmin database to generate UPDATE STATISTICS commands that follow the Informix guidelines below.
- All tables are updated in LOW mode.
- All leading index keys are updated in HIGH mode.
- All non-leading index keys are updated in MEDIUM mode.
- The minimum resolution for MEDIUM mode is 2.0.
- The minimum confidence for MEDIUM mode is 0.95.
- The minimum resolution for HIGH mode is 0.5.
The above SQL script depends on the sysadmin database and the database scheduler. The script will fail if the database scheduler is turned off.
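Assuming you have saved the script above to a file, the run sequence might look like the sketch below; the script file name and database name are placeholders, not names from this post:

```shell
# Hypothetical names: replace with your own script file and database
GEN_SCRIPT=gen_update_stats.sql   # the SQL script from this post, saved to disk
DBNAME=stores_demo                # the database whose statistics you maintain

# Step 1: run the generator against the sysadmin database
GEN_CMD="dbaccess sysadmin $GEN_SCRIPT"
# Step 2: run the generated commands against your database
RUN_CMD="dbaccess $DBNAME update_stat_cmd.sql"

echo "$GEN_CMD"
echo "$RUN_CMD"
```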
- Sanjit Chakraborty
Well - It's finally here - the 2009 Informix user group conference got underway in Kansas City today. This has all the makings of another exciting conference, and as usual we have all our Informix gurus under one roof… The conference has always been a great place to share your experiences, to learn new skills, and to meet the development and support folks who strive to give you the best possible product and service in the industry, and this year is no different. We have an impressive line-up of keynotes and technical sessions that will no doubt challenge you in terms of what to attend.
Today there were several tutorial sessions, from the IDS workshop to the Troubleshooting series to getting ready for the IDS 11 certification exam (you will be able to take these certifications free of charge at the conference - a $150 value). Several engineers from the support organization presented these tutorials, which is always special for me, including John Miller, Ron Privett, Mark Jamison, Randy House, David Kolbinger, Suma Vinod, Sanjit Chakraborty and Manjula Panthagani. Hopefully these sessions were very well received, and we look forward to your feedback in making them better.
My comments would not be complete without commending the people who make this conference happen. There are several of you who have been tirelessly working to make this a success and I would like to thank each and every one of you for this opportunity. Hopefully you all get to enjoy Kansas City over the next 3 days and above all have a blast at the conference and enjoy all the sessions.
Bye for now !
Informix Enterprise Replication made easy with OpenAdmin Tool
Check out this new developerWorks article to learn how to use OpenAdmin Tool to configure and monitor IDS Enterprise
Replication: Informix Enterprise Replication made easy with OpenAdmin Tool
Erika Von Bargen
There is good news for those who were downloading the Node.1.0 DataBlade
from developerWorks
and using it AS-IS. As per policy, there is no support for AS-IS software downloaded from developerWorks.
Node DataBlade creates an Informix DataBlade with a node opaque type data that addresses the problem of transitive closure/hierarchical data queries.
From IDS V11.10 onwards, the Node.2.0 DataBlade (with some added functionality) is bundled as one of the default DataBlades along with the server. That means that, moving forward, the Node.2.0 DataBlade (and subsequent higher versions) is eligible for support.
Anup Nair
IBM has just announced the Informix Warehouse feature, which consists of a powerful set of tools to build a warehouse infrastructure platform. Here is a quote from the announcement:
From Kevin Brown, lead architect for IBM, and Jim Kobielus from Forrester Research:
"This can save weeks of effort into just a few hours," Brown said. "In addition, customers often did without information because of the cost of effort to get the information. The lost opportunity cost savings is harder to quantify, but can be significant once they use their warehouse platform for smarter decision-making."
The Warehouse feature includes the following components:
SQW Design Studio - An Eclipse-based common design environment for connecting to source and target databases, creating and reverse-engineering physical data models, and building SQL-based data flows and control flows.
SQL Warehousing Tool - a graphical environment that works with the Design Studio to enable you to quickly and easily build in-database data movements and transformations into your warehouse.
SQW Administration Console - a web-based application for managing and monitoring the data flows that you design using the Design Studio. The Administration Console allows you to:
- Manage common resources such as database connections and machine resources
- Schedule the execution of control flows
- Monitor the execution status
Press release at: http://www-03.ibm.com/press/us/en/pressrelease/26840.wss
Informix Warehouse page on the IBM site: http://www-01.ibm.com/software/data/informix/warehouse/
IDS 11 provides support for XML publishing, enabling applications to publish SQL results as XML, query XML data, and return XML. It also provides the framework to map an XML schema into a relational schema and to shred an XML document into relational tables.
The XML functions available in IDS 11 are summarized in the following table:
|genxml, genxmlclob|return SQL results as XML elements|
|genxmlelem, genxmlelemclob|return column values as XML elements|
|genxmlschema, genxmlschemaclob|return schema as XML|
|genxmlquery, genxmlqueryclob|return result set as XML|
|genxmlqueryhdr, genxmlqueryhdrclob|return result set as XML with header|
|extract, extractxmlclob|evaluate XPATH expression|
|extractvalue, extractxmlclobvalue|return value of XML node|
|existsnode|verify whether a node exists in XML doc|
|idsxmlparse|parse XML doc to determine if it is well-formed|
To use the XML functions in IDS 11:
- Start an XML VP by doing one of the following
Add VPCLASS idsxmlvp,num=1 in onconfig file to start the XML VP when the server starts
Use onmode -p +1 idsxmlvp command to add an XML VP dynamically after the server has started
Note: An XML VP is required only if you use XPATH functions such as extract(), extractxmlclob(), extractvalue(),
extractxmlclobvalue(),existsnode(), idsxmlparse(). XML functions starting with gen do not require the XML VP.
- Verify that $INFORMIXDIR/lib/libxml.udr is read-only
- Make sure a default sbspace exists.
Default sbspace is specified by the SBSPACENAME configuration parameter. You can check the output of onstat -d to
verify that the space exists.
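The XML VP step above can be sketched as follows; the file created here is only a stand-in for your real $INFORMIXDIR/etc/$ONCONFIG, and the onmode command is shown commented out because it must run against a live server:

```shell
# Stand-in for the real onconfig file (assumption for illustration)
ONCONFIG_COPY=$(mktemp)

# Start one XML VP when the server starts
echo "VPCLASS idsxmlvp,num=1" >> "$ONCONFIG_COPY"
grep "^VPCLASS idsxmlvp" "$ONCONFIG_COPY"

# Or add an XML VP dynamically after the server has started:
# onmode -p +1 idsxmlvp
```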
I will talk about each of the above functions with examples in Part 2.
Suma Vinod
The 'deadlock' mutex ensures
that there is never more than one session checking for a deadlock situation
at any given time. Whenever a session that has set LOCK MODE TO
WAIT [seconds] requests a lock and must wait for it, the deadlock checking code gets executed.
The deadlock checking code
starts by marking all the sessions in the instance with an internal flag
saying 'not in deadlock'. Then it walks through the list of all the locks held
by the current session (including the newly requested one), and for each lock it
identifies whether there are any sessions waiting for it. If such a session
(called a waiter) is found, its internal flag is changed to 'deadlock candidate' and
the waiter is added to the so-called 'deadlock list'. Each of the waiters in
this list is then checked in the same way: the code checks whether any of the
waiters holds any lock the current session is waiting for. If such a lock is
found, a deadlock is detected.
Each execution of the deadlock checking code depends on a consistent state of
the internal 'deadlock' flag in each session. In other words, there cannot be
more than one session checking for a deadlock at the same time, as they would
overwrite each other's internal flags. To ensure this, each session has to
acquire the 'deadlock' mutex before the deadlock checking code can be executed,
and release it once the deadlock detection is finished.
- Tomas Zahradnik
Multi-node Active Cluster for High Availability (MACH11) is a code name for the new feature called Continuous Availability introduced in Informix Dynamic Server (IDS) 11. This new feature significantly expands the high availability options in IDS to provide increased failover, capacity, flexibility, and scalability.
Traditionally, IDS has provided multiple robust solutions for supporting high availability data replication. Previous releases of IDS have supported two replication technologies: Enterprise Replication (ER) and High Availability Data Replication (HDR). By using these technologies together, customers are able to achieve very high levels of data availability. Both replication technologies can be integrated with each other and coexist with other availability solutions such as disk mirroring.
While HDR and ER have been features of IDS for many years and have proven to be highly reliable and low-maintenance technologies, IDS now adds support for two new types of secondary servers:
- Secondary servers that share the same physical disk. The Shared Disk Secondary (SDS) servers provide increased availability by allowing one or more instances of the IDS server to attach to the same disk subsystem, providing redundancy for the server in addition to data redundancy solutions.
- Additional remote secondary servers. The Remote Standalone Secondary (RSS) servers extend HDR to provide multiple local or remote backup servers that also replicate the data.
Both SDS and RSS servers provide customers a way to obtain increased capacity by distributing workload across multiple servers. Customers can choose any of these solutions on their own; they become even more powerful when all three types of topologies are combined. By adding Enterprise Replication, a completely customized availability solution can be delivered to meet each unique availability requirement. These configurations are simple to set up and maintain, and are highly scalable.
The following image depicts a Continuous Availability solution with 3-tier protection. For example, if the Primary in Building-A in New Orleans went down for some reason, the role of the primary could easily be switched to one of the SDS servers running on the blade server in Building-B. This would cause all other secondary servers to automatically connect to the new primary server. If both of the servers in New Orleans died, Memphis would become your Primary, Denver could be made the HDR secondary, and you might even add some SDS servers to the Memphis blade server for load balancing.
To learn more about this feature, please read the following white paper written by the architects of this feature: Whitepaper on MACH11 technology.
Vijay Lolabattu
A new version of OpenAdmin Tool for IDS
(formerly known as IDSAdmin) is available to download from the IDS 11.10 Open Beta site
(sign-in required). This new version has a simpler installation, new graphics, and many new features.
Please keep in mind that:
- The product is still listed as IDSAdmin on the Beta download site.
- The Readme file downloadable from the Beta site is (at the time of writing) the out-of-date one from March - do not use it; instead, unzip the oatids package and refer to the Readme in there.
Here is a copy of the current Readme:
OpenAdmin Tool For IDS - v2.10 - July 2007
OpenAdmin Tool For IDS (OAT) is a PHP-based administration console which can be used to administer one or more IBM Informix Dynamic Server 11.10 instances.
Dependencies
OAT requires the following products to be installed:
Note: the versions in brackets indicate the versions that OAT has been tested with.
- A Webserver (Apache 2.2.3)
- IBM I-Connect or CSDK (3.00)
- PHP 5 compiled with PDO, PDO_SQLITE, GD and SOAP enabled. (5.2.2)
- Informix PDO Module.
Installation Instructions
1. Install and set up a working web server that has been configured to serve php pages.
(For more information see the NOTES section below.)
2. Install IBM I-Connect or Client SDK.
3. Update the php configuration file:
- edit the php.ini
- add two new lines to the 'extension' section of the configuration file if they are not present:
- modify the parameter memory_limit to 256M.
4. Install the OpenAdmin Tool For IDS package:
- Extract the OAT package into your web server document root directory.
5. Change the ownership of the "<OAT>/install" directory to the user and group that runs the apache ( httpd ) server:
- Find the user and group that runs the apache ( httpd ) server from the httpd.conf.
- chown <user>:<group> <OAT>/install ( Unix/Linux )
6. Start the webserver, making sure INFORMIXDIR points to the Client SDK or I-Connect install location in the webserver environment.
7. Launch the OAT installer by visiting the web page:
SERVERNAME = the name of your machine
LOCATION = where you extracted the tar file (e.g. oat )
Follow the instructions in the OAT installation screens, including obtainingan optional Google Maps API key for your domain if required.
Once installation is complete, point your browser at the OAT root URL. From there you can click on Admin and add a new IDS 11.10 Connection to the default group, or create a new group. If using an I-Connect or Client SDK version prior to 3.0, then for each new connection you add there needs to be a corresponding SQLHOSTS entry for that connection on the webserver machine.
Once a connection is created, the "Get Servers" link from the OAT root page will retrieve the list of connections you have created.
Installing an Apache webserver with PHP is not a straightforward task; thankfully there are 3rd-party solutions available. A popular choice is XAMPP.
Using XAMPP
XAMPP is an easy to install Apache distribution containing MySQL, PHP and Perl. XAMPP can be obtained from,
The XAMPP version 1.6.2 for Windows contains everything you need to run OAT (with the exception of IBM I-Connect or CSDK); download and installation information is available at
XAMPP (also called LAMPP) is also available for Linux; however, you will need to download the 'Development Package' in order to compile the Informix PDO Driver. Download and installation information is available at
Informix PDO Driver
The following developerWorks article contains useful information to assist with building an Informix PDO driver:
In IDS 11.10, a DBA or user informix can create sysdbopen() and sysdbclose() procedures to be executed when a database is opened and closed. These procedures can be used to change the properties of a session without changing the application that the session executes. Any statements that are valid in a UDR can be executed in these procedures to change the session behavior.
A DBA or user informix can create the following procedures in a database, where username is an operating system user: username.sysdbopen(), username.sysdbclose(), public.sysdbopen(), and public.sysdbclose().
Each time a user user1 opens a database using either a DATABASE or CONNECT TO statement, the database server executes user1.sysdbopen() if such a procedure is defined; if not, it executes public.sysdbopen(). Each time user1 closes a database using either a CLOSE DATABASE or DISCONNECT statement, the database server executes user1.sysdbclose() if such a procedure is defined; if not, it executes public.sysdbclose(). sysdbclose() will be executed even if the application exits without an explicit CLOSE DATABASE or DISCONNECT statement, because the server does an implicit close of the current database in such cases.
The owner name is not ignored when you create sysdbopen() and sysdbclose() procedures in non-ANSI databases, so youcan create these procedures for specific users in non-ANSI databases.
The following procedure creates a table oltp_stat and sets the role to oltp and the PDQ priority to 10 for user oltp_user in a database:
create procedure oltp_user.sysdbopen()
create table oltp_stat(userid int, connect_time datetime year to second);
set role to oltp;
set pdqpriority 10;
end procedure;
The following procedure creates a table public_stat and sets the role to others and the PDQ priority to 1 for the PUBLIC group in a database.
create procedure public.sysdbopen()
create table public_stat(userid int, connect_time datetime year to second);
set role to others;
set pdqpriority 1;
end procedure;
You can clean up any activities started by sysdbopen() in sysdbclose(). For example, if you had created some tables in sysdbopen(), you can drop those tables in sysdbclose().
The following procedures drop the tables created by the sysdbopen() procedures given above:
create procedure oltp_user.sysdbclose()
drop table oltp_stat;
end procedure;
create procedure public.sysdbclose()
drop table public_stat;
end procedure;
A DBA or user informix can set the environment variable IFX_NODBPROC to any value, including 0, to prevent the execution of the sysdbopen() and sysdbclose() procedures. When you set up sysdbopen() and sysdbclose() procedures, you can set IFX_NODBPROC and then execute the procedures directly to test whether they work as expected. You need to unset IFX_NODBPROC after testing.
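A small sketch of that testing workflow with IFX_NODBPROC; the dbaccess invocation and database name are shown as a comment because they are hypothetical and need a live server:

```shell
# Any value, even 0, disables automatic execution of the procedures
export IFX_NODBPROC=0

# With automatic execution disabled, run the procedures explicitly to verify
# them, e.g.:
# echo 'execute procedure oltp_user.sysdbopen();' | dbaccess mydb -

# Re-enable automatic execution once testing is done
unset IFX_NODBPROC
[ -z "${IFX_NODBPROC:-}" ] && echo "IFX_NODBPROC unset"
```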
For more information, see information on sysdbopen() and sysdbclose() in the IBM Informix Guide to SQL: Syntax.
Suma Vinod
With Cheetah, IDS now allows a DBA to use ontape to point to a directory instead of just a particular file. This can be handy when you have multiple backups that you want to keep in the same directory.
It involves a little extra administration though, so here's a quick primer on it.
Setting it up for archives
It's actually pretty easy to set IDS up to do it. Here are the steps:
- Create your directory: for instance /opt/Informix/archives/ as informix (group informix). (I have the best luck putting the ending slash on the path.)
- chmod 777 the directory
- set TAPEDEV in your $ONCONFIG to that directory.
- run ontape to create your archive
Setting it up for backup of your logical logs with the log_full.sh $INFORMIXDIR/etc/ALARMPROGRAM
- As above, create your directory and set permissions.
- save a copy of your $INFORMIXDIR/etc/log_full.sh to log_full.org or something like that
- edit log_full.sh with your favorite editor (i.e. vi or emacs)
- Add the -y option to the ontape command (ontape -a -y)
- Recycle the engine.
When you take an archive, by default it uses <HOSTNAME_SERVERNUM> as the standard file-name prefix. For example, my machine name is ryleh and my SERVERNUM is 11, so when I do an archive and then look at the file, it is: ryleh_11_L0
If you want to name it something different, you can set the undocumented environment variable IFX_ONTAPE_FILE_PREFX. Example:
You will have to recycle the engine after you set it for it to takeeffect.
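Putting the pieces together as a sketch (the paths, the prefix value, and the resulting file name are illustrative assumptions; TAPEDEV itself lives in the onconfig file, not the environment):

```shell
# Assumed archive directory (set TAPEDEV in your $ONCONFIG to this path)
ARCHIVE_DIR=/opt/Informix/archives/

# Undocumented variable; recycle the engine after setting it
export IFX_ONTAPE_FILE_PREFX=benchmark

# With the prefix set, a level-0 archive file would be named along the lines of:
echo "${ARCHIVE_DIR}${IFX_ONTAPE_FILE_PREFX}_L0"
```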
This way you can have multiple archives stored in the same directory.
Conclusion:
All in all, backing up to a directory can be a very handy capability for an administrator who has multiple instances, especially for such things as development and test instances. You can save an archive under a name like “benchmark”, and then when you need to load it up, you don't have to look high and low for the file or the tape. It's right there, and convenient.
Joe Baric
The sysadmin database is created in the root dbspace
at server initialization. This database is required for the Scheduler API and the Remote Administration
feature. Until the B5 drop, there wasn't any way of safely moving the sysadmin database to any other dbspace. A new SQL Admin API command in the B6 drop now simplifies this task by allowing the "informix" user to drop and recreate this database in any other dbspace.
If it is determined that the root dbspace does not have enough space for storing task properties and command history information, you can move the sysadmin database to a different dbspace by using the "reset sysadmin"
SQL Administration API command. This command drops the sysadmin database from the root dbspace and recreates it in the specified dbspace.
Here's an example to move the sysadmin database...
1. Make sure the following message has appeared in the online message log after server startup:
SCHAPI: Started 2 dbWorker threads.
2. Create a special dbspace, admindbs, to store sysadmin, using either the SQL Administration API:
database sysadmin;
execute function task("create dbspace", "admindbs", "/vol/dbspaces/admindbs", "1 GB", "0");
or the onspaces utility:
onspaces -c -d admindbs -p /vol/dbspaces/admindbs -s 1000000 -o 0
3. As user informix, run the following commands:
dbaccess sysadmin -
execute function task("reset sysadmin", "admindbs");
The command will return the following message:
SCHAPI: 'sysadmin' database will be moved to 'admindbs'. See online message log.
The internal thread, bld_sysadmin
(seen via onstat -g ath), waits up to five
minutes to obtain exclusive access to the sysadmin database. The progress of the bld_sysadmin thread is logged in the online message log.
4. Our own session has the sysadmin database open, so terminate the dbaccess session (for example with a CLOSE DATABASE statement) to release it.
On successful completion, the sysadmin database is dropped and recreated in the new dbspace. The dbScheduler and dbWorker threads are restarted automatically.
Mirav Kapadia
Previous versions of JDBC 3.50 could not be installed on 64-bit Windows using a 64-bit JRE. Installing JDBC 3.50 gave the error "Directory not writable" for all directories. The same error message was received even after running the whole process as administrator with UAC (User Account Control) turned off. The workaround for this problem was to use a 32-bit JRE on 64-bit Windows to install JDBC 3.50.
This problem was solved in IDS 11.50. JDBC 3.50 now uses a newer version of InstallShield, which allows JDBC 3.50 to be installed on 64-bit Windows using a 64-bit JRE. Users no longer have to use a 32-bit JRE to install JDBC 3.50 on 64-bit Windows, which makes their lives easier since they do not have to set up a 32-bit JRE on 64-bit Windows.
Bhadrik Patel
Upgrading Informix Dynamic Server Version 10.00 to Versions 11.10 or 11.50:
When installing IDS 11.10 or 11.50, if you choose the option "Upgrade from the previous version", all the server binaries will be upgraded to the newer version(s) automatically.
Upgrading Informix Dynamic Server from Version 11.10 to 11.50:
When installing IDS 11.50 on Windows, a direct upgrade from IDS 11.10 is not supported. If “Upgrade from the previous version” is selected, the following message pops up, and the installer stops at this point.
Since both 11.10 and 11.50 can be installed on the same machine, it is recommended to choose the option “Install into a default/different directory” (shown in the panel below) to install IDS 11.50. Note: this will not upgrade IDS 11.10 to IDS 11.50.
When providing a folder name to install the product, a completely new path must be supplied. If the selected folder already contains binaries from IDS 11.10, the following message pops up.
The user can respond to the question “Do you want to select another folder?”. Clicking “Yes”, the installer returns to the destination panel, where a different folder can be provided. Clicking “No”, the installer goes to the finish panel.
The recommended approach to upgrade to 11.50 is:
- Uninstall IDS 11.10 using the "Retain database, but remove server binaries" option
- Save all the Registry Keys in HKEY_LOCAL_MACHINE\SOFTWARE\Informix\OnLine\
- Install the later version (IDS 11.50) to the same path.
- Make sure that you do not initialize the server when installing.
- Copy the ONCONFIG file to the target and set parameters that are new for the current release.
- Bring the server up using Control Panel->Services or any other method without initializing.
OAT 2.21 has incorporated IDS Enterprise Replication monitoring, a plugin manager to allow customization of OAT functionality and an automated installer on Mac OS X. OpenAdmin Tool for IDS (OAT) is a PHP-based Web browser administration tool for IDS 11 and IDS 11.5 that provides the ability to administer multiple database server instances from a single location. OAT makes administration easy by allowing you to drill down on resource usage and events, view query performance statistics, and much more. And since the tool is written in PHP and available as open source, you can customize it with your own business logic and installation requirements.
New feature highlights of OpenAdmin Tool for IDS version 2.21 include:
- Mac OS X Automated Installer: Automatically sets up Apache, PHP, I-Connect and OAT on Mac OS X. For users who want to use or install their own Apache/PHP configurations, OAT is also released as a stand-alone package.
- IDS Enterprise Replication Monitoring: Monitors spool disk usage, send and receive queues, receiving/apply statistics, routing topology, node details and much more. OAT Enterprise Replication is provided as a separate plugin. The latest automated installer will install the ER plugin for you; you can also manually place the ER plugin zip file under the OAT plugin_install directory and install it with the plugin manager (requires the php zip extension).
- Plugin manager: A simple way to customize OAT functionality. You can download customized OAT plugins and install them with the plugin manager. Sample plugin code is also provided, and you can follow the sample to create your own plugins. Note that the plugin manager requires the php zip extension; the latest OAT 2.21 automated installer includes the php zip extension, but the OAT 2.20 installer does not.
Download OAT version 2.21 now at https://www14.software.ibm.com/webapp/iwm/web/preLogin.do?lang=en_US&source=swg-informixfpd .
Click here to see OAT Enterprise Replication at work!
Leo Chan
Note: if you have missed any articles in this series, check the Introduction and Table of Contents.
Setting up your workspace and connecting to IDS
Now that you have installed the Developer Workbench, it is time to run it. On the first run, you should be asked to set up your workspace. This should give you a screen like the following:
Once you select this, the Workbench takes a little while to build your workspace. Once done, however, you get the classic Eclipse Welcome screen. I have highlighted your workspace in the following screen shot, because that is the next place you want to go:
Once you click on the workbench you get the generic workbench. The next step at that point is to start a project and establish a connection:
Setting up a new project to handle queries is fairly straightforward,and can be reached very quickly , all you have to do is select the DataDevelopment Project option as in the following screen shot:
Selecting the above puts you to the new Data Development projectscreen, as shown below:
As you can see, there are some DB2-flavored words here. A schema in DB2 is the IDS equivalent of a database, so keep the default radio button set, as seen in the screen shot above. I would recommend that you give your project a meaningful name, but that is completely under your control. In my case I listed it as "Mark's Test project for Blog".
This brings us to the next page, which assigns an existing database connection or creates a new one. Since this is our first time through, just hit the Next button, as we will need to create a new connection.
The Connection parameters screen gives you several options for connections, as well as several data server choices. Take a look at the screen shot below:
This is a standard Informix JDBC connection string at this point. The screen shot above differs slightly from what you see as the default settings, as I have selected the "Informix JDBC driver" instead of the "IBM Data Server Driver for JDBC and SQLJ." In short, I took the JDBC driver instead of the JCC driver. The main reason is that JCC itself is in beta, and I wanted to use something I knew how to connect with and trace. JCC also requires a DRDA listener thread, and I don't have one set up for the instance I am using. Update: Check out Establishing JCC connections in 11.10 if you want to use JCC connections.
Fill the remaining fields above with the normal Informix connection settings, and test your connection. Once your connection is working, click on the Next button.
This final screen allows you to filter your schema. Here is the screen shot:
Please note: The default is only for users who have IDS on a Windows platform. Since a schema id is actually a user id in IDS, filtering on INFORMIX means that no information of any kind will be returned for any database objects if your IDS instance is on a Unix platform. The reason, of course, is that the user id is case sensitive, and informix != INFORMIX. Quite honestly, I suggest you just disable filtering for now; when I cover the Database Explorer I will show you how to change the filter however you want.
As soon as you have selected your filter objects, you will finally see the workbench laid out and ready to be used for a Data Development project. Here is mine:
In the next article I will discuss the Database Explorer window in the lower left hand corner.
Mark Jamison
Combining Day 1 and Day 2 activities. It has been a busy two days with some great announcements. We started off with Arvind announcing xC4 and, most importantly, our storage optimization (i.e. compression) feature and warehouse feature.
Compression can save you as much as 80% of disk space, and the demo at the upgrade kiosk showed a compression rate of 73%. Check out our compression site at: http://www-01.ibm.com/software/data/informix/compression/
This site features an ROI tool and a white paper that is a must read for all. It clearly articulates the value proposition of compression and how easy it is to set up and use. The OAT sessions and demos showcased this feature and its ease of use very well, and these sessions were amongst the most popular.
The warehouse announcement is another exciting one; it includes tooling that greatly simplifies your warehouse design and deployment and aids in your BI decisions. The demo sessions highlighting the warehouse feature were also a great attraction. Check out our warehouse site at: http://www-01.ibm.com/software/data/informix/warehouse/
The site features a white paper that is a great read. We had another great keynote with Jerry and Kevin, who walked us through the Informix roadmap. I am sure this session gave everyone an idea of where Informix is heading and how each of you can help with driving key features within the product as well. The rest of the day as usual had some great sessions, and I just did not want to highlight one over the other. Let me just state that there were almost 25 great sessions that took place through the day.
On Tuesday, Ananth gave his keynote on cloud computing and how it can be a critical success factor in your IT plans. This is an exciting space that IBM is in, and the prospects and potential seem endless in terms of its use, especially with its cost effectiveness and ease of deployment. Guy's blog has some great entries on this, and I would encourage everyone to read up on it: http://www.ibm.com/developerworks/blogs/page/gbowerman
The sessions again were great, and the feedback has been astounding in terms of the quality of the sessions. There is a reason why the IIUG conference is an attractive proposition and why you should make this a yearly event on your calendar.
Oh, by the way, there is an interesting contest going on, sponsored by Advanced Data Tools, called the Fastest DBA, where the goal is to take an SQL statement that runs in about 30 minutes and make it run faster. The last I checked, the time to beat was around 0.54 seconds. We definitely need more of these at future conferences.
The certification room was receiving its regular stream of test takers as well, and at least the ones I saw came out smiling, which is good :)
Tomorrow is the last day of the conference and is filled with some exciting sessions. The keynote is a Q&A session that will raise some good discussions about marketing, support, sales and dev. I will sum up the conference tomorrow. Enjoy the rest of the conference!
Suppose you have an application which depends on IDS. Your business requirement is that the application must start only after IDS successfully initializes shared memory. Previously there was no way to validate the return code of the oninit process to decide whether or not IDS initialized successfully. So, if you had a script that automatically started IDS and then the application, it was possible for the application to start even though IDS failed to initialize.
A new oninit option has been introduced in IDS 11.50 that generates a return code. Based on the return code you can customize the script and automate the startup process.
The 'oninit -w' command generates the following return codes:
- 0 - success
- 1 - initialization failed or exceeded the timeout value
The 'oninit -w' command forces IDS to wait until it successfully initializes before returning to a shell prompt. You can also provide an additional timeout argument with the '-w' option. Without a timeout value, IDS uses the default of 10 minutes. If IDS cannot initialize within the timeout period, oninit generates return code 1 and writes the following error message to the online.log file:
Warning: wait time expired
You can use the '-w' option in combination with any other oninit initialization options.
Couple of points to remember:
- In a high-availability environment, you can only use the 'oninit -w' command on the primary server; it is not valid on secondary servers.
- The oninit command returns success when the sysmaster, sysutils, sysuser and sysadmin databases are successfully created.
Sanjit Chakraborty
Data modeling is a method used to define and analyze the data requirements needed to support the business processes of an organization. Ideally, the data requirements are recorded as a conceptual model that is not platform-specific (the logical data model) with associated definitions for the entities and attributes. This logical model can serve as the ‘information hub’ to help define and enforce consistency across enterprise data. From the logical model, multiple database-specific implementations can be generated (via physical models).
InfoSphere Data Architect (IDA) – formerly called Rational Data Architect (RDA) – is a great modeling tool for any shop that includes Informix Dynamic Server databases in its mix. Even if you don’t have a ‘data architect’, this product can help DBAs communicate with developers and also help you keep tabs on what is and isn’t “real” any more (by comparing the model of what you think you have with what your database actually does have).
I recently updated a tutorial on using IDA specifically from the perspective of using it with IDS. There are several enhancements to the model elements for IDS in IDA 7.5.1:
- You can set/view extra properties for Table elements (extent size, raw table type, lock level)
- Enhancements to constraints to support modes (enabled/disabled/filtering)
- Trigger enhancements suited to IDS syntax including support for INSTEAD OF triggers on views
- Generate DDL enhancements for all of the above
Anyway, if you get a chance, download the IDA trial and check out the latest version of the tutorial and let me know what you think. Link to IDA trial download Link to an IDA demo
We would appreciate your help in building a more robust, efficient, and client-focused IBM Electronic Support system by filling out this questionnaire:
It should take only approximately five to ten minutes to complete, but the results will help us make your IBM Support experience the best it can be.
Thank you for your valuable input!
The Enterprise IBM Electronic Support team
Onbar backup order on Cheetah
1> Root dbspace, SBLOB space, BLOB space, and normal dbspace onbar backup order
The root dbspace is always first, then the SBLOB space, then the BLOB space; the normal dbspace is last.
The onstat -d output is as follows:
IBM Informix Dynamic Server Version 11.10.UC1 -- On-Line -- Up 00:19:47 -- 44672 Kbytes
address number flags fchunk nchunks pgsize flags owner name
44cdf7f0 1 0x60001 1 1 2048 N B informix rootdbs
45ca6ea8 2 0x40001 2 1 2048 N B informix ddbs1
45c26ea8 3 0x40001 3 1 2048 N B informix ddbs2
45ca7a00 4 0x48001 4 1 2048 N SB informix sbdbs
45bbeeb0 5 0x60011 5 1 8192 N BB informix bdbs
5 active, 2047 maximum
Note: For BLOB chunks, the number of free pages shown is out of date.
Run 'onstat -d update' for current stats.
address chunk/dbs offset size free bpages flags pathname
44cdf950 1 1 0 50000 15421 PO-B /home/informix/cheetah/dsk/rootdbs
45c98e38 2 2 0 5000 4947 PO-B ./dsk/ddbs1
45ca73b0 3 3 0 5000 3449 PO-B ./dsk/ddbs2
45ca7b60 4 4 0 5000 4587 4587 POSB ./dsk/sbdbs
Metadata 360 268 360
45bb0e48 5 5 0 5000 ~1250 1250 POBB ./dsk/bdbs
5 active, 32766 maximum
After running 'onbar -b -w', the backup order can be seen in the bar_act.log, as follows:
2007-06-15 04:41:10 5218 5216 /home/informix/cheetah/bin/onbar_d -b -w
2007-06-15 04:41:10 5218 5216 Archive started on rootdbs, sbdbs, bdbs, ddbs2, ddbs1 (Requested Level 0).
2007-06-15 04:41:11 5218 5216 Begin level 0 backup rootdbs.
2007-06-15 04:41:11 5218 5216 Successfully connected to Storage Manager.
2007-06-15 04:41:11 5218 5216 Completed level 0 backup rootdbs (Storage Manager copy ID: 751421824 0).
2007-06-15 04:41:12 5223 5218 Process 5223 5218 successfully forked.
2007-06-15 04:41:12 5223 5218 Begin level 0 backup sbdbs.
2007-06-15 04:41:12 5223 5218 Successfully connected to Storage Manager.
2007-06-15 04:41:12 5224 5218 Process 5224 5218 successfully forked.
2007-06-15 04:41:12 5224 5218 Begin level 0 backup bdbs.
2007-06-15 04:41:12 5224 5218 Successfully connected to Storage Manager.
2007-06-15 04:41:12 5225 5218 Process 5225 5218 successfully forked.
2007-06-15 04:41:12 5225 5218 Begin level 0 backup ddbs2.
2007-06-15 04:41:12 5226 5218 Process 5226 5218 successfully forked.
2007-06-15 04:41:12 5226 5218 Begin level 0 backup ddbs1.
2> Normal dbspace backup order
The 'onbar -b -w' backup order for normal dbspaces does not depend on the order in which the dbspaces were created, but on the amount of space used in each dbspace.
In the above example, ddbs2 has more used space than ddbs1, so onbar backs up ddbs2 first and then ddbs1, regardless of creation order.
Here we will look at each of the XML functions given in Part 1 with examples.
genxml()/genxmlclob()
You can use these functions to create an XML row element for each row of a SQL query result. Each column in the row is an attribute of the row element. genxml() is used for returned row values that are LVARCHAR(32739) or less. For larger values, you should use genxmlclob(), which returns a CLOB.
These functions process the rows without any specific order. If the order of the rows is important, you can use derived table queries to get the result set in the correct order, and then apply the functions on the result set.
select genxml(row (customer_num, fname), "row") from customer;
Output from the above sql:
<row customer_num="101" fname="Ludwig "/><row customer_num="102" fname="Carole "/><row customer_num="103" fname="Philip "/><row customer_num="104" fname="Anthony "/>
To order the results on fname, use the following sql:
select genxml(row(num,name),"row") from (select customer_num, fname from customer order by fname) as vt(num, name);
To select all columns in the table, use the following sql:
select genxml(customer, "row") from customer;
genxmlelem()/genxmlelemclob()
These functions return each column value as a separate element, in contrast to genxml(), which returns column values as attributes of the row element.
select genxmlelem(row(customer_num,fname), "cust") from customer;
Output from the above sql:
<cust><row><customer_num>101</customer_num><fname>Ludwig </fname></row><row><customer_num>102</customer_num><fname>Carole </fname></row><row><customer_num>103</customer_num><fname>Philip </fname></row><row><customer_num>104</customer_num><fname>Anthony </fname></row></cust>
You can select all columns in a table by passing the table name as the first argument as shown below:
select genxmlelemclob(customer, "cust") from customer;
genxmlschema() & genxmlschemaclob()
They are like genxml() but generate a full XML schema including the XML header and data. An XML header specifies document properties such as the document encoding, the document type definition (DTD), and XML stylesheet (XSL). The following example shows a select using genxmlschema and its output:
select genxmlschema(customer, "cust") from customer;
Output from the above sql:
<?xml version="1.0" encoding="en_US.819" ?> <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema" targetNamespace="http://www.ibm.com" xmlns="http://www.ibm.com" elementFormDefault="qualified"> <xs:element name="cust"> <xs:complexType> <xs:sequence> <xs:element name="customer_num" type="xs:serial"/> <xs:element name="fname" type="xs:char(15)"/> <xs:element name="lname" type="xs:char(15)"/> <xs:element name="company" type="xs:char(20)"/> ...
genxmlquery() & genxmlqueryclob()
They are versatile functions that take a SQL query as an argument and return the result set as XML.
execute function genxmlquery('cust','SELECT customer_num FROM customer');
Output from the above sql:
<cust><row><customer_num>101</customer_num></row><row><customer_num>102</customer_num></row><row><customer_num>103</customer_num></row><row><customer_num>104</customer_num></row></cust>
genxmlqueryhdr() & genxmlqueryhdrclob()
These functions return the same data as genxmlquery() but with an XML header.
execute function genxmlqueryhdr('cust','SELECT customer_num FROM customer');
Output from the above sql:
<?xml version="1.0" encoding="en_US.819" ?><xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"><cust><row><customer_num>101</customer_num></row><row><customer_num>102</customer_num></row><row><customer_num>103</customer_num></row><row><customer_num>104</customer_num></row>
extract() and extractxmlclob()
These functions evaluate an XPATH expression on an XML column, document, or string. These functions are identical except that extractxmlclob() returns a CLOB instead of LVARCHAR. They are used to return an XML fragment of the evaluated XML column, document, or string. They are compatible with the Oracle extract() function.
execute function extract('<name><first>fred</first></name>','/name/first');
Output from the above sql:
(expression) <first>fred</first>
extractvalue() and extractxmlclobvalue()
These functions return the value of the XML node in contrast to extract(), which returns the XML node. They are compatible with the Oracle extractvalue() function.
execute function extractvalue('<name><first>fred</first></name>','/name/first');
Output from the above sql is just the name without the XML tags:
(expression) fred
existsnode()
This function returns 1 if the specified XML node exists in an XML document. It is compatible with the Oracle existsnode() function.
execute function existsnode('<name><first>fred</first></name>','/name/first');
Output from the above sql:
(expression) 1
idsxmlparse()
This function parses an XML document or fragment to determine whether it is well formed. It returns the XML document or fragment if the input XML is well formed; otherwise an error is returned.
SELECT idsxmlparse('<purchaseOrder poNo="124356"><customerName>ABC Enterprises</customerName><itemNo>F123456</itemNo></purchaseOrder>') AS PO FROM systables where tabid = 1;
Output from the above sql:
<purchaseOrder poNo="124356"><customerName>ABC Enterprises</customerName><itemNo>F123456</itemNo></purchaseOrder>
Note that the string in the quotes should not have any carriage returns.
Derived Tables in the FROM Clause of Queries
The SELECT statement can now include syntax that complies with ISO/IEC 9075:1992, the SQL-92 standard, to specify a full select subquery in the FROM clause as a data source for the query. These subqueries are called derived tables or table expressions; they can be simple, UNION, or joined subqueries, including OUTER joins, and can include the ORDER BY clause. In addition, AS correlation specifications in the FROM clause can declare temporary names for columns within the query. Informix-extension syntax, such as the FUNCTION keyword with iterator functions or the TABLE (MULTISET (SELECT ...)) keywords for collection-derived tables, can now be replaced in the FROM clause by SQL-92 syntax. This feature expands the capability of Informix Dynamic Server to run, without modification, queries that are interoperable on other database servers that support industry-standard SQL syntax.
You can find examples using derived tables in the demo directory:$INFORMIXDIR/demo/dbaccess/demo_ud/sel_sql99.sql.
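As a quick illustration, the following sketch (using the customer table from the stores demo database, as in the XML examples elsewhere in this series) treats an ordered subquery as a derived table and renames its columns with an AS correlation clause:

```sql
-- SQL-92 derived table: the subquery in the FROM clause acts as a
-- temporary table "t" whose columns are renamed to num and name.
SELECT t.num, t.name
FROM (SELECT customer_num, fname
      FROM customer
      ORDER BY fname) AS t(num, name)
WHERE t.num > 102;
```

The same statement runs unchanged on other servers that support SQL-92 derived tables, which is the point of the feature.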
Optimizer Directives in ANSI-Compliant Joined Queries
Earlier IDS versions supported optimizer directives in Informix-extension joined queries, but not in queries that used ANSI/ISO syntax to specify joins. For both inline directives and external directives, this release extends support in ANSI/ISO joined queries to the following classes of optimizer directives:
- Access-method directives (FULL, AVOID_FULL, INDEX, AVOID_INDEX, INDEX_SJ, AVOID_INDEX_SJ)
- Explain-mode directives (EXPLAIN, AVOID_EXECUTE)
- Optimization-goal directives (ALL_ROWS, FIRST_ROWS).
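For example, inline directives are written as a specially formatted comment immediately after the SELECT keyword. The following sketch (table aliases refer to the stores demo customer and orders tables) combines an access-method directive with an optimization-goal directive in an ANSI/ISO joined query:

```sql
-- Avoid a full scan of customer and optimize for returning the
-- first rows quickly, in an ANSI/ISO INNER JOIN query.
SELECT {+ AVOID_FULL(c), FIRST_ROWS}
       c.customer_num, o.order_num
FROM customer AS c
     INNER JOIN orders AS o ON c.customer_num = o.customer_num;
```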
The join-order directive (ORDERED) is supported only in ANSI/ISO-compliant LEFT OUTER joins and INNER joins. Because of ordering requirements for OUTER joins, in ANSI-compliant joined queries that specify the RIGHT OUTER JOIN or FULL OUTER JOIN keywords, the ORDERED join-order directive is ignored, but it is listed under Directives Not Followed in the sqexplain.out file.
This feature does not support the join-method directives (USE_NL, AVOID_NL, USE_HASH, AVOID_HASH, /BUILD, and /PROBE) in ANSI/ISO joined queries, except in cases where the optimizer rewrites the query so that it no longer uses the ANSI/ISO syntax.
Trigger Enhancements
Several new features expand the syntax and the functionality of triggers on tables and on views:
- You can now define multiple INSERT, DELETE, UPDATE, and SELECT triggers on a table and multiple INSTEAD OF triggers for the view.
- When a table, view, or column list has multiple triggers for a DML event type, Informix Dynamic Server executes all BEFORE triggered actions before the FOR EACH ROW actions, and executes all FOR EACH ROW actions before the AFTER actions.
- You can create SPL procedures that refer to applicable OLD and NEW trigger correlated values. Within the procedure you can access applicable OLD and NEW values and modify the NEW values, e.g. using LET statements. From a FOR EACH ROW trigger action, you can execute this SPL procedure [syntax: execute procedure foo() with trigger references].
- New Boolean operators (DELETING, INSERTING, SELECTING, and UPDATING) can be used in procedures executed from trigger action statements. These test whether the currently executing triggered action was triggered by the specified type of DML event and return a boolean value. The IF statement of SPL and the CASE expression of SQL can specify these operators as the condition in a trigger routine.
These features make it easier to incorporate IDS triggers on tables and on views within a heterogeneous information management system where multiple applications need to share the table or view.
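A minimal sketch of the "with trigger references" mechanism described above; the procedure name, correlation name, and the UPPER() transformation are all illustrative, not from the original article:

```sql
-- SPL procedure that uses the INSERTING operator and modifies a
-- NEW correlated value; it is invoked from the trigger below.
CREATE PROCEDURE fix_cust()
    IF (INSERTING) THEN
        LET new_cust.fname = UPPER(new_cust.fname);
    END IF
END PROCEDURE;

CREATE TRIGGER cust_ins INSERT ON customer
    REFERENCING NEW AS new_cust
    FOR EACH ROW (EXECUTE PROCEDURE fix_cust() WITH TRIGGER REFERENCES);
```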
You can find examples using multiple triggers in the demo directory: $INFORMIXDIR/demo/dbaccess/demo_ud/cr_trig.sql.
Index Self-Join Query Plans
In earlier Informix Dynamic Server versions, queries of tables with composite indexes performed inefficiently if the ratio of duplicate values to the number of distinct values was much higher for the leading columns than for subsequent columns of the index. A new feature of the query optimizer supports a new type of index scan, called an index self-join path, that uses only subsets of the full range of a composite index. The table is logically joined to itself, and the more selective non-leading index keys are applied as index bound filters to each unique combination of the leading key values. By default, the optimizer considers this type of scan.
The optimizer also supports two new directives, INDEX_SJ and AVOID_INDEX_SJ. The INDEX_SJ directive forces an index self-join path using the specified index, or choosing the least costly index in a list of indexes, even if data distribution statistics are not available for the leading index key columns. The AVOID_INDEX_SJ directive prevents a self-join path for the specified index or indexes. This feature can improve query performance on tables with composite indexes.
Enhanced Concurrency with Committed Read Isolation
In the Committed Read isolation level, exclusive row-level locks held by other sessions can cause SQL operations to fail when attempting to read data in the locked rows. This release introduces a new LAST COMMITTED keyword option to the SET ISOLATION COMMITTED READ statement to reduce the risk of locking conflicts when attempting to read a table. This new syntax instructs IDS to return the most recently committed version of the rows, even if another concurrent session holds an exclusive row-level lock. This behavior can be extended to the Dirty Read, Read Uncommitted, and Read Committed isolation levels by setting the new USELASTCOMMITTED configuration parameter or through new options to the SET ENVIRONMENT statement.
This feature supports B-tree indexes and functional indexes, but not R-tree indexes. It does not support tables that are being accessed by DataBlade modules, tables with columns of collection data types, tables created using a Virtual Table Interface, tables with page-level locking, tables with exclusive table-level locks, unlogged tables, or tables in databases with no transaction logging.
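In a session, the new option is requested like this (the SELECT is just an ordinary read against the stores demo customer table):

```sql
-- Read the last committed row images instead of blocking on (or
-- failing against) exclusive row locks held by other sessions.
SET ISOLATION COMMITTED READ LAST COMMITTED;
SELECT customer_num, fname FROM customer;
```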
Enhanced Data Types and UDR Support in Cross-Server Distributed Queries
Earlier releases of IDS restricted the remote execution of UDRs in databases of other IDS instances to SPL routines that the EXECUTE FUNCTION or EXECUTE PROCEDURE statement invoke explicitly, and to SPL routines that queries and other DML operations invoked implicitly.
This release extends support for UDRs in cross-database and cross-server distributed operations to most contexts where a UDR is valid in the local database. In addition, external routines written in the C or Java languages are now valid in any distributed operation where an SPL routine is valid. This release also extends the data types that are valid as parameters or return values of cross-server UDRs, which were formerly restricted to non-opaque built-in SQL data types, by supporting these additional data types:
– DISTINCT of built-in types that are not opaque
– DISTINCT of BOOLEAN
– DISTINCT of LVARCHAR
– DISTINCT of the DISTINCT types listed above.
These data types can be returned by SPL, C, or Java language UDRs that use these data types as parameters or as return values, if the UDRs are defined in all the participating databases. Any implicit or explicit casts defined over these data types must be duplicated across all the participating Dynamic Server instances. The DISTINCT data types must have exactly the same data type hierarchy defined in all databases that participate in the distributed query.
This feature does not relax existing restrictions on other opaque and DISTINCT types or on large-object, serial, and collection data types in locally or remotely executed SPL routines or external routines.
To all Informix users, I would like to encourage you to take a look at our new IBM Data Studio Developer 2.1 offering. There are significant enhancements we think that IDS developers will like.
We now have new tools to view update statistics and perform object management tasks easily, such as specifying data partitions (either round robin or expression-based) and raw tables. Also, trigger support has been greatly improved to take advantage of the ability in IDS to have before, after, and ‘for each’ triggers, whereas only ‘for each’ was supported previously. These capabilities are available at no charge. Guy Bowerman talks a bit more about these features in his blog.
In Data Studio Developer, which includes the base tooling and all the value-add capabilities, we also have many new features that IDS developers can take advantage of.
Using our previous releases, IDS developers could improve productivity by using the SQL content assist and validation capabilities in the Java editor and generate applications quickly using the pureQuery API. You could capture SQL from any Java application and correlate which SQL was issued to the database and from where in the application. Using Data Studio, you could gain insight into which database tables and columns were used by the SQL. Developers and DBAs could easily isolate poorly performing queries in the application and work better together to assess changes needed in the application as a result of database schema changes.
Data Studio Developer 2.1 has even more to offer for Informix developers and DBAs:
- You can now understand more about how the SQL in your application is performing using visual displays of elapsed time per SQL statement, and then you can change the SQL that is issued without having to change the application. Use these features from Data Studio and pureQuery Runtime together to help develop enterprise-ready applications before throwing them over the wall into production.
- You can reduce or eliminate the risk of SQL injection, because now it is possible to ensure that only SQL that was previously captured and approved is allowed to execute. This is available for dynamic SQL, not just static, so by using it with pureQuery Runtime, Informix applications can take advantage of this.
- You can share database connections, reducing the amount of communication required to share details of connection information between administrators and all the developers who need to use the databases. Here's an entire article just on this topic.
See my article for more details on what's available for Informix developers and DBAs. We even have a set of videos that you can check out that follow along with the article.
The download for Data Studio Developer 2.1 should be available soon.
This single package contains both the no-charge, perpetual features available and supported for IDS customers with up-to-date maintenance contracts, and 30-day trial features of the value-added capabilities of the priced Data Studio Developer product. The trial features are optional during installation, but I think you’ll be happy if you give them a test drive.
In the meantime don’t hesitate to send me any feedback or suggestions that you may have. Then let me know either here or on the Data Studio Forum
what you think of the new release.
-- Sonali Surange
The Roles folder, as its name implies, gives you a view of all the roles created for the table, and a visual means to create new roles or alter existing roles. To get a quick look at how this folder works, let's create a test role that will have complete access to the customer table.
- Select create role from the folder, by right-clicking on the folder and following the menu options. It should look like the following:
- This should now bring up the Data Object Editor window in the center portion of your workspace. You will start in the General tab, and you will see that the Role name has been pre-filled. Let's change that to customer_role, so now you should see something like the following:
- Let's now make sure we grant privileges on customer for this role. To do this, move down to the Privilege tab, click on the table tab, and then click on "Grant New Privileges", which looks like a yellow star. That will bring up the Grant New Privileges window. While it looks like you can just type a table name here, you actually have to press the "browse" button and select the customer table from the browser. Once you do that you should see the window as shown below:
Go ahead and check all the boxes listed, and then click OK.
- You will now be back in the main window. Just click on the Run DDL button, and the customer_role role will be created.
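The DDL generated by the Run DDL button for these steps amounts to something like the following sketch (the exact GRANT list depends on which boxes you checked):

```sql
-- Create the role, then grant the customer-table privileges to it.
CREATE ROLE customer_role;
GRANT SELECT, INSERT, UPDATE, DELETE ON customer TO customer_role;
```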
Now that you have a role in your Roles folder, let's look at what you can actually do. You should see customer_role in the folder, so right-click on it; you should see the following:
As you can see you can do the following:
- Alter the role
- Drop the role
- Generate the DDL for the role.
- View Membership
The two not listed above (Analyze Impact and Compare With) are not actually used by roles. As I'm sure you have noticed, you cannot add users to roles from the Roles folder; that will come with our discussion of the Users folder.
IDS 11.10 supports some of the PL/SQL syntax, thus expanding the capability of IDS. It also facilitates easy migration of applications that use a lot of stored procedures and triggers written in PL/SQL.
This version supports the following syntax:
GOTO statement and Label syntax
Restrictions on GOTO:
- A GOTO statement cannot be used in an EXCEPTION block.
- Labels cannot be defined within an EXCEPTION block.
- A label must be unique within a stored procedure.
LOOP … END LOOP syntax
Example:
WHILE LOOP… END LOOP
<<whloop_label>> WHILE … LOOP … END LOOP whloop_label
You can re-write the above while loop using a label as shown below:
FOR LOOP… END LOOP
<<for_label>> FOR … LOOP … END LOOP for_label
Example:
Expression-based exit statement
- EXIT WHEN <expr>;
- EXIT loop_label WHEN <expr>; -- Label Based Exit.
This is especially useful in nested loop statements. Here is an example of a nested loop statement using a label and EXIT WHEN statements:
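A minimal sketch of such a nested loop (the procedure name, variables, and exit conditions are illustrative; the syntax follows the LOOP and EXIT WHEN forms described above):

```sql
-- The labeled EXIT leaves the outer loop directly; the unlabeled
-- EXIT terminates only the inner loop.
CREATE PROCEDURE nested_demo()
    DEFINE i, j INT;
    LET i = 0;
    <<outer_loop>>
    LOOP
        LET i = i + 1;
        LET j = 0;
        LOOP
            LET j = j + 1;
            EXIT outer_loop WHEN i * j > 6;  -- label-based exit
            EXIT WHEN j = 3;                 -- exits the inner loop only
        END LOOP;
    END LOOP;
END PROCEDURE;
```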
Please note that for WHILE and FOR loops you can use both the old and the new syntax.
Suma Vinod
One of the big ideas recently proposed by IBM, and further proof of our commitment to IDS, has been the idea of a common set of client tools (like a Java Common Client (JCC), for example). If you are going to have a common set of clients, or programming interfaces, it might not be a bad idea to have a common development tool that will allow you to use these common clients, for both DB2 and IDS. Well, IBM has released in beta an Eclipse-based common tool that allows that. For those who want to jump right in, here is the link: IBM Data Server Developer Workbench 9.5
As with many things IBM, since this is an open beta the actual name at the time of release may change. I will be blogging on this product over the next few weeks, covering its capabilities from the IDS side of the tool. So far there will be six parts to this report, not counting this introduction.
Part 1 - Installation for IDS
Part 2 - Setting up your workspace and connecting to IDS.
Part 3 - The Database Explorer window.
Part 4 - ER Diagramming
Part 5 - The SQL Builder and The SQL Editor Windows.
Part 6 - PureQuery and IDS
Mark Jamison