One of the new features in Cheetah is the ability to enable tracing for SQL. While this has been available in a limited fashion through the use of the IBM product I-SPY, the drawback has always been the performance hit incurred to actually monitor this information. That limitation is effectively eliminated with this feature in Cheetah. So how is SQL tracing enabled? It's actually pretty simple: one way is to just put SQLTRACE in your ONCONFIG. The configuration I use most often is:
This allows me to keep track of the last 1000 SQL statements on the instance.
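The ONCONFIG entry behind this would look roughly like the following (a sketch; the values mirror the task() call shown below, but check the SQLTRACE documentation for the exact parameter names):

```
SQLTRACE level=low,ntraces=1000,size=2,mode=global
```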
However, the ONCONFIG file is not the only way to enable tracing; you can also do this through the sysadmin database by running the following:
EXECUTE FUNCTION task("set sql tracing on", 1000, 2, "low", "global");
By using either of the above methods you have now enabled SQL tracing, but how do you validate it? The easiest method is by running:
onstat -g his
Below is a sample of onstat -g his output.
In Part 2 we will look at how to read this in SQL, and possible options for the data.
IDS 11 provides support for XML publishing, enabling applications to publish SQL results as XML, query XML data, and return XML. It also provides the framework to map an XML schema into a relational schema and to shred an XML document into relational tables.
The XML functions available in IDS 11 are summarized in the following table:
| Function | Description |
| --- | --- |
| genxml, genxmlclob | return SQL results as XML elements |
| genxmlelem, genxmlelemclob | return column values as XML elements |
| genxmlschema, genxmlschemaclob | return schema as XML |
| genxmlquery, genxmlqueryclob | return result set as XML |
| genxmlqueryhdr, genxmlqueryhdrclob | return result set as XML with header |
| extract, extractxmlclob | evaluate XPATH expression |
| extractvalue, extractxmlclobvalue | return value of XML node |
| existsnode | verify whether a node exists in XML doc |
| idsxmlparse | parse XML doc to determine if it is well-formed |
To use the XML functions in IDS 11:
- Start an XML VP by doing one of the following:
Add VPCLASS idsxmlvp,num=1 to the onconfig file to start the XML VP when the server starts
Use the onmode -p +1 idsxmlvp command to add an XML VP dynamically after the server has started
Note: An XML VP is required only if you use XPATH functions such as extract(), extractxmlclob(), extractvalue(), extractxmlclobvalue(), existsnode(), and idsxmlparse(). XML functions starting with gen do not require the XML VP.
- Verify that $INFORMIXDIR/lib/libxml.udr is read-only
- Make sure a default sbspace exists. The default sbspace is specified by the SBSPACENAME configuration parameter. You can check the output of onstat -d to verify that the space exists.
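Putting the steps together, a quick verification pass on a running instance might look like this (a sketch; exact onstat output varies by version):

```shell
onmode -p +1 idsxmlvp                # add an XML VP dynamically
ls -l $INFORMIXDIR/lib/libxml.udr    # confirm the file is read-only
onstat -d                            # confirm the default sbspace (SBSPACENAME) exists
```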
I will talk about each of the above functions with examples in Part 2.

Suma Vinod
Multi-node Active Cluster for High Availability (MACH11) is the code name for the new Continuous Availability feature introduced in Informix Dynamic Server (IDS) 11. This new feature significantly expands the high availability options in IDS to provide increased failover, capacity, flexibility, and scalability.
Traditionally, IDS has provided multiple robust solutions for supporting high availability data replication. Previous releases of IDS have supported two replication technologies: Enterprise Replication (ER) and High Availability Data Replication (HDR). By using these technologies together, customers are able to achieve very high levels of data availability. Both replication technologies can be integrated with each other and coexist with other availability solutions such as disk mirroring.
While HDR and ER have been features of IDS for many years and have proven to be highly reliable and low-maintenance technologies, IDS now adds support for two new types of secondary servers:
- Secondary servers that share the same physical disk. The Shared Disk Secondary (SDS) servers provide increased availability by allowing one or more instances of the IDS server to attach to the same disk subsystem, providing redundancy for the server in addition to data redundancy solutions
- Additional remote secondary servers. The Remote Standalone Secondary (RSS) servers extend HDR to provide multiple local or remote backup servers that also replicate the data.
Both SDS and RSS servers provide customers a way to obtain increased capacity by distributing workload across multiple servers. Customers can choose any of these solutions on their own; they become even more powerful when all three types of topologies are combined. Adding Enterprise Replication, a completely customized availability solution can be delivered to meet each unique availability requirement. These configurations are simple to set up and maintain, and are highly scalable.
The following image depicts a Continuous Availability solution with 3-tier protection. For example, if the Primary in Building-A in New Orleans went down for some reason, the role of the primary can easily be switched to one of the SDS servers running on the blade server in Building-B. This would cause all other secondary servers to automatically connect to the new primary server. If both the servers in New Orleans died, Memphis becomes your Primary, Denver can be made the HDR secondary, and you may even add some SDS servers to the Memphis blade server for load balancing.
To learn more about this feature, please read the following white paper written by the architects of this feature: Whitepaper on MACH11 technology.

Vijay Lolabattu
In previous IDS versions, the logging mode of a temp table matches the logging mode of the database in which it is created. In other words, temp tables created in a logged database will be logging by default, and temp tables created in a non-logged database will be non-logging by default. If you do not want your temp tables to be logging in your logged database, you have to use the "with no log" clause when you create them.
For example the following SQL creates the temp table temptab1 with no logging.
create temp table temptab1 (col1 int) with no log;
In IDS 11, you can control the default logging mode of temporary tables. If you set the configuration parameter TEMPTAB_NOLOG to 1, temp tables created in a logged database will be non-logging by default. This is useful in replication environments where you create your databases logged but do not want to replicate your temp tables.
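In the ONCONFIG file this is just a one-line setting (a sketch):

```
TEMPTAB_NOLOG 1
```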
If TEMPTAB_NOLOG is set to 0, this feature is disabled.

Suma Vinod
In IDS version 11.10, a significant amount of focus has been given to automatically keeping administrators up to date on database performance. Several performance advisory triggers have been added to the Informix server to flag the administrator once a threshold is reached. IDS automatically calculates the different thresholds based on current database activity and generates an appropriate performance advisory in the message log (MSGPATH).
The following is a list of the performance advisories available in IDS version 11.10:
- Physical log too small for RTO_SERVER_RESTART
- IDS server boot time too long
- Physical log too small
- Logical log too small
- Long transactions blocking checkpoints
- Physical log too small to accommodate bufferpool flushing
- Logical log too small to accommodate bufferpool flushing
- Physical log too small to accommodate automatic checkpoints
- Logical log too small to accommodate automatic checkpoints
- Bufferpool flushing
In this article we will discuss the checkpoint-related advisories. The ‘onstat -g ckp’ command was introduced in IDS version 11.10 to get checkpoint information and checkpoint-related advisories. This command displays recommendations for checkpoint-related configuration parameters once a sub-optimal configuration is detected.
The following configuration parameters can significantly impact the performance of IDS:
During a checkpoint, IDS evaluates the checkpoint-related configuration parameters and produces a performance advisory if a sub-optimal configuration is detected. If these configuration parameters are not tuned properly, the database server can encounter blocking during checkpoints.
The following is an example of ‘onstat -g ckp’ command output:
The description of the ‘onstat -g ckp’ command output:
- The first section of the display describes checkpoint information
- The second section of the output describes the checkpoint-related advisories
In the above example the physical log file size (PHYSFILE) was set to 15000 KB. During the checkpoint, IDS evaluated the configuration parameters, found that the physical log size was too small, which can cause checkpoint blocking, and suggested a PHYSFILE value of 16800 KB.
The checkpoint performance advisories are also available in the message log file. The following is an example of a message log with a checkpoint performance advisory:
Note: if you have missed any articles in this series, check the Introduction and Table of Contents
The Database Explorer Window
Now that you have a workspace and a connection established, it is time to take a quick look around the options available to you in the Database Explorer window. By default this window is located in the lower left-hand corner, and on startup looks like this:
As you can see this has a toolbar as well as the Connections folder. Let's take a quick look at the options you have on the toolbar. Traveling from left to right, you have:
- Collapse All: This allows you to quickly collapse all the folders you have opened in your explorer, as shown in the above dialog.
- Link: This allows you to link data in the explorer to an open editor.
- SQL Scrapbook: This lets you open up a file that saves the SQL you are testing.
- Disconnect: Will disconnect any connection you have highlighted.
- New Connection: Will take you to the connection tool, so you can add more connections and databases to your Database Explorer.
- SQL Editor: This lets you go straight to the SQL Editor window.
- Export Connection: This lets you export an existing connection to XML, which makes it easy to deploy connections to multiple machines.
- Import Connection: Allows you to import the connection settings from an XML file. So if, for instance, someone were to export the connection settings for a development database, rather than having to gather all the information to fill in the connection string yourself, all you have to do is import the connection.
Now let's open up our Connections folder in this window. We will be looking at the stores_demo database. So first we open the Connections folder, then the stores_demo connection folder, and finally the stores_demo database folder. When the folders are open you will see something like the following:
As you can see, each connection tells you what type of instance you are connecting to; in my case all of my instances are 11.10, though Developer Workbench reports them as 11.0. You can see that the Database folder has three sub-folders. Those folders are:
Click on each link to get a detailed overview, and examples, of what each of these folders can do.

Mark Jamison
The newest version of OpenAdmin Tool for IDS, Version 2.22, is available now! The latest features center on Enterprise Replication (ER) monitoring and security.
ER Plug-in Version 1.1: Version 1.1 of OAT’s ER plug-in greatly enhances OAT’s Enterprise Replication monitoring capability.
- The ER Routing Topology page has been transformed to allow monitoring of all nodes in the ER domain from a single page without having to drill-down on each node. Users can set thresholds for key ER statistics and then use the Routing Topology page to monitor alerts and profile data for each node in their domain. (Requires IDS server version 11.50xC2.)
- The ER Node Details pages have been expanded to show errors for the current node or the entire ER domain (Errors tab) and to list current values of the ER configuration parameters (Configuration tab).
- Check out a demo of the newest ER monitoring features here: ER Monitoring with Alerts Demo
Secure SQLToolbox: OAT admins can now choose to turn on an additional level of security for the SQL Toolbox pages. If “Secure SQLToolbox” is turned on, OAT users will have to re-authenticate in order to view schema data or use the SQL Editor. This additional layer of security can be used to ensure that OAT users are not automatically allowed free access to databases or tables as the user informix.
Download OAT Version 2.22 now at https://www14.software.ibm.com/webapp/iwm/web/preLogin.do?lang=en_US&source=swg-informixfpd
For additional information on OpenAdmin Tool for IDS, including feature details, screenshots and demos, go to www.openadmintool.org.
Also check out the new developerWorks article on writing custom plug-ins for OAT: www.ibm.com/developerworks/db2/library/techarticle/dm-0808vonbargen

Erika Von Bargen
The Roles folder, as its name implies, gives you a view of all the roles created for the database, and a visual means to create new roles or alter existing roles. To get a quick look at how this folder works, let's create a test role that will have complete access to the customer table.
- Select Create Role from the folder by right-clicking on the folder and following the menu options. It should look like the following:
- This should now bring up the Data Object Editor window in the center portion of your workspace. You will start in the General tab, and you will see that the role name has been pre-filled. Let's change that to customer_role, so now you should see something like the following:
- Let's now make sure we grant privileges on customer for this role. To do this, move down to the Privileges tab, click on the table tab, and then click on "Grant New Privileges", which looks like a yellow star. That will bring up the Grant New Privileges window. While it looks like you can just type a table name here, you actually have to press the "Browse" button and select the customer table from the browser. Once you do that you should see the window as shown below:
Go ahead and check all the boxes listed, and then click OK.
- You will now be back in the main window. Just click on the Run DDL button, and the customer_role role will be created.
Now that you have a role in your Roles folder, let's look at what you can actually do. You should see customer_role in the folder, so right-click on it and you should see the following:
As you can see you can do the following:
- Alter the role
- Drop the role
- Generate the DDL for the role.
- View Membership
The two not listed above (Analyze Impact and Compare With) are not actually used by roles. As I'm sure you have noticed, you cannot add users to roles from the Roles folder; that will come with our discussion of the Users folder.
Suppose you have an application that depends on IDS, and your business requirement is that the application must start only after IDS memory initialization succeeds. Previously there was no way to validate the return code of the oninit process to decide whether IDS initialized successfully. So, if you had a script that started IDS and then the application, the application might start even though IDS had failed to initialize.
A new oninit option has been introduced in IDS 11.50 that generates a return code. Based on the return code you can customize the script and automate the startup process.
The 'oninit -w' command generates the following return codes:
- 0 - initialization succeeded
- 1 - initialization failed or exceeded the timeout value
The 'oninit -w' command forces IDS to wait until it successfully initializes before returning to a shell prompt. You can also provide an additional argument to the '-w' option for the timeout value. Without a timeout value, IDS uses the default of 10 minutes. If IDS cannot initialize within the timeout period, oninit generates return code 1 and writes the following error message to the online.log file:
Warning: wait time expired
The syntax of the new command is as follows:
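As a sketch (the timeout unit, assumed here to be seconds, should be confirmed against the oninit reference):

```
oninit -w [timeout]
```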
You can use the '-w' option in combination with any other oninit initialization options.
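A startup script built on this behavior might look like the following (a sketch; start_application is a hypothetical stand-in for whatever launches your application, and the timeout value is an assumption):

```shell
#!/bin/sh
# oninit -w returns 0 only after IDS initializes successfully
if oninit -w 600; then
    start_application   # hypothetical command to start the dependent application
else
    echo "IDS failed to initialize; application not started" >&2
    exit 1
fi
```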
A couple of points to remember:
- In a high-availability environment, you can only use the 'oninit -w' command on the primary server; it is not valid on secondary servers.
- The oninit command returns success when sysmaster, sysutils, sysuser and sysadmin are successfully created.
Sanjit Chakraborty
Establishing JCC connections for IDS
While working with the open beta of IBM Data Server Developer Workbench 9.5, I noticed that in addition to that product you also become an open beta participant for the Java Common Client (JCC) driver. In fact, the Workbench uses the JCC driver by default. On the one hand, this is absolutely great, because it helps you see where IBM's long-range direction is with clients. On the other hand, JCC requires some special setup at the IDS instance level. This article will detail how to set up your instance to handle JCC connections, and then will use the Developer Workbench to show how you connect using JCC.
What happened to my SQLI?
The reason JCC connections require engine configuration changes is that the default protocol through which an IDS instance communicates is SQLI, whereas JCC needs the DRDA protocol. Currently IDS does not allow both SQLI and DRDA connections through the same listener thread, so we need to configure a listener thread and port to accept DRDA calls.
This is done by making a simple change to the SQLHOSTS file as shown below:
<instance name> drsoctcp <hostname> <servicename>
<instance name> drsipcshm <hostname> <servicename>
As you can see the protocols look the same as our old ones, except instead of the prefix ol you use dr. A full example is below:
cheetah_dr drsoctcp gofish sqlexec2
After changing the SQLHOSTS file, all you need to do is place this entry in your onconfig file, in the DBSERVERALIASES variable. So if you used the example SQLHOSTS entry above and currently had no aliases, you would have the following in your onconfig file:
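Using the example above, and assuming no other aliases, the onconfig entry would be:

```
DBSERVERALIASES cheetah_dr
```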
Now just bring your instance offline and then back online, and then we are ready to use the Developer Workbench.
Setting up your JCC driver for 11.10 in Developer Workbench 9.5
Now that you have set your IDS instance up to accept JCC driver connections, let's establish one using Developer Workbench 9.5.
In your Database Explorer window, select Connection->New Connection
This will bring you to the Connection Settings window. Below is an example of the JCC driver with all fields filled in:
As you can see, you do not need an INFORMIXSERVER for your connection, just the database, host, and port number. Once you configure this for your server, just click the Test Connection button. After that just click the Finish button. You now have a JCC connection that you can use for your application running against IDS 11.10.

Mark Jamison
Note if you have missed any articles in this series, check Introduction and Table of Contents
Once you have downloaded the Developer Workbench, you need to install it. I will be discussing this from a Windows XP perspective, although Linux is also straightforward. Now that you have downloaded the client you should have a single .zip file called:
The first step is to unzip this file into a subdirectory of your choosing. After that, run install_win32.exe from that subdirectory. Once through the splash screen you should see a screen like the following:
Please note that this install includes a fair amount of Java, so the final install size will be around 2.8 gigabytes. The next screen is just the basic license agreement. Since Data Server Developer Workbench is built upon the Eclipse framework and includes several plugins, there are two licenses listed. If you want to use the product, read the agreement, select the Yes radio button, and then click Next. At this point you should see the next screen, which should be the following:
From the above you can specify where your shared resources go. I usually go with the default, but you may wish to make the name more meaningful.
After this screen you are taken to another screen and given the option to either use an existing package group or create a new one. When I was setting up, I just created a new package group and took the default settings. Finally, the last install screen allows you to install this into an existing copy of Eclipse, extending it. Please note that your version of Eclipse must have everything in the third-party license file installed and up to date for this to work. I went ahead and just installed a new version for the purposes of this review.
We now move on to the "Features" tab that is shown above. The first screen lets you choose what spoken languages you want installed. After selecting those languages (I only selected English), you move to the following page:
I went ahead from this page and added the two WebSphere pieces. The final page is a summary page, showing everything you are installing, and makes the "Install" button active. From there just hit Install. It took about 5 minutes to install on my trusty old T42p laptop.

Mark Jamison
IDS 11.10 supports some of the PL/SQL syntax thus expanding the capability of IDS. It also facilitates easy migration of applications that use a lot of stored procedures and triggers written in PL/SQL.
This version supports the following syntax:

GOTO statement and Label syntax
Restrictions on GOTO:
- A GOTO statement cannot be used in an EXCEPTION block.
- Labels cannot be defined within an EXCEPTION block.
- A label must be unique within a stored procedure.

LOOP … END LOOP syntax
Example:

WHILE LOOP … END LOOP

You can rewrite a while loop using a label as shown below:

<<whloop_label>> WHILE … LOOP … END LOOP whloop_label

FOR LOOP … END LOOP

<<for_label>> FOR … LOOP … END LOOP for_label

Example:

Expression-based EXIT statement
- EXIT WHEN <expr>;
- EXIT loop_label WHEN <expr>; -- Label Based Exit.
This is especially useful in nested loop statements. Here is an example of a nested loop statement using a label and EXIT WHEN statements:
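Since the example itself did not survive here, the following is a sketch of what such a procedure might look like (nested_loop_demo and its loop bounds are made up; consult the SPL documentation for the exact syntax):

```sql
CREATE PROCEDURE nested_loop_demo()
    DEFINE i, j INT;
    <<outer_loop>>
    FOR i IN (1 TO 5) LOOP
        <<inner_loop>>
        FOR j IN (1 TO 5) LOOP
            -- leave both loops as soon as the product exceeds 6
            EXIT outer_loop WHEN i * j > 6;
        END LOOP inner_loop;
    END LOOP outer_loop;
END PROCEDURE;
```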
Please note that for WHILE and FOR loops you can use both the old and the new syntax.

Suma Vinod
The other day I was setting up a high availability cluster environment and ran into an interesting problem. I followed all the necessary instructions for setting up an RSS server. However, the RSS server got stuck in the recovery process, and the message log on the primary server reported an error that it could not send a log. For example, when I executed the following command on the RSS server to set data replication:
onmode -d RSS <primary server name>
the RSS server got stuck in recovery mode and the message log on the primary server showed:

RSS Server <RSS server name> - state is now connected
Can not send log <log number>

The log number mentioned in the error message was not close to the current log on the primary or the RSS server. For example, the current log on the primary was 7438 and on the RSS 7436, but the message log stated ‘Can not send log 825241904’. So where was the server getting an out-of-sequence log number?
Initially I thought it was some kind of corruption. However, after some investigation I figured out that I was using delayed application (DELAY_APPLY) on the RSS server, and the directory specified by the LOG_STAGING_DIR configuration parameter was holding some unwanted files, for example the file 'ifmxUniqueLog_825241904'. So during recovery the RSS server requested log number 825241904 from the primary, but that log did not exist on the primary server.
Once I removed all files from the LOG_STAGING_DIR directory on the RSS server, I was able to successfully set up the high availability cluster environment. Conclusion: next time you set up an RSS server with DELAY_APPLY, make sure nothing is in the LOG_STAGING_DIR on the RSS server.
There is good news for those who were downloading the Node.1.0 DataBlade from developerWorks and using it AS-IS. As per policy, there is no support for AS-IS software downloaded from developerWorks.
The Node DataBlade provides an opaque node data type that addresses the problem of transitive closure/hierarchical data queries.
From IDS 11.10 onwards, the Node.2.0 DataBlade (with some added functionality) is bundled as one of the default DataBlades along with the server. That means that moving forward the Node.2.0 DataBlade (and subsequent higher versions) is eligible for support.

Anup Nair
Some customers find the single user mode in IDS 10 limited because only informix and the DBSA group have access to the database server. In IDS 11.10, you can give access to other DBAs and specific users in addition to informix and the DBSA group members when the server is in single user mode. I have listed the steps to give access below:
1. Make sure that the onconfig variable ADMIN_USER_MODE_WITH_DBSA is set to 1.
2. Set the onconfig variable ADMIN_MODE_USERS to a comma-separated user list.
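For example, to match the users mentioned below (a sketch):

```
ADMIN_MODE_USERS tom,don
```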
When the database server switches to single user mode, informix, the DBSA group, tom, and don will have access to the server.
3. Start the server in single user mode.
- oninit -j
users informix, the DBSA group, and the users in ADMIN_MODE_USERS have access
- oninit -j -U fred,bill
users informix, the DBSA group, and users fred and bill have access
You can use this command if you do not want to use ADMIN_MODE_USERS or if you want to override its value.
4. Dynamically change the server mode using onmode.
- onmode -j
users informix, the DBSA group, and the users in ADMIN_MODE_USERS have access
- onmode -j -U fred,bill
users informix, the DBSA group, and users fred and bill have access
You can use this command if you do not want to use ADMIN_MODE_USERS or if you want to override its value.
- onmode -j -U ' '
user informix and the DBSA group have access
You can use this command to clear the user list.
Here we will look at each of the XML functions given in Part 1 with examples.

genxml()/genxmlclob()
You can use these functions to create an XML row element for each row of a SQL query result. Each column in the row is an attribute of the row element. genxml() is used for returned row values that are LVARCHAR(32739) or less. For larger values, you should use genxmlclob(), which returns a CLOB.
These functions process the rows without any specific order. If the order of the rows is important, you can use derived table queries to get the result set in the correct order, and then apply the functions to the result set.
select genxml(row (customer_num, fname), "row") from customer;
Output from the above sql:
<row customer_num="101" fname="Ludwig "/><row customer_num="102" fname="Carole "/><row customer_num="103" fname="Philip "/><row customer_num="104" fname="Anthony "/>
To order the results on fname, use the following sql:
select genxml(row(num,name),"row") from (select customer_num, fname from customer order by fname) as vt(num, name);
To select all columns in the table, use the following sql:
select genxml(customer, "row") from customer;

genxmlelem()/genxmlelemclob()
These functions return each column value as a separate element, in contrast to genxml(), which returns column values as attributes of the row element.
select genxmlelem(row(customer_num,fname), "cust") from customer;
Output from the above sql:
<cust><row><customer_num>101</customer_num><fname>Ludwig </fname></row><row><customer_num>102</customer_num><fname>Carole </fname></row><row><customer_num>103</customer_num><fname>Philip </fname></row><row><customer_num>104</customer_num><fname>Anthony </fname></row></cust>
You can select all columns in a table by passing the table name as the first argument as shown below:
select genxmlelemclob(customer, "cust") from customer;

genxmlschema() & genxmlschemaclob()
These are like genxml() but generate a full XML schema including the XML header and data. An XML header specifies document properties such as the document encoding, the document type definition (DTD), and the XML stylesheet (XSL). The following example shows a select using genxmlschema and its output:
select genxmlschema(customer, "cust") from customer;
Output from the above sql:
<?xml version="1.0" encoding="en_US.819" ?> <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema" targetNamespace="http://www.ibm.com" xmlns="http://www.ibm.com" elementFormDefault="qualified"> <xs:element name="cust"> <xs:complexType> <xs:sequence> <xs:element name="customer_num" type="xs:serial"/> <xs:element name="fname" type="xs:char(15)"/> <xs:element name="lname" type="xs:char(15)"/> <xs:element name="company" type="xs:char(20)"/> ...

genxmlquery() & genxmlqueryclob()
These are versatile functions that take a SQL query as an argument and return the result set as XML.
execute function genxmlquery('cust','SELECT customer_num FROM customer');
Output from the above sql:
<cust><row><customer_num>101</customer_num></row><row><customer_num>102</customer_num></row><row><customer_num>103</customer_num></row><row><customer_num>104</customer_num></row></cust>

genxmlqueryhdr() & genxmlqueryhdrclob()
These functions return the same data as genxmlquery() but with an XML header.
execute function genxmlqueryhdr('cust','SELECT customer_num FROM customer');
Output from the above sql:
<?xml version="1.0" encoding="en_US.819" ?><xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"><cust><row><customer_num>101</customer_num></row><row><customer_num>102</customer_num></row><row><customer_num>103</customer_num></row><row><customer_num>104</customer_num></row>

extract() and extractxmlclob()
These functions evaluate an XPATH expression on an XML column, document, or string. These functions are identical except that extractxmlclob() returns a CLOB instead of an LVARCHAR. They are used to return an XML fragment of the evaluated XML column, document, or string. They are compatible with the Oracle extract() function.
execute function extract('<name><first>fred</first></name>','/name/first');
Output from the above sql:
(expression) <first>fred</first>

extractvalue() and extractxmlclobvalue()
These functions return the value of the XML node in contrast to extract(), which returns the XML node. They are compatible with the Oracle extractvalue() function.
execute function extractvalue('<name><first>fred</first></name>','/name/first');
Output from the above sql is just the name, fred, without the XML tags.
existsnode()

This function returns 1 if the specified XML node exists in an XML document. It is compatible with the Oracle existsnode() function.
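An example along the lines of the earlier extract() calls (a sketch, since the original example did not survive here):

```sql
execute function existsnode('<name><first>fred</first></name>', '/name/first');
```

This returns 1, since the /name/first node exists in the document.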
idsxmlparse()

This function parses an XML document or fragment to determine whether it is well formed. It returns the XML document or fragment if the input XML is well formed; otherwise an error is returned.
SELECT idsxmlparse('<purchaseOrder poNo="124356"><customerName>ABC Enterprises</customerName><itemNo>F123456</itemNo></purchaseOrder>') AS PO FROM systables where tabid = 1;
Output from the above sql:
<purchaseOrder poNo="124356"><customerName>ABC Enterprises</customerName><itemNo>F123456</itemNo></purchaseOrder>
Note that the string in the quotes should not have any carriage returns.
(From the introduction)
The domain of NoSQL database management systems is very broad. It encompasses NoSQL systems that do not manage relationships between sets of information at all or that do not use the SQL language for such relationships (“no SQL” systems), and those with the capability to use both SQL and relational data, as well as non-relational functions for accessing structured information (“not-only SQL” systems). Informix is now one of the “not-only SQL” DBMSs.
A large percentage of the NoSQL systems have been developed out of a need to work in the Web 2.0 environment. The JSON document format is a way to transfer object information in a way that is language neutral and is similar to XML in that respect. Language-neutral data transmission is crucial to being able to work in a web application environment, where it is common for an application to work with data from a variety of sources and pass data to and from software possibly written in different languages. The reduction in coordination required between application developers and database administrators enables more rapid development and deployment cycles.
Informix provides a unique platform where systems for engaging with customers can be easily integrated with systems for recording transactions.
Download the full document.
OAT 2.21 has incorporated IDS Enterprise Replication monitoring, a plugin manager to allow customization of OAT functionality and an automated installer on Mac OS X. OpenAdmin Tool for IDS (OAT) is a PHP-based Web browser administration tool for IDS 11 and IDS 11.5 that provides the ability to administer multiple database server instances from a single location. OAT makes administration easy by allowing you to drill down on resource usage and events, view query performance statistics, and much more. And since the tool is written in PHP and available as open source, you can customize it with your own business logic and installation requirements.
New feature highlights of OpenAdmin Tool for IDS version 2.21 include:
- Mac OS X Automated Installer: Automatically sets up Apache, PHP, I-Connect, and OAT on Mac OS X. For users who want to use or install their own Apache/PHP configurations, OAT is also released as a stand-alone package.
- IDS Enterprise Replication Monitoring: Monitors spool disk usage, send and receive queues, receiving/apply statistics, routing topology, node details, and much more. OAT Enterprise Replication is provided as a separate plugin. The latest automated installer will install the ER plugin for you; you can also manually place the ER plugin zip file under the OAT plugin_install directory and install it with the plugin manager (requires the PHP zip extension).
- Plugin manager: A simple way to customize OAT functionality. You can download customized OAT plugins and install them with the plugin manager. Sample plugin code is also provided, and you can follow the sample to create your own plugins. Note that the plugin manager requires the PHP zip extension. The latest OAT 2.21 automated installer includes the PHP zip extension, but the OAT 2.20 installer does not.
Download OAT version 2.21 now at https://www14.software.ibm.com/webapp/iwm/web/preLogin.do?lang=en_US&source=swg-informixfpd .
Click here to see OAT Enterprise Replication at work!
Leo Chan
Derived Tables in the FROM Clause of Queries
The SELECT statement can now include syntax that complies with ISO/IEC 9075:1992, the SQL-92 standard, to specify a full select subquery in the FROM clause as a data source for the query. These subqueries are called derived tables or table expressions; they can be simple, UNION, or joined subqueries, including OUTER joins, and can include the ORDER BY clause. In addition, AS correlation specifications in the FROM clause can declare temporary names for columns within the query. Informix-extension syntax, such as the FUNCTION keyword with iterator functions or the TABLE (MULTISET (SELECT ...)) keywords for collection-derived tables, can now be replaced in the FROM clause by SQL-92 syntax. This feature expands the capability of Informix Dynamic Server to run, without modification, queries that are interoperable on other database servers that support industry-standard SQL syntax.
You can find examples using derived tables in the demo directory: $INFORMIXDIR/demo/dbaccess/demo_ud/sel_sql99.sql.
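As an illustrative sketch of the syntax (the table and column names below are assumed, loosely modeled on the stores_demo schema, not taken from the demo file):

```sql
-- A derived table (SQL-92 subquery in the FROM clause) with an AS
-- correlation name for the subquery and its columns:
SELECT c.customer_num, t.total_ship
  FROM customers c,
       (SELECT customer_num, SUM(ship_charge)
          FROM orders
         GROUP BY customer_num) AS t(customer_num, total_ship)
 WHERE c.customer_num = t.customer_num
 ORDER BY t.total_ship DESC;
```

The AS clause both names the subquery result (t) and renames its columns, so the outer query can reference them like an ordinary table.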
Optimizer Directives in ANSI-Compliant Joined Queries
Earlier IDS versions supported optimizer directives in Informix-extension joined queries, but not in queries that used ANSI/ISO syntax to specify joins. For both inline directives and external directives, this release extends support in ANSI/ISO joined queries to the following classes of optimizer directives:
- Access-method directives (FULL, AVOID_FULL, INDEX, AVOID_INDEX, INDEX_SJ, AVOID_INDEX_SJ)
- Explain-mode directives (EXPLAIN, AVOID_EXECUTE)
- Optimization-goal directives (ALL_ROWS, FIRST_ROWS).
The join-order directive (ORDERED) is supported only in ANSI/ISO-compliant LEFT OUTER joins and INNER joins. Because of ordering requirements for OUTER joins, in ANSI-compliant joined queries that specify the RIGHT OUTER JOIN or FULL OUTER JOIN keywords, the ORDERED join-order directive is ignored, but it is listed under Directives Not Followed in the sqexplain.out file.
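As a hedged sketch (the table, column, and index names are hypothetical), inline directives can now be embedded in a query that uses ANSI/ISO join syntax:

```sql
-- Access-method and optimization-goal directives in an ANSI/ISO
-- INNER JOIN; "emp", "dept", and index "ix_dept" are assumed names:
SELECT --+ AVOID_FULL(e), INDEX(d, ix_dept), FIRST_ROWS
       e.emp_name, d.dept_name
  FROM emp e INNER JOIN dept d
       ON e.dept_no = d.dept_no;
```

The directives ask the optimizer to avoid a full scan of emp, use the ix_dept index on dept, and optimize for returning the first rows quickly.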
This feature does not support the join-method directives (USE_NL, AVOID_NL, USE_HASH, AVOID_HASH, /BUILD, and /PROBE) in ANSI/ISO joined queries, except in cases where the optimizer rewrites the query so that it no longer uses the ANSI/ISO syntax.
Trigger Enhancements
Several new features expand the syntax and the functionality of triggers on tables and on views:
- You can now define multiple INSERT, DELETE, UPDATE, and SELECT triggers on a table and multiple INSTEAD OF triggers on a view.
- When a table, view, or column list has multiple triggers for a DML event type, Informix Dynamic Server executes all BEFORE triggered actions before the FOR EACH ROW actions, and executes all FOR EACH ROW actions before the AFTER actions.
- You can create SPL procedures that refer to applicable OLD and NEW trigger correlated values. Within the procedure you can access applicable OLD and NEW values and modify the NEW values, for example by using LET statements. From a FOR EACH ROW trigger action, you can execute this SPL procedure [syntax: execute procedure foo() with trigger references].
- New Boolean operators (DELETING, INSERTING, SELECTING, and UPDATING) can be used in procedures executed from trigger action statements. These test whether the currently executing triggered action was triggered by the specified type of DML event and return a Boolean value. The IF statement of SPL and the CASE expression of SQL can specify these operators as the condition in a trigger routine.
These features make it easier to incorporate IDS triggers on tables and on views within a heterogeneous information management system where multiple applications need to share the table or view.
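The pieces above can be combined in a minimal sketch; the table "items", log table "items_log", and column "note" are hypothetical, and the WITH TRIGGER REFERENCES syntax follows the description above:

```sql
-- An SPL procedure that reads the INSERTING operator and modifies a
-- NEW correlated value with LET:
CREATE PROCEDURE items_audit()
    REFERENCING NEW AS n FOR items;
    IF (INSERTING) THEN
        LET n.note = 'set by trigger';
    END IF
END PROCEDURE;

-- Two INSERT triggers on the same table, now permitted:
CREATE TRIGGER items_ins1 INSERT ON items
    FOR EACH ROW (EXECUTE PROCEDURE items_audit() WITH TRIGGER REFERENCES);

CREATE TRIGGER items_ins2 INSERT ON items
    FOR EACH ROW (INSERT INTO items_log VALUES (CURRENT));
```

Because the procedure is declared FOR items with a REFERENCING clause, it can only be executed from a trigger on that table.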
You can find examples using multiple triggers in the demo directory: $INFORMIXDIR/demo/dbaccess/demo_ud/cr_trig.sql.
Index Self-Join Query Plans
In earlier Informix Dynamic Server versions, queries of tables with composite indexes performed inefficiently if the ratio of duplicate values to the number of distinct values was much higher for the leading columns than for subsequent columns of the index. A new feature of the query optimizer supports a new type of index scan, called an index self-join path, that uses only subsets of the full range of a composite index. The table is logically joined to itself, and the more selective non-leading index keys are applied as index-bound filters to each unique combination of the leading key values. By default, the optimizer considers this type of scan.
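A sketch of the kind of query that benefits (all names are hypothetical): a composite index whose leading column is highly duplicated, queried with filters only on the more selective non-leading keys:

```sql
CREATE INDEX ix_sales ON sales(state, zipcode, sale_date);

-- "state" has few distinct values; the optimizer may choose an index
-- self-join path, applying the selective (zipcode, sale_date) filters
-- once for each distinct leading value of "state":
SELECT *
  FROM sales
 WHERE zipcode = '94501'
   AND sale_date = MDY(6, 1, 2007);
```
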
The optimizer also supports two new access-method directives, INDEX_SJ and AVOID_INDEX_SJ. The INDEX_SJ directive forces an index self-join path using the specified index, or choosing the least costly index in a list of indexes, even if data distribution statistics are not available for the leading index key columns. The AVOID_INDEX_SJ directive prevents a self-join path for the specified index or indexes. This feature can improve query performance on tables with composite indexes.
Enhanced Concurrency with Committed Read Isolation
In Committed Read isolation level, exclusive row-level locks held by other sessions can cause SQL operations to fail when attempting to read data in the locked rows. This release introduces a new LAST COMMITTED keyword option to the SET ISOLATION COMMITTED READ statement to reduce the risk of locking conflicts when attempting to read a table. This new syntax instructs IDS to return the most recently committed version of the rows, even if another concurrent session holds an exclusive row-level lock. This behavior can be extended to the Dirty Read, Read Uncommitted, and Read Committed isolation levels by setting the new USELASTCOMMITTED configuration parameter or through new options to the SET ENVIRONMENT statement.
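For example, at the session level (a sketch; verify the exact keyword spellings against the IDS 11 SQL syntax documentation):

```sql
-- Read the most recently committed version of a row instead of
-- blocking on, or failing over, another session's exclusive row lock:
SET ISOLATION TO COMMITTED READ LAST COMMITTED;

-- Extend the same behavior to Dirty Read isolation for this session:
SET ENVIRONMENT USELASTCOMMITTED 'DIRTY READ';
```
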
This feature supports B-tree indexes and functional indexes, but not R-tree indexes. It does not support tables that are being accessed by DataBlade modules, tables with columns of collection data types, tables created using a Virtual Table Interface, tables with page-level locking, tables with exclusive table-level locks, unlogged tables, or tables in databases with no transaction logging.
Enhanced Data Types and UDR Support in Cross-Server Distributed Queries
Earlier releases of IDS restricted the remote execution of UDRs in databases of other IDS instances to SPL routines that the EXECUTE FUNCTION or EXECUTE PROCEDURE statement invoke explicitly, and to SPL routines that queries and other DML operations invoked implicitly.
This release extends support for UDRs in cross-database and cross-server distributed operations to most contexts where a UDR is valid in the local database. In addition, external routines written in the C or Java languages are now valid in any distributed operation where an SPL routine is valid. This release also extends the data types that are valid as parameters or return values of cross-server UDRs, which were formerly restricted to non-opaque built-in SQL data types, by supporting these additional data types:
– DISTINCT of built-in types that are not opaque
– DISTINCT of BOOLEAN
– DISTINCT of LVARCHAR
– DISTINCT of the DISTINCT types listed above.
These data types can be returned by SPL, C, or Java language UDRs that use these data types as parameters or as return values, if the UDRs are defined in all the participating databases. Any implicit or explicit casts defined over these data types must be duplicated across all the participating Dynamic Server instances. The DISTINCT data types must have exactly the same data type hierarchy defined in all databases that participate in the distributed query.
This feature does not relax existing restrictions on other opaque and DISTINCT types or on large-object, serial, and collection data types in locally or remotely executed SPL routines or external routines.
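A sketch of a cross-server invocation with a DISTINCT of a built-in type; the database name "salesdb", server name "remote_ids", and the type and routine names are all hypothetical, and per the rules above the type, routine, and casts must be defined identically in every participating database:

```sql
-- Defined in each participating database:
CREATE DISTINCT TYPE dollars AS DECIMAL(10,2);

CREATE FUNCTION net_price(p dollars) RETURNS dollars;
    -- Cast to the source type for arithmetic, then back:
    RETURN ((p::DECIMAL(10,2)) * 0.9)::dollars;
END FUNCTION;

-- From the local database, invoke the UDR on the remote instance:
SELECT item_num, salesdb@remote_ids:net_price(price)
  FROM salesdb@remote_ids:price_list;
```
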
Scope: This article covers redistributing ESQL/C-based demos and applications. The steps required to redistribute other Informix client applications by copying files are being investigated.
Depending on how you deploy your Informix applications there is sometimes a need to bypass the Informix Client SDK or I-Connect installation process and copy the Informix library and API files directly to a target computer. Though the officially recommended and supported approach is to use the supplied CSDK/I-Connect installer, these instructions are provided as an alternative approach involving copying files for scenarios where using the installer is not possible. This article demonstrates how and where to copy all the required CSDK files and Microsoft Windows DLLs to a target computer in order to deploy applications.
Note that while this is not an officially recommended approach, these instructions have been tested by IBM and demonstrated to work. If you encounter problems with this method and need to talk to IBM technical support you may be asked to try installing via the installer to rule out other problems with your configuration.
- Install Client SDK 3.50 or I-Connect on your Windows development computer. After successful installation, verify that all the shortcuts in the programs groups are created and registry keys are updated.
- Make a copy of the entire CSDK installation folder (INFORMIXDIR) and transfer it to the target computer (for example, by zipping the files and unzipping them on the target computer). Choose any location on the target computer for the copied files, for example c:\informix.
- Copy the required Microsoft Windows runtime DLLs.
Since the Informix product is not being installed via the regular installer, the required runtime DLLs may not be present on the target computer. As a result, applications such as setnet32.exe, ilogin.exe, and finderr may not run correctly.
If a manifest is present in your application but a required Visual C++ library is not installed in the WinSxS folder, you may get one of the following error messages depending on the version of Windows on which you try to run your application:
- The application failed to initialize properly (0xc0000135).
- This application has failed to start because the application configuration is incorrect. Reinstalling application may fix this problem.
- The system cannot execute the specified program.
Recommended approach: The Visual C++ Redistributable Package (VCRedist_x86.exe, VCRedist_x64.exe, VCRedist_ia64.exe) must be run on the target system as a prerequisite to deploying the application. The Microsoft Visual C++ 2005 SP1 Redistributable Package (x86) (http://www.microsoft.com/downloads/details.aspx?familyid=32bc1bee-a3f9-4c13-9c99-220b62a191ee&displaylang=en) installs the runtime components of the Visual C++ libraries required to run applications developed with Visual C++ on a computer that does not have Visual C++ 2005 installed.
Alternative approach: If your deployment requirements prevent you from installing the Visual C++ Redistributable Package directly, from the development computer, copy the %WINDIR%\WinSxS\x86_Microsoft.VC80.CRT_1fc8b3b9a1e18e3b_8.0.50727.762_x-ww_6b128700 directory to same location on the target computer. (Create the same directory structure on the target computer as the development computer if it does not exist.)
Also copy the policy files from the development computer %WINDIR%\WinSxS\Policies\x86_policy.8.0.Microsoft.VC80.CRT_1fc8b3b9a1e18e3b_x-ww_77c24773 to the same location on target computer. (Again, create the same directory structure on target computer as the development computer if it doesn’t exist.)
Note: This workaround is only applicable to Microsoft Visual C++ 2005 SP1 runtime components. If later versions of the Client SDK are built with a later version of Visual Studio then the corresponding version of the runtime components would be required. Check your release notes to see which version of Visual Studio is required.
Running ESQL/C demos:
If you do not have the required Visual C++ libraries installed in the WinSxS folder, then running the demo1 program produces the following error.
Once you install the Visual C++ libraries, or copy the runtime DLLs and policy files into the C:\WINDOWS\WinSxS folder, the ESQL/C demos should run successfully as shown below.