In IDS 11.10, a DBA or user informix can create sysdbopen() and sysdbclose() procedures to be executed when a database is opened and closed. These procedures can be used to change the properties of a session without changing the application that the session executes. Any statements that are valid in a UDR can be executed in these procedures to change the session behavior.
A DBA or user informix can create the following procedures in a database (where username is an operating-system user):

- username.sysdbopen() / username.sysdbclose()
- public.sysdbopen() / public.sysdbclose()
Each time a user user1 opens a database using either a DATABASE or CONNECT TO statement, the database server executes user1.sysdbopen() if such a procedure is defined; if not, it executes public.sysdbopen(). Each time user1 closes a database using either a CLOSE DATABASE or DISCONNECT statement, the database server executes user1.sysdbclose() if such a procedure is defined; if not, it executes public.sysdbclose(). sysdbclose() is executed even if the application exits without an explicit CLOSE DATABASE or DISCONNECT statement, because the server does an implicit close of the current database in such cases.
The owner name is not ignored when you create sysdbopen() and sysdbclose() procedures in non-ANSI databases, so you can create these procedures for specific users in non-ANSI databases.
The following procedure creates a table oltp_stat and sets the role to oltp and the PDQ priority to 10 for user oltp_user in a database:
create procedure oltp_user.sysdbopen()
    create table oltp_stat(userid int, connect_time datetime year to second);
    set role to oltp;
    set pdqpriority 10;
end procedure;
The following procedure creates a table public_stat and sets the role to others and the PDQ priority to 1 for the PUBLIC group in a database:
create procedure public.sysdbopen()
    create table public_stat(userid int, connect_time datetime year to second);
    set role to others;
    set pdqpriority 1;
end procedure;
You can clean up any activities started by sysdbopen() in sysdbclose(). For example, if you had created some tables in sysdbopen(), you can drop those tables in sysdbclose().
The following procedures drop the tables created by the sysdbopen() procedures given above:
create procedure oltp_user.sysdbclose()
    drop table oltp_stat;
end procedure;

create procedure public.sysdbclose()
    drop table public_stat;
end procedure;
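The stat tables above are created by sysdbopen() but nothing in the article ever writes to them. As a sketch of how a connection could also be recorded, the following variant is my assumption, not from the original article; DBINFO('sessionid') supplies a numeric id that matches the int userid column:

```sql
-- Hedged sketch: record each connection in the oltp_stat table.
-- Assumes oltp_stat already exists; DBINFO('sessionid') returns the
-- numeric session id of the connecting session.
create procedure oltp_user.sysdbopen()
    insert into oltp_stat
        values (DBINFO('sessionid'), CURRENT year to second);
    set pdqpriority 10;
end procedure;
```

A matching sysdbclose() could delete the row, or leave it in place for auditing.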
A DBA or user informix can set the environment variable IFX_NODBPROC to any value, including 0, to prevent the execution of the sysdbopen() and sysdbclose() procedures. When you set up sysdbopen() and sysdbclose() procedures, you can set IFX_NODBPROC, execute the procedures directly, and verify that they work as expected. Unset IFX_NODBPROC after testing.
For more information, see the sections on sysdbopen() and sysdbclose() in the IBM Informix Guide to SQL: Syntax.

Suma Vinod
IDS 11 provides support for XML publishing, thus enabling applications to publish SQL results as XML, query XML data, and return XML. It also provides the framework to map an XML schema into a relational schema and to shred an XML document into relational tables.
The XML functions available in IDS 11 are summarized in the following table:
| genxml, genxmlclob | return SQL results as XML elements |
| genxmlelem, genxmlelemclob | return column values as XML elements |
| genxmlschema, genxmlschemaclob | return schema as XML |
| genxmlquery, genxmlqueryclob | return result set as XML |
| genxmlqueryhdr, genxmlqueryhdrclob | return result set as XML with header |
| extract, extractxmlclob | evaluate XPATH expression |
| extractvalue, extractxmlclobvalue | return value of XML node |
| existsnode | verify whether a node exists in XML doc |
| idsxmlparse | parse XML doc to determine if it is well-formed |
To use the XML functions in IDS 11:
- Start an XML VP by doing one of the following:
  - Add VPCLASS idsxmlvp,num=1 in the onconfig file to start the XML VP when the server starts
  - Use the onmode -p +1 idsxmlvp command to add an XML VP dynamically after the server has started

  Note: An XML VP is required only if you use XPATH functions such as extract(), extractxmlclob(), extractvalue(), extractxmlclobvalue(), existsnode(), and idsxmlparse(). XML functions starting with gen do not require the XML VP.
- Verify that $INFORMIXDIR/lib/libxml.udr is read-only
- Make sure a default sbspace exists. The default sbspace is specified by the SBSPACENAME configuration parameter. You can check the output of onstat -d to verify that the space exists.
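Once the XML VP is up, a quick smoke test is to run one of the XPATH functions against a literal string. This check is my suggestion, not from the original article; the call follows the extract() syntax shown in Part 2:

```sql
-- Requires a running idsxmlvp VP; errors out if none is available.
execute function extract('<a><b>ok</b></a>', '/a/b');
```

If the VP is missing, the call fails with an error instead of returning the <b> fragment.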
I will talk about each of the above functions with examples in Part 2.

Suma Vinod
The sysadmin database is created in the root dbspace at server initialization. This database is required for the Scheduler API and Remote Administration feature. Until the B5 drop, there wasn't any way of moving the sysadmin database safely to any other dbspace. A new SQL Admin API command in the B6 drop now simplifies this task by allowing the "informix" user to drop and recreate this database in any other dbspace.
If you determine that the root dbspace does not have enough space for storing task properties and command-history information, you can move the sysadmin database to a different dbspace by using the "reset sysadmin" SQL Administration API command. This command drops the sysadmin database from the root dbspace and recreates it in the specified dbspace.
Here's an example to move the sysadmin database...
1. Make sure the following message has appeared in the online message log after server startup:
SCHAPI: Started 2 dbWorker threads.
2. Let's create a special dbspace, admindbs, to store sysadmin. Use either the SQL Admin API:

database sysadmin;
execute function task("create dbspace", "admindbs", "/vol/dbspaces/admindbs", "1 GB", "0");

or onspaces:

onspaces -c -d admindbs -p /vol/dbspaces/admindbs -s 1000000 -o 0
3. As user informix, run the following commands:
dbaccess sysadmin -
execute function task("reset sysadmin", "admindbs");
The command will return the following message:
SCHAPI: 'sysadmin' database will be moved to 'admindbs'. See online message log.
The internal thread, bld_sysadmin (seen via onstat -g ath), waits up to five minutes to obtain exclusive access to the sysadmin database. The progress of the bld_sysadmin thread is logged in the online message log.
4. Our own session has the sysadmin database open, so terminate the dbaccess session with the "close database" statement.
On successful completion, the sysadmin database is dropped and recreated in the new dbspace. The dbScheduler and dbWorker threads are restarted automatically.

Mirav Kapadia
With Cheetah, IDS now allows a DBA to use ontape to point to a directory instead of just a particular file. This can be handy when you have multiple backups that you want to keep in the same directory. It involves a little extra administration though, so here's a quick primer on it.

Setting it up for archives

It's actually pretty easy to set IDS up to do it. Here are the steps:
- Create your directory, for instance /opt/Informix/archives/, as informix (group informix). (I have the best luck putting the ending slash on the path.)
- chmod 777 the directory
- set TAPEDEV in your $ONCONFIG to that directory.
- run ontape to create your archive

Setting it up for backup of your logical logs with the log_full.sh $INFORMIXDIR/etc/ALARMPROGRAM
- As above, create your directory and set permissions.
- save a copy of your $INFORMIXDIR/etc/log_full.sh to log_full.org or something like that
- edit log_full.sh with your favorite editor (i.e. vi or emacs)
- Add the -y option to the ontape command (ontape -a -y)
- Recycle the engine.
When you take an archive, by default it will use <HOSTNAME>_<SERVERNUM> as the standard prefix. For example: my machine name is ryleh and my SERVERNUM=11, so when I do an archive and then look at the file, it is named ryleh_11_L0.

If you want to name it something different, you can set the undocumented environment variable IFX_ONTAPE_FILE_PREFIX. You will have to recycle the engine after you set it for it to take effect.
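The article's example was elided, so here is a sketch of the override. I have assumed the full spelling IFX_ONTAPE_FILE_PREFIX for the undocumented variable; the ontape -s -L 0 archive command itself is standard:

```shell
# Hedged sketch: override the archive filename prefix.
# Default archive names are <HOSTNAME>_<SERVERNUM>_L0 (e.g. ryleh_11_L0).
export IFX_ONTAPE_FILE_PREFIX=benchmark
# Recycle the engine for the setting to take effect, then archive:
#   ontape -s -L 0     # produces a file named benchmark_L0
echo "prefix set to $IFX_ONTAPE_FILE_PREFIX"
```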
This way you can have multiple archives stored in the same directory.

Conclusion:

All in all, backing up to a directory can be a very handy utility to have for an administrator that has multiple instances, especially for such things as development and test instances. You can save an archive under the name of, say, "benchmark", and then when you need to load it up, you don't have to look high and low for the file or the tape. It's right there, and convenient.

Joe Baric
Previous versions of the JDBC 3.50 driver could not be installed on 64-bit Windows using a 64-bit JRE. Installing JDBC 3.50 gave the error "Directory not writable" for all directories. The same error message was received even after running the whole process as administrator with UAC (User Account Control) turned off. The workaround for this problem was to use a 32-bit JRE on 64-bit Windows to install JDBC 3.50.

This problem was solved in IDS 11.50. JDBC 3.50 now uses a newer version of InstallShield, which allows JDBC 3.50 to be installed on 64-bit Windows using a 64-bit JRE. Users no longer have to set up a 32-bit JRE on 64-bit Windows just to install the driver.

Bhadrik Patel
Upgrading Informix Dynamic Server Version 10.00 to Versions 11.10 or 11.50:
When installing IDS 11.10 or 11.50, if you choose the option "Upgrade from the previous version", all the server binaries will be upgraded to the newer version(s) automatically.
Upgrading Informix Dynamic Server from Version 11.10 to 11.50:
When installing IDS 11.50 on Windows, direct upgrade from IDS 11.10 is not supported. If "Upgrade from the previous version" is selected, the following message pops up, and the installer stops beyond this point.
Since support for installing both 11.10 and 11.50 on the same machine exists, it is recommended to choose the option "Install into a default/different directory" (shown in the panel below) to install IDS 11.50. Note: this will not upgrade IDS 11.10 to IDS 11.50.

When providing a folder name to install the product, a completely new path must be supplied. If the selected folder already contains binaries from IDS 11.10, the following message pops up.

The user can respond to the question "Do you want to select another folder?". Clicking Yes, the installer returns to the destination panel, where a different folder can be provided. Clicking No, the installer goes to the finish panel.
The recommended approach to upgrade to 11.50 is:
- Uninstall IDS 11.10 using the "Retain database, but remove server binaries" option
- Save all the Registry Keys in HKEY_LOCAL_MACHINE\SOFTWARE\Informix\OnLine\
- Install the later version (IDS 11.50) to the same path.
- Make sure that you do not initialize the server when installing.
- Copy the ONCONFIG file to the target and set parameters that are new for the current release.
- Bring the server up using Control Panel->Services or any other method without initializing.
OAT 2.21 has incorporated IDS Enterprise Replication monitoring, a plugin manager to allow customization of OAT functionality and an automated installer on Mac OS X. OpenAdmin Tool for IDS (OAT) is a PHP-based Web browser administration tool for IDS 11 and IDS 11.5 that provides the ability to administer multiple database server instances from a single location. OAT makes administration easy by allowing you to drill down on resource usage and events, view query performance statistics, and much more. And since the tool is written in PHP and available as open source, you can customize it with your own business logic and installation requirements.
New feature highlights of OpenAdmin Tool for IDS version 2.21 include:
- Mac OS X Automated Installer: Automatically sets up Apache, PHP, I-Connect, and OAT on Mac OS X. For users who want to use or install their own Apache/PHP configurations, OAT is also released as a stand-alone package.
- IDS Enterprise Replication Monitoring: Monitors spool disk usage, send and receive queues, receiving/apply statistics, routing topology, node details, and much more. OAT Enterprise Replication is provided as a separate plugin. The latest automated installer will install the ER plugin for you; you can also manually place the ER plugin zip file under the OAT plugin_install directory and install it with the plugin manager (requires the PHP zip extension).
- Plugin manager: A simple way to customize OAT functionality. You can download customized OAT plugins and install them with the plugin manager. Sample plugin code is also provided, and you can follow the sample in creating your own plugins. Note that the plugin manager requires the PHP zip extension. The OAT 2.21 automated installer includes the PHP zip extension, but the OAT 2.20 installer does not.
Download OAT version 2.21 now at https://www14.software.ibm.com/webapp/iwm/web/preLogin.do?lang=en_US&source=swg-informixfpd .
Click here to see OAT Enterprise Replication at work!

Leo Chan
Note: if you have missed any articles in this series, check the Introduction and Table of Contents.
Setting up your workspace and connecting to IDS

Now that you have installed the Developer Workbench, it is time to run it. On the first run, you should be asked to set up your workspace. This should give you a screen like the following:
Once you select this, the Workbench takes a little while to build your workspace. Once done, however, you get the classic Eclipse Welcome screen. I have highlighted your workspace in the following screen shot, because that is the next place you want to go:

Once you click on the workbench you get the generic workbench. The next step at that point is to start a project and establish a connection:

Setting up a new project to handle queries is fairly straightforward and can be reached very quickly; all you have to do is select the Data Development Project option as in the following screen shot:

Selecting the above takes you to the new Data Development project screen, as shown below:
As you can see, there are some DB2-flavored words here. Schema in DB2 is the IDS equivalent of a database, and as such, keep the default radio button, as seen in the screen shot above, set. I would recommend that you give your project a meaningful name, but that is completely under your control. In my case I listed it as "Mark's Test project for Blog".

This brings us to the next page, which assigns an existing database connection or creates a new one. Since this is our first time through, just hit the next button, as we will need to create a new connection.
The Connection parameters page gives you several options for connections, as well as several data server choices. Take a look at the below screen shot:

This is a standard Informix JDBC connection string at this point. The screen shot above differs slightly from what you see as the default settings, as I have selected the "Informix JDBC driver" instead of the "IBM Data Server Driver for JDBC and SQLJ." In short, I took the JDBC driver instead of the JCC driver. The main reason is that JCC itself is in beta, and I wanted to use something I knew how to connect with and trace. JCC also requires a DRDA listener thread, and I don't have one set up for the instance I am using. Update: Check out Establishing JCC connections in 11.10 if you want to use JCC connections.

Fill the remaining fields above with the normal Informix connection settings, and test your connection. Once your connection is working, click on the next button.
This final screen allows you to filter your schema. Here is the screen shot:

Please note: the default is only for users who have IDS on a Windows platform. Since a schema id is actually a user id in IDS, filtering on INFORMIX means that no information of any kind will return for any database objects if your IDS instance is on a Unix platform. The reason, of course, is that the user id is case sensitive, and informix != INFORMIX. Quite honestly, I suggest you just disable filtering for now; when I cover the Database Explorer I will show you how to change the filter however you want.

As soon as you have selected your filter objects, you will finally see the workbench laid out and ready to be used for a Data Development project. Here is mine:

In the next article I will discuss the Database Explorer window in the lower left-hand corner.

Mark Jamison
Data modeling is a method used to define and analyze the data requirements needed to support the business processes of an organization. Ideally, the data requirements are recorded as a conceptual model that is not platform-specific (the logical data model) with associated definitions for the entities and attributes. This logical model can serve as the 'information hub' to help define and enforce consistency across enterprise data. From the logical model, multiple database-specific implementations can be generated (via physical models).
InfoSphere Data Architect (IDA) - formerly called Rational Data Architect (RDA) - is a great modeling tool for any shop that includes Informix Dynamic Server databases in its mix. Even if you don't have a 'data architect', this product can help DBAs communicate with developers and also help you keep tabs on what is and isn't "real" any more (by comparing the model of what you think you have with what your database actually does have).
I recently updated a tutorial on using IDA specifically from the perspective of using it with IDS. There are several enhancements to the model elements for IDS in IDA 7.5.1:
- You can set/view extra properties for Table elements (extent size, raw table type, lock level)
- Enhancements to constraints to support modes (enabled/disabled/filtering)
- Trigger enhancements suited to IDS syntax including support for INSTEAD OF triggers on views
- Generate DDL enhancements for all of the above
Anyway, if you get a chance, download the IDA trial, check out the latest version of the tutorial, and let me know what you think. Link to IDA trial download. Link to an IDA demo.
Combining Day 1 and Day 2 activities: it has been a busy two days with some great announcements. We started off with Arvind announcing xC4 and, most importantly, our storage optimization (i.e. compression) feature and warehouse feature.

Compression can save you as much as 80% of disk space, and the demo at the upgrade kiosk showed a compression rate of 73%. Check out our compression site at: http://www-01.ibm.com/software/data/informix/compression/

This site features a ROI tool and a white paper that is a must-read for all. It clearly articulates the value proposition of compression and how easy it is to set up and use. The OAT sessions and the demos showcased this feature and its ease of use very well, and these sessions were amongst the popular ones.

The warehouse announcement is another exciting one, and this includes tooling that greatly simplifies your warehouse design and deployment and aids in your BI decisions. The demo sessions highlighting the warehouse feature were also a great attraction. Check out our warehouse site at: http://www-01.ibm.com/software/data/informix/warehouse/

The site features a white paper that is a great read. We had another great keynote with Jerry and Kevin, who walked us through the Informix roadmap. I am sure this session gave everyone an idea of where Informix is heading and how each of you can help with driving key features within the product as well. The rest of the day as usual had some great sessions, and I just did not want to highlight one over the other. Let me just state that there were almost 25 great sessions that took place through the day.
On Tuesday, Ananth gave his keynote on cloud computing and how it can be a critical success factor in your IT plans. This is an exciting space that IBM is in, and the prospects and the potential seem endless in terms of its use, especially with its cost effectiveness and ease of deployment. Guy's blog has some great entries on this, and I would encourage each and every one to read up on the same: http://www.ibm.com/developerworks/blogs/page/gbowerman

The sessions again were great, and the feedback has been astounding in terms of the quality of the sessions. There is a reason why the IIUG conference is an attractive proposition and why you should make this a yearly event on your calendar.

Oh, by the way, there is an interesting contest going on that Advanced Data Tools is sponsoring, called the Fastest DBA, where the goal is to take an SQL statement that runs in about 30 minutes and make it run faster. The last I checked, the time to beat was 0.54 sec or somewhere around that. We definitely need more of these at future conferences.

The certification room was receiving its regular stream of test takers as well, and at least the ones I saw came out smiling, which is good :)

Tomorrow is the last day of the conference and has been filled with some exciting sessions. The keynote is a Q&A session that will raise some good discussions about marketing, support, sales, and dev. I will sum up the conference tomorrow. Enjoy the rest of the conference!
Suppose you have an application which depends on IDS. Your business requirement is that the application must start only after IDS completes memory initialization successfully. Previously there was no way to validate the return code of the oninit process to decide whether or not IDS initialized successfully. So, if you had a script that automatically started IDS and then the application, it was possible for the application to start even though IDS failed to initialize.

A new oninit option introduced in IDS 11.50 generates a return code. Based on the return code you can customize the script and automate the startup process.
The 'oninit -w' command generates the following return codes:

- 0 - on success
- 1 - when initialization fails or the timeout value is exceeded
The 'oninit -w' command forces IDS to wait until it successfully initializes before returning to a shell prompt. You can also provide an additional argument for the timeout value with the '-w' option. Without a timeout value, IDS uses the default, 10 minutes. If IDS cannot initialize within the timeout period, oninit generates return code 1 and writes the following error message to the online.log file:

Warning: wait time expired

The syntax of the new command is as follows:

oninit -w [timeout]

You can use the '-w' option in combination with any other oninit initialization options.
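A startup script can gate the application on the oninit return code. The sketch below is an assumption about how you might wire this up; start_app is a hypothetical placeholder for your application launcher:

```shell
#!/bin/sh
# Hedged sketch: start the application only if 'oninit -w' succeeds.
start_ids_then_app() {
    init_cmd="$1"    # e.g. "oninit -w 600"
    app_cmd="$2"     # e.g. "start_app" (hypothetical launcher)
    if $init_cmd; then
        echo "IDS online; starting application"
        $app_cmd
    else
        echo "IDS failed to initialize; application not started" >&2
        return 1
    fi
}
# Real usage would be:
#   start_ids_then_app "oninit -w 600" start_app
```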
A couple of points to remember:

- In a high-availability environment, you can use the 'oninit -w' command only on the primary server; it is not valid on secondary servers.
- The oninit command returns success when the sysmaster, sysutils, sysuser, and sysadmin databases have been successfully created.
Sanjit Chakraborty
We would appreciate your help in building a more robust, efficient, and client-focused IBM Electronic Support system by filling out this questionnaire:
It should take only approximately five to ten minutes to complete, but the results will help us make your IBM Support experience the best it can be.
Thank you for your valuable input!
The Enterprise IBM Electronic Support team
Here we will look at each of the XML functions given in Part 1 with examples.

genxml()/genxmlclob()
You can use these functions to create an XML row element for each row of a SQL query result. Each column in the row is an attribute of the row element. genxml() is used for returned row values that are LVARCHAR(32739) or less. For larger values, you should use genxmlclob(), which returns a CLOB.

These functions process the rows without any specific order. If the order of the rows is important, you can use derived-table queries to get the result set in the correct order, and then apply the functions on the result set.
select genxml(row (customer_num, fname), "row") from customer;
Output from the above sql:
<row customer_num="101" fname="Ludwig "/><row customer_num="102" fname="Carole "/><row customer_num="103" fname="Philip "/><row customer_num="104" fname="Anthony "/>
To order the results on fname, use the following sql:
select genxml(row(num,name),"row") from (select customer_num, fname from customer order by fname) as vt(num, name);
To select all columns in the table, use the following sql:
select genxml(customer, "row") from customer;

genxmlelem()/genxmlelemclob()
These functions return each column value as a separate element, in contrast to genxml(), which returns column values as attributes of the row element.
select genxmlelem(row(customer_num,fname), "cust") from customer;
Output from the above sql:
<cust><row><customer_num>101</customer_num><fname>Ludwig </fname></row><row><customer_num>102</customer_num><fname>Carole </fname></row><row><customer_num>103</customer_num><fname>Philip </fname></row><row><customer_num>104</customer_num><fname>Anthony </fname></row></cust>
You can select all columns in a table by passing the table name as the first argument as shown below:
select genxmlelemclob(customer, "cust") from customer;

genxmlschema() & genxmlschemaclob()

They are like genxml() but generate a full XML schema, including the XML header and data. An XML header specifies document properties such as the document encoding, the document type definition (DTD), and the XML stylesheet (XSL). The following example shows a select using genxmlschema and its output:
select genxmlschema(customer, "cust") from customer;
Output from the above sql:
<?xml version="1.0" encoding="en_US.819" ?> <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema" targetNamespace="http://www.ibm.com" xmlns="http://www.ibm.com" elementFormDefault="qualified"> <xs:element name="cust"> <xs:complexType> <xs:sequence> <xs:element name="customer_num" type="xs:serial"/> <xs:element name="fname" type="xs:char(15)"/> <xs:element name="lname" type="xs:char(15)"/> <xs:element name="company" type="xs:char(20)"/> ...

genxmlquery() & genxmlqueryclob()
They are versatile functions that take a SQL query as an argument and return the result set as XML.
execute function genxmlquery('cust','SELECT customer_num FROM customer');
Output from the above sql:
<cust><row><customer_num>101</customer_num></row><row><customer_num>102</customer_num></row><row><customer_num>103</customer_num></row><row><customer_num>104</customer_num></row></cust>

genxmlqueryhdr() & genxmlqueryhdrclob()
These functions return the same data as genxmlquery() but with an XML header.
execute function genxmlqueryhdr('cust','SELECT customer_num FROM customer');
Output from the above sql:
<?xml version="1.0" encoding="en_US.819" ?><xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"><cust><row><customer_num>101</customer_num></row><row><customer_num>102</customer_num></row><row><customer_num>103</customer_num></row><row><customer_num>104</customer_num></row></cust>

extract() and extractxmlclob()
These functions evaluate an XPATH expression on an XML column, document, or string. These functions are identical except that extractxmlclob() returns a CLOB instead of an LVARCHAR. They are used to return an XML fragment of the evaluated XML column, document, or string. They are compatible with the Oracle extract() function.
execute function extract('<name><first>fred</first></name>','/name/first');
Output from the above sql:
(expression) <first>fred</first>

extractvalue() and extractxmlclobvalue()
These functions return the value of the XML node in contrast to extract(), which returns the XML node. They are compatible with the Oracle extractvalue() function.
execute function extractvalue('<name><first>fred</first></name>','/name/first');
Output from the above sql is just the name without the XML tags:

(expression) fred
existsnode()

This function returns 1 if the specified XML node exists in an XML document. It is compatible with the Oracle exists() function.
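The example for existsnode() did not survive formatting; following the pattern of the extract() examples above, a call might look like this (my reconstruction, not the original):

```sql
execute function existsnode('<name><first>fred</first></name>', '/name/first');
```

Per the description above, this returns 1 because the /name/first node exists; a non-matching path such as '/name/last' would be expected to return 0.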
idsxmlparse()

This function parses an XML document or fragment to determine whether it is well formed. It returns the XML document or fragment if the input XML is well formed; otherwise an error is returned.
SELECT idsxmlparse('<purchaseOrder poNo="124356"><customerName>ABC Enterprises</customerName><itemNo>F123456</itemNo></purchaseOrder>') AS PO FROM systables where tabid = 1;
Output from the above sql:
<purchaseOrder poNo="124356"><customerName>ABC Enterprises</customerName><itemNo>F123456</itemNo></purchaseOrder>
Note that the string in the quotes should not have any carriage returns.
Onbar backup order in Cheetah

1> Onbar backup order for the root dbspace, sbspaces (smart blob), blobspaces, and normal dbspaces.

The root dbspace is always backed up first, then sbspaces, then blobspaces; normal dbspaces come last.
The onstat -d output is as follows:
IBM Informix Dynamic Server Version 11.10.UC1 -- On-Line -- Up 00:19:47 -- 44672 Kbytes
address number flags fchunk nchunks pgsize flags owner name
44cdf7f0 1 0x60001 1 1 2048 N B informix rootdbs
45ca6ea8 2 0x40001 2 1 2048 N B informix ddbs1
45c26ea8 3 0x40001 3 1 2048 N B informix ddbs2
45ca7a00 4 0x48001 4 1 2048 N SB informix sbdbs
45bbeeb0 5 0x60011 5 1 8192 N BB informix bdbs
5 active, 2047 maximum
Note: For BLOB chunks, the number of free pages shown is out of date.
Run 'onstat -d update' for current stats.
address chunk/dbs offset size free bpages flags pathname
44cdf950 1 1 0 50000 15421 PO-B /home/informix/cheetah/dsk/rootdbs
45c98e38 2 2 0 5000 4947 PO-B ./dsk/ddbs1
45ca73b0 3 3 0 5000 3449 PO-B ./dsk/ddbs2
45ca7b60 4 4 0 5000 4587 4587 POSB ./dsk/sbdbs
Metadata 360 268 360
45bb0e48 5 5 0 5000 ~1250 1250 POBB ./dsk/bdbs
5 active, 32766 maximum
After running onbar -b -w, the backup order can be seen in the bar_act.log information as follows:
2007-06-15 04:41:10 5218 5216 /home/informix/cheetah/bin/onbar_d -b -w
2007-06-15 04:41:10 5218 5216 Archive started on rootdbs, sbdbs, bdbs, ddbs2, ddbs1 (Requested Level 0).
2007-06-15 04:41:11 5218 5216 Begin level 0 backup rootdbs.
2007-06-15 04:41:11 5218 5216 Successfully connected to Storage Manager.
2007-06-15 04:41:11 5218 5216 Completed level 0 backup rootdbs (Storage Manager copy ID: 751421824 0).
2007-06-15 04:41:12 5223 5218 Process 5223 5218 successfully forked.
2007-06-15 04:41:12 5223 5218 Begin level 0 backup sbdbs.
2007-06-15 04:41:12 5223 5218 Successfully connected to Storage Manager.
2007-06-15 04:41:12 5224 5218 Process 5224 5218 successfully forked.
2007-06-15 04:41:12 5224 5218 Begin level 0 backup bdbs.
2007-06-15 04:41:12 5224 5218 Successfully connected to Storage Manager.
2007-06-15 04:41:12 5225 5218 Process 5225 5218 successfully forked.
2007-06-15 04:41:12 5225 5218 Begin level 0 backup ddbs2.
2007-06-15 04:41:12 5226 5218 Process 5226 5218 successfully forked.
2007-06-15 04:41:12 5226 5218 Begin level 0 backup ddbs1.
2> Normal dbspace backup order

The onbar -b -w backup of normal dbspaces doesn't depend on the order in which the dbspaces were created, but on how much space is used in each dbspace. In the above example, ddbs2's usage is bigger than ddbs1's, so onbar backs up ddbs2 first and then ddbs1, without following the creation order.
Derived Tables in the FROM Clause of Queries

The SELECT statement can now include syntax that complies with ISO/IEC 9075:1992, the SQL-92 standard, to specify a full select subquery in the FROM clause as a data source for the query. These subqueries are called derived tables or table expressions; they can be simple, UNION, or joined subqueries, including OUTER joins, and can include the ORDER BY clause. In addition, AS correlation specifications in the FROM clause can declare temporary names for columns within the query. Informix-extension syntax, such as the FUNCTION keyword with iterator functions or the TABLE (MULTISET (SELECT ...)) keywords for collection-derived tables, can now be replaced in the FROM clause by SQL-92 syntax. This feature expands the capability of Informix Dynamic Server to run, without modification, queries that are interoperable on other database servers that support industry-standard SQL syntax.
You can find examples using derived tables in the demo directory: $INFORMIXDIR/demo/dbaccess/demo_ud/sel_sql99.sql.
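As a quick illustration (the customer and orders tables are assumed from the stores demo database), a UNION subquery with AS column aliases in the FROM clause:

```sql
-- SQL-92 derived table: a UNION subquery named t with aliased column num
select t.num
from (select customer_num from customer
      union
      select customer_num from orders) as t(num)
order by t.num;
```

Previously, the same shape required the Informix-extension TABLE (MULTISET (SELECT ...)) collection-derived-table syntax.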
Optimizer Directives in ANSI-Compliant Joined Queries
Earlier IDS versions supported optimizer directives in Informix-extension joined queries, but not in queries that used ANSI/ISO syntax to specify joins. For both inline directives and external directives, this release extends support in ANSI/ISO joined queries to the following classes of optimizer directives:
- Access-method directives (FULL, AVOID_FULL, INDEX, AVOID_INDEX, INDEX_SJ, AVOID_INDEX_SJ)
- Explain-mode directives (EXPLAIN, AVOID_EXECUTE)
- Optimization-goal directives (ALL_ROWS, FIRST_ROWS).
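A sketch of an inline access-method directive in an ANSI/ISO joined query (the tables emp and dept and the index emp_ix are hypothetical):

```sql
-- Force the optimizer to use index emp_ix on the emp table,
-- even though the join is written in ANSI/ISO INNER JOIN syntax.
SELECT {+ INDEX(e emp_ix) }
       e.name, d.dname
  FROM emp e INNER JOIN dept d
    ON e.deptno = d.deptno;
```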
The join-order directive (ORDERED) is supported only in ANSI/ISO-compliant LEFT OUTER joins and INNER joins. Because of ordering requirements for OUTER joins, in ANSI-compliant joined queries that specify the RIGHT OUTER JOIN or FULL OUTER JOIN keywords, the ORDERED join-order directive is ignored, but it is listed under Directives Not Followed in the sqexplain.out file.
This feature does not support the join-method directives (USE_NL, AVOID_NL, USE_HASH, AVOID_HASH, /BUILD, and /PROBE) in ANSI/ISO joined queries, except in cases where the optimizer rewrites the query so that it no longer uses the ANSI/ISO syntax.
Trigger Enhancements
Several new features expand the syntax and the functionality of triggers on tables and on views:
- You can now define multiple INSERT, DELETE, UPDATE, and SELECT triggers on a table, and multiple INSTEAD OF triggers on a view.
- When a table, view, or column list has multiple triggers for a DML event type, Informix Dynamic Server executes all BEFORE triggered actions before the FOR EACH ROW actions, and executes all FOR EACH ROW actions before the AFTER actions.
- You can create SPL procedures that refer to applicable OLD and NEW trigger correlated values. Within the procedure you can access applicable OLD and NEW values and modify the NEW values, for example using LET statements. From a FOR EACH ROW trigger action, you can execute this SPL procedure [syntax: execute procedure foo() with trigger references].
- New Boolean operators (DELETING, INSERTING, SELECTING, and UPDATING) can be used in procedures executed from trigger action statements. These test whether the currently executing triggered action was triggered by the specified type of DML event and return a Boolean value. The IF statement of SPL and the CASE expression of SQL can specify these operators as the condition in a trigger routine.
These features make it easier to incorporate IDS triggers on tables and on views within a heterogeneous information management system where multiple applications need to share the table or view.
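The pieces above can be sketched together as follows (the table, column, and procedure names are hypothetical):

```sql
-- An SPL procedure that uses the trigger's correlation names (o, n)
-- and the new UPDATING operator; it modifies a NEW value with LET.
CREATE PROCEDURE audit_row()
  IF (UPDATING) THEN
    LET n.modified = CURRENT YEAR TO SECOND;
  END IF;
END PROCEDURE;

CREATE TRIGGER upd_audit UPDATE ON account
  REFERENCING OLD AS o NEW AS n
  FOR EACH ROW (EXECUTE PROCEDURE audit_row() WITH TRIGGER REFERENCES);

-- A second UPDATE trigger on the same table is now permitted:
CREATE TRIGGER upd_log UPDATE ON account
  FOR EACH ROW (EXECUTE PROCEDURE log_update());
```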
You can find examples using multiple triggers in the demo directory: $INFORMIXDIR/demo/dbaccess/demo_ud/cr_trig.sql.
Index Self-Join Query Plans
In earlier Informix Dynamic Server versions, queries of tables with composite indexes performed inefficiently if the ratio of duplicate values to the number of distinct values was much higher for the leading columns than for subsequent columns of the index. A new feature of the query optimizer supports a new type of index scan, called an index self-join path, that uses only subsets of the full range of a composite index. The table is logically joined to itself, and the more selective non-leading index keys are applied as index bound filters to each unique combination of the leading key values. By default, the optimizer considers this type of scan.
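For example, an index self-join path can be requested explicitly with the INDEX_SJ directive (a sketch; the customer table and the composite index ix_state_zip on (state, zipcode) are hypothetical, with state highly duplicated and zipcode selective):

```sql
-- Force an index self-join path over the composite index,
-- applying the selective zipcode filter to each distinct state.
SELECT {+ INDEX_SJ(c ix_state_zip) }
       c.*
  FROM customer c
 WHERE c.zipcode BETWEEN '94000' AND '94999';
```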
The optimizer also supports two new directives, INDEX_SJ and AVOID_INDEX_SJ. The INDEX_SJ directive forces an index self-join path using the specified index, or choosing the least costly index in a list of indexes, even if data distribution statistics are not available for the leading index key columns. The AVOID_INDEX_SJ directive prevents a self-join path for the specified index or indexes. This feature can improve query performance on tables with composite indexes.
Enhanced Concurrency with Committed Read Isolation
In the Committed Read isolation level, exclusive row-level locks held by other sessions can cause SQL operations to fail when attempting to read data in the locked rows. This release introduces a new LAST COMMITTED keyword option to the SET ISOLATION COMMITTED READ statement to reduce the risk of locking conflicts when attempting to read a table. This new syntax instructs IDS to return the most recently committed version of the rows, even if another concurrent session holds an exclusive row-level lock. This behavior can be extended to the Dirty Read, Read Uncommitted, and Read Committed isolation levels by setting the new USELASTCOMMITTED configuration parameter or through new options to the SET ENVIRONMENT statement.
This feature supports B-tree indexes and functional indexes, but not R-tree indexes. It does not support tables that are being accessed by DataBlade modules, tables with columns of collection data types, tables created using a Virtual Table Interface, tables with page-level locking, tables with exclusive table-level locks, unlogged tables, or tables in databases with no transaction logging.
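The session-level statements can be sketched as follows (the exact option strings are assumed from the description above):

```sql
-- Return the last committed version of rows that other sessions
-- hold exclusive locks on, instead of failing with a lock error:
SET ISOLATION TO COMMITTED READ LAST COMMITTED;

-- Extend the same behavior to Dirty Read for this session via the
-- new USELASTCOMMITTED session environment option:
SET ENVIRONMENT USELASTCOMMITTED 'DIRTY READ';
```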
Enhanced Data Types and UDR Support in Cross-Server Distributed Queries
Earlier releases of IDS restricted the remote execution of UDRs in databases of other IDS instances to SPL routines that the EXECUTE FUNCTION or EXECUTE PROCEDURE statement invoke explicitly, and to SPL routines that queries and other DML operations invoked implicitly.
This release extends support for UDRs in cross-database and cross-server distributed operations to most contexts where a UDR is valid in the local database. In addition, external routines written in the C or Java languages are now valid in any distributed operation where an SPL routine is valid. This release also extends the data types that are valid as parameters or return values of cross-server UDRs, which were formerly restricted to non-opaque built-in SQL data types, by supporting these additional data types:
– DISTINCT of built-in types that are not opaque
– DISTINCT of BOOLEAN
– DISTINCT of LVARCHAR
– DISTINCT of the DISTINCT types listed above.
These data types can be returned by SPL, C, or Java language UDRs that use these data types as parameters or as return values, if the UDRs are defined in all the participating databases. Any implicit or explicit casts defined over these data types must be duplicated across all the participating Dynamic Server instances. The DISTINCT data types must have exactly the same data type hierarchy defined in all databases that participate in the distributed query.
This feature does not relax existing restrictions on other opaque and DISTINCT types or on large-object, serial, and collection data types in locally or remotely executed SPL routines or external routines.
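A sketch of a DISTINCT type used in a cross-server call, assuming the type and the UDR are created identically in every participating database (the database name hr_db and server name remote_server are hypothetical):

```sql
-- Run in each participating database:
CREATE DISTINCT TYPE percent AS FLOAT;

CREATE FUNCTION apply_raise(p percent) RETURNING percent;
  RETURN p;
END FUNCTION;

-- From another IDS instance, passing the DISTINCT type remotely:
EXECUTE FUNCTION hr_db@remote_server:apply_raise(0.05::percent);
```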
To all Informix users: I would like to encourage you to take a look at our new IBM Data Studio Developer 2.1 offering. There are significant enhancements that we think IDS developers will like.
We now have new tools to view update statistics and perform object management tasks easily, such as specifying data partitions (either round robin or expression-based) and raw tables. Also, trigger support has been greatly improved to take advantage of the ability in IDS to have before, after, and 'for each' triggers, whereas only 'for each' was supported previously. These capabilities are available at no charge. Guy Bowerman talks a bit more about these features in his blog.
In Data Studio Developer, which includes the base tooling and all the value-add capabilities, we also have many new features that IDS developers can take advantage of.
Using our previous releases, IDS developers could improve productivity by using the SQL content assist and validation capabilities in the Java editor, and could generate applications quickly using the pureQuery API. You could capture SQL from any Java application and correlate which SQL was issued to the database and from where in the application. Using Data Studio, you could gain insight into which database tables and columns were used by the SQL. Developers and DBAs could easily isolate poorly performing queries in the application and work better together to assess changes needed in the application as a result of database schema changes.
Data Studio Developer 2.1 has even more to offer for Informix developers and DBAs:
- You can now understand more about how the SQL in your application is performing using visual displays of elapsed time per SQL statement, and then you can change the SQL that is issued without having to change the application. Use these features from Data Studio and pureQuery Runtime together to help develop enterprise-ready applications before throwing them over the wall into production.
- You can reduce or eliminate the risk of SQL injection, because now it is possible to ensure that only SQL that was previously captured and approved is allowed to execute. This is available for dynamic SQL, not just static, so by using it with pureQuery Runtime, Informix applications can take advantage of this.
- You can share database connections, reducing the amount of communication required to share details of connection information between administrators and all the developers who need to use the databases. Here's an entire article just on this topic.
See my article for more details on what's available for Informix developers and DBAs. We even have a set of videos that you can check out that follow along with the article. The download for Data Studio Developer 2.1 should be available soon.
This single package contains both the no-charge, perpetual features available and supported for IDS customers with up-to-date maintenance contracts, and 30-day trial features of the value-added capabilities of the priced Data Studio Developer product. The trial features are optional during installation, but I think you'll be happy if you give them a test drive.
In the meantime, don't hesitate to send me any feedback or suggestions that you may have. Let me know, either here or on the Data Studio Forum, what you think of the new release.
-- Sonali Surange
The Roles folder, as its name implies, gives you a view of all the roles created in the database, and a visual means to create new roles or alter existing ones. To get a quick look at how this folder works, let's create a test role that will have complete access to the customer table.
- Select Create Role from the folder by right-clicking on the folder and following the menu options. It should look like the following:
- This should now bring up the Data Object Editor window in the center portion of your workspace. You will start in the General tab, and you will see that the Role name has been pre-filled. Let's change that to customer_role, so now you should see something like the following:
- Let's now make sure we grant privileges on customer for this role. To do this, move down to the Privileges tab, click on the Table tab, and then click on the "Grant New Privileges" button, which looks like a yellow star. That will bring up the Grant New Privileges window. While it looks like you can just type a table name here, you actually have to press the "Browse" button and select the customer table from the browser. Once you do that, you should see the window as shown below:
Go ahead and check all the boxes listed, and then click OK.
- You will now be back in the main window. Just click on the Run DDL button, and the customer_role role will be created.
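Behind the scenes, the Run DDL step just issues ordinary SQL; for the role above it would be roughly equivalent to the following (the exact privilege list depends on which boxes you checked):

```sql
-- Create the role, then grant the table privileges to it.
CREATE ROLE customer_role;

GRANT SELECT, INSERT, UPDATE, DELETE
   ON customer
   TO customer_role;
```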
Now that you have a role in your Roles folder, let's look at what you can actually do. You should see customer_role in the folder; right-click on it and you should see the following:
As you can see you can do the following:
- Alter the role
- Drop the role
- Generate the DDL for the role.
- View Membership
The two not listed above (Analyze Impact and Compare With) are not actually used by roles. As I'm sure you have noticed, you cannot add users to roles from the Roles folder; that will come with our discussion of the Users folder.
IDS 11.10 supports some of the PL/SQL syntax, thus expanding the capability of IDS. It also facilitates easy migration of applications that use many stored procedures and triggers written in PL/SQL.
This version supports the following syntax:
GOTO statement and Label syntax
Restrictions on GOTO:
- A GOTO statement cannot be used in an EXCEPTION block.
- Labels cannot be defined within an EXCEPTION block.
- A label must be unique within a stored procedure.
Loop .. End Loop syntax
Example:
WHILE … LOOP … END LOOP
You can re-write the above while loop using a label, as shown below:
<<whloop_label>> WHILE … LOOP … END LOOP whloop_label
FOR LOOP … END LOOP
Example:
<<for_label>> FOR … LOOP … END LOOP for_label
Expression-based exit statement:
- EXIT WHEN <expr>;
- EXIT loop_label WHEN <expr>; -- Label Based Exit.
This is especially useful in nested loop statements. Here is an example of a nested loop statement using a label and EXIT WHEN statements:
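A minimal sketch of such a nested loop (the procedure name, variables, and bounds are hypothetical):

```sql
CREATE PROCEDURE nested_demo()
  DEFINE i, j INT;
  LET i = 0;
  <<outer_loop>>
  WHILE i < 10 LOOP
    LET i = i + 1;
    LET j = 0;
    <<inner_loop>>
    WHILE j < 10 LOOP
      LET j = j + 1;
      EXIT outer_loop WHEN i + j > 10; -- label-based exit: leaves both loops
      EXIT inner_loop WHEN j = 3;      -- leaves only the inner loop
    END LOOP inner_loop;
  END LOOP outer_loop;
END PROCEDURE;
```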
Please note that for while and for loops you can use both the old and the new syntax.
-- Suma Vinod
One of the big ideas recently proposed by IBM, and further proof of our commitment to IDS, has been the idea of a common set of client tools (like a Java Common Client (JCC), for example). If you are going to have a common set of clients, or programming interfaces, it might not be a bad idea to have a common development tool that will allow you to use these common clients, for both DB2 and IDS. Well, IBM has released in beta an Eclipse-based common tool that allows that. For those who want to jump right in, here is the link: IBM Data Server Developer Workbench 9.5
As with many things IBM, since this is an open beta, the actual name at the time of release may change. I will be blogging on this product over the next few weeks, covering its capabilities from the IDS side of the tool. So far there will be six parts to this report, not counting this introduction.
Part 1 - Installation for IDS
Part 2 - Setting up your workspace and connecting to IDS.
Part 3 - The Database Explorer window.
Part 4 - ER Diagramming
Part 5 - The SQL Builder and The SQL Editor Windows.
Part 6 - PureQuery and IDS
-- Mark Jamison