Informix introduced the Primary Storage Manager (PSM), a new backup storage manager, in version 12.10, replacing the previous ISM (Informix Storage Manager). PSM manages the storage devices used for backup and restore requests issued by ON-Bar. It supports both serial and parallel processing of backup and restore requests, and it is easier to set up and use, even in embedded environments. You need only four simple steps to configure PSM and take an ON-Bar backup:
1) Decide where you will create the PSM catalog. The default location is $INFORMIXDIR/etc/psm.
2) Add devices for DBSPOOL and LOGPOOL
- onpsm -D add /backups/psm_spaces -g DBSPOOL -p HIGH -t FILE
- onpsm -D add /backups/psm_logs -g LOGPOOL -p HIGH -t FILE
3) Onconfig settings (you can keep the default values):
- Set BAR_BSALIB_PATH to the full path of libbsapsm.so
- Set PSM_CATALOG_PATH to the full path of the PSM catalog directory
- Set PSM_DBS_POOL to DBSPOOL
- Set PSM_LOG_POOL to LOGPOOL
4) Take an onbar backup
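Putting the four steps together, a minimal end-to-end sequence might look like the following sketch. The /backups paths are the illustrative ones used above, and the ONCONFIG values shown in comments are the typical defaults; verify them against your own installation.

```shell
# Minimal PSM setup sketch, following the four steps above.
# Paths are illustrative, taken from the examples in this article.

# 1-2) Register file devices for data and logical-log backups:
onpsm -D add /backups/psm_spaces -g DBSPOOL -p HIGH -t FILE
onpsm -D add /backups/psm_logs  -g LOGPOOL -p HIGH -t FILE

# 3) In the ONCONFIG file (the defaults usually suffice):
# BAR_BSALIB_PATH  $INFORMIXDIR/lib/libbsapsm.so
# PSM_CATALOG_PATH $INFORMIXDIR/etc/psm
# PSM_DBS_POOL     DBSPOOL
# PSM_LOG_POOL     LOGPOOL

# 4) Take a level-0 backup of all storage spaces:
onbar -b -L 0
```

These commands require a running Informix instance and ON-Bar privileges, so run them as user informix on the server host.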
You use the Informix onpsm utility to manage storage for ON-Bar backup and restore operations, including parallel backups, that use file devices (disks). The onsmsync utility provides new commands that you can use to export backups to, and import them from, Informix PSM external device pools.
In IDS 11.10, you can configure session properties using the sysdbopen() and sysdbclose() procedures; see Configuring Session Properties. These procedures are stored in the sysprocedures system catalog. A memory cache is maintained for these procedures to facilitate faster lookup of procedure identifiers. A new onstat option is available to print statistical information about this memory cache. From this information you can determine how many users are accessing a database, and which sysdb procedures are defined for public and for individual users. A sample output is shown:
$ onstat -g cac sproc
IBM Informix Dynamic Server Version 11.10.FC1 -- On-Line -- Up 4 days 20:30:27 -- 38912 Kbytes
Sysdbproc Cache: Number of lists : 31
list#  refcnt  dbname     Public Procids  Username  User Procids  UsrList#
                          (open,close)              (open,close)
--------------------------------------------------------------------------
7      0       sysmaster  (-1, 0)
16     1       testdb     (345, 346)      suma      (347, 348)    14
                                          informix  (343, 344)    122
18     0       sysutils   (-1, 0)
21     0       sysuser    (-1, 0)
|list#||List number to which the database name hashes|
|refcnt||Number of sessions accessing this database|
|Public Procids||Procedure ids of public.sysdbopen() and public.sysdbclose(). The value is (-1,0) if there are no public or user-defined sysdb procedures for that database|
|Username||User name of any user who has their own sysdbopen() and sysdbclose() procedures for that database|
|User Procids||Procedure ids of the sysdb procedures for a particular user in a database|
|UsrList#||The user hash list to which the user belongs|
The number of lists is determined by the onconfig parameter DS_HASHSIZE. The maximum number of database entries for a list is determined by DS_POOLSIZE. This is a soft limit, as the limit is checked only when you close a database. When you close a database, if the number of database entries for that list is greater than DS_POOLSIZE, IDS deletes all excess database entries in that list that have a reference count of 0.

Suma Vinod
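As a sketch of how these procedures come into play, the following defines a public sysdbopen() procedure and then inspects the procedure cache. The database name testdb matches the sample output above; the session property set inside the procedure is just an example, and creating "public"-owned procedures requires DBA privileges.

```shell
# Sketch: create a public sysdbopen() for database "testdb" (the property
# set inside it is only an example), then inspect the procedure cache.
dbaccess testdb - <<'EOF'
CREATE PROCEDURE "public".sysdbopen()
    -- Runs automatically whenever a session opens this database
    SET LOCK MODE TO WAIT 30;
END PROCEDURE;
EOF

# After another session opens testdb, its procids appear in the cache:
onstat -g cac sproc
```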
It is not uncommon for an inefficient SQL statement to cause an overall performance problem. In that scenario, you first need to identify the problematic SQL, and then find out the user and application executing it. Here is a glimpse of how to address the second part of the problem.
There are several possible ways to find out the user and application running a particular SQL statement. The 'onstat -g ses <session id>' command is the simplest way to figure out the application running a SQL statement. Do you notice anything new in the following 'onstat -g ses <session id>' output?
IBM Informix Dynamic Server Version 11.70.FC5 -- On-Line -- Up 16:41:52 -- 1048576 Kbytes

session                           effective                 #RSAM    total    used
id       user      user     tty   pid    hostname threads   memory   memory
13103    informix  -        1     19387  garion   1         278528   217656

Program :
...

tid      name     rstcb      flags
...      sqlexec  145629268  B--PX--

Memory pools:
name     class  addr       totalsize  freesize  #allocfrag  #freefrag
...      V      145b99040  274432     60064     ...         ...
...      V      145b0d040  4096       808       1           1

Sess   SQL        Current   Iso  Lock      SQL  ISAM
Id     Stmt type  Database  Lvl  Mode      ERR  ERR
13103  INSERT     stores    NL   Not Wait  0    0

Current statement name : inscur

Current SQL statement (997) :
...
You are right. There is a new 'Program' section in the above onstat output. Now you can use the 'onstat -g ses <session id>' command to display the full path of the client program used in a session. Use the client program information to monitor or control application access to the database. One thing you should know: the program information is sent by the client itself, so the name reported depends entirely on the client.
Beginning with IDS version 11.10, Informix introduced several new onstat commands to enhance server maintenance and monitoring. Database administrators can take advantage of these new onstat commands in their regular activities.

The following is a list of the new onstat commands:
|Onstat Command||Description|
|onstat -g his||Prints SQL statement tracing information gathered via the SQLTRACE configuration parameter. For more information on SQL tracing, see the following articles: Tracing SQL in Cheetah Part 1, Tracing SQL in Cheetah Part 2|
|onstat -g ckp||Prints checkpoint information and checkpoint-related advisories. For more information on the checkpoint advisory, see the following article: Checkpoint Performance Advisory|
|onstat -g pqs||Allows you to dynamically track a session's query plan. For more information, see the following article: onstat -g pqs|
|onstat -g smx||Prints Server Multiplexer group (SMX) information for servers using SMX. SMX is a communications interface that supports encrypted, multiplexed network connections between servers in high-availability environments, providing a reliable, secure, high-performance communication mechanism between database server instances.|
|onstat -g rss||Prints Remote Standalone Secondary (RSS) server information. The output of this command differs slightly depending on whether it is run on the primary server or on the RS secondary server.|
|onstat -g sds||Prints Shared Disk Secondary (SDS) server information|
|onstat -g ipl||Shows index page logging status information|
|onstat -g cat||Prints information from the Enterprise Replication global catalog. The global catalog contains a summary of information about the defined servers, replicates, and replicate sets on each of the servers within the enterprise.|
|onstat -g cdr config||Prints the settings of Enterprise Replication configuration parameters and environment variables that can be set with the CDR_ENV configuration parameter.|
For a complete list of onstat commands, see the IBM Informix Administrator's Reference Guide.
Introduction: Multiple installations of IBM Informix Dynamic Server are now enabled in IDS 11.50.xC2. With this, users can install multiple copies of IDS on a single machine, with the ability to create one or more instances per installation. For example, a user can have 11.50.xC1 and 11.50.xC2 co-existing on the same machine, with instances related to each installation running on that machine.
With this new capability, a user who has an installation of IDS 11.50 can invoke setup.exe and will be presented with the existing installations of IDS on the machine. The user can then choose between installing a new copy of IDS and maintaining one of the existing copies. If there is an existing installation of IDS 11.50 on a machine, a user who launches setup.exe in graphical mode will be presented a panel similar to the one below:
However, with an existing installation of IDS 11.50, launching setup.exe in silent mode will require a maintenance response file. Without a maintenance response file or with an installation response file, the installer will abort. To install a subsequent copy of IDS 11.50 in silent mode, the user should pass the ‘-multiple’ command line option to setup.exe.
In addition to the "-multiple" command line option, there are two command line options for maintenance, namely "-path" and "-instnum". These two new command line options are used to launch maintenance mode for a specified installation. Since they perform the same function, they should be used interchangeably, not in conjunction with one another.
Usage of the newly implemented options:

setup.exe -multiple

The user invokes setup.exe with the '-multiple' option to install an additional copy of IDS. This option can be used in conjunction with other installation command line options. E.g.
setup.exe -multiple -s -f1"C:\TEMP\server.ini" -f2"C:\TEMP\server.log"
setup.exe -path [installation path]
The user invokes setup.exe with the ‘-path’ option passing the installation path for which maintenance is required. This option can be used in conjunction with other maintenance command line options. E.g.
setup.exe -path [installation path] -s -f1"C:\uninst.ini"
setup.exe -instnum [installation number]
The user invokes setup.exe with the ‘-instnum’ option passing installation number for which maintenance is required. Note that each installation is tagged with an installation number in shortcut menu (except for the first installation which doesn’t have a tag). This option can be used in conjunction with other maintenance command line options. E.g.
setup.exe -instnum [installation number] -s -f1"C:\uninst.ini"
For more information on installing IDS, visit http://publib.boulder.ibm.com/infocenter/idshelp/v115/index.jsp
In IDS 11 you can dynamically change the Enterprise Replication configuration parameters. The changes take effect immediately, while replication is running. The following three commands are explained in detail, along with examples.
Use the cdr change config command to replace the existing value of an Enterprise Replication configuration parameter with a new value while replication is active. All Enterprise Replication configuration parameters and environment variables can be changed with this command. The value persists only while replication is active; the ONCONFIG file is not updated. If you want to make the change permanent, update the value in the ONCONFIG file as well. The configuration parameters and environment variables with which this command can be used are listed in Table 1.1.
For example, suppose the CDR_SUPPRESS_ATSRISWARN configuration parameter is set to suppress the generation of error and warning messages 1, 2, and 10, so that it appears in the ONCONFIG file as: CDR_SUPPRESS_ATSRISWARN 1,2,10. The following command changes the suppressed error and warning messages to 2, 3, 4, 5, and 7.
Before changing the values you can confirm the existing values by doing:
$onstat -g cdr config | grep SUP
CDR_SUPPRESS_ATSRISWARN configuration setting:1-2,10
Now let's change the values to 2, 3, 4, 5, and 7:

$cdr change config "CDR_SUPPRESS_ATSRISWARN 2-5,7"
WARNING:The value specifed updated in-memory only.

The warning clearly indicates that the changes are in-memory only; that means once replication is stopped and restarted, the original values from the ONCONFIG file take effect again. Now let's confirm the new values:

$onstat -g cdr config | grep SUP
CDR_SUPPRESS_ATSRISWARN: CDR_SUPPRESS_ATSRISWARN configuration setting:2-5,7
Use the cdr add config command to add one or more values to an Enterprise Replication configuration parameter while replication is active. The value persists while replication is active; the ONCONFIG file is not updated. This option is available only for configuration parameters and environment variables that allow multiple values. The allowed configuration parameters and environment variables are listed in Table 1.1.
Following the "change config" example, if you want to add another error and warning message number for suppression to the existing list of 2, 3, 4, 5, and 7, say number 10, you can do so with the following command:

$cdr add config "CDR_SUPPRESS_ATSRISWARN 10"
WARNING:The value specifed updated in-memory only.

We can confirm the new values with "onstat -g cdr config":

$onstat -g cdr config | grep SUP
CDR_SUPPRESS_ATSRISWARN: CDR_SUPPRESS_ATSRISWARN configuration setting:2-5,7,10
Use the cdr remove config command to remove an existing value of an Enterprise Replication configuration parameter while replication is active. The value persists while replication is active; the ONCONFIG file is not updated. The allowed configuration parameters and environment variables are listed in Table 1.1.
Again, following the same example from "add config", let's now try to remove suppression for error and warning messages 2, 3, 4, and 5, keeping 7 and 10, by using the "cdr remove config" command:

$cdr remove config "CDR_SUPPRESS_ATSRISWARN 2-5"
WARNING:The value specifed updated in-memory only.

Confirm your changes:

$onstat -g cdr config | grep SUP
CDR_SUPPRESS_ATSRISWARN: CDR_SUPPRESS_ATSRISWARN configuration setting: 7,10

NOTE: These commands change configuration parameters only in memory; they do not update the ONCONFIG file. To update environment variables in the ONCONFIG file, use the CDR_ENV configuration parameter.
The following table shows which kind of changes are valid for eachEnterprise Replication configuration parameter.
|Configuration Parameter ||cdr add config ||cdr change config ||cdr remove config |
|CDR_DBSPACE || ||Yes || |
|CDR_DSLOCKWAIT || ||Yes || |
|CDR_ENV CDR_LOGDELTA || ||Yes || |
|CDR_ENV CDR_PERFLOG || ||Yes || |
|CDR_ENV CDR_RMSCALEFACT || ||Yes || |
|CDR_ENV CDR_ROUTER || ||Yes || |
|CDR_ENV CDRSITES_731 ||Yes ||Yes ||Yes |
|CDR_ENV CDRSITES_92X ||Yes ||Yes ||Yes |
|CDR_ENV CDRSITES_10X ||Yes ||Yes ||Yes |
|CDR_EVALTHREADS || ||Yes || |
|CDR_MAX_DYNAMIC_LOGS || ||Yes || |
|CDR_NIFCOMPRESS || ||Yes || |
|CDR_QDATA_SBSPACE ||Yes ||Yes ||Yes |
|CDR_QHDR_DBSPACE || ||Yes || |
|CDR_QUEUEMEM || ||Yes || |
|CDR_SERIAL || ||Yes || |
|CDR_SUPPRESS_ATSRISWARN ||Yes ||Yes ||Yes |
|ENCRYPT_CDR || ||Yes || |
|ENCRYPT_CIPHERS || ||Yes || |
|ENCRYPT_MAC ||Yes ||Yes ||Yes |
|ENCRYPT_MACFILE ||Yes ||Yes ||Yes |
|ENCRYPT_SWITCH || ||Yes || |
In the IDS Cheetah release, you can encrypt communication between an HDR pair to secure the transmission of data over unsecured networks, including the internet. After you enable encryption, the HDR primary encrypts the data before sending it to the HDR secondary server, and the HDR secondary server decrypts the data. Use the new ONCONFIG parameter ENCRYPT_HDR to enable encryption between the HDR pair. You can also customize encryption using the following parameters.
|Configuration Parameter ||Default value ||Comments/Description |
|ENCRYPT_HDR ||0 ||0 - disable, 1 - enable HDR encryption |
|ENCRYPT_CIPHERS ||allbut:<ecb> || Defines ciphers and modes that can be used by the current database session. |
The following ciphers are currently supported:
- des (64-bit key), des3 (Triple DES), desx (Extended DES, 128-bit key)
- aes/aes128 (128-bit key), aes192 (192-bit key), aes256 (256-bit key)
- bf-1 Blowfish (64-bit key), bf-2 (128-bit key), bf-3 (192-bit key)
|ENCRYPT_MAC ||medium ||Controls the level of message authentication code (MAC) generation. |
- off - does not use MAC generation.
- low - uses XOR folding on all messages.
- medium - uses SHA1 MAC generation for all messages greater than 20 bytes long and XOR folding on smaller messages.
- high - uses SHA1 MAC generation on all messages.
|ENCRYPT_MACFILE ||builtin ||A list of the full path names of MAC key files. |
|ENCRYPT_SWITCH ||60,60 ||Defines the frequency (in minutes) at which ciphers and secret keys are renegotiated. |
- To use your own MAC key file:
- Execute the following command to generate a MAC key file.
$INFORMIXDIR/bin/GenMacKey -o /usr/informix/etc/MacKey1.dat
- Copy MacKey1.dat over to the paired server.
- Update the ENCRYPT_MACFILE configuration parameter on both servers to include the path of the new key file.
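A minimal sketch of those steps follows, using the key path from the article; the secondary host name and the copy method are assumptions, and the ENCRYPT_MACFILE value shown is only an illustration.

```shell
# Generate a new MAC key file on the primary (path from the steps above):
$INFORMIXDIR/bin/GenMacKey -o /usr/informix/etc/MacKey1.dat

# Copy it to the paired server (host name "hdr_secondary" is hypothetical):
scp /usr/informix/etc/MacKey1.dat hdr_secondary:/usr/informix/etc/

# In the ONCONFIG file on BOTH servers, list the key file; keeping
# "builtin" as a fallback during the transition is a common precaution:
# ENCRYPT_MACFILE /usr/informix/etc/MacKey1.dat,builtin
```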
- NOTE - HDR and Enterprise Replication (ER) share the same encryption configuration parameters: ENCRYPT_CIPHERS, ENCRYPT_MAC, ENCRYPT_MACFILE and ENCRYPT_SWITCH.
There are new functions in Informix 12.10 designed to improve application compatibility with other vendors. Some of these are packaged in the excompat (External Compatibility) library, and within that library are functions for enabling tracing or logging of user routines. These are the DBMS_OUTPUT functions:
System-defined routines available in the DBMS_OUTPUT package:

|Routine||Description|
|dbms_output_disable||Disables the message buffer.|
|dbms_output_enable||Enables the message buffer.|
|dbms_output_get_line||Gets a line of text from the message buffer.|
|dbms_output_get_lines||Gets one or more lines of text from the message buffer and places the text into a collection.|
|dbms_output_new_line||Puts an end-of-line character sequence in the message buffer.|
|dbms_output_put||Puts a string that includes no end-of-line character sequence in the message buffer.|
|dbms_output_put_line||Puts a single line that includes an end-of-line character sequence in the message buffer.|
The following is a short example of how these might be used.
-- Setup example conditions
-- Register the compatibility library if not already done. Uncomment these lines
-- if the DBMS_OUTPUT routines are not found.
--EXECUTE FUNCTION sysbldprepare('excompat.*', 'drop');
--EXECUTE FUNCTION sysbldprepare('excompat.*', 'create');

-- Include tracing information on an event or a routine.
create trigger if not exists customer_insert
insert on customer for each row (
    execute procedure dbms_output_put_line(
        'customer row inserted by session ' || dbinfo('sessionid'))
);

-- For our purposes, it is useful to be able to fetch back a message buffer
-- line from an SQL call; so, creating a function to enable that.
drop function if exists read_trace_buffer();
create dba function read_trace_buffer() returning lvarchar;

    define buffer lvarchar(2000);
    define line_found integer;

    let buffer = '';
    let line_found = 0;

    execute procedure dbms_output_get_line(buffer, line_found);
    return buffer;

end function;

-- Enable tracing and test. Tracing can be enabled and disabled at runtime.
execute procedure dbms_output_enable(20000);  -- argument is the buffer size

insert into customer (lname, fname) values ('Barker', 'Bob');

-- Demonstrate that information was put in the message buffer by the
-- INSERT trigger.
execute function read_trace_buffer();
On the call to read_trace_buffer(), something like the following should be returned.
(expression) customer row inserted by session 71
This is a simple example to demonstrate the setup and enablement of using the DBMS_OUTPUT messaging buffer.
Prior to Cheetah, a whole-system backup (onbar -b -w) was a serial backup of all storage spaces and logical logs based on a single checkpoint. That checkpoint time is stored with the backup information. The advantage of using a whole-system backup was that you could restore the storage spaces with or without the logical logs.

Because the data in all storage spaces is consistent in a whole-system backup, you do not need to restore the logical logs to make the data consistent. Level 0, 1, or 2 backups are supported.
One of the new features of Cheetah is the ability to back up and restore dbspaces in parallel with whole-system functionality. With a parallel backup, multiple processes run simultaneously, each process backing up a different dbspace. In most cases, parallel backups complete faster than serial backups, which use only one process.
The ONCONFIG parameter BAR_MAX_BACKUP is used for whole-system backups and restores as well. BAR_MAX_BACKUP specifies the maximum number of parallel processes that are allowed for each onbar command. Both UNIX and Windows support parallel backups. Although the database server default value for BAR_MAX_BACKUP is 4, the onconfig.std value is 0.

To specify parallel backups and restores, including parallel whole-system backups and restores, set BAR_MAX_BACKUP to a value higher than 1. For example, if you set BAR_MAX_BACKUP to 3 and execute an ON-Bar command, ON-Bar will spawn at most 3 processes concurrently.

If BAR_MAX_BACKUP is set to 0, the system creates as many ON-Bar processes as needed; the number of ON-Bar processes is limited only by the number of storage spaces or the amount of memory available to the database server, whichever is less. To perform a serial backup or restore, including a serial whole-system backup or restore, set BAR_MAX_BACKUP to 1.
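The settings above can be sketched as the following sequence. The activity-log path is an assumption (BAR_ACT_LOG varies by installation), and a configured storage manager is required.

```shell
# Illustrative sketch: a parallel whole-system backup with up to three
# concurrent ON-Bar processes. Assumes a storage manager is configured
# and that BAR_ACT_LOG points at /tmp/bar_act.log (an assumption).

# In the ONCONFIG file:
# BAR_MAX_BACKUP 3

onbar -b -w -L 0          # level-0 whole-system backup, parallelized
tail -f /tmp/bar_act.log  # watch the ON-Bar activity log for progress
```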
Also in Cheetah, storage manager efficiency is improved by better dbspace ordering. The "biggest" dbspaces are backed up first and the "smallest" last; the dbspace with the most used pages is the "biggest" and is backed up first. This ensures better parallelism, no matter how BAR_MAX_BACKUP is set or how many pages are to be backed up in the different dbspaces. The new backup/restore order is also effective for non-whole-system backups and restores. A restore is done in the same order as the backup, to ease the storage manager's access to stored objects.
To implement the new ordering of dbspaces for backup and restore, changes were made to the bar_instance table in the sysutils database:
A new column "ins_backup_order" (integer default 0 not null) was added to the bar_instance table.
Changes were also made to the ixbar file:
A new field with the backup sequence number (the last field on the line) was added to the ixbar file.
The value of the new field is always "0" (zero) for log backup objects.
Even if BAR_MAX_BACKUP is set to 1 for a serial backup, the dbspaces are backed up using the new dbspace order.
Only one checkpoint is taken for all dbspaces, just before the backup of the rootdbs.
The rootdbs is backed up first, without parallelism (no change here).
Before-image processor threads ("arcbackup2") are started at this time, one for each dbspace (more threads running in parallel).
As each dbspace backup completes, the respective "arcbackup2" thread exits (fewer "arcbackup2" threads as the backup progresses).
New order of backup: the order is based on the used-pages count at the start of the backup. The dbspace with the most used pages is the "biggest" and is backed up first.
Restore has not changed much, except that it is now parallel.
The rootdbs is restored first, without parallelism (no change here).
Dbspaces are restored in the same order as they were backed up; the value of "ins_backup_order" determines the correct order. If BAR_MAX_BACKUP is changed between backup and restore, the objects are still restored in the same order, but their timing relative to each other may differ significantly, which could have negative effects on storage manager performance.

Rashmi Chawak
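To see the order in which backup objects were written, you can query the new column directly. This query is only a sketch: ins_backup_order comes from the description above, but the other column name is an assumption about the bar_instance schema, so check your sysutils catalog before relying on it.

```shell
# Illustrative: list backup objects in the order they were written.
# Column names other than ins_backup_order are assumptions.
dbaccess sysutils - <<'EOF'
SELECT ins_aid, ins_backup_order
    FROM bar_instance
    ORDER BY ins_backup_order;
EOF
```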
In March an article on installing IDSAdmin in a XAMPP environment was published on this blog. This article has just been updated by Inge Halilovic with instructions and screen shots for the latest available version of OpenAdmin Tool, downloadable from the Free Product Download Site. The new version of OAT has improved graphics, simpler installation, new features such as support for Mach 11 topology, and several bug fixes.

Guy Bowerman
- Automatic statistics collection.
IDS now automatically collects index statistics, equivalent to the statistics gathered by UPDATE STATISTICS in LOW mode, when you create a B-tree index on a UDT column of an existing table or when you create a functional or VII index on a column of an existing table. Statistics that are collected automatically by this feature are available to the query optimizer, removing the necessity to manually run UPDATE STATISTICS. When B-tree indexes are created, column statistics are collected on the first index column, equivalent to what UPDATE STATISTICS generates in MEDIUM mode, with a resolution of 1% for tables of fewer than a million rows, and 0.5% for larger tables.
- Query statistics in EXPLAIN output.
You can now view statistics about completed queries in the new Query Statistics section of SET EXPLAIN output. This section gives detailed information on each scan, join, and sort, such as the estimated cost, the number of rows scanned, the estimated number of rows, and the number of rows produced at each level. The EXPLAIN_STAT configuration parameter enables or disables the inclusion of the Query Statistics section in the explain output file. You can generate the output file by using either the SET EXPLAIN statement of SQL or the onmode -Y sessionid command.
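The onmode route can be sketched as follows; the session id is hypothetical, and the exact name of the explain file produced for a session can vary by version, so confirm it in your environment.

```shell
# Illustrative: enable dynamic explain output for an existing session.
# Session id 27 is hypothetical; find real ids with "onstat -g ses".
onmode -Y 27 1   # 1 = write plan plus query statistics; 0 turns it off

# The plan is written to an explain file for that session; the exact
# file name convention varies, so check your version's documentation.
```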
- New SAMPLING SIZE option in UPDATE STATISTICS MEDIUM mode.
In explicit UPDATE STATISTICS operations in MEDIUM mode, a new SAMPLING SIZE option in the Resolution clause can specify the minimum number of rows to sample for column distributions.
If the Resolution clause omits the RESOLUTION keyword and specifies no confidence level and no percent value, then the number of rows that Dynamic Server samples will be the larger of the following two values:
* The min value that you specify immediately after the SAMPLING SIZE keywords
* The sampling size that is required for the default percent of rows in each bin (2.5%) and for the minimum confidence level (0.80)

If a sampling size is specified in a Resolution clause that includes explicit values for both the average percent of sampled rows per bin and for the confidence level, then the number of sampled rows will be the larger of these two values:
* The min value that you specify immediately after the SAMPLING SIZE keywords
* The sampling size that is required for the specified percent of rows and for the specified confidence level

If a sampling size is specified in a Resolution clause that includes an average percentage value but sets no confidence level, then the minimum confidence value of 0.80 is used to calculate the actual sampling size for Dynamic Server to use if the specified size is smaller.
For example, the following statement calculates statistics for two columns of the customer table, without updating index information. At least 200 rows will be sampled, but the actual size of the sample might be larger than 200 if more rows are required to provide the default 0.80 confidence level for a sample distribution that uses 50 equivalence categories, with an average percentage of 2% of the sampled values in each bin.

UPDATE STATISTICS MEDIUM FOR TABLE customer (city, state)
    SAMPLING SIZE 200 RESOLUTION 2 DISTRIBUTIONS ONLY;

Whether or not you include an explicit SAMPLING SIZE specification in the Resolution clause, Dynamic Server records in the system catalog the actual sampling size (as a percentage of the total number of rows in the table) at the time of MEDIUM mode UPDATE STATISTICS creation.
- New FILE TO option to redirect SET EXPLAIN output to a file of your choice.
When you execute a SET EXPLAIN FILE TO statement, explain output is implicitly turned on. The default file name for the output is sqexplain.out until changed by a SET EXPLAIN FILE TO statement. Once changed, the file name remains set until the end of the session or until it is changed by another SET EXPLAIN FILE TO statement.

The file name can be any valid combination of optional path and file name. If no path component is specified, the file is placed in your current directory. The file is owned by the current user. The output file that you specify in the SET EXPLAIN statement can be a new file or an existing file. In Dynamic Server, if the FILE TO clause specifies an existing file, the new output is appended to that file. Following is an example where the explain output is sent to the file "explain.vijay" in the "/tmp" directory:

SET EXPLAIN FILE TO '/tmp/explain.vijay';
These features reduce the need for DBAs to perform UPDATE STATISTICS onindexes, enabling better query plans and ease of use.
In IDS 11.10, all optimizer directives, including external directives, can be used in queries that use ANSI joins. Using optimizer directives might enable you to force a favorable query execution path, thus improving performance.
Here is a description of the different optimizer directives and how they work with ANSI join queries.
|Optimization Goal Directives||FIRST_ROWS, ALL_ROWS||Directives enabled for ANSI join queries|
|Access Method Directives||FULL, INDEX, AVOID_INDEX, AVOID_FULL,INDEX_SJ, AVOID_INDEX_SJ||Directives enabled for ANSI join queries|
|Explain-Mode Directives||EXPLAIN, AVOID_EXECUTE||Directives enabled for ANSI join queries|
|External Directives||Directives enabled for ANSI join queries|
|Join Method Directives||USE_HASH, USE_NL, AVOID_HASH, AVOID_NL, /BUILD, /PROBE||Directives enabled for ANSI inner joins; not enabled for ANSI inner joins in views; enabled for ANSI outer joins that can be transformed into ANSI inner joins; disabled for all other ANSI outer joins|
|Join Order Directives||ORDERED||Directives enabled for ANSI inner joins; enabled for ANSI left outer joins if they do not conflict with the ordering requirement for outer joins; disabled for all other ANSI outer joins|
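As an illustrative sketch using the stores_demo demonstration database, a join-method directive, an access-method directive, and the EXPLAIN directive can be embedded in an ANSI inner join like this (the particular directives chosen here are just examples):

```shell
dbaccess stores_demo - <<'EOF'
-- Force a hash join, avoid a full scan of customer, and record the
-- chosen plan in the explain output file via the EXPLAIN directive.
SELECT --+ USE_HASH(o), AVOID_FULL(c), EXPLAIN
       c.customer_num, o.order_num
    FROM customer c INNER JOIN orders o
        ON c.customer_num = o.customer_num;
EOF
```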
As you may already know, onbar backs up some critical administrative files during a storage space backup. These critical files are as follows:
- The onconfig file
- UNIX: The sqlhosts file
- The ON-Bar emergency boot file: ixbar.servernum
- The server boot file: oncfg_servername.servernum
There may be situations when you need to restore one of these files, for example when you replace disks or restore to a second computer system (an imported restore). However, previously there was no easy way to restore these files from an onbar backup; you had to depend on Informix Technical Support to perform this operation.

Starting with Informix version 12.10.xC2, you can restore the critical files that you backed up with the onbar utility. An onbar cold restore now has an option to restore these files, or you can restore only these files without performing the storage space restore.

Use 'onbar -r -cf yes' to restore the critical files during a cold restore. Alternatively, you can use 'onbar -r -cf only' to extract only the critical files while the Informix server is offline.
The best-fit Java class for an Informix DATETIME value is java.sql.Timestamp. java.sql.Timestamp technically inherits from java.util.Date, although there are some semantic differences which are not typical of inheritance relationships. All the constructors, getters, and setters which are not deprecated for Date and Timestamp use values based on Greenwich Mean Time (GMT): they are offsets in milliseconds from the epoch, January 1, 1970 00:00:00 GMT. So, it would be logically consistent for Timestamps to convert to DATETIMEs using the GMT reference frame by default. However, the JDBC Tutorial and Reference specifies that, when no Calendar is provided, Timestamp values are converted to the time zone of the JVM in which the application is running. The difficulties surrounding Timestamps and time zones are not limited to the scope of the Informix JDBC driver; they are generic to all of JDBC because that is the specification.
This model creates problems in a variety of ways. For distributed applications, the Date or Timestamp values might be streaming into the database from JDBC drivers located in different time zones. With multi-tier applications, you might find yourself contending with a user interface in time zone A, the JDBC driver in time zone B, and the database server in time zone C. The driver is the library where external types are converted to internal types; so, where it is running is where the local time zone comes into play. It can be difficult for either the front end or the database to know in what time zone the driver is operating. Dealing with the conversion from Timestamp in GMT to DATETIME in the JDBC driver locale has to be done in the application layer directly interfacing with the JDBC driver.
The conversion done by the driver applies in both directions, and this can lead to complications if anything other than the JDBC Timestamp class is used in conjunction with DATETIME values. If strings representing DATETIMEs or DATETIME literals are used anywhere in your application SQL (including CURRENT), the values will not be converted on the way in, but will be converted on the way out. Similarly, different APIs will not convert in either direction. If you insert new Timestamp(86400001) (January 2, 1970 00:00:00.001 GMT) through JDBC, and select it back from within the same JVM, you will get that same value back. So, you might expect that selecting the same row-column from, say, DBAccess, would also give you the same value, but it will not, because the value was converted to the JVM time zone on the way in through JDBC, but not converted on the way out through ESQL/C. If you are in the U.S., you will get some time on January 1st, not even the same day. The reverse also applies: you may find yourself with values inserted through other APIs, which perform no conversion, that are converted to some other time zone when selected through JDBC.
DATETIMEs, of course, do not contain time zone information. There are basically two ways to deal with this: record the time zone in another column alongside the DATETIME, or establish a convention that all the DATETIMEs in the database are in the same time zone. Establishing a convention is the simpler approach, both from an application implementation perspective and in terms of being able to easily compare values in different rows, columns, or tables. If you know that every Java application that will ever access the data will operate in its own JVM, you can code your applications to set the default time zone of the JVM, but this is a problematic solution when more than one independent application runs in the same JVM, such as servlets in a web server. It is unclear what would happen with different threads of execution changing the time zone for all other threads, but if they were actually able to do that, it would not be pretty.
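To see why setting the JVM default is risky in a shared JVM, consider this small sketch: TimeZone.setDefault() changes the default for the entire JVM, not just the calling thread, so one servlet changing it silently changes it for every other application in the container. The class name is illustrative.

```java
import java.util.TimeZone;

public class DefaultZoneDemo {
    public static void main(String[] args) throws Exception {
        // TimeZone.setDefault() is JVM-wide: every thread and every
        // library sharing this JVM sees the change immediately.
        TimeZone.setDefault(TimeZone.getTimeZone("GMT"));

        final String[] seenByOtherThread = new String[1];
        Thread t = new Thread(new Runnable() {
            public void run() {
                // There is no per-thread default time zone.
                seenByOtherThread[0] = TimeZone.getDefault().getID();
            }
        });
        t.start();
        t.join();

        System.out.println("other thread sees: " + seenByOtherThread[0]); // GMT
    }
}
```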
At JDK 1.1 two things happened: all the getters and setters for java.util.Date whose meaning was ambiguous with respect to time zone were deprecated, and the java.util.Calendar class was introduced. Along with the Calendar class itself, methods accepting a Calendar object were added to the JDBC interface, for example PreparedStatement.setTimestamp(int parameterIndex, Timestamp x, Calendar cal) and ResultSet.getTimestamp(int columnIndex, Calendar cal). In these methods, the Calendar object establishes the frame of reference used to convert between the GMT offset in the Timestamp object and a DATETIME value. If you create a constant Calendar object in your applications and use it every time you read or write a Timestamp object to a DATETIME, the DATETIME values will remain consistent with each other, and the meaning of the value will not change depending on the time zone of the JVM.
Using a GMT calendar would be efficient because it requires fewer operations to convert between the Java classes and the value in the database. Also, the value will not change if other APIs, like ODBC or ESQL/C, are used. Conceptually, what is stored in the database is January 1, 1970 00:00:00.000 GMT + tstamp.getTime() milliseconds.
Calendar cal = Calendar.getInstance(TimeZone.getTimeZone("GMT"));
Coordinated Universal Time (UTC) is a close relative to GMT and is just as viable.
In this model, all date and time values in the database are on the same time line, and conversion to other time lines or calendar systems is handled at the user interface level. Using UTC or GMT for all DATETIME values provides a simple model to implement in applications. It also removes all the problems related to knowing the time zone of the JDBC driver runtime, or whether applications written in anything other than Java will ever be used to access the data.
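Putting the pieces together, here is a minimal sketch of the fixed-Calendar pattern. The table name (events) and its schema are hypothetical, and the Connection is assumed to come from the Informix JDBC driver; the only essential point is that the same GMT Calendar is passed to every setTimestamp and getTimestamp call.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Timestamp;
import java.util.Calendar;
import java.util.TimeZone;

public class DatetimeDao {
    // One constant Calendar used for every read and write, so the stored
    // DATETIME always means GMT regardless of the JVM's default time zone.
    public static final Calendar GMT =
            Calendar.getInstance(TimeZone.getTimeZone("GMT"));

    // Hypothetical table:
    //   CREATE TABLE events (id INT, at DATETIME YEAR TO FRACTION(3))
    public static void insertEvent(Connection con, int id, Timestamp at)
            throws SQLException {
        PreparedStatement ps =
                con.prepareStatement("INSERT INTO events VALUES (?, ?)");
        try {
            ps.setInt(1, id);
            ps.setTimestamp(2, at, GMT); // convert using GMT, not the JVM zone
            ps.executeUpdate();
        } finally {
            ps.close();
        }
    }

    public static Timestamp readEvent(Connection con, int id)
            throws SQLException {
        PreparedStatement ps =
                con.prepareStatement("SELECT at FROM events WHERE id = ?");
        try {
            ps.setInt(1, id);
            ResultSet rs = ps.executeQuery();
            return rs.next() ? rs.getTimestamp(1, GMT) : null; // same Calendar
        } finally {
            ps.close();
        }
    }
}
```

Because both directions use the same frame of reference, the round trip is lossless and the stored value also matches what ESQL/C or DB-Access will show.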
From IDS 10.0x onwards, Enterprise Replication (ER) supports alter operations on a replicated table while replication is active; however, RENAME was not among them. That changes in Cheetah: IDS 11.x starts supporting RENAME on ER columns, tables, and databases. This feature simplifies DBA tasks and increases data availability. Without it, the DBA would have to plan a time at which ER could be removed from the object so that the rename could be performed, which required advance planning and scheduling the rename during non-peak hours.
When a RENAME operation is performed, the affected replicate definitions are updated to reflect the impact of the RENAME, and a control message is sent to the other servers informing them of the rename. A RENAME operation is allowed only on a mastered replicate. ER does not propagate the RENAME command itself (there are plans to implement that in the future); the user simply issues the rename DDL statement on each of the servers affected by the RENAME. If the renamed object is a column or a table, the replicate is cycled (cdr stop/start replicate), in much the same way as occurs with a table ALTER. If a database is renamed, however, ER itself is cycled (cdr stop/start). This cycling occurs only if the rename affects something on the local node. In all cases, the rename causes a control message to be sent to the other servers within the replication domain so that the syscdr database is correctly updated.

Vijay Lolabattu