Informix Documentation Team
IngeHalilovic | Tags: configuration, quick_reference, informix, onconfig, onstat
The Quick Reference Cards for onconfig.std and onstat have been updated for Informix 11.70. These cards are designed to be printed double-sided on legal-sized paper (8.5" by 14") and folded into a four-sided pamphlet. They look best printed in color, especially the onconfig.std card, which shows version numbers in red.
IngeHalilovic | Tags: 11.7, highlights, database, features, information_center, information, informix
Do you want to know exactly what's new for IBM® Informix® 11.70? Visit the new v11.70 information center.
For highlights of Informix 11.70, short descriptions of all the new features, and links to where the features are described in detail, see What's new in Informix.
Don't forget to subscribe to the RSS feed so that you'll know when we update the information center.
The v11.70 information center has the same collaboration features as the v11.50 information center. Find out how to comment on, rate, and watch topics.
The v11.70 information center works best with these browsers:
Basic text searching in Informix v11.50 became more versatile, configurable, and faster. See What's New in Database Extensions for more information.
Process Multiple Basic Text Search Queries Simultaneously
If basic text search queries are slow because multiple users are running queries at the same time, you can add more BTS virtual processors so that queries run simultaneously, each in their own virtual processor. Previously, you could only create one BTS virtual processor and queries ran serially.
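BTS virtual processors are defined in the onconfig file. As an illustrative sketch (the VP count shown is arbitrary; check your documentation for the exact VPCLASS options your version supports), an entry that defines four BTS virtual processors might look like:

```
VPCLASS bts,noyield,num=4
```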
Control the Results of a Fuzzy Search with the Basic Text Search DataBlade® module
You can now specify the degree of similarity of search results in fuzzy searches when using the Basic Text Search DataBlade module. Specify a number between 0 and 1, where a higher value results in a higher degree of similarity. To limit results, specify a higher number. To maximize results, specify a lower number. The default degree of similarity is 0.5.
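As a hedged sketch of what such a query might look like (the table and column names are hypothetical; the tilde syntax follows the fuzzy-search notation used by the module's underlying search engine), appending `~0.8` to a term requests matches with a degree of similarity of at least 0.8:

```sql
-- Hypothetical table and column; '~0.8' raises the minimum
-- degree of similarity above the 0.5 default, narrowing results.
SELECT id
  FROM articles
 WHERE bts_contains(title, 'standard~0.8');
```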
Map Characters for Indexing with the Basic Text Search DataBlade module
You can now map characters in your data to other characters during indexing with the Basic Text Search DataBlade module. For example, you can specify that letters with diacritical marks are indexed as the same letters without marks. You can also standardize inconsistent prefixes or delete character strings from indexed text. To use character maps, include the canonical_maps parameter when you create your bts index.
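As an illustrative sketch (the index, table, sbspace names, and the exact map-string format are assumptions; consult the DataBlade module documentation for the precise canonical_maps syntax), an index that folds accented characters to their unaccented forms might be created like this:

```sql
-- Sketch: map accented letters to plain letters at indexing time.
-- All object names and the map string format are illustrative.
CREATE INDEX title_bts ON articles (title bts_lvarchar_ops)
  USING bts(canonical_maps='(é e) (è e)') IN bts_sbspace;
```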
Default Boolean Operator in Basic Text Search Queries
You can now change the default Boolean operator between search terms in Basic Text Search queries from OR to AND by using the query_default_operator parameter when you create a bts index. The default operator is represented by a blank space between terms. Many popular end-user search engines use AND as the default operator between search terms, so end users expect the search results to contain all of their search terms.
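A hedged sketch of an index created with AND as the default operator (object names are illustrative):

```sql
-- Sketch: a blank space between query terms now means AND, not OR.
CREATE INDEX title_bts ON articles (title bts_lvarchar_ops)
  USING bts(query_default_operator="AND") IN bts_sbspace;
```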
Storage for Temporary Basic Text Search Files
You can now specify that temporary files used by the Basic Text Search DataBlade module are stored in a separate sbspace from the one used to store the bts index. Separating temporary files from the bts index might improve query performance.
Track Basic Text Search Query Trends
You can now track what queries are run against your bts index by including the query_log parameter when you create a bts index. You can use query trends information to provide hints to end-users on popular queries or work on optimizing the most popular queries.
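As an illustrative sketch (object names are assumptions), enabling the query log at index-creation time might look like:

```sql
-- Sketch: record the queries that run against this bts index.
CREATE INDEX title_bts ON articles (title bts_lvarchar_ops)
  USING bts(query_log="yes") IN bts_sbspace;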
Fragment bts Indexes by Expressions
You can now fragment bts indexes by expressions into multiple sbspaces instead of a single sbspace.
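A hedged sketch of an expression-fragmented bts index (the fragmentation expressions, column, and sbspace names are illustrative; see the Guide to SQL: Syntax for the exact clause order your version accepts):

```sql
-- Sketch: split the bts index across two sbspaces by expression.
CREATE INDEX title_bts ON articles (title bts_lvarchar_ops)
  USING bts
  FRAGMENT BY EXPRESSION
    (year <  2000) IN bts_sbspace1,
    (year >= 2000) IN bts_sbspace2;
```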
Basic Text Search DataBlade module Supports High-Availability Clusters
You can now use the Basic Text Search DataBlade module to perform searches on high-availability cluster servers by creating indexes in sbspaces. Previously, the Basic Text Search DataBlade module supported the creation of indexes only in extspaces, so those indexes could not participate in queries on high-availability secondary servers or in backup and restore operations.
Querying XML Attributes with the Basic Text Search DataBlade module
The Basic Text Search DataBlade module now supports searches on XML attributes in a document repository. The new all_xmlattrs parameter enables searches on all attributes that are contained in the XML tags or paths in a column that contains an XML document.
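An illustrative sketch of an index that enables attribute searching (table and column names are assumptions):

```sql
-- Sketch: index XML attribute values as well as element content.
CREATE INDEX doc_bts ON docs (body bts_lvarchar_ops)
  USING bts(all_xmlattrs="yes") IN bts_sbspace;
```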
Support added for a user-defined stopword list
You can create a customized stopword list for frequently occurring words in your data or you can use the default stopword list.
Support added for XML-structured documents
You can use Basic Text Search XML index parameters to manipulate searches of XML data in different ways.
Pat_Smith
For Version 11.50.xC6 of IBM Informix, we added a new Migration Guide chapter on high-availability cluster migration. This information helps you coordinate the migration of all servers in a high-availability cluster or, if necessary, revert all servers in a cluster to the previous version of the server. Chapter topics include information on the additional steps you must perform when preparing to migrate, migrating, and reverting clusters.
If you use high-availability clusters, refer to this information when you upgrade to a new version, PID, or fix pack of Informix, and, if necessary, when you revert to the previous version.
For more information, see High-availability cluster migration.
Informix added several key warehousing features in 11.50.xC5 and 11.50.xC6:
Load and Unload Data with External Tables
IDS supports external tables, which let you read from and write to a source that is external to the database server. External tables provide an SQL interface to data in text files managed by the operating system or to data from a FIFO device. To create an external table, use the CREATE EXTERNAL TABLE statement; to drop one, use the existing DROP TABLE statement.
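As a hedged sketch (the file path, table names, and the exact DATAFILES/FORMAT clause spelling are illustrative; see the Guide to SQL: Syntax for the full option list), an external table that mirrors an existing table and a load from it might look like:

```sql
-- Sketch: define an external table over a delimited flat file.
CREATE EXTERNAL TABLE ext_customer
  SAMEAS customer
  USING (DATAFILES ("DISK:/data/customer.unl"), FORMAT "DELIMITED");

-- Load: read the flat file into the local table through SQL.
INSERT INTO customer SELECT * FROM ext_customer;
```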
See the Guide to SQL: Syntax.
Loading Data into a Warehouse with the MERGE Statement
Instead of using separate UPDATE and INSERT statements to load data from an OLTP database into a database warehouse environment, use the new MERGE statement, which can combine UPDATE and INSERT operations into a single SQL statement.
The MERGE statement can merge records from a table, view, or query (the source) with the records in a local table (the target). You can specify a logical condition that MERGE applies to a join of the source and target objects.
The MERGE statement supports Update and Insert triggers on the target table. Any constraints on the target table are enforced in MERGE operations.
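A hedged sketch of a warehouse-load MERGE (table and column names are illustrative):

```sql
-- Sketch: upsert staged OLTP rows into a warehouse table.
-- Matched rows are updated; unmatched rows are inserted.
MERGE INTO warehouse_cust t
  USING staging_cust s
  ON t.cust_id = s.cust_id
WHEN MATCHED THEN
  UPDATE SET balance = s.balance
WHEN NOT MATCHED THEN
  INSERT (cust_id, balance) VALUES (s.cust_id, s.balance);
```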
See the Guide to SQL: Syntax.
Retrieving Data by Using Hierarchical Queries
You can now retrieve data from a table by using hierarchical queries, which maintain the relationship between the data.
The SELECT statement of Informix now supports START WITH ... CONNECT BY syntax for recursively querying a table in which a hierarchy of parent-child relationships exists. The syntax can define recursive queries that reflect the topology of the data hierarchy.
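As an illustrative sketch (table and column names are assumptions), a query that walks an employee/manager hierarchy from the root downward might look like:

```sql
-- Sketch: start at rows with no manager, then recursively join
-- each row's emp_id to its reports' manager_id. LEVEL reports depth.
SELECT emp_id, name, LEVEL
  FROM employee
  START WITH manager_id IS NULL
  CONNECT BY PRIOR emp_id = manager_id;
```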
This implementation of hierarchical queries uses extensions to the ISO standard for SQL.
See the Guide to SQL: Syntax.
Informix Warehouse Feature
The Informix Warehouse Feature provides an integrated platform for the design and administration of data warehousing applications.
The core of Informix Warehouse Feature is the SQL Warehousing Tool. It includes an application development component in the Informix Warehouse Feature client and an administration component on the Informix Warehouse Feature server.
Informix Warehouse Feature client includes the Design Studio, which provides a common design environment for creating physical data models, SQL data flows and control flows. Design Studio is built on the Eclipse Workbench and automatically generates SQL that is based on visual operator flows that you model in the Design Studio. The library of SQL operators covers the in-database data operations that are typically needed to move data between database tables.
Pat_Smith
If you install Version 11.50.xC6 or a later version of Informix and then upgrade to another new Informix version or fix pack, you can enable the server to use the onrestorept utility to restore the server to a consistent, pre-upgrade state if the upgrade fails. If you enable the server to use this utility, you can undo changes made during the upgrade in minutes (and in some cases, in seconds). Previously, if a fix pack upgrade failed, you had to restore the database by using a level-0 archive.
You use the new CONVERSION_GUARD and RESTORE_POINT_DIR configuration parameters, which are documented in the IBM Informix Administrator's Reference, to specify information that the onrestorept utility can use if an upgrade fails.
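A hedged sketch of the relevant onconfig entries (the directory path is illustrative, and the CONVERSION_GUARD value shown is an assumption; check the Administrator's Reference for the valid settings):

```
CONVERSION_GUARD 2
RESTORE_POINT_DIR /work/informix/restore_point
```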
For information on the onrestorept utility and setting up the server for it, see the following topics in the IBM Informix Migration Guide:
If you replicate your data with Enterprise Replication, you might need to repair inconsistencies in the data. Instead of syncing all the data, you can check the data to find inconsistencies and then sync only those records. The cdr check replicate and cdr check replicateset commands, which were introduced in Informix 10.0, had several new options added in Informix 11.50 to improve the speed of consistency checking and repairing data.
You can increase the speed of a consistency check on a replicate by indexing a new shadow column, ifx_replcheck. You add the ifx_replcheck column to your replicated table using the WITH REPLCHECK clause and create a unique index on the ifx_replcheck column and your primary key columns. You can also alter an existing table to add the ifx_replcheck column. The replicated table must also have the CRCOLS shadow columns. You cannot perform a table-level restore on a table that contains the ifx_replcheck column.
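As an illustrative sketch (table, index, and key-column names are assumptions), adding the shadow column to an existing table and indexing it might look like:

```sql
-- Sketch: add the ifx_replcheck shadow column to an existing
-- replicated table, then index it with the primary key columns.
ALTER TABLE customer ADD REPLCHECK;

CREATE UNIQUE INDEX replchk_ix
  ON customer (ifx_replcheck, cust_id);
```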
By default, inconsistent rows are rechecked for up to five seconds, which might not be enough time for replicated transactions to be applied on the target server. You can specify the number of seconds to spend on rechecking the consistency of inconsistent rows. Rechecking prevents transactions that are in progress from being listed as inconsistent in the consistency report. You can use the --inprogress option of the cdr check replicate and cdr check replicateset commands to specify the maximum number of seconds to perform rechecking.
You can reduce the duration of a consistency check by performing a check in parallel and by controlling the amount of data that is checked. You can specify a time from which to check updated rows by using the --since option. You can specify a subset of a table to check by using the --where option. You can prevent the checking of large objects by using the --skipLOB option.
You can increase the speed of a consistency check on a replicate set by performing the operation on each replicate in parallel. Specify the number of parallel processes to use for processing a replicate set by using the --process option with the cdr check replicateset command.
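A hedged sketch of a check that combines several of these options (the replicate name, server group names, and timestamp are illustrative; verify the option spellings against the Enterprise Replication documentation for your version):

```shell
# Sketch: check only rows updated since a given time, skip large
# objects, allow 10 seconds of in-progress rechecking, and repair.
cdr check replicate --master=g_serv1 --repl=repl_cust g_serv2 \
    --since="2010-06-01 00:00:00" --inprogress=10 --skipLOB --repair
```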
Use these options to customize your consistency checking and repairing.
For more information, see:
Pat_Smith
A few weeks ago, I mentioned that in Version 11.50 you can use SQL administration API commands to compress row data in a table or in one or more table fragments.
In Version 11.50, you can also compress indexes by merging two partially used index pages if the data on those pages totals a set level. You can specify the index compression level by modifying the value of the compression field of the BTSCANNER configuration parameter, by running an onmode -C command with a compression value, or by running an SQL administration API function with the SET INDEX COMPRESSION argument. For more information on this and other new performance-related features, see the "What's new in performance" topic in the Introduction of the Version 11.50 Performance Guide.
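As an illustrative sketch of the SQL administration API route (the partition number and compression level shown are placeholders; the argument list is an assumption, so verify it against the Administrator's Reference):

```sql
-- Sketch: set the compression level for one index fragment,
-- identified here by a placeholder partnum.
EXECUTE FUNCTION sysadmin:task("set index compression", "1048611", "med");
```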
For more information on SQL administration API commands, see the "SQL administration API" section of the Administrator's Reference. The "SQL administration API portal: Arguments by functional category" topic, which is in the "SQL administration API" section of the Administrator's Reference, is a handy list of all of the SQL administration API commands you can use.
IBM Informix Survey for Continuous Availability White Paper
Complete the Survey to win an Apple iPad! - 1 week left
Every response matters. - Start Survey here!
IBM® Informix is the database software voted #1 in customer satisfaction.
Clients choose Informix because it is reliable, low cost, and hassle free.
Solution providers choose Informix for its best-of-breed embeddability.
Yet, there are some people who still don't 'get' Informix, or realize the many benefits of deploying it. Help us gather data to support this claim.

- Informix is exceptionally hardware efficient, which means that (in the REAL world) you need to spend MUCH less on hardware to get the same performance as other products.
- Informix is exceptionally reliable, which means that (in the REAL world) you don't need to pay lots of people to make sure it stays 'up'.
- Informix is exceptionally scalable, which means that (in the REAL world) it can be idling one moment and then processing thousands of transactions the next with no apparent stress.

Advanced DataTools is working with Oninit to gather data about what happens in the REAL world and to support these assertions with empirical evidence. The data collected will be used to compile a report that will be made available to every CTO, IT Director, and IT Manager. Along with this, they will receive a list of all the major application vendors that are now porting their applications to Informix V11.5 and a document outlining the key reasons to choose Informix.
Every response matters. - Start Survey here!
Win an Apple iPad. One randomly selected participant in the data collection phase of this research will win an Apple iPad, provided by Advanced DataTools Corporation, an Advanced IBM Informix Business Partner.
Note: Public sector employees are not eligible.
Pat_Smith
If you have Informix Version 11.50, you can use SQL administration API commands to compress row data in a table or in one or more table fragments. You can also use SQL administration API commands to consolidate free space in a table or fragment, return this free space to the dbspace, and estimate the amount of space that is saved by compressing the data.
If you haven't tried compressing your data and want more information on this well-received feature, start by opening the "What's new" topic in the Administrator's Guide or Administrator's Reference and then navigate to the topics on compression.
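As a hedged sketch of what these calls look like (the table and database names are illustrative, and the exact argument lists are assumptions to verify against the Administrator's Reference), you might first estimate the savings and then compress, repack, and shrink in one operation:

```sql
-- Sketch: estimate potential savings, then compress the row data,
-- consolidate free space, and return it to the dbspace.
EXECUTE FUNCTION sysadmin:task("table estimate_compression",
                               "customer", "stores_demo");

EXECUTE FUNCTION sysadmin:task("table compress repack shrink",
                               "customer", "stores_demo");
```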